Title: Mapping the aliphatic hydrocarbon content of interstellar dust in the Galactic plane
Abstract: We implement a new observational method for mapping the aliphatic hydrocarbon content in the solid phase in our Galaxy, based on spectrophotometric imaging of the 3.4 $\mu$m absorption feature from interstellar dust. We previously demonstrated this method in a field including the Galactic Centre cluster. We applied the method to a new field in the Galactic centre where the 3.4 $\mu$m absorption feature has not been previously measured and we extended the measurements to a field in the Galactic plane to sample the diffuse local interstellar medium, where the 3.4 $\mu$m absorption feature has been previously measured. We have analysed 3.4 $\mu$m optical depth and aliphatic hydrocarbon column density maps for these fields. Optical depths are found to be reasonably uniform in each field, without large source-to-source variations. There is, however, a weak trend towards increasing optical depth in a direction towards $b=0^{\circ}$ in the Galactic centre. The mean value of column densities and abundances for aliphatic hydrocarbon were found to be about several $\rm \times 10^{18} \, cm^{-2}$ and several tens $\times 10^{-6}$, respectively for the new sightlines in the Galactic plane. We conclude that at least 10-20% of the carbon in the Galactic plane lies in aliphatic form.
https://export.arxiv.org/pdf/2208.01058
\label{firstpage} \begin{keywords} methods: observational, techniques: photometric, infrared: ISM, ISM: dust, extinction, ISM: abundances, astrochemistry. \end{keywords} \section{Introduction}\label{Section1} Carbon is a key element in the evolution of organic material in the Universe. There is a rich carbon chemistry in our Galaxy and other galaxies because of its abundance and its ability to form complex molecules. Carbon chemistry starts in the circumstellar medium of evolved massive stars \citep{Henning1998} and branches through the different phases of the interstellar medium (ISM)\@. The material cycle between the stars and the gas in the ISM leads to the delivery of organic molecules to molecular clouds and planetary systems. A special sub-class of organic molecules, called prebiotic molecules, are thought to play a major role in the formation of life on Earth \citep{ChybaSagan1992, Ehrenfreund2010, Ehrenfreund2000}. It is therefore important to understand the life cycle of organic molecules and of the element carbon in space. Carbon is the fourth most abundant element in the Universe. The elemental carbon abundance is the total carbon abundance in the gas and solid phases of the ISM. The total carbon abundance observed in the ISM should agree with cosmic carbon abundance estimates \citep{Zuo2021a, Zuo2021b, HensleyDrain2021}. The cosmic carbon abundance derived from the Solar atmosphere \citep{Grevesse1998, Turcotte2002, Chiappini2003, Asplund2005, Asplund2009, Asplund2021}, the Solar System abundances \citep{Lodders2003, Asplund2021}, Sun-like stars \citep{Bedell2018}, and the atmospheres of young stars \citep{SnowWitt1995, Sofia2001, Bensby2006, Przybilla2008} implies that there is up to $\sim$$358$\,ppm\footnote{ppm: parts per million} of carbon available in the ISM. Carbon is one of the major dust-forming elements. 
The cosmic carbon abundance sets a limit on the maximum amount of carbonaceous material available for making the interstellar dust (ISD) that accounts for the observed extinction, so that the carbon abundance hidden in the ISD should be in accordance with the available carbon and other dust-forming elements in the ISM \citep{Zuo2021a, Zuo2021b, HensleyDrain2021}. The depletion of carbon from the gas phase refers to the shortfall of its gas-phase abundance with respect to the cosmic abundance. However, carbon depletion from the gas phase is not sufficient to account for the observed extinction; this has been called the \textit{carbon crisis} \citep{Kim1996, Cardelli1996, Henning2004, Wang2015, Zuo2021b}. The wavelength dependence of the extinction (A$_{\lambda}$) is described by the extinction curve. Interstellar extinction curves give clues as to the size and chemical composition of the dust particles \citep{Cardelli1989, Fitzpatrick1999}. The regimes covering the far-ultraviolet (UV) and mid-infrared (MIR) exhibit features of the main components of dust, carbonaceous and siliceous materials \citep{DraineISD2003, Gordon2019, Gordon2021, HensleyDrain2021}. Studies of the extinction curves show that they vary spatially across the Galaxy. The size, structure and chemical composition of the ISD play a role in this variability \citep{Fitzpatrick1999, Fitzpatrick2019, Zuo2021b}. Since the composition and structure of dust is variable \citep{Henning2004, Jones2012a, Jones2012b, Jones2012c, Jones2013, Jones2019, DraineHensley2021}, elemental abundance estimates that take particle size into account to reproduce extinction curves are prone to large discrepancies \citep{Mathis1977, Draine1984, Kim1996, Mathis1996, Li1997, Mishra2017, Zuo2021b}. Therefore, an observational method to trace carbonaceous ISD is required, as this invisible solid component is a reservoir for organic material and the element carbon \citep{Sandford1991, VanDishoeck2014}. 
Carbon molecules in the gas phase can be studied using the electromagnetic spectrum from ultraviolet to radio frequencies. Although complex molecules can be discerned in diffuse molecular gas through their vibrational and rotational spectra, it is more challenging to study large molecules, as their spectra are complex \citep{McGuire2018}. When molecules are in the solid phase, only vibrational spectra can be used to search for particular chemical groups and to probe the carbonaceous material in the ISD\@. There are several useful emission and absorption features in the infrared spectral region for this purpose. The 3.4\,$\mu$m (2940 cm$^{-1}$) absorption feature is of particular interest since it is prominent and suitable for observational measurements. This feature arises from the aliphatic C--H stretch of methylene (--CH$_{2}$--) and methyl (--CH$_{3}$) groups in carbonaceous material in the ISD\@. The strength of the 3.4\,$\mu$m absorption feature is proportional to the number of aliphatic C--H bonds. To estimate the amount of aliphatic hydrocarbons in the solid phase in the ISM, measurements of the 3.4\,$\mu$m absorption from ISD can be combined with absorption coefficient measurements of interstellar dust analogues (ISDAs) produced in the laboratory. We undertook laboratory measurements in order to provide a revised value for the absorption coefficient of aliphatic hydrocarbons (\citealt{Gunay2018}, hereafter Paper 1), producing ISDAs under simulated circumstellar/interstellar-like conditions. 
The integrated absorption coefficient ($A$, cm molecule$^{-1}$) and line width ($\Delta \bar{\nu}$, cm$^{-1}$; for low-resolution spectra this becomes the filter bandwidth) measured from ISDAs, together with the optical depth ($\tau$) of the 3.4\,$\mu$m absorption, can be used to obtain the column density ($N$, cm$^{-2}$) of aliphatic hydrocarbon groups, as follows: \begin{equation}\label{eq:1} N =\frac{\tau \Delta \bar{\nu}} {A} \end{equation} In order to investigate the amount and distribution of aliphatic hydrocarbon abundances incorporated in the ISD in our Galaxy we need to map the 3.4\,$\mu$m optical depth, $\tau_{3.4\,\mu m}$, over wide fields. This requires us to obtain the optical depth at 3.4\,$\mu$m for as many sightlines as possible. Measurement of the optical depth can be done readily for individual sightlines by single-point or long-slit spectroscopy. As these spectroscopic processes are comparatively slow and require long observing times, Integral Field Spectroscopy (IFS) \citep{Allington2006} is used to speed up observations by simultaneously obtaining spectra in a two-dimensional field. It has become an important method where there is a need to examine the spectra of extended objects (such as the ISM) as a function of position. However, narrow-band imaging can also be used to obtain spatially resolved spectral information for larger fields of view. We previously implemented a new method (\citealt{Gunay2020}, hereafter Paper 2) and trialled it through the dusty sightlines of the diffuse interstellar medium (DISM) towards a field that contains the centre of the Galaxy (hereafter Field A), where the optical depth of the 3.4\,$\mu$m absorption had already been reported in the literature (references in Paper 2). We did this in order to test the veracity of this new method, using narrow-band filters spread across the absorption feature. 
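As a numerical illustration of Equation~\ref{eq:1} (a sketch only, not the authors' pipeline), the column density follows directly from the measured optical depth, the narrow-band filter width of 62\,cm$^{-1}$ and the Paper 1 absorption coefficient quoted later in the text:

```python
# Illustrative application of Equation (1): column density of aliphatic
# C-H groups from the 3.4 um optical depth. The filter width (62 cm^-1)
# and A = 4.7e-18 cm group^-1 are the values quoted in the text;
# tau = 0.2 is the mean reported for Field A in Paper 2.

def column_density(tau, delta_nu, A):
    """N = tau * delta_nu / A, in groups cm^-2."""
    return tau * delta_nu / A

N = column_density(tau=0.2, delta_nu=62.0, A=4.7e-18)
print(f"N = {N:.2e} groups cm^-2")  # ~2.6e18, consistent with the ~3e18 quoted
```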
Several well-studied bright sources in the field were used whose data were available in the NASA/IPAC Infrared Science Archive (IRSA\footnote{https://irsa.ipac.caltech.edu/frontpage/}). Their brightness values are listed in the Spitzer\footnote{https://irsa.ipac.caltech.edu/Missions/spitzer.html} and 2MASS\footnote{https://irsa.ipac.caltech.edu/Missions/2mass.html} catalogues. Their spectra were previously reported by \cite{Chiar2002} (hereafter [C02]) and \cite{Moultaka2004} (hereafter [M04]), and provided calibration. We used these to determine zero points for our measurements, and thence optical depths at 3.4\,$\mu$m (as described in detail in Paper 2). We applied Equation~\ref{eq:1} using the 3.4\,$\mu$m narrow-band filter width (62\,cm$^{-1}$). We then calculated aliphatic hydrocarbon column densities using the absorption coefficient \textit{A} = $4.7\times10^{-18}$\,cm\,group$^{-1}$ from Paper 1. In demonstrating that the technique worked, we were able to produce an aliphatic hydrocarbon column density map for the Galactic Centre cluster in Paper 2. We found a mean value of $\tau_{3.4\,\mu m}$ $\sim$ 0.2, corresponding to a typical aliphatic hydrocarbon column density of $\sim 3 \times10^{18}$\,cm$^{-2}$. Further, there were indications that the column density increases towards the Galactic mid-plane. Normalised aliphatic carbon abundances in ppm were also calculated based on a gas-to-extinction ratio $N(H) = 2.04 \times 10^{21}$\,cm$^{-2}$ mag$^{-1}$ \citep{Zhu2017}, assuming A$_{V}$$\sim$30 mag. A mean aliphatic hydrocarbon abundance of 43\,ppm was found. Comparing this to the ISM total carbon abundance of 358\,ppm \citep{Sofia2001}, we found that approximately 12$\%$ of the carbon is in aliphatic form. This shows that ISD is an important reservoir for aliphatic hydrocarbons in the Galactic field studied. 
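The abundance conversion described above can be sketched as follows (illustrative values from the text; a column density of $2.64\times10^{18}$\,cm$^{-2}$, corresponding to $\tau \sim 0.2$, is assumed):

```python
# Sketch of the normalised abundance calculation from Paper 2 (not the
# authors' code). The hydrogen column follows from the gas-to-extinction
# ratio N(H) = 2.04e21 cm^-2 mag^-1 and an assumed A_V ~ 30 mag.

def aliphatic_abundance_ppm(N_aliphatic, A_V, NH_per_mag=2.04e21):
    N_H = NH_per_mag * A_V           # total hydrogen column density (cm^-2)
    return N_aliphatic / N_H * 1e6   # abundance in parts per million

ppm = aliphatic_abundance_ppm(N_aliphatic=2.64e18, A_V=30.0)
print(f"{ppm:.0f} ppm")                                        # ~43 ppm
print(f"{100 * ppm / 358:.0f}% of the 358 ppm total carbon")   # ~12%
```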
In this work, we have obtained maps of the 3.4\,$\mu$m optical depth across two new fields and investigated the amount of aliphatic hydrocarbons. We compared the new maps with the maps previously obtained for the Galactic Centre cluster field (Paper 2). We discuss the amount and distribution of aliphatic hydrocarbons in these three fields in the Galactic plane. We also investigated whether there are similarities between the distribution of the aliphatic hydrocarbons in ISD and that of the total dust for the Galactic Centre fields; however, we could not find any such relation. This paper is organised as follows. The observations and data reduction are described in Section~\ref{Section2}, data analysis in Section~\ref{Section3}, mapping applications in Section~\ref{Section4}, results in Section~\ref{Section5}, and the discussion and conclusions are presented in Section~\ref{Section6}. \section{Observations and Data Reduction}\label{Section2} In this third paper we have applied this new technique for mapping the 3.4\,$\mu$m optical depth to two other fields: one in the Galactic Centre region (hereafter Field B), but well separated from the well-studied central cluster, and one in the Galactic plane (hereafter Field C), well away from the Galactic Centre. To the best of our knowledge, Field B has not yet been examined for the aliphatic hydrocarbon absorption feature. It was chosen to contain several sources with brightness m$_{L}$ \textless 7$^{m}$ and many with m$_{L}$ \textless 10$^{m}$, which allow us to apply the method. The brightest 180 sources with m$_{L}$ \textless 10$^{m}$ were used for the spectrophotometric measurements. Field C was chosen to be well away from the Galactic Centre to determine whether the method could be applied to the spiral arm regions of the Milky Way. 
Field C samples the DISM and is less affected by strong extinction, unlike the Galactic Centre region fields, which also sample more local extinction from the ISM of the Galactic nucleus \citep{Moultaka2019, Geballe2021}. It was chosen towards the IRAS 18511+0146 cluster, where the 3.4$\,\mu$m absorption was previously reported by \cite{Ishii2002} (hereafter [I02]) and \cite{Godard2012} (hereafter [G12]). We applied the method using the fluxes of two sources from the IRAS 18511+0146 cluster listed in [G12] and compared our measurements with the optical depths reported in [I02] and [G12]. The spectrophotometric observations in this study cover a larger field of view (137 arcsec) than the previous studies on IRAS 18511+0146 \citep{Vig2007, Vig2017} and include optical depth measurements of more sources than [I02] and [G12]. There are several L--band (3.6$\,\mu$m) sources with brightness m$_{L}$ \textless 11$^{m}$ and photometric data available \citep{Vig2007, Vig2017}. We detected 18 sources and used 15 of them with m$_{L}$ \textless 11$^{m}$ for the spectrophotometric measurements. Observations were carried out on the 3.8\,m United Kingdom Infrared Telescope (UKIRT) using the UIST camera\footnote{https://www.ukirt.hawaii.edu/instruments/uist/uist.html} ($1 - 5\,\mu$m imager--spectrograph, 1024 $\times$ 1024 InSb array, 0.12 arcsec/pixel with a 123 arcsec field of view). Data were obtained through service observations for two fields in the Galactic Centre region, Field A and Field B (Project ID: U/15B/D01, 15--25 September 2015), and additionally for one field in the Galactic plane, Field C (Project ID: U/16B/D01, 20--21 September 2016). The results for Field A (which includes the Galactic Centre itself) were reported in Paper 2, and the methodology used here follows that described in that paper. We summarise the parameters for the three fields in Table~\ref{tab:1}. 
Spectrophotometric measurements were obtained using narrow-band filters at 3.05$\,\mu$m, 3.29$\,\mu$m, 3.4$\,\mu$m, 3.5$\,\mu$m, 3.6$\,\mu$m and 3.99$\,\mu$m. However, we used only the measurements with three filters (3.29$\,\mu$m, 3.4$\,\mu$m and 3.6$\,\mu$m) in order to measure the optical depth of the 3.4$\,\mu$m absorption feature, following the same methodology as in our previous work (Paper 2). We calculated the sensitivity thresholds (mag) for a signal-to-noise ratio (S/N) of 5 using the UIST online calculator\footnote{http://www.ukirt.hawaii.edu/cgi-bin/ITC/itc.pl} for the total integration time applied for each filter (see table~2 in Paper~2). The fields were imaged with a 3$\times$3 jitter pattern and 1 minute of integration per jitter position. The 9-point jitter pattern was then repeated, to achieve a total on-source integration time of 18 minutes. While Fields A and B were observed with a 20 arcsec grid pattern, Field C used a 7 arcsec grid, being less crowded. The pixel scale was 0.12 arcsec. The resultant fields of view (FoV) were 163 arcsec for Fields A and B and 137 arcsec for Field C (see Table~\ref{tab:1}). The seeing ranged between 0.7 and 0.8 arcsec at 3.6$\,\mu$m during the observations. 
\begin{table*} \begin{center} \caption{Parameters for the observed fields.} \label{tab:1} \centering \begin{tabular}{| c | c | c | c | } \hline Targets & Field A & Field B & Field C \\ \hline Longitude & $l: 359.945^{\circ}$ & $l: 359.765^{\circ}$ & $l: 034.821^{\circ}$ \\ Latitude & $b: -0.045^{\circ}$ & $b: -0.045^{\circ}$ & $b: 0.352^{\circ}$ \\ \hline Field of View&163 arcsec & 163 arcsec & 137 arcsec \\ \hline \multirow{1}{*}{Jitter}&9pt jitter & 9pt jitter & 9pt jitter \\ \multirow{1}{*}{Pattern}&offsets of $20''$ & offsets of $20''$ & offsets of $7''$\\ \hline Observing Dates & September 2015 & September 2015 & September 2016\\ \hline \end{tabular} \end{center} \end{table*} Data reduction was carried out using the Starlink\footnote{http://starlink.eao.hawaii.edu/starlink} / ORAC-DR\footnote{http://www.starlink.ac.uk/docs/sun230.htx/sun230.html} data reduction pipeline with the recipe described in Paper 2. For each jittered frame a reduced image is created and added into a master mosaic to improve the signal-to-noise. The final mosaics were not trimmed to the dimensions of a single frame, thus the noise is greater in the peripheral areas, which received less total exposure time. The resultant images for each of the three fields in the 3.4$\,\mu$m filter are shown in Figure~\ref{fig1}. \section{Data Analysis}\label{Section3} \subsection{Photometric measurements} As per Paper 2, optimal photometry was carried out using the Starlink / GAIA\footnote{http://star-www.dur.ac.uk/~pdraper/gaia/gaia.html} package, involving profile fitting of bright and isolated stars \citep{Naylor1998}. Using the sensitivity thresholds for each filter (see table~2 in Paper~2), we determined that a minimum brightness of 11.9$^{m}$ is required for measurements with the 3.29$\,\mu$m filter, 12.2$^{m}$ for the 3.4$\,\mu$m filter and 12.1$^{m}$ for the 3.6$\,\mu$m filter to satisfy the S/N = 5 criterion. 
To ensure that the optical depth measurements are obtained with as high a S/N ratio as possible, we eliminated stars fainter than 11.0$^{m}$. Previously, we analysed the brightest 200 sources with magnitudes of $m_{3.6\,\mu m}$ $\le$ 9.5$^{m}$ in Field A (Paper 2). For Field B the number of bright sources is a little lower than in Field A, and the brightest 180 sources with magnitudes $m_{3.6\,\mu m}$ $\le$ 10$^{m}$ were selected. For Field C, only 15 sources with magnitudes of $m_{3.6\,\mu m}$ $\le$ 11$^{m}$ were found to be useful. \subsection{Zero point calibration} In the previous study, to calibrate the Field A data, we extracted spectral fluxes for the bright L--band sources in the GC cluster, as measured by [C02] and [M04], to determine zero points (ZPs) for each filter. After the comparison, GCIRS 7 as measured by [C02] was chosen (hereafter referred to as GCIRS7--C02) to calibrate the Field A data (described in detail in Paper 2). The fluxes for GCIRS7--C02 and the ZPs so obtained were presented in Paper 2. In this study, we applied the same ZP values to calibrate the narrow-band measurements of the new fields. This also allowed us to compare and interpret the optical depth measurements of Field B and Field C together with those of Field A. Unlike Field A, where we previously compared our results with the values obtained by [M04], there are no sources within Field B available to provide a self-consistency check on this calibration, since the optical depths at 3.4$\,\mu$m have not been measured before. In the case of Field C there are sources previously studied in the literature (\citealt{Vig2007, Vig2017}, [I02] and [G12]), so we used Field C to check the cross-calibration procedure we applied. 
We present the calibrated fluxes for the sources in Field C at 3.3$\,\mu$m, 3.4$\,\mu$m and 3.6$\,\mu$m, as well as the derived 3.4\,$\mu$m optical depths, $\tau_{3.4\,\mu m}$, in Table \ref{tab:2}, together with their galactic and celestial coordinates. Their 2MASS K-band (2.17\,$\mu$m) brightnesses and Spitzer IRAC Ch1 (3.6\,$\mu$m) brightnesses are also listed, and the corresponding SSTGLMA\footnote{https://irsa.ipac.caltech.edu/data/SPITZER/GLIMPSE/} (Spitzer Space Telescope GLIMPSE Archive) designations are indicated. For comparison, we also note the corresponding IDs of the three sources for which the 3.4\,$\mu$m optical depths were previously measured by [G12] (i.e.\ S7, S10, S11) in the footnote to Table \ref{tab:2}. \begin{table*} \begin{center} \caption{Source IDs, galactic and celestial coordinates (J2000) in degrees, fluxes ($\times$\,10$^{-17}$ W\,cm$^{-2}$ $\mu$m$^{-1}$) and 3.4\,$\mu$m optical depths ($\tau_{3.4\,\mu m}$) determined for the sources in Field C. In addition, corresponding SSTGLMA designations, and brightness values (mag) from 2MASS at 2.17$\,\mu$m and Spitzer IRAC at 3.6$\,\mu$m, are also given.} 
\centering \label{tab:2} \begin{tabular}{| c | c | c | c | c | c c c | c | c | c c |} \hline Source & \multirow{2}{*}{ \textit {l} } & \multirow{2}{*}{ \textit {b} } & \multirow{2}{*}{ RA } & \multirow{2}{*}{ Dec} & \multicolumn{3}{c|}{Fluxes } & \multirow{2}{*}{$\tau_{3.4\,\mu m}$} & \multirow{2}{*}{ SSTGLMA } & \multicolumn{2}{c|}{Brightnesses} \\ No & & & & & 3.3$\,\mu$m & 3.4$\,\mu$m & 3.6$\,\mu$m & & &2.17$\,\mu$m & 3.6$\,\mu$m \\ \hline 1 & 34.820 & 0.351 & 283.4091 & 1.8401 & 41.03 & 32.53 & 41.92 & 0.24 & - & - & - \\ 2 & 34.808 & 0.344 & 283.4094 & 1.8261 & 8.36 & 6.37 & 6.20 & 0.18 & G034.8087+00.3453 & 6.10 & 6.79 \\ 3 & 34.817 & 0.347 & 283.4112 & 1.8359 & 0.82 & 0.82 & 1.19 & 0.14 & G034.8183+00.3482 & 10.97 & 7.68 \\ 4 & 34.824 & 0.366 & 283.3973 & 1.8511 & 1.04 & 0.67 & 0.86 & 0.39 & - & - & - \\ 5 & 34.817 & 0.345 & 283.4126 & 1.8353 & 0.30 & 0.37 & 0.63 & 0.10 & G034.8185+00.3467 & 12.66 & 7.75 \\ 6 & 34.825 & 0.355 & 283.4076 & 1.8471 & 0.52 & 0.33 & 0.35 & 0.33 & G034.8266+00.3565 & 9.25 & 9.01 \\ 7 & 34.823 & 0.333 & 283.4259 & 1.8351 & 0.32 & 0.22 & 0.20 & 0.26 & G034.8243+00.3348 & 9.76 & 9.58 \\ 8 & 34.811 & 0.328 & 283.4245 & 1.8217 & 0.16 & 0.14 & 0.18 & 0.20 & G034.8117+00.3299 & 12.27 & 9.93 \\ 9 & 34.820 & 0.335 & 283.4227 & 1.8329 & 0.14 & 0.11 & 0.13 & 0.20 & G034.8208+00.3366 & 11.95 & 10.14 \\ 10 & 34.833 & 0.342 & 283.4226 & 1.8478 & 0.19 & 0.12 & 0.13 & 0.34 & G034.8341+00.3435 & 10.44 & 10.22 \\ 11 & 34.827 & 0.349 & 283.4136 & 1.8459 & 0.11 & 0.08 & 0.08 & 0.29 & G034.8282+00.3506 & 11.02 & 10.61 \\ 12 & 34.813 & 0.349 & 283.4073 & 1.8330 & 0.05 & 0.04 & 0.06 & 0.33 & G034.8139+00.3504 & 13.11 & 10.95 \\ 13 & 34.817 & 0.356 & 283.4027 & 1.8403 & 0.06 & 0.04 & 0.06 & 0.41 & G034.8184+00.3578 & 13.17 & 11.07 \\ 14 & 34.818 & 0.352 & 283.4072 & 1.8389 & 0.08 & 0.06 & 0.05 & 0.19 & - & - & - \\ 15 & 34.810 & 0.335 & 283.4185 & 1.8240 & 0.05 & 0.03 & 0.04 & 0.44 & G034.8110+00.3362 & 12.78 & 11.50 \\ \hline \end{tabular} 
\begin{flushleft} \begin{footnotesize} Note that the IDs for the sources measured by [G12] are as follows: 1:S7, 3:S10, 5:S11. \end{footnotesize} \end{flushleft} \end{center} \end{table*} The calibrated spectra for the Field C sources are shown in Figure \ref{fig2} (left panel). They are compared with the flux levels (right panel) calculated using interpolation between the 2MASS K and Spitzer IRAC Ch1 measurements, and found to be largely consistent. The linear continua between 3.3$\,\mu$m and 3.6$\,\mu$m are indicated by dotted lines in Figure \ref{fig2}, from which the optical depth of the 3.4$\,\mu$m absorption feature was calculated. Since the optical depth is given by $\tau = -\ln$(I/I$_{0}$), where I is the measured flux and I$_{0}$ is the estimated continuum emission flux at 3.4\,$\mu$m, there is an uncertainty in the optical depth values due to the continuum flux estimation, as we do not know the black-body spectrum of the background stellar light source. This uncertainty has been mentioned in [I02], [M04] and [G12] and also discussed in detail in Paper 2. Moreover, the 3.4$\,\mu$m absorption feature is superimposed on the long-wavelength wing of the broad 3.1$\,\mu$m H$_{2}$O ice absorption band for lines of sight through Fields A, B and C ([I02], [C02], [M04] and [G12]), and this adds another complication to the determination of the optical depth of the 3.4\,$\mu$m absorption. To estimate \textit{the lower limit} of the aliphatic hydrocarbon absorption, \textit{local linear continua} between 3.3$\,\mu$m and 3.6$\,\mu$m were applied to the spectra in [I02], [M04] and [G12]. Similarly, in our previous work, optical depths for Field A sources were calculated by interpolating across the aliphatic absorption feature from 3.3$\,\mu$m to 3.6$\,\mu$m to yield minimum aliphatic hydrocarbon absorptions. 
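The local linear continuum method described above can be sketched numerically (an illustration, not the authors' pipeline, using the source 2 fluxes from Table \ref{tab:2} in units of 10$^{-17}$ W\,cm$^{-2}$\,$\mu$m$^{-1}$):

```python
import numpy as np

# Sketch of the local linear continuum method: interpolate the continuum
# at 3.4 um between the 3.3 um and 3.6 um fluxes, then tau = -ln(I / I0).
# Example fluxes are those of source 2 in Table 2 (x 1e-17 W cm^-2 um^-1).

def tau_34(f33, f34, f36):
    # Continuum flux at 3.4 um from the straight line through
    # (3.3 um, f33) and (3.6 um, f36).
    I0 = np.interp(3.4, [3.3, 3.6], [f33, f36])
    return -np.log(f34 / I0)

print(f"tau_3.4 = {tau_34(8.36, 6.37, 6.20):.2f}")  # 0.18, matching Table 2
```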
In this study we followed the same methodology for the optical depth measurements, so that we can compare our results with the measurements of Field A and the minimum levels of the aliphatic hydrocarbon absorptions reported in [I02] and [G12]. There are three sources in Field C, S7, S10 and S11, that have previously measured 3.4$\,\mu$m optical depths in [G12]. The brightest of these, S7, is partially saturated, therefore our derived values for it are not reliable. However, the results we obtained for the other two sources, S10 and S11, were found to be similar to the literature values given in [G12] (see Table~\ref{tab:3}). We then compared our results with the optical depth measurements reported by [I02] for the line of sight through the IRAS 18511+0146 cluster. We found the measurements of S10 and S11 in agreement with the maximum level they reported, despite the fact that their method involves spectroscopic data (they measured the aliphatic hydrocarbon absorptions around 3.4$\,\mu$m produced by the methylene and methyl groups separately) (see Table~\ref{tab:3}). \begin{table*} \begin{center} \footnotesize \caption{Optical depth of the 3.4\,$\mu$m absorption feature for previously measured sources S7, S10 and S11 in Field C based on the GCIRS7--C02 calibration, compared to the values determined in [G12] and [I02]. } \label{tab:3} \centering \begin{tabular}{| c | c | c | c |} \hline \multirow{2}{*}{ } & \multicolumn{3}{c|}{$\tau_{3.4\,\mu m}$} \\ \cline{2-4} & S7 & S10 & S11 \\ \hline this study & 0.239 & 0.137 & 0.123 \\ \hline [G12] & 0.073 & 0.093 & 0.119 \\ \hline [I02] & \multicolumn{3}{c|}{ $0.087^{1}$ - $0.066^{2}$ } \\ \hline \end{tabular} \end{center} \begin{footnotesize} The reported optical depth values are for the methylene (1) and methyl (2) groups, respectively. \end{footnotesize} \end{table*} We did, however, also check this calibration method against several other methods for determining the fluxes for the sources in Field C. 
This included using the fluxes for sources S10 and S11 in the spectra of [G12] to provide the calibration (i.e.\ instead of using the spectrum of GCIRS7 from [C02]). We also used the photometric fluxes for GCIRS7 in Field A and for S10 \& S11 in Field C, obtained by interpolating the 2MASS K-band (2.17$\,\mu$m) and Spitzer IRAC\footnote{https://irsa.ipac.caltech.edu/data/SPITZER/} Ch1-band (3.6$\,\mu$m) fluxes \citep{Vig2007} onto the narrow filter bands of our observations (including a correction to the flux at 3.4\,$\mu$m for the reported 3.4\,$\mu$m optical depths from [C02] and [G12]), to provide the flux calibration. In all cases internally consistent results were found for each of these 5 additional calibration methods, with a near-constant offset between the 3.4\,$\mu$m optical depths derived for the Field C sources with each of these calibration methods and those determined using the fluxes derived for GCIRS7 from [C02]; these offsets range from 0.03 to 0.1, depending on the particular calibration source chosen. These offsets are less than the differences in the 3.4\,$\mu$m optical depths determined in Field A by different authors for the same source, as these authors applied different analysis methods to their data. For instance, for GCIRS7, [C02] and [C04] determined 3.4\,$\mu$m optical depths of 0.15 and 0.41, respectively, using the different sets of spectral data they each obtained. \cite{Moultaka2004} also compared the 3.4\,$\mu$m optical depths derived for a given source when applying different methods to estimate the continuum level of the spectrum. For example, for the source IRS16C they found 3.4\,$\mu$m optical depths ranging from 0.14 to 0.49, depending on the method used. It is clear that it is difficult to obtain precise values for the optical depth, though the relative differences between sources are more reliably obtained, as we demonstrated in Paper 2\@. 
Given these uncertainties, the offsets we obtained (i.e.\ from 0.03 to 0.1) are less than the variations in the 3.4\,$\mu$m optical depth determined for the same source by the studies mentioned above. Since we determined that GCIRS7--C02 provided the best calibration set for the Field A data in Paper 2, we have applied it to the data for Fields B and C. \section{Mapping Applications}\label{Section4} For the mapping applications, 180 sources in Field B and 15 sources in Field C (which are generally fainter than the GC field sources) were selected to satisfy the S/N criteria. The biases in the aliphatic hydrocarbon maps arise primarily from two sources. The first is that the distances of the background sources are largely unknown, since it is not possible to distinguish a reddened high-mass, hot star from a low-mass, cooler, intrinsically redder star. This means that we cannot be certain whether variations in the absorption are due to density or distance. The second bias arises because the data for each field are brightness limited. To address the brightness-limited bias, we split the data into quartiles according to their brightness. We note that there could be a bias due to the degeneracy between effective temperature and extinction, and some of the faintest objects could be the most reddened objects. However, we show below that a similar result was found independent of brightness in the resultant quartile maps. The issue of where the sources lie was found not to be a serious concern, since we could not find large source-to-source variations in the maps prepared using the quartile data sets. In the case of the Galactic Centre the absorbing material is in fact the gas and dust in the central regions of the Galaxy. It is likely that most of the sources in Field A and Field B are at roughly the same distance and so have the same columns of absorbing material in front of them. For Field C, we have too few sources to draw a conclusion. 
However, since Field C samples IRAS 18511+0146 cluster sources, we can assume that most of the sources sample the same columns of absorbing material. We also compared the optical depths and the brightnesses of the sources using the resultant maps; however, we could not find any correlation. \subsection{Application to a new field in the GC: Field B} We present the fluxes for the 180 sources measured at 3.3$\,\mu$m, 3.4$\,\mu$m and 3.6$\,\mu$m in Field B in Table \ref{tab:4} and the derived 3.4\,$\mu$m optical depths in Table \ref{tab:5}, together with their celestial coordinates. Optical depths were found to vary over a relatively small range across the field, with a mean value of $0.36 \pm 0.09$. We checked whether systematic biases due to the flux levels of the sources (to satisfy S/N = 5) may have affected the optical depth determinations by dividing the sources into four quartiles based on their fluxes, as we applied previously for Field A (described in detail in Paper 2). The mean 3.4\,$\mu$m optical depths and standard deviation in each quartile are shown in Table \ref{tab:6}. The differences between the derived 3.4\,$\mu$m optical depths are not significant, consistent with the flux level not biasing its determination. 
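The quartile bias check can be sketched as follows; the flux and optical depth arrays below are randomly generated placeholders drawn around the observed mean of $0.36 \pm 0.09$, standing in for the measured Field B values:

```python
import numpy as np

# Sketch of the brightness-quartile bias check (placeholder data, not the
# measured Field B values): sort the 180 sources by 3.6 um flux, split
# into four quartiles, and compare the mean 3.4 um optical depth in each.

rng = np.random.default_rng(1)
flux = rng.uniform(0.8, 47.0, size=180)   # stand-in 3.6 um fluxes
tau = rng.normal(0.36, 0.09, size=180)    # stand-in optical depths

order = np.argsort(flux)[::-1]            # source indices, brightest first
quartiles = np.array_split(order, 4)      # four groups of 45 sources
for i, q in enumerate(quartiles, start=1):
    print(f"Q{i}: mean tau = {tau[q].mean():.2f} +/- {tau[q].std():.2f}")
```

If the flux limit does not bias the optical depth determination, the quartile means agree within their standard errors, as found for the real data.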
\begin{table*} \begin{center} \caption{Calibrated fluxes ($\times$\,10$^{-18}$ W\,cm$^{-2}$ $\mu$m$^{-1}$) for the Field B sources used for 3.4$\,\mu$m optical depth calculations.} \centering \label{tab:4} \begin{tabular}{| p {0.6 cm} p {0.7 cm} p {0.7 cm} p {0.7 cm} | p {0.6 cm} p {0.7 cm} p {0.7 cm} p {0.7 cm} | p {0.6 cm} p {0.7 cm} p {0.7 cm} p {0.7 cm} | p {0.6 cm} p {0.7 cm} p {0.7 cm} p {0.7 cm} |} \hline Source & \multicolumn{3}{c|}{Fluxes } & Source & \multicolumn{3}{c|}{Fluxes } & Source & \multicolumn{3}{c|}{Fluxes} & Source &\multicolumn{3}{c|}{Fluxes} \\ \multirow{1}{*}{No} & 3.3$\,\mu$m & 3.4$\,\mu$m & 3.6$\,\mu$m & \multirow{1}{*}{No} & 3.3$\,\mu$m & 3.4$\,\mu$m &3.6$\,\mu$m &\multirow{1}{*}{No} & 3.3$\,\mu$m & 3.4$\,\mu$m & 3.6$\,\mu$m & \multirow{1}{*}{No} & 3.3$\,\mu$m & 3.4$\,\mu$m & 3.6$\,\mu$m \\ \hline 1 & 46.2 & 36.6 & 47.1 & 46 & 6.2 & 3.5 & 4.5 & 91 & 3.4 & 2.2 & 2.7 & 136 & 2.0 & 1.4 & 1.7 \\ 2 & 48.4 & 35.6 & 42.4 & 47 & 4.8 & 3.5 & 4.4 & 92 & 4.3 & 2.0 & 2.7 & 137 & 2.0 & 1.3 & 1.6 \\ 3 & 34.5 & 25.4 & 31.4 & 48 & 5.3 & 3.7 & 4.4 & 93 & 2.5 & 1.7 & 2.7 & 138 & 2.1 & 1.3 & 1.6 \\ 4 & 30.2 & 16.6 & 24.2 & 49 & 5.1 & 3.6 & 4.4 & 94 & 3.3 & 2.3 & 2.7 & 139 & 2.0 & 1.5 & 1.6 \\ 5 & 33.2 & 23.2 & 24.0 & 50 & 7.1 & 3.7 & 4.3 & 95 & 2.5 & 2.0 & 2.7 & 140 & 1.9 & 1.2 & 1.6 \\ 6 & 18.9 & 14.2 & 21.1 & 51 & 4.4 & 3.2 & 4.3 & 96 & 3.2 & 2.2 & 2.7 & 141 & 1.7 & 1.1 & 1.6 \\ 7 & 23.5 & 16.5 & 20.8 & 52 & 5.3 & 3.5 & 4.2 & 97 & 3.3 & 2.2 & 2.7 & 142 & 1.6 & 1.1 & 1.6 \\ 8 & 20.4 & 14.0 & 20.4 & 53 & 3.8 & 2.6 & 4.1 & 98 & 2.0 & 1.4 & 2.6 & 143 & 1.6 & 1.0 & 1.6 \\ 9 & 11.8 & 10.7 & 19.7 & 54 & 4.6 & 3.0 & 4.1 & 99 & 3.4 & 2.0 & 2.5 & 144 & 2.1 & 1.3 & 1.5 \\ 10 & 15.0 & 11.3 & 16.7 & 55 & 4.6 & 2.9 & 4.1 & 100 & 3.0 & 1.7 & 2.5 & 145 & 1.8 & 1.2 & 1.5 \\ 11 & 14.2 & 9.9 & 14.3 & 56 & 4.7 & 3.3 & 4.0 & 101 & 2.4 & 1.8 & 2.5 & 146 & 1.7 & 1.1 & 1.5 \\ 12 & 21.5 & 12.2 & 13.4 & 57 & 3.7 & 2.8 & 4.0 & 102 & 2.7 & 1.9 & 2.5 & 147 & 1.9 & 1.2 & 1.5 \\ 13 & 
16.2 & 11.4 & 13.3 & 58 & 3.5 & 2.6 & 4.0 & 103 & 2.3 & 1.7 & 2.4 & 148 & 2.2 & 1.4 & 1.5 \\ 14 & 11.9 & 8.5 & 12.1 & 59 & 4.7 & 3.0 & 4.0 & 104 & 2.5 & 1.6 & 2.4 & 149 & 1.3 & 0.9 & 1.5 \\ 15 & 9.4 & 7.4 & 11.3 & 60 & 4.7 & 2.9 & 4.0 & 105 & 2.8 & 1.8 & 2.3 & 150 & 1.5 & 1.0 & 1.3 \\ 16 & 12.0 & 7.8 & 10.9 & 61 & 4.6 & 2.8 & 3.9 & 106 & 3.1 & 1.6 & 2.3 & 151 & 1.6 & 1.2 & 1.3 \\ 17 & 11.0 & 7.4 & 9.6 & 62 & 3.7 & 3.0 & 3.9 & 107 & 2.6 & 2.0 & 2.3 & 152 & 1.5 & 0.8 & 1.3 \\ 18 & 12.3 & 6.8 & 9.4 & 63 & 4.9 & 2.9 & 3.9 & 108 & 2.5 & 1.6 & 2.3 & 153 & 1.9 & 1.0 & 1.3 \\ 19 & 10.7 & 5.9 & 9.2 & 64 & 3.5 & 2.5 & 3.9 & 109 & 2.6 & 1.6 & 2.3 & 154 & 1.8 & 1.0 & 1.3 \\ 20 & 9.9 & 6.3 & 8.7 & 65 & 4.3 & 2.8 & 3.9 & 110 & 2.6 & 1.8 & 2.3 & 155 & 1.6 & 1.2 & 1.3 \\ 21 & 9.4 & 6.6 & 8.1 & 66 & 4.3 & 3.0 & 3.9 & 111 & 2.8 & 1.8 & 2.2 & 156 & 1.6 & 0.9 & 1.3 \\ 22 & 9.8 & 5.9 & 8.1 & 67 & 3.0 & 2.5 & 3.8 & 112 & 2.8 & 1.9 & 2.2 & 157 & 1.7 & 1.0 & 1.2 \\ 23 & 9.8 & 6.4 & 7.8 & 68 & 4.4 & 3.1 & 3.8 & 113 & 2.9 & 1.8 & 2.2 & 158 & 1.4 & 1.0 & 1.2 \\ 24 & 9.1 & 6.5 & 7.7 & 69 & 4.1 & 2.8 & 3.7 & 114 & 3.0 & 1.6 & 2.2 & 159 & 1.3 & 0.9 & 1.1 \\ 25 & 7.1 & 5.1 & 6.7 & 70 & 4.1 & 3.0 & 3.5 & 115 & 2.7 & 1.8 & 2.2 & 160 & 1.2 & 0.8 & 1.1 \\ 26 & 8.3 & 5.5 & 6.5 & 71 & 3.9 & 2.7 & 3.5 & 116 & 2.0 & 1.5 & 2.2 & 161 & 1.5 & 0.8 & 1.1 \\ 27 & 7.8 & 5.6 & 6.5 & 72 & 4.2 & 2.8 & 3.5 & 117 & 2.2 & 1.5 & 2.1 & 162 & 1.4 & 0.7 & 1.1 \\ 28 & 6.8 & 4.9 & 6.5 & 73 & 4.5 & 3.0 & 3.5 & 118 & 2.3 & 1.5 & 2.1 & 163 & 1.6 & 0.8 & 1.1 \\ 29 & 7.9 & 4.8 & 6.4 & 74 & 3.7 & 2.6 & 3.3 & 119 & 2.6 & 1.7 & 2.1 & 164 & 1.5 & 0.9 & 1.0 \\ 30 & 5.6 & 4.5 & 6.3 & 75 & 4.0 & 2.8 & 3.3 & 120 & 2.7 & 1.6 & 2.1 & 165 & 1.3 & 0.9 & 1.0 \\ 31 & 7.6 & 5.0 & 6.2 & 76 & 4.0 & 2.6 & 3.2 & 121 & 2.4 & 1.5 & 2.1 & 166 & 1.1 & 0.8 & 1.0 \\ 32 & 6.0 & 4.4 & 6.2 & 77 & 3.7 & 2.3 & 3.2 & 122 & 2.8 & 1.7 & 2.0 & 167 & 1.5 & 0.9 & 1.0 \\ 33 & 4.4 & 3.7 & 6.1 & 78 & 3.9 & 2.5 & 3.1 & 123 & 2.3 & 1.5 & 2.0 & 168 & 1.3 & 0.6 & 1.0 
\\ 34 & 6.5 & 4.5 & 5.8 & 79 & 3.5 & 2.5 & 3.1 & 124 & 2.4 & 1.6 & 1.9 & 169 & 1.2 & 0.7 & 1.0 \\ 35 & 6.9 & 4.4 & 5.8 & 80 & 3.8 & 2.2 & 3.0 & 125 & 2.5 & 1.6 & 1.9 & 170 & 1.1 & 0.7 & 1.0 \\ 36 & 5.5 & 4.1 & 5.5 & 81 & 3.5 & 2.5 & 3.0 & 126 & 2.5 & 1.4 & 1.9 & 171 & 1.3 & 0.8 & 1.0 \\ 37 & 7.2 & 3.7 & 5.4 & 82 & 2.8 & 1.9 & 3.0 & 127 & 2.0 & 1.3 & 1.8 & 172 & 1.1 & 0.8 & 1.0 \\ 38 & 7.6 & 5.3 & 5.1 & 83 & 3.5 & 2.3 & 3.0 & 128 & 2.2 & 1.4 & 1.8 & 173 & 1.3 & 0.8 & 1.0 \\ 39 & 4.7 & 3.3 & 5.1 & 84 & 3.6 & 2.5 & 3.0 & 129 & 2.3 & 1.6 & 1.8 & 174 & 1.1 & 0.7 & 0.9 \\ 40 & 5.6 & 3.9 & 5.0 & 85 & 4.4 & 2.1 & 2.9 & 130 & 2.2 & 1.5 & 1.8 & 175 & 1.2 & 0.6 & 0.9 \\ 41 & 5.8 & 4.2 & 4.9 & 86 & 3.7 & 2.2 & 2.9 & 131 & 2.0 & 1.3 & 1.8 & 176 & 1.0 & 0.6 & 0.9 \\ 42 & 5.5 & 3.4 & 4.9 & 87 & 3.4 & 2.3 & 2.8 & 132 & 2.0 & 1.3 & 1.7 & 177 & 0.9 & 0.6 & 0.9 \\ 43 & 4.9 & 3.6 & 4.7 & 88 & 3.6 & 2.1 & 2.8 & 133 & 2.1 & 1.4 & 1.7 & 178 & 1.1 & 0.7 & 0.8 \\ 44 & 5.7 & 3.8 & 4.7 & 89 & 3.3 & 2.2 & 2.8 & 134 & 2.0 & 1.3 & 1.7 & 179 & 0.7 & 0.6 & 0.8 \\ 45 & 4.1 & 2.8 & 4.5 & 90 & 3.2 & 2.3 & 2.7 & 135 & 1.9 & 1.4 & 1.7 & 180 & 1.0 & 0.6 & 0.8 \\ \hline \end{tabular} \end{center} \end{table*} \begin{table*} \begin{center} \caption{Celestial coordinates (J2000) in degrees and the optical depths for the 3.4$\,\mu$m absorption feature for the Field B sources.} \centering \label{tab:5} \begin{tabular}{| p {0.3 cm} p {0.9 cm} p {1.1 cm} p {0.4 cm} | p {0.3 cm} p {0.9 cm} p {1.1 cm} p {0.4 cm} | p {0.3 cm} p {0.9 cm} p {1.1 cm} p {0.4 cm} | p {0.3 cm} p {0.9 cm} p {1.1 cm} p {0.4 cm} |} \hline No & RA & Dec & $\tau_{3.4}$ & No & RA & Dec & $\tau_{3.4}$ & No & RA & Dec & $\tau_{3.4}$ & No & RA & Dec & $\tau_{3.4}$ \\ \hline 1 & 266.3042 & -29.1602 & 0.18 & 46 & 266.2836 & -29.1629 & 0.40 & 91 & 266.2875 & -29.1705 & 0.30 & 136 & 266.3030 & -29.1611 & 0.24 \\ 2 & 266.3025 & -29.1626 & 0.21 & 47 & 266.3126 & -29.1699 & 0.22 & 92 & 266.2832 & -29.1448 & 0.53 & 137 & 266.3142 & -29.1482 & 0.34 \\ 
3 & 266.3159 & -29.1596 & 0.22 & 48 & 266.3279 & -29.1647 & 0.24 & 93 & 266.2914 & -29.1578 & 0.34 & 138 & 266.2883 & -29.1472 & 0.36 \\ 4 & 266.2910 & -29.1410 & 0.46 & 49 & 266.3039 & -29.1798 & 0.22 & 94 & 266.3055 & -29.1532 & 0.27 & 139 & 266.3082 & -29.1708 & 0.19 \\ 5 & 266.3010 & -29.1593 & 0.21 & 50 & 266.2876 & -29.1405 & 0.47 & 95 & 266.3076 & -29.1646 & 0.21 & 140 & 266.3146 & -29.1802 & 0.31 \\ 6 & 266.3240 & -29.1552 & 0.27 & 51 & 266.3311 & -29.1623 & 0.24 & 96 & 266.3011 & -29.1587 & 0.27 & 141 & 266.2961 & -29.1492 & 0.36 \\ 7 & 266.3170 & -29.1558 & 0.25 & 52 & 266.3264 & -29.1743 & 0.25 & 97 & 266.3084 & -29.1496 & 0.31 & 142 & 266.3118 & -29.1658 & 0.27 \\ 8 & 266.2954 & -29.1552 & 0.32 & 53 & 266.3081 & -29.1404 & 0.35 & 98 & 266.3254 & -29.1505 & 0.40 & 143 & 266.2952 & -29.1511 & 0.37 \\ 9 & 266.3098 & -29.1553 & 0.24 & 54 & 266.3212 & -29.1406 & 0.32 & 99 & 266.2858 & -29.1595 & 0.39 & 144 & 266.3264 & -29.1473 & 0.30 \\ 10 & 266.2942 & -29.1709 & 0.25 & 55 & 266.3158 & -29.1403 & 0.37 & 100 & 266.2883 & -29.1442 & 0.46 & 145 & 266.2868 & -29.1729 & 0.31 \\ 11 & 266.2932 & -29.1643 & 0.30 & 56 & 266.3254 & -29.1623 & 0.24 & 101 & 266.3112 & -29.1722 & 0.23 & 146 & 266.3279 & -29.1550 & 0.33 \\ 12 & 266.2844 & -29.1589 & 0.36 & 57 & 266.3032 & -29.1795 & 0.25 & 102 & 266.3022 & -29.1558 & 0.27 & 147 & 266.3191 & -29.1600 & 0.27 \\ 13 & 266.3212 & -29.1770 & 0.23 & 58 & 266.2929 & -29.1760 & 0.26 & 103 & 266.3063 & -29.1772 & 0.27 & 148 & 266.3210 & -29.1479 & 0.27 \\ 14 & 266.3246 & -29.1454 & 0.28 & 59 & 266.2985 & -29.1775 & 0.30 & 104 & 266.3285 & -29.1793 & 0.30 & 149 & 266.3251 & -29.1395 & 0.38 \\ 15 & 266.2986 & -29.1668 & 0.24 & 60 & 266.3311 & -29.1443 & 0.35 & 105 & 266.2853 & -29.1564 & 0.33 & 150 & 266.3228 & -29.1427 & 0.34 \\ 16 & 266.2872 & -29.1621 & 0.33 & 61 & 266.2830 & -29.1721 & 0.34 & 106 & 266.2912 & -29.1421 & 0.52 & 151 & 266.3232 & -29.1639 & 0.22 \\ 17 & 266.3064 & -29.1449 & 0.30 & 62 & 266.3107 & -29.1608 & 0.19 & 
107 & 266.3211 & -29.1619 & 0.18 & 152 & 266.3223 & -29.1381 & 0.45 \\ 18 & 266.2898 & -29.1459 & 0.44 & 63 & 266.3148 & -29.1377 & 0.39 & 108 & 266.2943 & -29.1548 & 0.35 & 153 & 266.2914 & -29.1437 & 0.48 \\ 19 & 266.2885 & -29.1381 & 0.50 & 64 & 266.3140 & -29.1455 & 0.33 & 109 & 266.2988 & -29.1491 & 0.37 & 154 & 266.2831 & -29.1653 & 0.39 \\ 20 & 266.3188 & -29.1436 & 0.34 & 65 & 266.3091 & -29.1432 & 0.33 & 110 & 266.3020 & -29.1699 & 0.25 & 155 & 266.3210 & -29.1556 & 0.18 \\ 21 & 266.3223 & -29.1720 & 0.24 & 66 & 266.3152 & -29.1674 & 0.25 & 111 & 266.3074 & -29.1513 & 0.30 & 156 & 266.3022 & -29.1437 & 0.38 \\ 22 & 266.2829 & -29.1776 & 0.33 & 67 & 266.3049 & -29.1607 & 0.20 & 112 & 266.2988 & -29.1479 & 0.27 & 157 & 266.2828 & -29.1594 & 0.40 \\ 23 & 266.3239 & -29.1513 & 0.29 & 68 & 266.3147 & -29.1582 & 0.24 & 113 & 266.2876 & -29.1549 & 0.33 & 158 & 266.2995 & -29.1514 & 0.25 \\ 24 & 266.3206 & -29.1682 & 0.22 & 69 & 266.3218 & -29.1522 & 0.28 & 114 & 266.2846 & -29.1485 & 0.42 & 159 & 266.3224 & -29.1787 & 0.24 \\ 25 & 266.3170 & -29.1524 & 0.25 & 70 & 266.3124 & -29.1790 & 0.21 & 115 & 266.3180 & -29.1782 & 0.26 & 160 & 266.3192 & -29.1482 & 0.31 \\ 26 & 266.3125 & -29.1510 & 0.29 & 71 & 266.3013 & -29.1668 & 0.27 & 116 & 266.3299 & -29.1723 & 0.27 & 161 & 266.2973 & -29.1782 & 0.44 \\ 27 & 266.3134 & -29.1536 & 0.22 & 72 & 266.3201 & -29.1461 & 0.28 & 117 & 266.2953 & -29.1683 & 0.29 & 162 & 266.2874 & -29.1602 & 0.40 \\ 28 & 266.3086 & -29.1775 & 0.25 & 73 & 266.3181 & -29.1712 & 0.26 & 118 & 266.2922 & -29.1608 & 0.31 & 163 & 266.2993 & -29.1373 & 0.47 \\ 29 & 266.2853 & -29.1736 & 0.33 & 74 & 266.3234 & -29.1597 & 0.26 & 119 & 266.3201 & -29.1513 & 0.34 & 164 & 266.2963 & -29.1803 & 0.34 \\ 30 & 266.3129 & -29.1626 & 0.20 & 75 & 266.3079 & -29.1555 & 0.24 & 120 & 266.3297 & -29.1790 & 0.37 & 165 & 266.2990 & -29.1569 & 0.26 \\ 31 & 266.3158 & -29.1466 & 0.30 & 76 & 266.3171 & -29.1639 & 0.31 & 121 & 266.2967 & -29.1581 & 0.36 & 166 & 266.2988 & 
-29.1639 & 0.22 \\ 32 & 266.3286 & -29.1543 & 0.24 & 77 & 266.2890 & -29.1669 & 0.34 & 122 & 266.2921 & -29.1736 & 0.34 & 167 & 266.3283 & -29.1702 & 0.28 \\ 33 & 266.2943 & -29.1658 & 0.21 & 78 & 266.3247 & -29.1672 & 0.28 & 123 & 266.2965 & -29.1784 & 0.29 & 168 & 266.2843 & -29.1427 & 0.47 \\ 34 & 266.3211 & -29.1594 & 0.26 & 79 & 266.3214 & -29.1570 & 0.25 & 124 & 266.3089 & -29.1504 & 0.29 & 169 & 266.2896 & -29.1451 & 0.56 \\ 35 & 266.3309 & -29.1767 & 0.32 & 80 & 266.2880 & -29.1508 & 0.40 & 125 & 266.3212 & -29.1444 & 0.28 & 170 & 266.3102 & -29.1664 & 0.32 \\ 36 & 266.3089 & -29.1791 & 0.24 & 81 & 266.3080 & -29.1546 & 0.23 & 126 & 266.2963 & -29.1390 & 0.44 & 171 & 266.3095 & -29.1575 & 0.28 \\ 37 & 266.2922 & -29.1408 & 0.50 & 82 & 266.2913 & -29.1525 & 0.37 & 127 & 266.2937 & -29.1622 & 0.36 & 172 & 266.2973 & -29.1737 & 0.24 \\ 38 & 266.3149 & -29.1574 & 0.19 & 83 & 266.3162 & -29.1496 & 0.30 & 128 & 266.2928 & -29.1781 & 0.30 & 173 & 266.3036 & -29.1701 & 0.30 \\ 39 & 266.3289 & -29.1461 & 0.30 & 84 & 266.3205 & -29.1546 & 0.28 & 129 & 266.3272 & -29.1751 & 0.24 & 174 & 266.2953 & -29.1531 & 0.33 \\ 40 & 266.3229 & -29.1518 & 0.26 & 85 & 266.2908 & -29.1378 & 0.54 & 130 & 266.2871 & -29.1779 & 0.27 & 175 & 266.2845 & -29.1509 & 0.40 \\ 41 & 266.3135 & -29.1565 & 0.22 & 86 & 266.3138 & -29.1428 & 0.37 & 131 & 266.3267 & -29.1570 & 0.33 & 176 & 266.2906 & -29.1760 & 0.36 \\ 42 & 266.2846 & -29.1626 & 0.38 & 87 & 266.3177 & -29.1619 & 0.26 & 132 & 266.2986 & -29.1597 & 0.32 & 177 & 266.2994 & -29.1678 & 0.24 \\ 43 & 266.3255 & -29.1609 & 0.24 & 88 & 266.2903 & -29.1673 & 0.35 & 133 & 266.2988 & -29.1469 & 0.27 & 178 & 266.3222 & -29.1605 & 0.24 \\ 44 & 266.3165 & -29.1542 & 0.29 & 89 & 266.3082 & -29.1451 & 0.28 & 134 & 266.2957 & -29.1596 & 0.32 & 179 & 266.3021 & -29.1534 & 0.26 \\ 45 & 266.2933 & -29.1503 & 0.37 & 90 & 266.3228 & -29.1607 & 0.26 & 135 & 266.3003 & -29.1667 & 0.20 & 180 & 266.3208 & -29.1728 & 0.37 \\ \hline \end{tabular} \end{center} 
\end{table*} \begin{table*} \begin{center} \footnotesize \caption{Mean fluxes ($\times$\,10$^{-18}$ W\,cm$^{-2}$ $\mu$m$^{-1}$) and 3.4 $\mu$m optical depths ($\tau_{3.4}$) for each of the quartiles in Field B.} \label{tab:6} \centering \begin{tabular}{| c | c | c | c | c | } \hline & Flux Group 1 & Flux Group 2 & Flux Group 3 & Flux Group 4 \\ \hline Flux & 11.2 & 3.5 & 2.2 & 1.2 \\ \hline $\tau_{3.4\,\mu m}$ & 0.35$\pm$0.08 & 0.36 $\pm$0.07 & 0.39 $\pm$0.07 & 0.40$\pm$0.10 \\ \hline \end{tabular} \end{center} \end{table*} We then considered the spatial distribution of the 3.4\,$\mu$m optical depth for the 45 sources in each flux (at 3.6\,$\mu$m) quartile range within Field B, with maps shown in Figure~\ref{fig3}. In this way, the effect of any possible individual erroneous measurement was also examined. While there is clearly some scatter in the distribution as there are areas within the field with few sources, the same pattern is apparent in each of the quartile ranges. There is a gradient across the field from SE to NW, with the 3.4\,$\mu$m optical depth rising approximately 50\% from one side to the other. This is independent of the brightness of individual sources and thus not likely to arise from the larger uncertainties in 3.4\,$\mu$m optical depth derived from the fainter sources. The faintest flux group does indeed show more variation in the pattern, most likely due to lower S/N in this group. We conclude that we are able to reliably measure the 3.4\,$\mu$m optical depth using this technique of imaging through narrow band filters in Field B, to a sensitivity level of $\sim$10$^{-18}$ W\,cm$^{-2}$ $\mu$m$^{-1}$ at 3.6$\,\mu$m ($\sim$10 mag). Given the increased scatter in the lowest flux quartile we remove the faintest 20 sources from the list to take the brightest 160 sources and plot their 3.4\,$\mu$m optical depths in Figure \ref{fig4}. The location of the background sources are shown by black dots. 
In the left panel, the sizes of the dots are proportional to the 3.4\,$\mu$m optical depths along the lines of sight; in the right panel, they are proportional to the fluxes (at 3.6\,$\mu$m) of the sources. The optical depth, in general, rises from $\sim 0.2$ to $\sim 0.6$ moving from SE to NW across this field near the centre of the Galaxy. There could readily be local source-to-source variations in 3.4\,$\mu$m optical depth superimposed on a broader trend. However, examination of Figure \ref{fig4} (right panel) does not suggest this to be significant, as little inter-source scatter is apparent. \subsection{Application to a field in the Galactic Plane: Field C (IRAS 18511+0146)} We used the resultant spectra of 15 sources to provide a low-resolution map of the 3.4$\,\mu$m optical depth across Field C in Figure \ref{fig5}. Colour bars and contours indicate the 3.4\,$\mu$m optical depth levels, which were found to range from 0.1 to 0.4 across the field. In the left panel, the locations of the background sources are shown, with dot sizes proportional to the fluxes (at 3.6\,$\mu$m) of the sources (the brightness of S7 is not shown, but its location is indicated by a \textit{star symbol} ($\ast$), as it is far brighter than the other sources and partly saturated). In the right panel, the dot sizes are proportional to the 3.4$\,\mu$m optical depths along the lines of sight. There is no correlation between fluxes and optical depths, indicating no obvious bias in the determination of the latter, though we note that care must be taken in interpretation given the low number of irregularly spaced points (15) used in the map's creation. For the Galactic Centre region, maps obtained with four different sets of stars were compared to each other and found to be consistent. However, we could not apply this test to Field C as there is an insufficient number of sources.
\section{Results} \label{Section5} Together with the previous results for Field A, our study of Fields B and C provides the broadest perspective to date on the amount and distribution of solid-phase hydrocarbons. We present the outcomes of this work below. \subsection{Aliphatic Hydrocarbon Column Densities} We calculated aliphatic hydrocarbon column densities for Fields B \& C based on the 3.4\,$\mu$m optical depth values in Table~\ref{tab:5} by applying Eq.~\ref{eq:1} and using the aliphatic hydrocarbon absorption coefficient of \textit{A} = $4.7\times10^{-18}$\,cm\,group$^{-1}$ (as determined in Paper 1). Since the 3.4\,$\mu$m optical depths were obtained by spectrophotometry, we used the 3.4\,$\mu$m narrow-band filter width of 62\,cm$^{-1}$, instead of the equivalent width of the 3.4\,$\mu$m absorption feature (108.5\,cm$^{-1}$, see Paper 1), to calculate the aliphatic hydrocarbon column densities. The resultant aliphatic hydrocarbon column densities are therefore lower limits compared with those that would be obtained spectroscopically. The resultant maps of aliphatic hydrocarbon column densities are shown in galactic coordinates for Fields B and C in Figure~\ref{fig6}, in comparison with the aliphatic hydrocarbon column density map of Field A (Paper 2). We found that there is a rise in hydrocarbon column density towards the Galactic plane in Field B, similar to that of Field A. \subsection{Relation with ISD Distribution} For the Galactic Centre fields, to investigate whether there are similarities between the distribution of the aliphatic hydrocarbons in ISD and that of the total dust, we examined maps showing the extinction and/or reddening in the visible and near-infrared regions.
However, a map with similar resolution and coverage showing extinction and/or reddening along the line of sight to the GC ($\sim$8 kpc) has not been found in the literature \citep{Sumi2004, Marshall2006, Schodel2007, Schodel2010, Nogueras-Lara2018, Green2019, Chen2019}, as the sightline is obscured by intervening opaque clouds at visible and near-infrared wavelengths and there is a lack of photometric data for suitable background sources. There are many luminous background sources in the near-infrared region; however, the majority of them are cool red giants. The GC is totally obscured in the ultraviolet and visible wavelength regions. Thus, the effects of high extinction and low temperature on IR photometry cannot be easily separated, and highly obscured blue, young massive stars can hardly be distinguished from less obscured red, old low-mass stars \citep{Geballe2010}. We tried to probe the dust distribution using maps based on the thermal emission of ISD in the far-infrared region \citep{Schlegel1998, Peek2010, Schlafly2011, PlanckCollaboration2014}, where the extinction can be sufficiently low to enable us to investigate the ISM along the line of sight to the GC. However, we were also unable to find a map with comparable resolution and coverage to analyse the interstellar dust distribution at longer wavelengths. Finally, we decided to analyse the reddening by using the Two Micron All Sky Survey Catalog \citep{Skrutskie2006} and Spitzer GLIMPSE Catalog \citep{Ramirez2007} data sets used in \citet{Geballe2010}, which were kindly made available by the authors for our use. We prepared colour excess maps that cover the GC fields at a resolution that enables a meaningful comparison.
We used photometric brightness measurements in the 2MASS J (1.25\,$\mu$m), H (1.65\,$\mu$m) and K (2.17\,$\mu$m) bands and the Spitzer IRAC Ch1 (3.6\,$\mu$m), Ch2 (4.5\,$\mu$m), Ch3 (5.8\,$\mu$m) and Ch4 (8\,$\mu$m) bands, and obtained 2MASS J$-$K, 2MASS/Spitzer K$-$L (Ch1) and Spitzer IRAC Ch1$-$IRAC Ch2 colour excesses of all bright L-band sources (m$_{L}$ \textless 8$^{m}$) in the GC region. We plotted the colour excess maps and compared them to check whether they show trends similar to those of the aliphatic hydrocarbon column density maps. We present the 2MASS J$-$K, 2MASS/Spitzer K$-$L (Ch1) and Spitzer L$-$M (Ch2) colour excess maps in Figure \ref{fig7} (note that the colour excess levels of the maps are different: highest for J$-$K and lowest for L$-$M). We also indicate the locations of the sources by black dots whose sizes are proportional to their fluxes. However, we could not determine any relation between the fluxes and colour excesses. We note that the colour excess maps presented here may be biased by the extinction and effective temperature degeneracy mentioned above. There are different methods to overcome this degeneracy \citep{Indebetouw2005, Marshall2006, Gonzalez2012, Hanson2014}; however, an extensive study of extinction through the GC sightlines is beyond the scope of this study. There is also a bias and uncertainty due to circumstellar dust around mass-losing asymptotic giant and ascending giant branch stars: they appear redder, and this reddening can be mistaken for ISD \citep{Marshall2006, Nogueras-Lara2018}. The matching trends between the colour excess maps presented in Figure \ref{fig7} indicate that they sufficiently reflect the distribution of interstellar matter. However, we could not detect any matching trends between the colour excess maps and the aliphatic hydrocarbon column density maps.
\subsection{Aliphatic Hydrocarbon Abundances} We converted our results into aliphatic hydrocarbon abundance ratios in order to compare with the carbon abundances (C/H) reported in the literature. Normalised aliphatic hydrocarbon abundances (ppm) were estimated based on the gas-to-extinction ratio $N(H) = 2.04 \times 10^{21}$\,cm$^{-2}$ mag$^{-1}$ \citep{Zhu2017}. The average extinction toward the central parsec in the GC is reported to be A$_{Ks}$$\sim$2.5 mag and A$_{V}$$\sim$30 mag \citep{Scoville2003, Nishiyama2008, Fritz2011, Schodel2010}. \cite{RiekeLebofsky1985} found an A$_{V}$/A$_{K}$ ratio of 9, but more recent studies indicate even higher values of $\sim$29 \citep{Gosling2006}. Extinction towards the GC also varies on scales of arcseconds \citep{Scoville2003, Schodel2007, Schodel2010}. However, the extinction maps in the literature have very different resolutions or coverage or both, which limits their use in obtaining the gas density distribution in each field. Therefore, we assumed the extinction to be invariant across each field in order to estimate normalised aliphatic hydrocarbon abundances. For Field A, we previously assumed A$_{V}$$\sim$30 mag (see Paper 1 and Paper 2). In this study, we followed the same approximation for Field B as it is in the GC region. Since there is a debate on the extinction towards Field C, to estimate the lowest abundance level we assumed A$_{V}$$\sim$20 mag, which is the highest value for the DISM reported by \cite{Godard2012} and \cite{Vig2017}. The resultant aliphatic hydrocarbon abundance levels (ppm), estimated based on an invariant A$_{V}$, are shown in galactic coordinates for Fields C and B in Figure~\ref{fig6} in comparison with Field A (for details see Paper 2).
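As a worked example of this conversion chain with the parameter values quoted above (numbers rounded), the mean Field B optical depth of $\tau_{3.4} \approx 0.36$ gives a column density of
\[
N_{\rm aliphatic} = \frac{\tau_{3.4}\,\Delta\tilde{\nu}}{A} \approx \frac{0.36 \times 62\,{\rm cm^{-1}}}{4.7\times10^{-18}\,{\rm cm\,group^{-1}}} \approx 4.7\times10^{18}\,{\rm cm^{-2}},
\]
and, with $N({\rm H}) = 2.04\times10^{21}\,{\rm cm^{-2}\,mag^{-1}} \times A_{V} \approx 6.1\times10^{22}$\,cm$^{-2}$ for A$_{V}$$\sim$30 mag, a normalised abundance of
\[
\frac{N_{\rm aliphatic}}{N({\rm H})} \approx \frac{4.7\times10^{18}}{6.1\times10^{22}} \approx 78\,{\rm ppm},
\]
consistent with the Field B averages reported in Table~\ref{tab:7} (the small difference in column density reflects rounding of the mean optical depth).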
\subsection{Summary} For completeness, we summarise and compare the results for Fields A, B \& C in Table \ref{tab:7}, which lists the average, minimum and maximum 3.4\,$\mu$m optical depth values, aliphatic hydrocarbon column densities and abundances for each field, together with an estimate of the fraction of the aliphatic carbon in the ISM that lies in the ISD\@. \begin{table*} \begin{center} \footnotesize \caption{Minimum, maximum and average 3.4\,$\mu$m optical depth ($\tau_{3.4\,\mu m}$) values, with the corresponding aliphatic hydrocarbon column densities ($\times$\,10$^{18}$ cm$^{-2}$) and abundances (ppm) for Fields A, B and C\@. For the optical depth values the standard deviations are given. We also note the number of sources measured in each field in the footnote to this Table.} \label{tab:7} \centering \begin{tabular}{| c | c | c | c | c | } \hline && Field A & Field B & Field C \\ \hline &Min. & 0.07 & 0.18 & 0.10 \\ Optical depth ($\tau_{3.4\,\mu m}$) & Max. & 0.43 & 0.62 & 0.44 \\ & Average &0.20 & 0.36 & 0.27 \\ & Sigma & 0.06 & 0.09 & 0.10 \\ \hline & Min. & 0.92 & 2.40 & 1.36 \\ Column density ($\times$\,10$^{18}$ cm$^{-2}$ ) & Max. & 5.67 & 7.16 & 5.83 \\ & Average & 2.64 & 4.78 & 3.56 \\ \hline & Min. & 15 & 39 & 33 \\ Abundance (ppm) & Max. & 93 & 117 & 143 \\ & Average & 43 & 78 & 87 \\ \hline Aliphatic C ($\%$)&& 12 & 22 & 24 \\ \hline \end{tabular} \end{center} \begin{footnotesize} Note that the numbers of sources measured in Fields A, B and C are 200, 180 and 15, respectively. \end{footnotesize} \end{table*} For Field A, the optical depth at 3.4\,$\mu$m ranges from 0.07 to 0.43, the column density of aliphatic hydrocarbon rises from $\sim$$9.2\times10^{17}$\,cm$^{-2}$ to $\sim$$5.7\times10^{18}$\,cm$^{-2}$, and the corresponding abundances with respect to hydrogen range from $\sim$15\,ppm to $\sim$93\,ppm.
The mean value of $\tau_{3.4\,\mu m}$ $\sim 0.2$ corresponds to a typical column density of aliphatic hydrocarbon of $2.6\times10^{18}$\,cm$^{-2}$ and a 43\,ppm aliphatic hydrocarbon abundance. For Field B, the optical depth at 3.4\,$\mu$m ranges from 0.18 to 0.54, the column density of aliphatic hydrocarbon rises from $\sim$$2.4\times10^{18}$\,cm$^{-2}$ to $\sim$$7.2\times10^{18}$\,cm$^{-2}$, and the corresponding abundances with respect to hydrogen range from $\sim$39\,ppm to $\sim$117\,ppm. The mean value of $\tau_{3.4\,\mu m}$ $\sim$ 0.36 corresponds to a typical column density of aliphatic hydrocarbon of $4.8\times10^{18}$\,cm$^{-2}$ and a 78\,ppm aliphatic hydrocarbon abundance. For Field C, the optical depth at 3.4\,$\mu$m ranges from 0.10 to 0.44, the column density of aliphatic hydrocarbon rises from $\sim$$1.4\times10^{18}$\,cm$^{-2}$ to $\sim$$5.8\times10^{18}$\,cm$^{-2}$, and the corresponding abundance with respect to hydrogen ranges from $\sim$33\,ppm to $\sim$143\,ppm. The mean value of $\tau_{3.4\,\mu m}$ $\sim$ 0.27 corresponds to a typical column density of aliphatic hydrocarbon of $3.6\times10^{18}$\,cm$^{-2}$ and an 87\,ppm aliphatic hydrocarbon abundance. The final line of Table \ref{tab:7} lists the percentage of the total carbon abundance in aliphatic form under the assumption that the total carbon abundance in the ISM is 358\,ppm \citep{Sofia2001}. We obtain average aliphatic hydrocarbon abundances of 43\,ppm, 78\,ppm and 87\,ppm for Fields A, B and C, respectively. These solid-phase amounts, which are in addition to the gas-phase abundances, correspond to 12\%, 22\% and 24\% of the cosmic carbon abundance. We conclude that at least 10--20\% of the carbon along these sightlines in the Galactic plane lies in aliphatic form. \section{Discussion and Conclusions} \label{Section6} This work has clarified that, for three fields in the Galactic disk, an important part of the cosmic carbon is in aliphatic hydrocarbon form in the ISD.
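The aliphatic carbon fractions quoted in Table \ref{tab:7} follow directly by normalising these abundances to the assumed total carbon abundance; for Field B, for example,
\[
\frac{78\,{\rm ppm}}{358\,{\rm ppm}} \approx 0.22,
\]
i.e. $\sim$22\% of the cosmic carbon along this sightline is in aliphatic form.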
We also showed that the spectrophotometric method is applicable to other fields in the Galactic plane. We showed in Paper 2 that the optical depth at 3.4\,$\mu$m for a field in the Galactic Centre (Field A) can be reliably measured using flux measurements made through a series of narrow band filters that sample the spectrum. We have applied this method to two new fields: $\sim 0.2^{\circ}$ to the East (Field B) and $\sim 35^{\circ}$ to the West (Field C), lying in the Galactic plane. In all cases optical depths could be measured and were found to lie in the range $\sim 0.1 - 0.6$. We found a larger optical depth along the line of sight of Field B compared to Field A, based on the calibration fluxes we obtained from the spectroscopic measurements of a source (GCIRS 7) in Field A. However, caution is needed since the sources in Field B lack spectroscopic information. We also found a larger optical depth than in Field A along the line of sight of Field C (the IRAS 18511+0146 cluster). This result was checked by cross-calibration tests using the spectroscopic fluxes of Field C sources from the literature to calibrate the Field A data, which yielded consistent results. Complete consistency cannot be expected, since the spectroscopic studies involve steps that we were not able to implement in this study, such as polynomial continuum fitting (which requires higher resolution spectra) or normalisation of the spectra with the star's blackbody curve (which requires measuring the temperatures of all sources in the field of view). Since we applied linear continua to low-resolution spectra, which are smoother than the real spectra, our results may yield minimum values for the 3.4$\,\mu$m absorption optical depths.
The 3.4$\,\mu$m optical depths are measured simultaneously for each field, so the relative variability of column densities within each field is more reliable than in measurements reported by different spectroscopic studies in the literature (for more details see Paper 2). The other advantage of the method is that it allows us to trace abundances across a large field of view (FoV), compared to single-point or long-slit spectroscopy, which cannot provide spatial information across the FoV. Although \textit{integral field spectroscopy} (IFS) has recently become an important alternative to long-slit spectroscopy, narrow-band imaging can still be preferred to obtain spatially resolved spectral data for larger FoVs. Using the spectrophotometric method, we measured the optical depth across fields that cover 163 arcsec $\times$ 163 arcsec for Fields A and B, and 137 arcsec $\times$ 137 arcsec for Field C. While the range between the minimum and maximum values measured for the 3.4\,$\mu$m optical depth may appear relatively large (for Field A from 0.07 to 0.43, for Field B from 0.18 to 0.62 and for Field C from 0.10 to 0.44), the majority of sources are within 0.1 of the mean 3.4\,$\mu$m optical depth measured in their field. For both Fields A and B there is a mild gradient in the 3.4$\,\mu$m optical depth running in the same direction across the 2 arcmin field, rising by about 50\% towards $b=0^{\circ}$. The pattern is similar between these fields. There are too few sources measured in Field C, however, to draw any conclusions as to whether a gradient exists across this field. As we showed by dividing the data into quartiles for the Galactic Centre fields, where the number of sources is sufficiently large, the general optical depth trends in each quartile are found to be consistent.
Although interpolation of the data caused some structural variations in the quartile maps, owing to the locations of the sources used, the 3.4\,$\mu$m optical depths are found to be reasonably uniform across the fields, without large source-to-source variations. However, source-to-source variations could arise in cases of large uncertainty in the spectrophotometric measurements of individual sources, due to low S/N and uncertainties in continuum determination \citep{Chiar2002, Moultaka2004, Godard2012}, or where intrinsic spectral properties are present along some lines of sight. There might also be a bias in the measurements, since some of the GC sources are known to have thick circumstellar dust shells \citep{Roche1985, Tanner2005, Viehmann2005, Pott2008, Moultaka2009} and many sources are problematic to classify \citep{Buchholz2009, Hanson2014, Dong2017, Nogueras-Lara2018}. However, the uniformity in our measurements suggests that the aliphatic hydrocarbon absorption is dominated by material distributed along the sightline in the DISM rather than local to each source, and so our method measures large-scale properties of the foreground molecular medium. We sought to compare the resultant aliphatic hydrocarbon optical depth maps obtained in this study with maps in the literature. \cite{Moultaka2005} and \cite{Moultaka2015} provided maps of the optical depth of the 3.4\,$\mu$m absorption feature for the GC sightline. However, the coverage of those 3.4\,$\mu$m absorption maps is considerably smaller than that of the maps obtained in this study, so a meaningful comparison is not possible. They also argued that there is a residual 3.4\,$\mu$m absorption produced by the local medium of the central parsec of the GC. Along the line of sight to the GC, different components of the ISM (diffuse and dense ISM, circumnuclear ring, mini spiral, etc.)
are superimposed \citep{Muzic2007, Moultaka2009, Ferriere2012, Sale2014, Yusef-Zadeh2017, Moultaka2019, Murchikova2019, Geballe2021}. However, the 3.4\,$\mu$m hydrocarbon absorption is assumed to arise predominantly in the DISM \citep{Chiar2002}. Hydrocarbons can, of course, be found in all ISM components, but their observable properties change through dust evolution (i.e., ISD could be mixed with or covered by ices of the volatiles in the dense ISM or in molecular clouds) \citep{Jones2012a, Jones2012b, Jones2012c, Chiar2013, Jones2019, Potapov2021}. There are also masking effects by other features arising from the different components of the ISM, and the 3.4$\,\mu$m absorption feature can be blended with other features, such as the long-wavelength wing of the broad 3.1$\,\mu$m H$_{2}$O ice absorption band ([I02], [C02], [M04] and [G12]). We also tried to explore whether there is a correlation between the aliphatic hydrocarbon column density and the ISD by comparing our maps with extinction and reddening maps, although a relation between the two cannot necessarily be expected. Extinction occurs due to the combined effects of absorption and scattering of light and can be used to probe the total gas, ice and dust content together. Although extinction curves are shaped by the optical properties of ISD \citep{Draine1984, Cardelli1989, Fitzpatrick1999}, they are not sufficient to reveal the chemical composition of the ISD. Since our measurement is focused on the 3.4\,$\mu$m feature, the resultant maps reflect the distribution of a single chemical group: hydrocarbons in the ISD. Importantly, we showed that this distribution is independent of the ISD distribution. In addition to the spectrophotometric measurements of the 3.4\,$\mu$m feature, spectrophotometric measurement of the 9.8\,$\mu$m feature has the potential to reveal the siliceous dust distribution, but it has not been implemented yet.
Spectrophotometric measurements of the 3.4\,$\mu$m aliphatic hydrocarbon and 9.8\,$\mu$m silicate absorption features have potential for future applications, e.g. with the James Webb Space Telescope (JWST), as discussed in \cite{Gordon2019}. A comparison of the carbonaceous and siliceous dust maps can help reveal the abundance distribution of the major dust-forming elements (i.e. C, O, Si) in the ISM \citep{Kim1996, Cardelli1996, Henning2004, WangLi2015, Zuo2021a, Zuo2021b, HensleyDrain2021, Gordon2021, DraineHensley2021}. Gas-phase abundances are often assumed to be invariant with respect to the local interstellar conditions, although recent studies imply that there are variations \citep{DeCiaNature2021, Zuo2021a, Zuo2021b}. \cite{DeCiaNature2021} argue for the presence of line-of-sight inhomogeneities in elemental abundances: measured column densities could be affected by the ISM being composed of individual clouds with very different depletion strengths and/or abundances. In this study, the aliphatic hydrocarbon optical depth is found to be slightly higher for Field B than for Field A. While this variation occurs within $0.2^{\circ}$ of the centre of the Galaxy, the optical depth in Field C in the Galactic plane is similar to that of the Galactic Centre. While three fields is, of course, a limited number from which to draw conclusions, this is consistent with reasonably constant levels of absorption at 3.4\,$\mu$m across the Galactic plane. The resultant aliphatic hydrocarbon column densities were obtained independently of the extinction values. For the three fields, the average aliphatic hydrocarbon column density was found to be several $\rm \times 10^{18}\,cm^{-2}$. We accordingly obtained aliphatic hydrocarbon abundances of several $\rm tens \times 10^{-6}$ (ppm) for these sightlines.
These statements also assume a relatively constant extinction and total carbon abundance, so caution is needed in interpreting them. In addition, there might be larger amounts of aliphatic hydrocarbons in the ISD than measured, as we used a 3.4\,$\mu$m narrow-band filter width of 62\,cm$^{-1}$ instead of the $3.4\,\mu$m aliphatic hydrocarbon absorption feature equivalent width of 108.5\,cm$^{-1}$ (Paper 1). We therefore conclude that the overall fractional abundance of carbon in aliphatic form in the ISM is at least 10--20\%. We also note that cosmic carbon abundances are still under debate, as different studies are not yet fully consistent \citep{DraineHensley2021, Zuo2021a, Zuo2021b}. Using spectroscopic measurements through different Galactic sightlines, \cite{Parvathi2012} found carbon abundance levels (i.e. $\sim$$464$\,ppm towards HD206773) higher than the cosmic carbon abundance estimations. However, \cite{DeCiaNature2021} implied that the interstellar abundances of refractory elements in the local ISM may be $\sim$55\% of solar, which puts new limits on elemental abundances in ISD. Importantly, the average aliphatic hydrocarbon abundances found in this study do not exceed the solid-phase carbon abundance levels obtained by recent studies (i.e. \citealt{Parvathi2012, Jones2013, Mishra2017, Zuo2021b}), and imply that some of the solid carbon is available in aromatic, olefinic and other forms in ISD to produce all observable features \citep{Jones2013}, in particular the 2175\,\AA\, bump \citep{Stecher1965}, which is the strongest extinction feature produced by electronic transitions in carbonaceous material \citep{Mathis1977, Kwok2009, Li2019, Dubosq2020, Xing2020}.
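As a rough illustration of the scaling behind these estimates, the chain from optical depth to aliphatic carbon fraction can be sketched as below. The band strength \texttt{A\_BAND} and hydrogen column \texttt{N\_H} are illustrative assumed values, not the paper's exact calibration; the filter-width correction uses the 108.5/62\,cm$^{-1}$ ratio quoted above.

```python
# Illustrative sketch (assumed values, not the paper's exact calibration):
# optical depth -> column density -> ppm abundance -> carbon fraction.
A_BAND = 4.0e-18        # assumed integrated band strength [cm per CH group]
N_H = 1.0e23            # assumed hydrogen column density [cm^-2]
TOTAL_C_PPM = 358.0     # cosmic carbon abundance quoted in the text [ppm]

def column_density(tau, delta_nu_cm, A=A_BAND):
    """N = tau * delta_nu / A for an unresolved absorption feature."""
    return tau * delta_nu_cm / A

N_aliphatic = column_density(0.2, 62.0)        # several 1e18 cm^-2
# Correct for the narrow-band filter (62 cm^-1) relative to the full
# equivalent width of the feature (108.5 cm^-1):
N_corrected = N_aliphatic * 108.5 / 62.0
abundance_ppm = N_corrected / N_H * 1e6        # tens of ppm
fraction = abundance_ppm / TOTAL_C_PPM         # ~10-20% of cosmic carbon
```

With these assumed inputs, the sketch reproduces the orders of magnitude quoted in the text: a column of a few $\times 10^{18}$\,cm$^{-2}$, tens of ppm, and an aliphatic carbon fraction of order 10--20\%.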
With the support of laboratory studies, which allow us to estimate aliphatic hydrocarbon / total carbon ratios in the ISD, the $3.4\,\mu$m aliphatic hydrocarbon maps can play an important role in solving the carbon crisis and in understanding the interstellar carbonaceous material cycle and the chemical evolution of the Galaxy. The interstellar matter cycle provides the raw material for the formation of stars and planets \citep{Oberg2021}. Stars and planets are formed deep inside dense clouds, where the siliceous and carbonaceous ingredients of dust are covered by ices \citep{Jones2016a, Jones2016b, Oberg2021, Potapov2021}. The gravitational collapse of an interstellar cloud leads to the formation of a protoplanetary disk \citep{Andrews2020, Oberg2021}, and dust and ice play a role in the formation of planetesimals (planets, asteroids, and comets) as they collide and stick \citep{Weidenschilling1980}. Assuming the carbon-to-silicon abundance ratios of the solar photosphere and the ISM are similar (C/Si $\sim$10) \citep{Anderson2017, Asplund2021}, we can estimate that an important amount of aliphatic hydrocarbon would be available during the planetesimal formation stage. Therefore, besides the role of siliceous / mineral dust and ice (i.e. \citealt{SalterFraser2009, Fraser2010, HillFraser2015, Demirci2019, Musiolik2021}), the possible role of organic material and aliphatic hydrocarbons in dust aggregation and pebble formation \citep{DominikTielens1997, Kazuaki2019, Bischoff2020, Anders2021}, which leads to planetesimal formation, needs to be further investigated using primordial material in planetesimal samples and analogue materials \citep{schmidtgunay2019} in laboratory experiments. Hydrocarbon groups in ISD are useful to trace the reservoir of organic material, and thus of prebiotic molecules, in the ISM.
Some of the prebiotic molecules have likely been preserved in carbonaceous dust in the planetesimal formation regions \citep{Ehrenfreund2010, Kwok2016, vanDishoeck2020, Ehrenfreund2021, Oberg2021}. The chemical compositions of planet-forming disks determine their hospitality to life \citep{Bergin2015, Oberg2021}. With the advent of new techniques and telescopes, we are able to observe planet-forming disks and galaxies with great resolution. Future applications of our method will enhance our understanding of the distribution of carbonaceous dust, organic and prebiotic material in space. \section*{Acknowledgments} We would like to thank Dr. Tom Geballe and Dr. Takeshi Oka for their support and for supplying data. BG would like to thank The Scientific and Technological Research Council of Turkey (T\"{U}B\.{I}TAK) for their support of this work through the 2214/A International Research Fellowship Programme. TWS is supported by the Australian Research Council (CE170100026 and DP190103151). The University of New South Wales (UNSW) seeded this work through the award of a Faculty interdisciplinary grant. We also wish to thank the staff at the UKIRT telescope for their help in gathering the data used for this paper through their service programme, in particular Watson Varricatt and Tom Kerr, who undertook the observations. This research has made use of the NASA/IPAC Infrared Science Archive, which is funded by the National Aeronautics and Space Administration and operated by the California Institute of Technology. \section*{Data Availability} The data used in this study will be made available by the corresponding authors upon request. \bibliographystyle{mn2e}{} \bibliography{List}
Title: OGLE-2019-BLG-0362Lb: A super-Jovian-mass planet around a low-mass star
Abstract: We present the analysis of a planetary microlensing event OGLE-2019-BLG-0362 with a short-duration anomaly $(\sim 0.4\, \rm days)$ near the peak of the light curve, which is caused by the resonant caustic. The event has a severe degeneracy with $\Delta \chi^2 = 0.9$ between the close and the wide binary lens models both with planet-host mass ratio $q \simeq 0.007$. We measure the angular Einstein radius but not the microlens parallax, and thus we perform a Bayesian analysis to estimate the physical parameters of the lens. We find that the OGLE-2019-BLG-0362L system is a super-Jovian-mass planet $M_{\rm p}=3.26^{+0.83}_{-0.58}\, M_{\rm J}$ orbiting an M dwarf $M_{\rm h}=0.42^{+0.34}_{-0.23}\, M_\odot$ at a distance $D_{\rm L} =5.83^{+1.04}_{-1.55}\, \rm kpc$. The projected star-planet separation is $a_{\perp} = 2.18^{+0.58}_{-0.72}\, \rm AU$, which indicates that the planet lies beyond the snow line of the host star.
https://export.arxiv.org/pdf/2208.04230
\jkashead \section{Introduction} As of 8th April 2022, 130\footnote{https://exoplanetarchive.ipac.caltech.edu/index.html} confirmed exoplanets have been discovered by microlensing. The host stars of microlensing planets span a wide mass range from brown dwarfs to Sun-like stars, and the majority of them are low-mass M dwarfs, which comprise most of the stars in our Galaxy. By contrast, the host stars of exoplanets detected by the radial velocity and transit methods, which account for over 95\% of the 5009$^1$ exoplanets discovered so far, are mostly Sun-like stars. Using these two methods, it is difficult to detect planets around low-mass M dwarfs because the stars are faint: the radial velocity and transit methods depend on the brightness of the stars, whereas microlensing depends on their mass, not their brightness. In addition, the great majority of the low-mass host stars detected by microlensing have giant planets beyond their snow lines, where ices condense in the protoplanetary disk \citep{kennedy2008}, while the Sun-like host stars detected by the two other methods mostly have planets inside their snow lines, regardless of planet mass (see Figure 10 of \citealt{zhu2021}). Hence, microlensing planets play a very important role in constraining not only planet formation theories, such as core accretion and gravitational instability, which were constructed based on observed exoplanets, but also the distribution of exoplanets around all types of host stars. A planetary signal induced by microlensing is unpredictable, and its duration decreases as the planet mass decreases (e.g., a few hours for an Earth-mass planet and about a day for a Jupiter-mass planet). Thus, for the detection of microlensing planetary signals, it is highly advantageous to have 24 hr continuous high-cadence observations.
In order to conduct the 24 hr observations, the Korea Microlensing Telescope Network (KMTNet; \citealt{kim2016}), with a $4.0\ \rm deg^{2}$ field of view (FOV), was established at three southern sites, located at the Cerro Tololo Interamerican Observatory in Chile (KMTC), the South African Astronomical Observatory (KMTS), and the Siding Spring Observatory in Australia (KMTA), and its experiment was officially initiated in 2016. KMTNet covers 27 Galactic bulge fields at cadences ranging from $\Gamma = 0.2\, \rm hr^{-1}$ to $\Gamma = 4\, \rm hr^{-1}$, with about 12 deg$^2$ at $\Gamma = 4\, \rm hr^{-1}$, 29 deg$^2$ at $\Gamma = 1\, \rm hr^{-1}$, 44 deg$^2$ at $\Gamma = 0.4\, \rm hr^{-1}$, 12 deg$^2$ at $\Gamma = 0.2\, \rm hr^{-1}$, and a total coverage of 97 deg$^2$ \citep{shin2016}. Thanks to the 24 hr high-cadence observations of KMTNet, many planetary events that were not detected or noticed by the Optical Gravitational Lensing Experiment (OGLE; \citealt{udalski2003}) or the Microlensing Observations in Astrophysics (MOA; \citealt{sumi2016}) have been detected by KMTNet (e.g., \citealt{shin2016}; \citealt{hwang2018a}; \citealt{kim2020}; \citealt{kim2021a}; \citealt{kim2021b}; \citealt{zang2021a}; \citealt{hwang2022}). Hence, since the beginning of test observations in 2015, about 50\% of all confirmed microlensing exoplanets have been detected by KMTNet. Considering that it took about 25 years to detect the other 50\% before KMTNet, this underlines the remarkable productivity of KMTNet.
In addition, very low planet-star mass ratio ($q$) events with $q < 10^{-4}$, which were rarely detected before KMTNet, are now often detected by KMTNet (\citealt{shvartzvald2017}; \citealt{hwang2018b}; \citealt{udalski2018}; \citealt{han2018}; \citealt{gould2020}; \citealt{herrera-martin2020}; \citealt{ryu2020}; \citealt{han2021}; \citealt{kondo2021}; \citealt{zang2021a}; \citealt{zang2021b}; \citealt{yee2021}; \citealt{han2022a}; \citealt{han2022b}; \citealt{hwang2022}; \citealt{wang2022}). Such events will help to better constrain the planet frequency as a function of the planet-star mass ratio, as was done by \citet{shvartzvald2016} and \citet{suzuki2016}. Moreover, since the start of the KMTNet microlensing observations, planetary events caused by lens systems composed of giant planets beyond the snow lines of low-mass host stars have been routinely detected. The event OGLE-2019-BLG-0362 is one such event. In this paper, we report the discovery of a giant planet orbiting an M dwarf from the analysis of the microlensing event OGLE-2019-BLG-0362. The mass of and distance to a lens system can be directly measured from the measurement of two parameters: the microlens parallax $\pie$ and the angular Einstein ring radius $\thetae$. This is because they are related by \begin{equation} M_{\rm L} = {\thetae\over{\kappa\pie}}; \quad \dl = {{\rm AU}\over{\pie\thetae + \pis}}, \end{equation} where $\kappa \equiv 4G/(c^2\rm AU) \simeq 8.14\, {\rm mas}/M_\odot$, $\pis={\rm AU}/\ds$ is the parallax of the source star, and $\ds$ is the distance to the source \citep{gould2000}. For this event, only $\thetae$ was measured, so the physical parameters of the lens system were estimated from a Bayesian analysis \citep{jung2018}. \citet{kim2021b} showed that the lens masses inferred from Bayesian analyses are determined almost entirely by the measured $\thetae$ and are relatively insensitive to the relative lens-source proper motion $\murel$ and to the specific Galactic model prior.
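For concreteness, the mass-distance relation above can be evaluated numerically. Since $\pie$ was not measured for this event, the parallax value below is purely hypothetical, and the source distance of 8\,kpc is an assumed illustrative value, chosen only to show how the formulae are applied.

```python
KAPPA = 8.14  # mas / M_sun, kappa = 4G / (c^2 AU)

def lens_mass_msun(theta_E_mas, pi_E):
    """M_L = theta_E / (kappa * pi_E)."""
    return theta_E_mas / (KAPPA * pi_E)

def lens_distance_kpc(theta_E_mas, pi_E, D_S_kpc):
    """D_L = AU / (pi_E * theta_E + pi_S), using pi [mas] = 1 / D [kpc]."""
    pi_S = 1.0 / D_S_kpc
    return 1.0 / (pi_E * theta_E_mas + pi_S)

# theta_E = 0.416 mas is this event's close-model value;
# pi_E = 0.1 is hypothetical, D_S = 8 kpc is assumed.
M_L = lens_mass_msun(0.416, 0.1)          # ~0.51 M_sun
D_L = lens_distance_kpc(0.416, 0.1, 8.0)  # ~6.0 kpc
```

The sketch makes explicit why a missing $\pie$ blocks a direct mass measurement: without it, only the product entering $\thetae$ is known and a Bayesian prior must supply the rest.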
\citet{shan2019} also showed that the true distribution of masses for events with measured masses and \textit{Spitzer} parallaxes is consistent with the masses inferred from the Bayesian analyses derived for those events. Therefore, it is believed that the physical lens parameters estimated from the Bayesian analysis are reliable, at least in a statistical sense. \section{Observations} The microlensing event OGLE-2019-BLG-0362 occurred at equatorial coordinates $({\rm RA, Dec})=(17:33:51.66,-24:48:37.3)$, corresponding to the Galactic coordinates $(l, b) = (2.11^{\circ}, 4.41^{\circ})$. The event was first detected by the Early Warning System of OGLE. OGLE uses a 1.3 m telescope with a $1.4\ \rm deg^{2}$ FOV at Las Campanas Observatory in Chile. The event OGLE-2019-BLG-0362 is located in the OGLE IV field BLG715, which is observed with a low cadence of $\Gamma \sim 1\, \rm night^{-1}$. The KMTNet also found this event, and it was designated as KMT-2019-BLG-0075. KMTNet uses three identical 1.6 m telescopes with $4.0\ \rm deg^{2}$ FOVs, as mentioned in Section~1. The event is located in the KMT field BLG16, with a cadence of $\Gamma = 0.4\, \rm hr^{-1}$. Most KMT data were taken in the $I$ band, and some data were taken in the $V$ band in order to determine the color of the source star. The KMT data for the modeling were reduced by the pySIS pipeline based on the difference imaging method (\citealt{alard1998}; \citealt{albrow2009}). For the construction of the color-magnitude diagram (CMD) of stars around the source and the characterization of the source color, the KMTC $I$- and $V$-band images were used, and they were reduced by the pyDIA pipeline developed by \citet{albrow2007}. The OGLE data were also reduced by difference imaging analysis (\citealt{alard1998}; \citealt{wozniak2000}). We renormalized the errors of the data sets obtained from each photometry pipeline using the method of \citet{yee2012}.
\section{Light Curve Analysis} The event OGLE-2019-BLG-0362/KMT-2019-BLG-0075 has a short-duration anomaly ($\sim 0.4\, \rm days$) near the peak. The anomaly was covered by only three KMT data points (two KMTC and one KMTS), while it was not covered by OGLE. In other words, there are no binary lensing features that can be unambiguously identified, such as a clearly double-peaked caustic crossing. In such cases, the short-duration anomaly can be produced either by a typical binary lens with a single source (2L1S) or by a single lens with two sources (1L2S). \subsection{Binary lens model (2L1S)} We first conduct the standard binary lens modeling. A standard binary lens event is described by seven parameters. Three of the seven parameters are the single-lens parameters $(\tzero, \uo, \te)$, where $\tzero$ is the time of the closest source approach to the lens, $\uo$ is the impact parameter (i.e., the lens-source separation at $t=\tzero$ in units of $\thetae$), and $\te$ is the Einstein radius crossing time of the event. Another three parameters are the binary lensing parameters $(s, q, \alpha)$, in which $s$ is the projected separation of the binary lens components in units of $\thetae$, $q$ is the mass ratio of the two lens components, and $\alpha$ is the angle between the source trajectory and the binary axis. The last one is the normalized source radius $\rho = \thetas/\thetae$, where $\thetas$ is the angular radius of the source star. Because the short-duration anomaly is likely to be caused by the caustic, we incorporate the limb-darkening variation of the finite source star in the modeling. The brightness variation of the source due to limb darkening is computed as $S \propto 1 - \Gamma\left(1-{3\over 2}\cos\phi\right)$, where $\Gamma$ is the limb-darkening coefficient and $\phi$ is the angle between the normal to the surface of the source star and the line of sight \citep{an2002}.
According to the source type that will be discussed in Section~4, we adopt the limb-darkening coefficient $\Gamma_{I} = 0.45$ from \citet{claret2000}. Besides these parameters, there are two flux parameters for each observatory: the source flux $f_{s,i}$ and the blended flux $f_{b,i}$ of the $i$th observatory. The two flux parameters $(f_{s,i}, f_{b,i})$ enter the model as $F_{i}(t) = f_{s,i}A_{i}(t) + f_{b,i}$, where $A_i$ is the magnification as a function of time at the $i$th observatory \citep{rhie1999}. The $(f_{s,i}, f_{b,i})$ of each observatory are then obtained from a linear fit. We carry out a grid search for the binary lensing parameters $(s, q, \alpha)$ to find local $\chi^2$ minima using a downhill approach based on the Markov Chain Monte Carlo (MCMC) method. The ranges of the parameters are $-1 \leqslant {\rm log}\, s \leqslant 1$, $-4 \leqslant {\rm log}\, q \leqslant 0 $, and $0 \leqslant \alpha \leqslant 2\pi$, with (100, 100, 21) uniform grid steps, respectively. During the grid search, $(s,q)$ are fixed and the other parameters are allowed to vary in the MCMC chain. As a result, we find four local solutions of $(s,q)=(0.93,4.53\times10^{-3})$, $(0.85,1.04\times10^{-2})$, $(1.18, 4.13\times10^{-3})$, and $(1.35, 1.38\times10^{-2})$, which indicate that the event has the well-known close/wide degeneracy $s \leftrightarrow 1/s$. This close/wide degeneracy arises from the similarity in shape between the caustics induced by a close binary with $s<1$ and a wide binary with $s > 1$ (\citealt{griest1998}; \citealt{dominik1999}). Figure \ref{fig:jkasfig1} shows the result of the grid search. We then carry out additional modeling in which the local solutions are set as the initial values and all parameters are allowed to vary. From this, we find that the close and wide solutions converge to $(s, q) = (0.90, 7.43\times 10^{-3})$ and $(1.23, 7.11\times 10^{-3})$, respectively.
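The linear fit for $(f_{s,i}, f_{b,i})$ mentioned above has a closed-form least-squares solution for the two-parameter model $F = f_s A + f_b$. The sketch below is illustrative (it is not the actual pipeline code, and the synthetic flux values are invented for the example):

```python
def fit_fluxes(A_model, F_obs):
    """Closed-form least-squares fit of F(t) = f_s * A(t) + f_b."""
    n = len(A_model)
    mA = sum(A_model) / n
    mF = sum(F_obs) / n
    var_A = sum((a - mA) ** 2 for a in A_model)
    cov_AF = sum((a - mA) * (f - mF) for a, f in zip(A_model, F_obs))
    f_s = cov_AF / var_A          # slope  -> source flux
    f_b = mF - f_s * mA           # offset -> blended flux
    return f_s, f_b

# Noiseless synthetic example with f_s = 0.15, f_b = 0.02:
A_vals = [1.0, 2.0, 5.0, 10.0]
F_vals = [0.15 * a + 0.02 for a in A_vals]
f_s, f_b = fit_fluxes(A_vals, F_vals)
```

Because the flux parameters enter linearly, they can be profiled out exactly at each MCMC step, which is why only the nonlinear lensing parameters need to be sampled.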
The best-fit light curves of the close and wide models are shown in Figure \ref{fig:jkasfig2}. The $\chi^2$ of the close model is smaller by $0.9$ than that of the wide model, and thus the event is severely degenerate. The close and wide best-fit parameters, with their 68\% uncertainty ranges from the MCMC method, are listed in Table \ref{tab:jkastable1}, and the geometries of the two models are presented in Figure \ref{fig:jkasfig3}. Even though the event timescale $(\te = 22\ \rm days)$ is not long enough to measure the microlens parallax, we conduct the binary lens modeling with both the microlens parallax and lens orbital motion effects. This is because the orbital motion effect of the lens system can mimic the parallax signal (\citealt{batista2011}; \citealt{skowron2011}). The microlens parallax is described by $\bm{\pie} = (\pien, \piee)$, in which the two components are given in equatorial coordinates \citep{gould1994}. The lens orbital motion is described by two parameters $(ds/dt, d\alpha/dt)$, which represent the change rates of the binary separation and of the orientation angle of the binary axis, respectively. As expected, we find that the parallax+orbital model improves the fit only very weakly, by $\delcs =2.6$, compared to the standard model, and the microlens parallax is not usefully constrained. \begin{table}[t!]
\caption{Lensing parameters of the binary lens model.\label{tab:jkastable1}} \centering \begin{tabular}{lrr} \toprule Parameter & Close & Wide \\ \midrule $\chi^2$/dof & $1567.987/1611$ & $1568.897/1611$ \\ $t_0$ (HJD$^\prime$) & $8563.8590 \pm 0.0216$ & $8563.8840 \pm 0.0225$ \\ $u_0$ & $0.0982 \pm 0.0047$ & $0.1020 \pm 0.0044$ \\ $\te$ (days) & $22.2158 \pm 0.7249$ & $21.8263 \pm 0.6839$ \\ $s$ & $0.8980 \pm 0.0208$ & $1.2342 \pm 0.0286$ \\ $q(10^{-3})$ & $7.4282\pm 1.5289$ & $7.1095 \pm 1.5735$ \\ $\alpha$ (rad) & $1.2303 \pm 0.0107$ & $1.2486 \pm 0.0101$ \\ $\rho$ & $0.0034 \pm 0.0004$ & $0.0031 \pm 0.0004$ \\ $f_{s,\rm ogle}$ & $0.1498 \pm 0.0070$ & $0.1550 \pm 0.0067$ \\ $f_{b,\rm ogle}$ & $0.0236 \pm 0.0069$ & $0.0184 \pm 0.0066$ \\ \bottomrule \end{tabular} \tabnote{ HJD$^\prime$ = HJD - 2450000.} \end{table} \begin{table}[t!] \caption{Lensing parameters of the binary source model.\label{tab:jkastable2}} \centering \begin{tabular}{lr} \toprule Parameter & 1L2S \\ \midrule $\chi^2$/dof & $2068.634/1611$ \\ $t_{0,1}$ (HJD$^\prime$) & $8563.4507 \pm 0.0311$ \\ $u_{0,1}$ & $ 0.1213 \pm 0.0064$ \\ $t_{0,2}$ (HJD$^\prime$) & $8564.6231 \pm 0.0015$ \\ $u_{0,2}(10^{-3})$ & $0.0221 \pm 0.2751$ \\ $\te$ (days) & $22.8831 \pm 0.8945$ \\ $\rho_1$ & ... \\ $\rho_2$ & $0.0060 \pm 0.0003$ \\ $q_{F}$ & $ 0.0636 \pm 0.0016$ \\ $f_{s,\rm ogle}$ & $0.1420 \pm 0.0079$ \\ $f_{b,\rm ogle}$ & $0.0314\pm 0.0078$ \\ \bottomrule \end{tabular} \end{table} \subsection{Binary source model (1L2S)} For a single lensing event induced by a binary source, the observed flux $F$ is the superposition of fluxes from the single lensing events of the two source stars (\citealt{griest1992}; \citealt{han1997}; \citealt{gaudi1998}), \begin{equation} F = \fsone A_1 + \fstwo A_2, \end{equation} where $\fsone$ and $\fstwo$ are the fluxes of the primary $(\rm S_1)$ and companion $(\rm S_2)$ sources, respectively, and $A_1$ and $A_2$ are the lensing magnifications by the primary and companion sources. 
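As an illustrative sketch (not the actual modeling code), the binary-source flux superposition above can be evaluated with the standard point-source point-lens magnification $A(u) = (u^2+2)/[u\sqrt{u^2+4}]$ and rectilinear source trajectories; the parameter values in the usage lines are arbitrary.

```python
import math

def pspl_mag(u):
    """Point-source point-lens magnification A(u) = (u^2+2)/(u*sqrt(u^2+4))."""
    return (u * u + 2.0) / (u * math.sqrt(u * u + 4.0))

def u_of_t(t, t0, u0, tE):
    """Lens-source separation u(t) = sqrt(u0^2 + ((t-t0)/tE)^2)."""
    return math.hypot(u0, (t - t0) / tE)

def binary_source_mag(t, t01, u01, t02, u02, tE, qF):
    """Flux-weighted 1L2S magnification A = (A1 + qF*A2)/(1 + qF)."""
    A1 = pspl_mag(u_of_t(t, t01, u01, tE))
    A2 = pspl_mag(u_of_t(t, t02, u02, tE))
    return (A1 + qF * A2) / (1.0 + qF)

# With both sources at identical (arbitrary) parameters, the weighted
# average reduces to the single-source value; at u = 1, A = 3/sqrt(5).
A_single = pspl_mag(1.0)
A_peak = binary_source_mag(0.0, 0.0, 1.0, 0.0, 1.0, 22.0, 0.064)
```

A short-duration spike then requires one of the two $u_{0,i}$ to be very small, which is exactly the configuration invoked for the 1L2S alternative.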
Thus, the total magnification for the binary source lensing \citep{hwang2013} is represented by \begin{equation} A = {A_{1} \fsone + A_2 \fstwo\over{\fsone + \fstwo}} = {A_1 + q_{F} A_2\over{1 + q_{F}}}, \end{equation} where $q_{F}=\qflux$, and \begin{equation} A_{i} = {u^{2}_{i} + 2\over{u_{i} \sqrt{u^{2}_{i} + 4}}}; \quad u_i = \left[u^{2}_{0,i} + {\left({t-t_{0,i}\over{\te}}\right)^2}\right]^{1/2}. \end{equation} In order to mimic the anomaly induced by the binary lens, as shown in Figure \ref{fig:jkasfig2}, one of the two sources has to pass close to the lens, which means that its $\uo$ is very small, thus making it highly magnified. For the 1L2S modeling, we need eight parameters: the single-lens parameters for the two sources $\rm S_1$ and $\rm S_2$, $(t_{0,1}, u_{0,1}, \rho_1)$ and $(t_{0,2}, u_{0,2}, \rho_2)$, the timescale $\te$, and the flux ratio of the two sources $q_{F}$ (\citealt{griest1992}; \citealt{jung2017}). The $(\tzero,\uo, \te, \rho)$ of the binary lens solution are set as the initial values of the parameters $(t_{0,1}, u_{0,1}, \te, \rho_2)$, while we set the initial values of the parameters $(t_{0,2}, u_{0,2}, q_{F})$ by considering the peak time and the magnification of the short-duration anomaly obtained from the binary lens model. The best-fit parameters for the binary source model are presented in Table \ref{tab:jkastable2}. From the result of the 1L2S modeling, we find that the $\delcs$ between the 2L1S and 1L2S models is $\delcs = 500.6$. This means that the event OGLE-2019-BLG-0362/KMT-2019-BLG-0075 is caused by a binary lens system. \section{Angular Einstein radius} In order to measure the angular Einstein radius $\thetae$, one should measure the angular source radius $\thetas$ and so obtain $\thetae = \thetas/\rho$. As mentioned in Section~2, KMT data were taken in the $I$- and $V$-bands to measure the source color.
The measured instrumental color and magnitude of the source are $(V-I)=3.35$ and $I=20.06$, obtained from a regression and from the source flux of the best-fit model, respectively. The angular source radius is estimated from the intrinsic color and magnitude of the source, which are obtained from the offset between the red giant clump and the source positions on the instrumental CMD, \begin{equation} \Delta (V-I, I) = (V-I, I)_0 - (V-I, I)_{\rm cl,0}. \end{equation} We thus construct the KMTC CMD with the pyDIA pipeline. From the CMD, we find that the color and magnitude of the clump are $(V-I, I)_{\rm cl}=(3.61, 17.44)$. However, the measured instrumental source color was obtained from three $V$-band points at very low magnification on the wing of the light curve, and the extinction toward the event is as high as $A_I=3.04$, casting doubt on the reliability of the color. In addition, unfortunately, there were no magnified $V$-band data from OGLE. Hence, we combine the KMTC CMD with the CMD constructed from Galactic bulge images taken with the \textit{Hubble} Space Telescope (\textit{HST}) \citep{holtzman1998}. The combination of the two CMDs is performed by calibrating the positions of the clumps on each CMD. Figure 4 shows the combined CMD. From the CMD, we find that the source color is $(V-I)=3.27 \pm 0.13$, estimated by taking the average of the calibrated \textit{HST} stars in the ranges $1.1 \lesssim (V-I)_0 \lesssim 1.5$ and $17.5 \lesssim I_0 \lesssim 17.6$. The offset is thus $\Delta(V-I, I)=(-0.34, 2.62)$. We adopt the intrinsic color and magnitude of the clump: $(V-I)_{\rm cl,0}=1.06$ from \citet{bensby2011} and $I_{\rm cl,0}=14.37$ from \citet{nataf2013}. As a result, we find that the intrinsic color and magnitude of the source are $(V-I, I)_0 = (0.72 \pm 0.13, 16.99 \pm 0.01)$. This indicates that the source is a G-type turn-off star or a G-type subgiant.
The intrinsic $(V-I)_0$ source color is converted to a $(V-K)_0$ color using the color-color relation of \citet{bessell1988}, and then, applying the color-surface brightness relation of \citet{kervella2004} to the $(V-K)_0$ color, we obtain the angular source radius $\thetas = 1.40 \pm 0.19\, \rm \mu as$ for the close and wide models. We then estimate the angular Einstein radii for the close and wide models as \begin{equation} \thetae = \thetas/\rho = \left\lbrace \begin{array}{ll} 0.416 \pm 0.082\, \textrm{mas} & \textrm{(close)} \\ 0.443 \pm 0.065\, \textrm{mas} & \textrm{(wide)}. \end{array} \right. \end{equation} The relative lens-source proper motion is estimated as \begin{equation} \murel = \thetae/\te = \left\lbrace \begin{array}{ll} 6.85 \pm 1.32\, \textrm{mas\, yr$^{-1}$} & \textrm{(close)} \\ 7.41 \pm 1.08\, \textrm{mas\, yr$^{-1}$} & \textrm{(wide)}. \end{array} \right. \end{equation} \section{Physical lens properties} For the event OGLE-2019-BLG-0362, the microlens parallax was not measured, and thus we cannot directly measure the physical parameters of the planetary lens system. In this case, we perform a Bayesian analysis with the measured $\thetae$ and $\te$ in order to constrain the physical lens parameters. The Bayesian analysis assumes that all stars have an equal probability of hosting a planet of the measured mass ratio (\citealt{bhattacharya2021}; \citealt{vandorou2020}). For the Bayesian analysis, we follow the procedures of \citet{jung2018}, but we use the new Galactic model constructed by \citet{jung2021}. The new Galactic model of \citet{jung2021} includes the bulge mean velocity taken from stars in the Gaia catalog, and the disk density profile and disk velocity dispersion from the Robin-based model \citep{bennett2014}, while the remaining parameters, including the bulge density profile and the mass function, are the same as those of \citet{jung2018}.
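The two estimates above amount to simple ratios; the sketch below checks them with the close-model values quoted in the text (small differences from the quoted $\thetae$ and $\murel$ come from rounding of $\thetas$ and $\rho$):

```python
def einstein_radius_mas(theta_star_uas, rho):
    """theta_E = theta_* / rho, converting microarcsec to mas."""
    return theta_star_uas / rho / 1000.0

def proper_motion_mas_yr(theta_E_mas, t_E_days):
    """mu_rel = theta_E / t_E, with t_E converted from days to years."""
    return theta_E_mas / (t_E_days / 365.25)

theta_E = einstein_radius_mas(1.40, 0.0034)   # ~0.41 mas (close model)
mu_rel = proper_motion_mas_yr(theta_E, 22.2)  # ~6.8 mas/yr
```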
Figure 5 shows the posterior probability distributions of the mass of and distance to the host star derived from the Bayesian analysis for the two models. Due to the close/wide degeneracy, each model has a different $\thetae$. We find that the estimated masses of the host and planet are \begin{equation} \label{eqn:mass} (M_{\rm host}, M_{\rm p})= \left\lbrace \begin{array}{ll} (0.42^{+0.34}_{-0.23}\, \rm M_\odot,\, 3.26^{+0.83}_{-0.58}\, M_{\rm J})\, \textrm{(close)} \\ (0.45^{+0.33}_{-0.24}\, \rm M_\odot,\, 3.34^{+0.78}_{-0.58}\, M_{\rm J})\, \textrm{(wide)}, \end{array} \right. \end{equation} where $M_{\rm p}=qM_{\rm host}$, and the distance to the lens is \begin{equation} \label{eqn:distance} \dl= \left\lbrace \begin{array}{ll} 5.83^{+1.04}_{-1.55}\, \rm kpc\, \textrm{(close)} \\ 5.72^{+1.03}_{-1.57}\, \rm kpc\, \textrm{(wide)}. \end{array} \right. \end{equation} The projected star-planet separations of the close and wide models are \begin{equation} \label{eqn:separation} a_\perp = s\dl\thetae = \left\lbrace \begin{array}{ll} 2.18^{+0.58}_{-0.72}\,\rm AU\, \textrm{(close)} \\ 3.13^{+0.73}_{-0.98}\,\rm AU\, \textrm{(wide)}, \end{array} \right. \end{equation} respectively. The physical values of the lens system are the median values of the Bayesian posterior distributions, and their uncertainties indicate the 68\% confidence intervals (i.e., $1\sigma$ errors) of the distributions. Considering the snow line $a_{\rm snow}=2.7(M/M_\odot)\,\rm AU$ \citep{kennedy2008}, the planet orbits beyond the snow line of the M dwarf star. However, the host could also be a K dwarf, because it has a mass of $0.19$--$0.78\, M_\odot$ at the $1\sigma$ level. Due to the severe close/wide degeneracy, the physical lens parameters for the two models are almost the same. Considering the lens distance estimated by the Bayesian analyses and $\murel \simeq 7\, \rm mas\, yr^{-1}$, the lens system is likely to be located in the disk.
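The separation and snow-line comparison above amounts to the following arithmetic, shown here for the close-model median values (the kpc $\times$ mas product gives AU directly):

```python
def projected_separation_au(s, D_L_kpc, theta_E_mas):
    """a_perp = s * D_L * theta_E; kpc * mas yields AU directly."""
    return s * D_L_kpc * theta_E_mas

def snow_line_au(M_host_msun):
    """a_snow = 2.7 (M / M_sun) AU, the scaling cited in the text."""
    return 2.7 * M_host_msun

a_perp = projected_separation_au(0.898, 5.83, 0.416)  # ~2.18 AU (close)
a_snow = snow_line_au(0.42)                           # ~1.13 AU
beyond_snow_line = a_perp > a_snow                    # True
```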
The physical lens parameters for the two models are presented in Table \ref{tab:jkastable3}. \begin{table}[t!] \caption{Physical lens parameters.\label{tab:jkastable3}} \centering \begin{tabular}{lrr} \toprule Parameter & Close & Wide \\ \midrule $M_{\rm host}$ $(M_\odot)$ & $0.42^{+0.34}_{-0.23}$ & $0.45^{+0.33}_{-0.24}$ \\ $M_{\rm p}\, (M_{\rm J})$ & $3.26^{+0.83}_{-0.58}$ & $3.34^{+0.78}_{-0.58}$\\ $D_{\rm L}$ (kpc) & $5.83^{+1.04}_{-1.55}$ & $5.72^{+1.03}_{-1.57}$ \\ $a_{\perp}$ (au) & $2.18^{+0.58}_{-0.72}$ & $3.13^{+0.73}_{-0.98}$\\ $\mu_{\rm rel}\, (\rm mas\ yr^{-1})$ & $6.86 \pm 1.32$ & $7.42 \pm 1.08$ \\ \bottomrule \end{tabular} \tabnote{The physical parameters are obtained from the Bayesian analyses. The representative values are the median values of the Bayesian posterior distributions, and their uncertainties indicate the 68\% confidence intervals of the distributions.} \end{table} \section{Summary} We reported a planetary system discovered from the analysis of the microlensing event OGLE-2019-BLG-0362/KMT-2019-BLG-0075. The event has a distinctive anomaly feature near the peak of the light curve, and thus it looks like a typical 2L1S event. However, the anomaly was covered by only three KMT data points due to its short duration of $0.4\, \rm days$, making it difficult to firmly establish that this is a 2L1S event. We thus conducted two kinds of modeling, 1L2S and 2L1S, both of which can produce the short-duration anomaly. As a result, we found that the event is induced by a 2L1S system, because the $\chi^2$ of the 1L2S model is larger than that of the 2L1S model by $\delcs=501$. The binary lensing solution is subject to the close/wide degeneracy, and this degeneracy is very severe because $\delcs < 1$ between the close and wide models. Due to the relatively short event timescale of $\te = 22\, \rm days$, the microlens parallax was not measured.
We thus carried out a Bayesian analysis, finding that the lens consists of $(M_{\rm host}, M_{\rm p}) = (0.42^{+0.34}_{-0.23}\, \rm M_\odot,\, 3.26^{+0.83}_{-0.58}\, M_{\rm J})$ for the close model and $(M_{\rm host}, M_{\rm p})=(0.45^{+0.33}_{-0.24}\, \rm M_\odot,\, 3.34^{+0.78}_{-0.58}\, M_{\rm J})$ for the wide model. The distances to the lens for the close and wide models are $\dl = 5.83^{+1.04}_{-1.55}\, \rm kpc$ and $5.72^{+1.03}_{-1.57}\, \rm kpc$, respectively. The Bayesian distributions of the lens distance and the relative lens-source proper motion of $\murel \simeq 7\, \rm mas\, yr^{-1}$ indicate that the lens is likely located in the Galactic disk. Because $a_\perp > 2\, \rm AU$, the planet orbits beyond the snow line of its M dwarf (or possibly K dwarf) host. Given $\murel \simeq 7\, \rm mas\, yr^{-1}$, the lens will be separated from the source by $\simeq 70\, \rm mas$ in 2029, at which point the lens flux can be measured with adaptive optics on next-generation 30 m telescopes. \acknowledgments Work by S.-J. Chung was supported by the Korea Astronomy and Space Science Institute under the R\&D program (Project No. 2022-1-830-04) supervised by the Ministry of Science and ICT. J.C.Y. acknowledges support from N.S.F Grant No. AST-2108414. Work by C.H. was supported by the grants of National Research Foundation of Korea (2020R1A4A2002885 and 2019R1A2C2085965). This research has made use of the KMTNet system operated by the KASI and the data were obtained at three sites of CTIO in Chile, SAAO in South Africa, and SSO in Australia.
Title: Battle of the Predictive Wavefront Controls: Comparing Data and Model-Driven Predictive Control for High Contrast Imaging
Abstract: Ground-based high contrast exoplanet imaging requires state-of-the-art adaptive optics (AO) systems in order to detect extremely faint planets next to their brighter host stars. For such extreme AO systems (with high actuator count deformable mirrors over a small field of view), the lag time of the correction, typically ~1-5 milliseconds (during which the wavefront continues to change before the system can apply the correction), can cause wavefront errors on spatial scales that lead to speckles at small angular separations from the central star in the final science image. One avenue for correcting these aberrations is predictive control, wherein previous wavefront information is used to predict the future state of the wavefront in one-system-lag's time, and this predicted state is applied as a correction with a deformable mirror. Here, we consider two methods for predictive control: data-driven prediction using empirical orthogonal functions and the physically-motivated predictive Fourier control. The performance and robustness of these methods have not previously been compared side-by-side. In this paper, we compare these predictors by applying them as post-facto methods to simulated atmospheres and on-sky telemetry, to investigate the circumstances in which their performance differs, including testing them under different wind speeds, C_n^2 profiles, and time lags. We also discuss future plans for testing both algorithms on the Santa Cruz Extreme AO Laboratory (SEAL) testbed.
https://export.arxiv.org/pdf/2208.00984
\keywords{high-contrast imaging, adaptive optics, predictive wavefront control, empirical orthogonal functions, Kalman filtering, Linear Quadratic Gaussian control} \section{INTRODUCTION} \label{sec:intro} % For ground-based astronomy, image quality is degraded by atmospheric turbulence: temperature variations in Earth's atmosphere change the refractive index of the air, introducing path-length delays for light travelling from a star to the telescope and thus aberrations in the final science image. The solution is adaptive optics (AO), wherein phase information is collected with a wavefront sensor and active correction by deformable optics within the telescope path allows these phase delays to be counteracted, so that wavefront errors are corrected before the light reaches the final science detector. In particular, for an extreme adaptive optics (XAO) system, a bright, on-axis natural guide star and a large number of deformable mirror (DM) actuators are used to achieve a good correction over a small (e.g. few arcsecond) field of view. Extreme AO is required for high contrast imaging systems, which use coronagraphs to null the coherent portion of the starlight. As the wavefront error in an AO system increases, the portion of light that can be rejected by the coronagraph decreases, leading to speckles throughout the image plane that closely mimic planets. Many current XAO systems (for example the Gemini Planet Imager\cite{Bailey2016} and SPHERE\cite{Milli2017}) see their overall performance limited by temporal effects, termed bandwidth error. While faster computers and deformable mirrors, as well as increasingly efficient readout detector technology, can shorten an AO system's lag time, the system lag is often dominated by how long a wavefront sensor needs to expose to get a high signal-to-noise wavefront measurement. 
Predictive control can mitigate the bandwidth error without reducing the wavefront sensor exposure time, by predicting the wavefront of the system one step into the future. While some atmospheric turbulence will always be stochastic, the bulk motion is accurately represented\cite{Poyneer2009} by the simple Frozen Flow Hypothesis\cite{Taylor1938}, wherein we imagine static phase screens translating across the telescope pupil at the velocity of their associated wind layers. It is therefore reasonable to conclude that previous measurements of the wavefront contain information that can be used to predict the future shape of the wavefront. In this work, we focus on two disparate predictive control methods: empirical orthogonal functions\cite{Guyon2017} (EOF), a method that depends purely on data to find linear relationships through time and space, and predictive Fourier control\cite{Poyneer2007} (PFC), which uses data to identify wind layers in a system and inform a prediction with those layers. As two solutions to the same problem, the performance of EOF and PFC has yet to be directly compared. In this paper we study their performance over simulated atmospheric data and saved on-sky telemetry from the W.M. Keck Observatory. In sections \ref{sec:eof} and \ref{sec:pfc}, we introduce empirical orthogonal functions and predictive Fourier control as two methods of prediction and present their mathematical formalism. In section \ref{sec:results}, we examine the performance of these methods, using residual wavefront error, Strehl ratio, and the stability of the correction as evaluation metrics, and discuss the comparative performance of the two predictors. In section \ref{sec:conclusions}, we draw conclusions and discuss next steps for testing these methods on the Santa Cruz Extreme AO Laboratory (SEAL) testbed. 
\section{EMPIRICAL ORTHOGONAL FUNCTIONS AS A DATA-DRIVEN PREDICTOR} \label{sec:eof} Empirical Orthogonal Functions (EOF)\cite{Guyon2017} looks for linear relationships in the evolution of the wavefront over time and space. EOF has been successfully run in simulation\cite{Guyon2017}, during day-time testing at Keck Observatory\cite{Jensen2019}, and on-sky at the Keck Observatory\cite{Kooten2022} and the Subaru Telescope\cite{Guyon2018}. Here we will briefly summarize the math behind building and applying the EOF predictive filter; the mathematical formalism is taken from Guyon \& Males (2017)\cite{Guyon2017} unless specified. We consider wavefront sensor data at time $t$ to be a collection of $m$ points $\mathbf{w}(t) = [\mathbf{w}_0(t), ... \mathbf{w}_{m-1}(t)]$ that maps to modes (Fourier, Zernike, etc.) or zones in the wavefront sensor. We then build history vectors of $n$ of these wavefront sensor exposures, each separated by one system time lag, such that \begin{equation} \mathbf{h}(t) = \begin{bmatrix} \mathbf{w}_0(t) \\ \mathbf{w}_1(t) \\ ... \\ \mathbf{w}_{m-1}(t) \\ \mathbf{w}_0(t -dt) \\ ... \\ \mathbf{w}_{m-1}(t-(n-1)dt) \end{bmatrix} \end{equation} We build a predictive filter $\mathbf{F}$ that, when applied to a history vector, will predict a future state: \begin{equation} \mathbf{F} \mathbf{h}(t) = \begin{bmatrix} \mathbf{w}_0(t+dt) \\ ... \\ \mathbf{w}_{m-1}(t+dt) \end{bmatrix} \end{equation} To calculate $\mathbf{F}$ we build a matrix $\mathbf{D}$, which contains a series of history vectors used as training data, and a matrix $\mathbf{P}$, which pairs each history vector at time $t$ to its future state at time $t+dt$. 
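A minimal sketch of this bookkeeping, with the regularized least-squares solve for $\mathbf{F}$ used in this section (\texttt{build\_history\_matrix} and \texttt{eof\_filter} are illustrative names, and the array shapes are assumptions):

```python
import numpy as np

def build_history_matrix(w_series, n):
    """Stack n consecutive wavefront states into history vectors.

    w_series: (T, m) array of wavefront measurements; returns D of shape
    (n*m, K), whose columns are history vectors h(t), and P of shape (m, K),
    holding the paired future states (a sketch of the D / P construction)."""
    T, m = w_series.shape
    cols_D, cols_P = [], []
    for t in range(n - 1, T - 1):
        # h(t): the current frame first, then progressively older frames
        cols_D.append(np.concatenate([w_series[t - k] for k in range(n)]))
        cols_P.append(w_series[t + 1])
    return np.array(cols_D).T, np.array(cols_P).T

def eof_filter(D, P, alpha=1.0):
    """Regularized least-squares filter: F = P D^T (D D^T + alpha I)^(-1)."""
    return P @ D.T @ np.linalg.inv(D @ D.T + alpha * np.eye(D.shape[0]))
```

If the wavefront truly evolves as a fixed linear function of its history, this solve recovers that function (up to the regularization).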
From there we calculate the filter $\mathbf{F}$ as a simple minimization: \begin{equation} \textrm{min}_\mathbf{F}||\mathbf{D}^T\mathbf{F}^T - \mathbf{P}^T||^2 \end{equation} In a departure from Guyon \& Males (2017)\cite{Guyon2017}, we solve this with a regularized least-squares pseudo-inversion\cite{Jensen2019}, with a regularization constant $\alpha$ (which we set to $\alpha=1$ for simulations): \begin{eqnarray} \mathbf{F} &=& ((\mathbf{D}^T)^\dagger \mathbf{P}^T)^T \\ \mathbf{F} &=& \mathbf{P}\mathbf{D}^T(\mathbf{D}\mathbf{D}^T + \alpha \mathbf{I})^{-1} \end{eqnarray} We found that using 60000 history vectors of training data and history vectors made up of 3 exposures provided a consistent correction (apart from the 7 m/s layer; see further discussion in Section \ref{sec:discuss}), a result also found by Jensen-Clem (2019)\cite{Jensen2019}, and used these parameters for our prediction performance estimates. \section{PREDICTIVE FOURIER CONTROL AS A MODEL-DRIVEN PREDICTOR} \label{sec:pfc} Predictive Fourier control (PFC)\cite{Poyneer2007} uses data to extract information about wind layers in the system, and then builds a predictive filter informed by those wind layers, as an update to a traditional Linear Quadratic Gaussian controller\cite{LGQ1993, Kulcsar2006}. By definition, the predictor starts with no knowledge of the wind speed or direction. Here we will briefly summarize the math behind building and applying the PFC predictive filter; the mathematical formalism is taken from Poyneer (2007)\cite{Poyneer2007} unless specified. (Further details are available in Appendix \ref{sec:pfcmath}.) In short, PFC consists of 4 main steps: \begin{enumerate} \item Decompose turbulent phase screens into complex spatial Fourier modes. Each Fourier mode becomes a separate control problem and a filter is calculated individually for each coefficient. 
For each coefficient: \item Make a power spectral density (PSD) function by looking at how a single coefficient evolves in temporal frequency space. \item Isolate and fit peaks in the PSDs (which map to wind layers in an atmosphere). \item Feed the coefficients of each fit to a Kalman filter. \end{enumerate} From the perspective of a telescope system (under a Frozen Flow assumption\cite{Taylor1938}), wind layers present as static turbulent scenes crossing over the telescope primary mirror at the speed of the wind at that height. Given enough data in time to adequately sample a wind velocity, these pupil-crossing times produce high signal-to-noise peaks across spatial Fourier modes, as different spatial scales cross with different frequencies. Therefore, when turbulent phase screens are decomposed into Fourier modes, each coefficient contains information on a spatial frequency of a particular size, and the set of coefficients maps to a series of non-conflicting controllers. A PSD for each coefficient reveals the peak frequency, height, and width for each wind layer, which provides input to a controller specific to that coefficient. At a given Fourier mode indexed (k, l) (over wavefront sensor subapertures), a peak at some $\nu$ indicates a velocity ($v_{x}, v_{y}$) component of: \begin{equation} \nu = - \frac{(kv_{x} + lv_{y})}{D} \end{equation} where D is the telescope diameter. While in practice the algorithm is not meant to deliver identifications of wind layers in a system, testing the wind identification on simulated, known atmospheres shows recovery of both single and multiple wind layers. Figure \ref{fig:wind_id_good} shows examples of successful recovery of injected wind layers. See Appendix \ref{sec:pfcmath} for some exploration of less optimal peak recovery and a discussion of potential causes. Peaks can be identified and fit using built-in \texttt{scipy} tools (see Appendix \ref{sec:pfcmath} for further details). 
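The mode-to-frequency mapping above is a one-liner. As a worked example (with illustrative numbers, not a result from this paper): a 14 m/s layer split as $(v_x, v_y) = (2, 13.5)$ m/s over an 8 m aperture should produce a peak near $-1.94$ Hz for mode $(1,1)$.

```python
def peak_frequency_hz(k, l, vx, vy, D):
    """Expected temporal-frequency peak (Hz) for Fourier mode (k, l) under
    frozen flow: nu = -(k*vx + l*vy) / D, velocities in m/s, aperture D in m."""
    return -(k * vx + l * vy) / D

# Illustrative: a 14 m/s layer split as (vx, vy) = (2, 13.5) m/s on D = 8 m.
nu_11 = peak_frequency_hz(1, 1, 2.0, 13.5, 8.0)  # -(2 + 13.5)/8 = -1.9375 Hz
```

Inverting this relation across many $(k, l)$ modes is what turns a set of PSD peaks into a wind-velocity estimate.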
Also present in each PSD is a peak at $\nu=0$ (the DC peak), arising from the 0 Hz component of the Fourier transform that builds the PSD\cite{Poyneer2007}. The DC peak encapsulates the static offset in a system, which may be caused by consistent noise from electronic components in a realistic system, or by a nonzero mean in any distribution. Wind velocity peaks that occur near zero (in frequency space) are difficult to resolve, as they blend into the static DC peak. Further description of the minimum resolvable peaks and the treatment of the DC peak is available in Appendix \ref{sec:pfcmath}. For each peak, a height $\sigma^2$ and a placement (i.e., peak width and location in angular frequency space) $\alpha$ are fit with \begin{equation} P(\omega) = \frac{\sigma^2}{|1 - \alpha\exp{(-i\omega)}|^2} \end{equation} The complex term $\alpha$ is expressed as $\alpha = |\alpha|e^{i\omega}$, where $\omega$ encapsulates the actual wind velocity of each layer. (Without the complex phase, $\alpha$ acts as a gain for a typical Linear Quadratic Gaussian controller, without extracting and predicting based on wind layers.) The fits for each $\alpha$ and $\sigma$ inform a Kalman filter, which in turn provides a prediction of each Fourier coefficient. We predict the state vector of our system \begin{equation} \mathbf{x}[t] = \begin{bmatrix} \mathbf{a}(t) & \phi(t+1) & \phi(t) & \phi(t-1) & d(t-1) & d(t-2) \end{bmatrix} \end{equation} where $\mathbf{a}$ encapsulates the fit parameters, $\phi$ are the wavefront states, and $d$ are commands to the deformable mirror. 
We predict the next iteration with: \begin{equation} \mathbf{x}(t) = (\mathbf{I} - \mathbf{K}\mathbf{C})\mathbf{A}\mathbf{x}(t-1) + (\mathbf{I} - \mathbf{K}\mathbf{C})\mathbf{G}d(t-1) + \mathbf{K}y(t) \end{equation} where $\mathbf{K}$ are the Kalman gains, $\mathbf{C}$ is a control matrix, $\mathbf{A}$ is a covariance matrix, $\mathbf{G}$ is a matrix to update DM commands, $d(t)$ holds DM commands, and $y(t)$ holds noisy wavefront sensor measurements. For the full definitions and derivations of this Kalman filter see Appendix \ref{sec:pfcmath}, as well as the original treatment\cite{Poyneer2007}. Note that in our implementation, for ease of comparison with EOF, we rewrite the original control law so that we can apply it in open loop. Details of this update to the original form are described in Appendix \ref{sec:pfcmath}. We found that using 10000 frames of data to build our PSDs and windows of 1024--2048 exposures provided a consistent correction (see Appendix \ref{sec:pfcmath} for further description of the windowing and its impacts), and used these parameters for our prediction performance estimates. \section{RESULTS} \label{sec:results} \subsection{Simulated Single and Multi-Layer Turbulence} Using \texttt{hcipy}, the High Contrast Imaging package for Python\cite{hcipy}, we simulated three test-case atmospheres: two single-layer atmospheres, with wind speeds of 7 m/s (split into $v_x, v_y = 2.13, 6.67$) and 14 m/s (split into $v_x, v_y = 2, 13.5$), and one that included 8 layers according to a C$_N^2$ and wind velocity study of Maunakea weather\cite{KAON303}. The layers and heights from this profile are described in Table \ref{tab:chun}; all layers in the multi-layer simulation had a random direction assigned. 
For the purposes of this simulation we used an 8 meter primary mirror diameter, an $r_0=15$\,cm at $500$\,nm, simulated at a resolution of 48 by 48 pixels (which maps to an idealized wavefront sensor of 48 by 48 pixels), and evolved with a $0.5$\,ms time step. This simulates a Gemini Planet Imager-like system, which was the instrument for the original PFC simulations\cite{Poyneer2007}. We consider an idealized wavefront sensor whose output is exactly the simulated 48 by 48 pixel phase screen at a given time step. That is, we assume a perfect wavefront sensor (there is no difference between the phase screen and the phase screen we sense) and a perfect deformable mirror (there is no difference between the phase screen we predict and the correction we apply) and focus only on how faithfully we can predict the wavefront given a choice of time delay. \begin{table}[ht] \caption{Wind layers simulated based on Neyman (2004)\cite{KAON303}, to emulate nominal performance at Maunakea.} \label{tab:chun} \begin{center} \begin{tabular}{|l|l|l|} \hline \rule[-1ex]{0pt}{3.5ex} Velocity & Height & $C_N^2$ \\ \hline \rule[-1ex]{0pt}{3.5ex} 6.7 m/s & 0.0 km & 0.369 $\times 10^{-12}$ \\ \hline \rule[-1ex]{0pt}{3.5ex} 13.9 m/s & 2.1 km & 0.219 $\times 10^{-12}$ \\ \hline \rule[-1ex]{0pt}{3.5ex} 20.8 m/s & 4.1 km & 0.127 $\times 10^{-12}$ \\ \hline \rule[-1ex]{0pt}{3.5ex} 29.0 m/s & 6.5 km & 0.101 $\times 10^{-12}$ \\ \hline \rule[-1ex]{0pt}{3.5ex} 29.0 m/s & 9.0 km & 0.046 $\times 10^{-12}$ \\ \hline \rule[-1ex]{0pt}{3.5ex} 29.0 m/s & 12.0 km & 0.111 $\times 10^{-12}$ \\ \hline \rule[-1ex]{0pt}{3.5ex} 29.0 m/s & 14.8 km & 0.027 $\times 10^{-12}$ \\ \hline \end{tabular} \end{center} \end{table} We calculated residuals by subtracting the original simulated wavefront from the predicted wavefront in phase, and then calculated the root mean square (RMS) error at $500$\,nm. 
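The residual metric just described can be sketched as follows (a simplified version: the boolean pupil mask and the radian-to-nm conversion are assumptions about the bookkeeping, not the exact implementation):

```python
import numpy as np

def rms_wfe_nm(phase_true, phase_pred, mask, wavelength_nm=500.0):
    """RMS residual wavefront error in nm over a boolean pupil mask.

    Phases are in radians at wavelength_nm; the conversion is
    wfe = phase * wavelength / (2*pi). (A sketch: piston removal and
    edge handling are deliberately left out.)"""
    resid = (phase_true - phase_pred)[mask]
    return np.sqrt(np.mean(resid**2)) * wavelength_nm / (2.0 * np.pi)
```

For example, a uniform residual of $2\pi \times 10 / 500$ rad over the mask evaluates to an RMS wavefront error of 10 nm.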
For the sake of comparison, we also estimate the performance of a quasi-integrator, where we subtract the simulated phase from itself with a two-step delay (i.e., as if the system could apply a perfect correction two steps behind). For this initial implementation we found that edge effects could have a major impact on the residual error, so as an interim measure we calculated the RMS error only over the inner 7 meter diameter of the phase correction. Figure \ref{fig:sim_results} shows the results for the two predictors. For the 7 and 14 m/s single-layer turbulence examples, both predictors show improvement over the quasi-integrator, with EOF holding the advantage between the two predictors. Figure \ref{fig:video_sim} shows a video of turbulence and residuals of the 14 m/s atmosphere evolving over time for the 10000 time step correction. Figure \ref{fig:multi_results} shows the results for the multi-layer example, which again shows that both predictors out-perform the integrator, with an advantage to EOF between the two predictors (see Section \ref{sec:discuss} for further elaboration). \subsection{On-Sky Telemetry} Our telemetry data are from the Keck II AO system running in closed loop on the night of 04-20-2019, previously published in Jensen-Clem (2019)\cite{Jensen2019}. They consist of 120000 time steps of data (taken at $\sim 1.7$\,ms intervals\cite{Cetre2018}) in the form of DM commands and near-IR pyramid wavefront sensor\cite{Bond2018} (installed in Keck II as part of the KPIC upgrade\cite{KPIC}) phase measurements in units of volts. Keck II is a 10 meter telescope, with a 21 by 21 actuator deformable mirror (we use the pyramid wavefront sensor's 21 by 21 reconstructed residuals to match). 
As we see only the residuals and applied commands in this system, we must first reconstruct the pseudo-open-loop turbulence in microns with \begin{gather} \textrm{open loop phase} = (-1\times\textrm{dm commands} + \textrm{wf residuals})\times0.6 \\ \textrm{integrator residuals} = (\textrm{wf residuals})\times0.6 \end{gather} The 0.6 factor accounts for system gain and volt-to-micron conversion\cite{Jensen2019}. Using data from the Maunakea Weather Center\cite{mkwc}, we use wind measurements at the Canada-France-Hawaii Telescope (CFHT) to estimate the ground-layer wind velocity during the two minutes of Keck telemetry used for this analysis. Figure \ref{fig:wind_id_telem} shows the median x and y velocity components from our portion of the night, alongside a peak identification of the ground-layer wind velocity from the telemetry data. We estimate the ground layer to be $(v_x, v_y) = (1.07, 5.17)$\,m/s. DIMM-reported seeing on that night was $\sim 0.7$", as opposed to MASS-reported seeing of $\sim 0.3$"\cite{mkwc}. Because MASS measures the free atmosphere, the discrepancy between the two values implies the turbulence that night was dominated by the ground layer, making it a good candidate for improvement with predictive control. Using the DIMM value for seeing that night, we find $r_0 \sim 15$\,cm at $500$\,nm. We applied both predictors to the reconstructed open-loop phase screens, calculated residuals by subtracting the predicted screen for each method from the reconstructed screen, and then calculated the root mean square error. Figure \ref{fig:telem_results} shows the results for the two predictors. Both predictors show a factor of $\geq 2$ improvement over the on-sky integrator, and in a departure from the simulated results, PFC shows a modest advantage over EOF. 
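The pseudo-open-loop reconstruction above can be written directly (a minimal sketch; the array shapes and the volts-to-microns bookkeeping beyond the single 0.6 factor are assumptions):

```python
import numpy as np

VOLTS_TO_MICRONS = 0.6  # combined system gain and volt-to-micron conversion

def reconstruct_open_loop(dm_commands, wf_residuals):
    """Pseudo-open-loop turbulence and integrator residuals (microns) from
    closed-loop DM commands and wavefront residuals (volts), following the
    reconstruction equations in this section."""
    open_loop = (-1.0 * dm_commands + wf_residuals) * VOLTS_TO_MICRONS
    integrator = wf_residuals * VOLTS_TO_MICRONS
    return open_loop, integrator
```

The predictors are then run on the reconstructed open-loop screens, and the integrator residuals provide the on-sky baseline for comparison.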
Figure \ref{fig:video_telem} shows the turbulence residuals evolving over time; note that features moving at a consistent velocity are still perceptible in the pseudo-open-loop and integrator residuals but give way to whiter noise for the two predictors, implying that the predictors are removing the linear trends imparted by wind layers. \subsection{Discussion} \label{sec:discuss} Table \ref{tab:strehl} compiles the residual wavefront error from each prediction method across each atmosphere and estimates (using the Marechal approximation\cite{hardy1998}) the Strehl ratio achieved by the integrator as well as each predictor. Note that the simulations are highly idealized, with no measurement error from the wavefront sensor or fitting error from the deformable mirror. \begin{table}[ht] \caption{Estimation of Strehl improvements from residual WFE of each predictor.} \label{tab:strehl} \begin{center} \begin{tabular}{|l|l|l|l|l|} \hline \rule[-1ex]{0pt}{3.5ex} Atmosphere & wavelength & Predictor & WFE & Strehl \\ \hline \rule[-1ex]{0pt}{3.5ex} 7 m/s & 500 nm & quasi-integrator & 6.45 +/- 0.9 nm & 92.2 \% \\ \hline \rule[-1ex]{0pt}{3.5ex} 7 m/s & 500 nm & PFC predictor & 3.31 +/- 0.2 nm & 95.9 \% \\ \hline \rule[-1ex]{0pt}{3.5ex} 7 m/s & 500 nm & EOF predictor & 1.75 +/- 0.2 nm & 97.8 \% \\ \hline \rule[-1ex]{0pt}{3.5ex} 14 m/s & 500 nm & quasi-integrator & 12.84 +/- 1.1 nm & 85.1 \% \\ \hline \rule[-1ex]{0pt}{3.5ex} 14 m/s & 500 nm & PFC predictor & 6.48 +/- 0.46 nm & 92.2 \% \\ \hline \rule[-1ex]{0pt}{3.5ex} 14 m/s & 500 nm & EOF predictor & 0.67 +/- 0.11 nm & 99.2 \% \\ \hline \rule[-1ex]{0pt}{3.5ex} multi-layer & 500 nm & quasi-integrator & 36.84 +/- 1.54 nm & 62.9 \% \\ \hline \rule[-1ex]{0pt}{3.5ex} multi-layer & 500 nm & PFC predictor & 19.39 +/- 2.37 nm & 78.4 \% \\ \hline \rule[-1ex]{0pt}{3.5ex} multi-layer & 500 nm & EOF predictor & 7.19 +/- 0.30 nm & 91.4 \% \\ \hline \rule[-1ex]{0pt}{3.5ex} telemetry & 1633 nm & on-sky integrator & 112.35 +/- 17.27 nm & 64.9 \% \\ \hline 
\rule[-1ex]{0pt}{3.5ex} telemetry & 1633 nm & PFC predictor & 44.96 +/- 18.56 nm & 84.1 \% \\ \hline \rule[-1ex]{0pt}{3.5ex} telemetry & 1633 nm & EOF predictor & 58.43 +/- 8.02 nm & 79.9 \% \\ \hline \end{tabular} \end{center} \end{table} For every simulated atmospheric profile and the on-sky telemetry we find that both predictors provide an improvement over a classic integrator. PFC provides a consistent factor of 2-3 improvement over the integrator, while EOF shows more variety. EOF succeeds more notably in picking out one extremely fast wind layer (the single-layer 14 m/s atmosphere), but requires a longer training set (90000 frames of data) to find and remove the 7 m/s wind layer. In contrast, PFC performs consistently with only 10000 frames of training data. Both methods of predictive control also out-perform the integrator in the on-sky telemetry example, which is consistent with the results of Jensen-Clem (2019)\cite{Jensen2019}. For the on-sky telemetry, we find that PFC shows a greater improvement than EOF, in contrast with its performance over the simulated atmospheres. This may be indicative of the way a Kalman filter handles Gaussian noise (a Kalman filter carries the variance of a distribution in its covariance matrix through each iteration\cite{arbook}), as opposed to the treatment of EOF as a purely linear filter. While EOF shows consistently greater performance over the atmospheric simulations (which include no measurement noise), the comparative performance over telemetry may better forecast lab and on-sky performance of the two predictors. However, between the two methods, EOF is less complex to implement, as evinced by the length of Appendix \ref{sec:pfcmath}. \section{CONCLUSIONS AND FUTURE WORK} \label{sec:conclusions} In this paper we have compared an initial implementation of two predictors to examine their relative performance on both simulated data and on-sky telemetry. 
We find that both the data-driven predictor using empirical orthogonal functions (EOF) and the model-driven predictor using predictive Fourier control (PFC) provide a consistent improvement over the integrator across every simulated atmospheric profile and the telemetry. While EOF shows greater improvement than PFC over our three simulated atmospheres, PFC shows greater improvement than EOF over the on-sky telemetry data, which may better correlate with future on-sky performance. Future work will consist of further optimizing our implementation of these two prediction methods, including focusing on noise estimation and a more robust fit to the DC peak for PFC, and better training data and history vector length optimization for EOF. Finally, to build on our simulation work, we will implement these predictors as lab experiments on the Santa Cruz Extreme AO Laboratory testbed (SEAL)\cite{Jensen2021}. With this testbed, we will be able to apply realistic turbulence at spatial scales smaller than the resolution of our wavefront sensor (true to the physical nature of atmospheric turbulence) with a spatial light modulator (Van Kooten 2022, in prep), correct with a high-stroke 97-actuator ALPAO DM and a Boston kilo-DM, and do wavefront sensing with either a Shack-Hartmann or pyramid wavefront sensor. While simulations are a core initial element of testing the performance of a method, we expect lab demonstrations will unlock key information about the application and performance of these two predictive control methods. \appendix % \section{PREDICTIVE FOURIER CONTROL MATH IN DETAIL} \label{sec:pfcmath} In this Appendix, we fully describe the mathematical formalism for Predictive Fourier Control\cite{Poyneer2007}, as well as how we have implemented it, and any deviations from the original method. 
For consistency across EOF and PFC, all matrices are defined in the convention of $m$ rows by $n$ columns, in departure from typical control engineering conventions and the original formulation of Poyneer (2007)\cite{Poyneer2007}. \subsection{Decomposition Into Complex Fourier Modes} We decompose each turbulent scene into complex Fourier modes. Given some mode $(k,l)$ over some $x$ by $y$ pixel grid, each mode is defined by \begin{equation} f_{k,l}[x,y] = \cos\left(2\pi\frac{kx + ly}{N}\right) + i\sin\left(2\pi\frac{kx + ly}{N}\right) \end{equation} where N is the total number of modes. (Note that the original paper from Poyneer (2007)\cite{Poyneer2007} contains a typo in its casting of each cos/sin term as complex, which we do not duplicate.) Fourier modes are meant to operate over square regions, and many telescope pupils are circular; both our Keck telemetry and simulations are over a circular pupil. Starting with a circular pupil embedded within a square grid, and applying a mask for the non-used elements to both the telescope pupil and the Fourier mode, allows for a decomposition of the original turbulence image at near machine precision, as mathematically proved in Poyneer (2005)\cite{Poyneer2005} and experimentally reproduced in this work. Figure \ref{fig:circular_pupil} shows an example of a residual turbulent screen, a reconstruction of the same turbulent screen built with complex Fourier modes, the real component of a complex Fourier mode projected onto the same pupil, and the error introduced during this decomposition, on the order of $1\times10^{-12}$, as compared to the phase of the original turbulence, which has a median of $\sim 850$\,nm (a ratio approaching the machine precision of Python). \subsection{Creating Power Spectral Densities and Identifying Peaks} To examine the coefficient for each Fourier mode, we take a periodogram in temporal frequency space. 
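Before moving to the periodograms, the mode decomposition just described can be sketched over a full square grid (assuming $N$ is the linear grid size and omitting the circular-pupil masking): the FFT coefficients, scaled by $1/N^2$, are exactly the coefficients in this mode basis, so the round trip reconstructs the screen to machine precision.

```python
import numpy as np

def fourier_mode(k, l, N):
    """Complex mode f_{k,l}[x, y] = exp(2*pi*i*(k*x + l*y)/N) on an N x N
    grid (assuming N is the linear grid size)."""
    x, y = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.exp(2j * np.pi * (k * x + l * y) / N)

def decompose(screen):
    """Coefficients of a square screen in the mode basis above.

    fft2 uses exp(-2*pi*i...) kernels, so dividing by N^2 gives the
    coefficients c[k, l] with screen = sum_{k,l} c[k,l] * f_{k,l}."""
    return np.fft.fft2(screen) / screen.shape[0] ** 2
```

Summing `c[k, l] * fourier_mode(k, l, N)` over all modes reproduces the input screen, which is the full-grid version of the near-machine-precision reconstruction shown in the figure.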
We use a Welch periodogram\cite{statsbook} with a Hamming window, using \texttt{scipy.signal.welch} and \texttt{scipy.signal.get\_window}. We use windows of $n=1024$ or $n=2048$, depending on the noise present in the data. Longer windows resolve peaks that lie closer to each other and to the 0 Hz DC peak, but increase noise in the periodogram. For our smallest window, $n=1024$ frames, the minimum resolvable frequency is $\nu_{min}=1/(1024\, t_{int})\approx 2$-$0.6$ Hz for our 0.5-1.7 ms resolutions in simulated data and telemetry. We do not record a peak as input to the Kalman filter unless it is at least $\nu_{min}$ away from 0 Hz and from other identified peaks. We find the location of each peak in the data with \texttt{scipy.signal.find\_peaks}, and fit it with \texttt{scipy.optimize} \texttt{.curve\_fit}. Peak identification appears repeatable over single layer atmospheres (as shown in Table \ref{tab:peak_id}), though Figure \ref{fig:wind_id_bad} shows some examples of sub-optimal peak identification and fitting given known wind layers, which may be due to spectral leakage or scalloping\cite{arbook}. \begin{table}[ht] \caption{Successful (within 2 Hz) peak identifications for varying atmospheric profiles.} \label{tab:peak_id} \begin{center} \begin{tabular}{|l|l|l|} \hline \rule[-1ex]{0pt}{3.5ex} Identified Layer & Correct Peak IDs & Total Modes \\ \hline \rule[-1ex]{0pt}{3.5ex} 7 m/s & 1909 & 2304 \\ \hline \rule[-1ex]{0pt}{3.5ex} 14 m/s & 1989 & 2304 \\ \hline \rule[-1ex]{0pt}{3.5ex} telemetry ground layer & 179 & 484 \\ \hline \end{tabular} \end{center} \end{table} \subsection{Fitting $\alpha$ and $\sigma$} Each peak can be defined with the power law: \begin{equation} P(\omega) = \frac{\sigma^2}{|1 - \alpha e^{(-i\omega)}|^2} \end{equation} where $\sigma^2$ describes the peak height for a given coefficient and the complex $\alpha = |\alpha|e^{i\omega}$ describes the placement (i.e. peak width and velocity) of a given wind layer. 
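The periodogram and peak-location step can be sketched on a synthetic single-layer coefficient (the tone frequency and sampling rate below are illustrative values, not taken from the paper; the AR(1) fit with \texttt{curve\_fit} is omitted):

```python
import numpy as np
from scipy import signal

def mode_psd(coeff_ts, fs, nperseg=1024):
    """Two-sided Welch periodogram (Hamming window) of a complex
    Fourier-coefficient time series, with frequencies sorted for peak-finding."""
    freqs, psd = signal.welch(coeff_ts, fs=fs, window="hamming",
                              nperseg=nperseg, return_onesided=False)
    order = np.argsort(freqs)
    return freqs[order], psd[order]

# Synthetic single-layer coefficient: one complex tone at the wind-peak frequency.
fs = 2000.0                              # illustrative: 0.5 ms frames
t = np.arange(10000) / fs
nu_wind = -1.94                          # illustrative peak location (Hz)
coeff = np.exp(2j * np.pi * nu_wind * t)
freqs, psd = mode_psd(coeff, fs)
nu_peak = freqs[np.argmax(psd)]          # recovered to within one ~2 Hz bin
```

The bin width $f_s / n_{\rm perseg} \approx 2$ Hz here is the same window-length resolution limit $\nu_{min}$ discussed above.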
$\nu$ is the natural frequency in Hz, while $\omega$ is the corresponding angular frequency, related by $\omega = -2\pi\nu t_{int}$, where $t_{int}$ is the time between each frame of data. \subsection{Building the State Vector and Covariance Matrix} In keeping with traditional Linear Quadratic Gaussian (LQG) control, we use a Kalman filter to predict forward the state of the system. While the original Poyneer (2007)\cite{Poyneer2007} paper describes a closed-loop control law, we reformulate it to run in open loop, adjusting the covariance matrix $\mathbf{A}$ as well as the control matrix $\mathbf{C}$. The original closed-loop law includes terms in $\mathbf{A}$ and $\mathbf{C}$ to subtract off a deformable mirror (DM) command. Our update renders the DM commands in the state vector unnecessary for the Kalman filter (which could impact computational performance), but in this initial implementation we keep them for simplicity in following the original math. We denote these changes in each matrix with a $^{*}$. The state vector $\mathbf{x}$ is a vector of (L+6) elements that consists of \begin{equation} \mathbf{x}(t) = \begin{bmatrix} \mathbf{a}(t) & \phi(t+1) & \phi(t) & \phi(t-1) & d(t-1) & d(t-2) \end{bmatrix} \end{equation} where $\phi(t)$ and $d(t)$ map to the wavefront coefficient at each Fourier mode and the command to the deformable mirror, respectively. The system operates with a two-step time delay, and is centered one step off from the deformable mirror and one step off from the wavefront sensor in opposite directions. At the time of some system measurement $y(t)$ taken at $t=t_0$, the wavefront is at time $t=t_0+dt$, and the deformable mirror has information from time $t=t_0-dt$; this is why we examine $\phi(t+1), \phi(t)$ as compared to $d(t-1), d(t-2)$. $\mathbf{a}$ is a vector of (L+1) elements that encapsulates information on every identified wind layer at that coefficient. 
\begin{gather} \mathbf{a} = (a_0, a_1, ..., a_L) \\ a_L(t) = \alpha_L a_L(t-1) + w(t) \end{gather} Each $a_L$ evolves according to a first-order auto-regressive process, AR(1)~\cite{arbook}, driven by the $\alpha_L$ for a given wind layer and complex white noise $w(t)$. The periodogram for each Fourier mode includes a direct current (DC) layer peak at $\nu=0$, for which we include $\alpha_0=0.999$ and $\sigma^2=\textrm{max}[P(\nu)]$, where $P(\nu)$ is the PSD. (Future implementations could fit the $\sigma^2_{DC}$ term in a way that is more physically relevant.) The state vector $\mathbf{x}$ evolves according to the state equation: \begin{equation} \mathbf{x}(t+1) = \mathbf{A}\mathbf{x}(t) + \mathbf{G}d(t) + \mathbf{B}\mathbf{w}(t) \end{equation} with the (L+6) by (L+6) state-transition matrix \begin{equation} \mathbf{A} = \begin{bmatrix} \mathbf{R} & \mathbf{0}_{(L+1) \times 1} & \mathbf{0}_{(L+1) \times 1} & \mathbf{0}_{(L+1) \times 1} & \mathbf{0}_{(L+1) \times 1} & \mathbf{0}_{(L+1) \times 1} \\ \mathbf{1}_{1 \times (L+1)} & 0 & 0 & 0 & 0 & 0 \\ \mathbf{0}_{1 \times (L+1)} & 1 & 0 & 0 & 0 & 0 \\ \mathbf{0}_{1 \times (L+1)} & 0 & 1 & 0 & 0 & 0 \\ \mathbf{0}_{1 \times (L+1)} & 0 & 0 & 0 & 0 & 0 \\ \mathbf{0}_{1 \times (L+1)} & 0 & 0 & 0 & 0^{*} & 0 \end{bmatrix} \end{equation} where the (L+1) by (L+1) matrix $\mathbf{R}$ holds the $\alpha_L$ values on the diagonal such that \begin{equation} \mathbf{R} = \textrm{Diag}(\alpha_0, \alpha_1, ..., \alpha_L) \end{equation} The (L+6) by 1 DM update matrix is defined as \begin{equation} \mathbf{G} = \begin{bmatrix} \mathbf{0}_{1 \times (L+1)} & 0 & 0 & 0 & 1 & 0 \end{bmatrix}^{T} \end{equation} Finally, the (L+6) by (L+1) noise propagator matrix is \begin{equation} \mathbf{B} = \begin{bmatrix} \mathbf{I}_{(L+1) \times (L+1)} \\ \mathbf{0}_{5 \times (L+1)} \end{bmatrix} \end{equation} The measurement of the system is defined by \begin{equation} y(t) = \mathbf{C}\mathbf{x}(t)+v(t) \end{equation} where $v(t)$ is the measurement noise, assumed to be zero-mean Gaussian white noise, and 1 by (L+6) $\mathbf{C}$ is the control matrix 
\begin{equation} \mathbf{C} = \begin{bmatrix} \mathbf{0}_{1 \times (L+1)} & 0 & 0 & 1 & 0^{*} & 0 \end{bmatrix} \end{equation} \subsection{Steady State Solutions with the Algebraic Riccati Equation} Instead of recalculating the Kalman filter at each step -- which is quite computationally expensive -- we opt to use the Algebraic Riccati Equation (ARE)~\cite{DARE}, specifically with a discrete solver, since the covariance matrices reach a steady-state solution. We use \texttt{scipy.linalg.solve\_discrete\_are} to calculate $\mathbf{P}_s$. Poyneer (2007)~\cite{Poyneer2007} outlines this equation as: \begin{equation} \mathbf{P_s} = \mathbf{AP_sA^H} + \mathbf{BP_wB^H} - \mathbf{AP_sC^H}(\mathbf{CP_sC^H} + \mathbf{P_v})^{-1}\mathbf{CP_sA^H} \end{equation} with $\mathbf{P_v}$ the scalar variance of the measurement white noise and the (L+1) by (L+1) \begin{equation} \mathbf{P_w} = \textrm{Diag}(\sigma^2_0, \sigma^2_1, ..., \sigma^2_L) \end{equation} We note that this use of the discrete ARE differs from the typical form in that it substitutes the Hermitian conjugates of the known coefficient matrices, $\mathbf{A}\rightarrow\mathbf{A}^H$ and $\mathbf{B}\rightarrow\mathbf{C}^H$, and builds the noise term with $\mathbf{Q}\rightarrow\mathbf{BP_w}\mathbf{B}^H$~\cite{DARE, scipy}. Finally we calculate the Kalman gain (a vector of L+6 elements) for a given coefficient with: \begin{equation} \mathbf{K} = \mathbf{P_sC^H}(\mathbf{CP_sC^H} + \mathbf{P_v})^{-1} \end{equation} which provides all of the pieces for the update equation to predict the next step: \begin{equation} \mathbf{x}(t) = (\mathbf{I} - \mathbf{K}\mathbf{C})\mathbf{A}\mathbf{x}(t-1) + (\mathbf{I} - \mathbf{K}\mathbf{C})\mathbf{G}d(t-1) + \mathbf{K}y(t) \end{equation} From the newly calculated state vector we pull the element corresponding to $\phi(t+1)$ (index $L+1$, counting from zero) as the prediction of the wavefront. 
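The steady-state solve and gain computation above can be reproduced with \texttt{scipy} for a toy configuration. This is a sketch under assumed values: one wind layer plus the DC layer, with placeholder $\alpha$, $\sigma^2$, and noise variances rather than fitted quantities.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Toy setup: L = 1 wind layer plus DC, so the state has L+6 = 7 elements.
L = 1
alphas = np.array([0.999, 0.98 * np.exp(2j * np.pi * 0.04)])  # placeholder DC + layer
sig2 = np.array([1.0, 0.5])                                   # placeholder sigma^2

n = L + 6
A = np.zeros((n, n), dtype=complex)
A[:L + 1, :L + 1] = np.diag(alphas)   # R block of layer alphas
A[L + 1, :L + 1] = 1.0                # phi(t+1) is the sum of the layer modes
A[L + 2, L + 1] = 1.0                 # shift phi(t+1) -> phi(t)
A[L + 3, L + 2] = 1.0                 # shift phi(t) -> phi(t-1)
# (open loop: the DM rows of A stay zero)

B = np.vstack([np.eye(L + 1), np.zeros((5, L + 1))]).astype(complex)
C = np.zeros((1, n), dtype=complex)
C[0, L + 3] = 1.0                     # the measurement picks out phi(t-1)

Pw = np.diag(sig2).astype(complex)    # process noise covariance
Pv = np.array([[0.1]], dtype=complex) # measurement noise variance

# Discrete ARE with A -> A^H, B -> C^H and Q -> B Pw B^H, as in the text.
Ps = solve_discrete_are(A.conj().T, C.conj().T, B @ Pw @ B.conj().T, Pv)

# Steady-state Kalman gain, an (L+6)-element vector.
K = Ps @ C.conj().T @ np.linalg.inv(C @ Ps @ C.conj().T + Pv)
print(K.shape)
```

Solving the ARE once per Fourier mode, rather than iterating the Riccati recursion every frame, is what makes the filter cheap enough to run per mode.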
\acknowledgments % Many thanks to Don Gavel, who gave up many days of his retirement to give me a crash course in control theory and help me untangle these fundamental papers, as well as Lisa Poyneer for advising me on ways to improve my implementation of Predictive Fourier Control. Some of the data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. \bibliography{predictive_battle} % \bibliographystyle{spiebib} %
Title: Identifying Galaxy Mergers in Simulated CEERS NIRCam Images using Random Forests
Abstract: Identifying merging galaxies is an important - but difficult - step in galaxy evolution studies. We present random forest classifications of galaxy mergers from simulated JWST images based on various standard morphological parameters. We describe (a) constructing the simulated images from IllustrisTNG and the Santa Cruz SAM, and modifying them to mimic future CEERS observations as well as nearly noiseless observations, (b) measuring morphological parameters from these images, and (c) constructing and training the random forests using the merger history information for the simulated galaxies available from IllustrisTNG. The random forests correctly classify $\sim60\%$ of non-merging and merging galaxies across $0.5 < z < 4.0$. Rest-frame asymmetry parameters appear more important for lower redshift merger classifications, while rest-frame bulge and clump parameters appear more important for higher redshift classifications. Adjusting the classification probability threshold does not improve the performance of the forests. Finally, the shape and slope of the resulting merger fraction and merger rate derived from the random forest classifications match with theoretical Illustris predictions, but are underestimated by a factor of $\sim 0.5$.
https://export.arxiv.org/pdf/2208.11164
\title{Identifying Galaxy Mergers in Simulated CEERS NIRCam Images using Random Forests} \email{crr9508@rit.edu} \author[0000-0002-8018-3219]{Caitlin Rose} \author[0000-0001-9187-3605]{Jeyhan S. Kartaltepe} \affil{Laboratory for Multiwavelength Astrophysics, School of Physics and Astronomy, Rochester Institute of Technology, 84 Lomb Memorial Drive, Rochester, NY 14623, USA} \author{Gregory F. Snyder} \affiliation{Space Telescope Science Institute, 3700 San Martin Dr, Baltimore, MD 21218, USA} \author[0000-0002-9495-0079]{Vicente Rodriguez-Gomez} \affiliation{Department of Physics and Astronomy, Johns Hopkins University, Baltimore, MD 21218, USA} \affiliation{Instituto de Radioastronom\'{i}a y Astrof\'{i}sica, Universidad Nacional Aut\'{o}noma de M\'{e}xico, Apdo. Postal 72-3, 58089 Morelia, Mexico} \author[0000-0003-3466-035X]{L. Y. Aaron\ Yung} \affiliation{Astrophysics Science Division, NASA Goddard Space Flight Center, 8800 Greenbelt Rd, Greenbelt, MD 20771, USA} \author[0000-0002-7959-8783]{Pablo Arrabal Haro} \affiliation{NSF's National Optical-Infrared Astronomy Research Laboratory, 950 N. Cherry Ave., Tucson, AZ 85719, USA} \author[0000-0002-9921-9218]{Micaela B. Bagley} \affiliation{Department of Astronomy, The University of Texas at Austin, Austin, TX, USA} \author[0000-0003-2536-1614]{Antonello Calabr{\`o}} \affiliation{Osservatorio Astronomico di Roma, via Frascati 33, Monte Porzio Catone, Italy} \author[0000-0001-7151-009X]{Nikko J. Cleri} \affiliation{Department of Physics and Astronomy, Texas A\&M University, College Station, TX, 77843-4242 USA} \affiliation{George P.\ and Cynthia Woods Mitchell Institute for Fundamental Physics and Astronomy, Texas A\&M University, College Station, TX, 77843-4242 USA} \author[0000-0003-1371-6019]{M. C. 
Cooper} \affiliation{Department of Physics \& Astronomy, University of California, Irvine, 4129 Reines Hall, Irvine, CA 92697, USA} \author[0000-0001-6820-0015]{Luca Costantin} \affiliation{Centro de Astrobiolog\'ia (CSIC-INTA), Ctra de Ajalvir km 4, Torrej\'on de Ardoz, 28850, Madrid, Spain} \author[0000-0002-5009-512X]{Darren Croton} \affiliation{Centre for Astrophysics \& Supercomputing, Swinburne University of Technology, Hawthorn, VIC 3122, Australia} \affiliation{ARC Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D)} \author[0000-0001-5414-5131]{Mark Dickinson} \affiliation{NSF's National Optical-Infrared Astronomy Research Laboratory, 950 N. Cherry Ave., Tucson, AZ 85719, USA} \author[0000-0001-8519-1130]{Steven L. Finkelstein} \affiliation{Department of Astronomy, The University of Texas at Austin, Austin, TX, USA} \author[0000-0002-1857-2088]{Boris H\"au{\ss}ler} \affiliation{European Southern Observatory, Alonso de Cordova 3107, Casilla 19001, Santiago, Chile} \author[0000-0002-4884-6756]{Benne W. Holwerda} \affil{Physics \& Astronomy Department, University of Louisville, 40292 KY, Louisville, USA} \author[0000-0002-6610-2048]{Anton M. Koekemoer} \affiliation{Space Telescope Science Institute, 3700 San Martin Dr., Baltimore, MD 21218, USA} \author[0000-0002-8816-5146]{Peter Kurczynski} \affiliation{Observational Cosmology Laboratory (Code 665), NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA} \author[0000-0003-1581-7825]{Ray A. 
Lucas} \affiliation{Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA} \author{Kameswara Bharadwaj Mantha} \affiliation{Minnesota Institute for Astrophysics, University of Minnesota, 116 church St SE, Minneapolis, MN, 55455, USA.} \author[0000-0001-7503-8482]{Casey Papovich} \affiliation{Department of Physics and Astronomy, Texas A\&M University, College Station, TX, 77843-4242 USA} \affiliation{George P.\ and Cynthia Woods Mitchell Institute for Fundamental Physics and Astronomy, Texas A\&M University, College Station, TX, 77843-4242 USA} \author[0000-0003-4528-5639]{Pablo G. P\'erez-Gonz\'alez} \affiliation{Centro de Astrobiolog\'{\i}a (CAB/CSIC-INTA), Ctra. de Ajalvir km 4, Torrej\'on de Ardoz, E-28850, Madrid, Spain} \author[0000-0003-3382-5941]{Nor Pirzkal} \affiliation{ESA/AURA Space Telescope Science Institute} \author[0000-0002-6748-6821]{Rachel S. Somerville} \affiliation{Center for Computational Astrophysics, Flatiron Institute, 162 5th Avenue, New York, NY, 10010, USA} \author[0000-0002-4772-7878]{Amber N. Straughn} \affiliation{Astrophysics Science Division, NASA Goddard Space Flight Center, 8800 Greenbelt Rd, Greenbelt, MD 20771, USA} \author[0000-0002-8224-4505]{Sandro Tacchella} \affiliation{Kavli Institute for Cosmology, University of Cambridge, Madingley Road, Cambridge, CB3 0HA, UK}\affiliation{Cavendish Laboratory, University of Cambridge, 19 JJ Thomson Avenue, Cambridge, CB3 0HE, UK} \section{Introduction} \label{sec:intro} Mergers are known to play an important role in the evolution of galaxies over cosmic time. The gravitational interactions between merging galaxies compress gas and create shocks, inducing star formation throughout, and can funnel gas toward their centers, powering nuclear starbursts and fueling active galactic nuclei (AGN) \citep[e.g.,][]{sanders1988a,mihos1996,hopkins2008}. 
This process can also disrupt the orderly rotation of disk stars, driving the morphological transition of galaxies by turning spiral disk galaxies into ellipticals \citep[e.g.,][]{too1977,cox2006,kor2009,rod2017} as well as inducing disk instabilities that may cause the build-up of the most massive structures at $z > 3$ \citep[e.g.,][]{tacchella2015,zolotov2015, Costantin2021, Costantin2022}. We now know that the rate at which mergers occur evolves strongly out to $z\sim1.5$, as seen by many observational studies as well as cosmological simulations \citep[e.g.,][]{kart2007,bri2007,lotz2011,rod2015,man2018} and that interactions and mergers can have a large impact on the star formation rates and AGN activity of galaxies \citep[e.g.,][]{ell2008,pat2011,lar2016}. Studies of the merger rate during cosmic noon ($z\sim 1-3$) have benefited from deep NIR surveys of galaxies with the Wide Field Camera 3 (WFC3) on the {\it Hubble Space Telescope} (HST), though many uncertainties in the observations and tension with expectations from simulations remain. At these higher redshifts, the empirical merger rates of \cite{lotz2011} and \cite{oleary2021} and the theoretical rates of \cite{hop2010} and \cite{rod2015} continue to increase (albeit at different rates). On the other hand, \cite{man2016} find that their empirical merger rate flattens, while \cite{man2018} find that their empirical rate either turns over and begins decreasing, or remains increasing, depending on the criteria used for their merger selection. \cite{dun2019} compare their observed merger rates from the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey \citep[CANDELS; ][]{koe2011,gro2011} to previous studies. They find that for galaxies with $\log_{10} (M_{\star} / M_{\odot}) > 10.3$, their increasing merger rate agrees with that measured in the Illustris simulation by \cite{rod2015} out to $z\sim6$, as well as with \cite{mun2017} out to $z\sim2$. 
However, beyond $z\sim2$, \cite{dun2019} disagrees with the rate from \cite{man2018}, which begins decreasing. For galaxies with $9.7 < \log_{10} (M_{\star} / M_{\odot}) < 10.3$, the increasing rate of \cite{dun2019} agrees with that of \cite{ven2017} out to $z\sim3$ (within their error bars), yet disagrees with the shallower rate from Illustris, indicating that the Illustris merger rates are in tension with observations. Furthermore, \cite{dun2019} find that their comoving merger rates suffer from significant uncertainties at $z > 4$. Additionally, \cite{fer2020} identify mergers using deep learning and find that the rates broadly agree with visual classifications out to $z\sim3$. There are several sources of uncertainty that currently plague our understanding of mergers at these high redshifts. First, modern surveys with WFC3 are only sensitive to the most massive galaxies and major mergers \citep[mass ratio $<$4:1; e.g.,][]{Ellison2013,man2018} and the numbers detected drop off sharply beyond $z>3$. At these redshifts, it is expected that low mass galaxies and minor mergers (mass ratios between 4:1 and 10:1) may play an increasing role \citep[e.g.,][]{Kaviraj2014}. Furthermore, the conversion of an observed merger fraction into a merger rate requires the assumption of a merger timescale and how it might evolve with redshift. This timescale itself is highly uncertain and relies on information from simulations \citep{sny2017, dun2019}. Finally, identifying mergers from images in the first place poses many of its own challenges. Typically, there have been two methods employed to identify merging systems: 1) identifying close pairs of galaxies that are likely to merge at some point in the future and 2) identifying advanced mergers through morphological disturbances, such as double nuclei, tidal tails, and other asymmetries. 
The close pair method finds galaxy pairs that are close on the sky and in redshift, using either spectroscopic redshifts \citep[e.g.,][]{lin2004,tas2014,ven2017,shah2020} or a sophisticated analysis of photometric redshifts \citep[e.g.,][]{kart2007,man2018,dun2019}. Identifying galaxy mergers through morphological features can be done via visual classifications \citep[e.g.,][]{lin2008, kart2015}, and is typically robust since the human eye is skilled at picking out patterns and faint features in noisy images. However, visual classifications can be both subjective and time-consuming, especially for large surveys. Quantitative parameters such as $CAS$, $G$, $M_{20}$, and the $MID$ statistics \citep[e.g.,][]{con2003, lotz2004, free2013} were developed as a less subjective alternative to visual classifications. In populations of low redshift galaxies, these quantitative morphology parameters have been shown to effectively locate mergers, since they can be correlated with features like asymmetries, multiple cores, starbursts, and tidal tails that are caused by merging \citep[e.g.,][]{con2003,lotz2004,lotz2008,free2013,wen2014,snylotz2015,paw2016}. However, for high redshift galaxies, these parameters become less effective at classifying mergers \citep[e.g.,][]{lotz2004,con2008,kartaltepe2010,thom2015,snylotz2015}. The main reason for this is simply that at high redshifts, mergers are more difficult to see -- cosmological surface brightness dimming leads to the loss of low surface brightness features, poor angular resolution leads to the blurring of small scale structure, and rest-frame optical emission is shifted to the near- and mid-infrared, which means optical and short-wavelength instruments like ACS and WFC3 on HST are unable to observe the structure of the general stellar population. 
Furthermore, high redshift galaxies can have intrinsically more clumpy and irregular morphologies than those at low redshifts, which could masquerade as merger signatures \citep[e.g.,][]{Dekel2009,Kartaltepe2012,kart2015}. Therefore, high quality, deep, high-resolution near-infrared imaging is needed to detect merger features at high redshifts and subsequently curate an accurate sample set of high redshift galaxy mergers. The state-of-the-art and recently launched {\it James Webb Space Telescope} (JWST) will provide the highest quality images of distant galaxies to date. One of the first deep extragalactic surveys that JWST will undertake is the Cosmic Evolution Early Release Science (CEERS) Survey (PI: S. Finkelstein). CEERS utilizes JWST imaging and spectroscopy in parallel over $100$ arcmin$^2$ in the Extended Groth Strip HST legacy field for $0.5 < z < 13$ galaxies. CEERS uses JWST’s near-infrared camera (NIRCam) to probe structural features at higher redshifts than has been possible with HST \citep{fink2017}. Figure \ref{fig:hst_v_jwst} compares the same simulated $z=3$ galaxy as seen with HST/ACS + WFC3 and with JWST/NIRCam using the CEERS observing strategy, at similar wavelengths. The improved resolution of the JWST CEERS images will result in much more accurate morphological measurements than the HST images. In addition to using deeper, high resolution NIR imaging from JWST, new analysis techniques can be employed to better identify the morphological signatures of mergers. In particular, machine learning methods show great promise for identifying high redshift mergers. At low redshift, a number of studies have already used machine learning for morphological classifications \citep[e.g.,][Guzman-Ortega et al., in preparation]{hue2015,peth2016,sreejith2018,bot2019,nev2019,pearson2019, cheng2020}. 
For merger classifications in particular, \cite{nev2019} note that their machine learning method outperforms the classical automated identification methods (e.g., \citealt{con2003, lotz2004}) in the nearby universe. Recently, studies have begun to explore using machine learning to identify high redshift mergers \citep{sny2019,cip2020, fer2020,Ferreira2022,sharma2021arXiv}. In particular, \cite{sny2019} use random forests to identify $0.5 < z < 4$ mergers from Illustris-1 and find that their classifier achieves a true positive rate of up to $\sim70\%$ for mergers in simulated HST images. They also note that their classifier generally outperforms the simpler classifiers \`{a} la \cite{con2003} and \cite{lotz2004} across their redshift range. In this paper, we explore the use of random forests in identifying galaxy mergers from simulated CEERS images from IllustrisTNG in preparation for new JWST observations. In \S \ref{sec:data}, we describe the simulated JWST images used in this work as well as how those images were modified to create one set that matches CEERS NIRCam observations and other effectively noiseless sets for comparison purposes. In \S \ref{sec:params}, we describe the large set of quantitative morphology parameters used as input to the forests and how they were calculated from the data. In \S \ref{sec:analysis} and \S \ref{sec:discuss}, we present our random forest analyses and discuss the results. We summarize and conclude in \S \ref{sec:con}. \section{Data} \label{sec:data} \subsection{CEERS NIRCam Imaging} CEERS is an Early Release Science program (PI S. Finkelstein) to observe the EGS (Extended Groth Strip; \citealt{davis2007}) extragalactic deep field (one of the five CANDELS fields; \citealt{koe2011,gro2011}) early in Cycle 1. The NIRCam imaging of CEERS will cover 10 pointings over 100 sq.\ arcmin with the F115W, F150W, F200W, F277W, F356W, and F444W filters down to a 5$\sigma$ depth ranging from 28.4--29.2 mag. 
CEERS NIRCam imaging will enable spatially resolved rest-optical/near-IR measurements for $1 < z < 7$ galaxies, with a resolution of $< 1$ kpc at 3.6 $\mu$m. Table \ref{tab:ETCsetup} presents the CEERS observing strategy and integrated observing time. CEERS is expected to reveal objects with irregular and perturbed morphologies in great detail, which will allow astronomers to track the structural evolution of galaxies from $z \sim 7$ to today. Since CEERS is targeting the EGS HST field, CEERS data will also be accompanied by a rich set of multi-wavelength data from HST ACS \citep{davis2007}, HST WFC3 \citep{koe2011,gro2011,momcheva2016}, {\it Spitzer} \citep{ashby2013}, {\it Herschel} \citep{lutz2011,oliver2012}, {\it Chandra} \citep{laird2009,nandra2015}, and a number of ground-based imaging and spectroscopic surveys \citep[e.g.,][]{coil2004,cooper2012b,newman2013,kriek2015,masters2019} with high-quality photometric redshifts and stellar masses \citep{stefanon2017}. JWST launched in December 2021, with the first CEERS images observed in June 2022 and released in July 2022, including 4 out of the 10 NIRCam pointings that make up the complete CEERS NIRCam mosaic in the EGS field. The remaining 6 pointings will be observed in December 2022. \subsection{The Simulated Images} \label{sec:simdata} We construct mock images of the EGS field from IllustrisTNG100-1, one of three cosmological simulations of galaxy formation and evolution included in the IllustrisTNG suite \citep{TNG1Springel2018,TNG2Naiman2018,TNG3Nelson2018,TNG4Pillepich2018,TNG5Marinacci2018}. Compared to Illustris-1, IllustrisTNG uses several updated prescriptions for physical processes (such as magnetic fields, black hole feedback, and galactic winds) as detailed by \cite{Weinberger2017} and \cite{pill2018}. TNG100-1 has been shown to produce galaxy morphologies in good agreement with observations \citep[e.g.,][]{rod2019,Tacchella2019}. 
To construct simulated images, we first adopt a simulated lightcone with footprint overlapping the observed EGS field, constructed using dark matter halos from an n-body simulation and the Santa Cruz semi-analytic model (SAM) for galaxy formation \citep{som2021, yung2022}. The Santa Cruz SAM tracks a wide variety of baryonic processes using prescriptions derived analytically, inferred from observations, or extracted from numerical simulations, and provides physically-backed predictions for galaxies across wide ranges of redshift and mass. This model has been shown to be able to reproduce the observed evolution in distribution functions of rest-frame UV luminosity, stellar mass, and SFR from $z \sim 0$ to the highest redshift where observational constraints are available \citep{som2015,yung2019_1,Yung2019}. Then, for each galaxy in the SAM, we identify IllustrisTNG subhalos from the TNG100-1 simulation by searching in the space of halo mass versus star formation rate. We randomly choose matching subhalos from those within a factor of two in these dimensions from the SAM galaxy catalogue. We then create simulated images of each subhalo using the public visualization API \citep{Nelson2019} in each of the JWST NIRCam F115W, F150W, F200W, F277W, F356W, and F444W filters, and add them together to form the wide-area mock images. These images cover an area of $\sim$100 square arcmin, to mimic the size of the EGS field that CEERS will cover, and contain over 100,000 galaxies. The pixel scale of the images is 0.03 arcsec/pix. These images are accompanied by catalogs with intrinsic information such as redshift, star formation rate, and stellar mass. 
Additionally, the merger history catalogs for IllustrisTNG galaxies \citep{rod2015,nel2019} used in this work give the IllustrisTNG snapshot numbers for each galaxy's most recent past merger and next future merger (both major and minor), as well as the total number of past mergers experienced in the galaxy's history for different timescales. The merger history information available for these galaxies makes IllustrisTNG an ideal dataset for training and testing machine learning algorithms to identify galaxy mergers at different stages. \subsection{Modifying the Simulated Images} \begin{table*}[t] \centering \begin{tabular}{lcccccc} \hline \hline & Subarray & Readout pattern & Groups per & Integrations & Exposure per & Exposure time \\ & & & integration & per exposure & specification & [sec] \\ \hline CEERS & FULL & DEEP8 & 5 & 3 & 1 & 2855.98 \\ Effectively Noiseless & FULL & DEEP8 & 80 & 60 & 1 & 1023632.92 \\ \hline \end{tabular} \caption{JWST ETC detector setup and resulting exposure times for each set of images. The CEERS setup here is described in the original CEERS proposal \citep{fink2017}. This setup has since changed to MEDIUM8 with 9 groups times 3 exposures for the actual CEERS observations. However, the final exposure times are similar and should not affect our results.} \label{tab:ETCsetup} \end{table*} We start with the pristine simulated images, and convolve them with the PSF of each filter, which are the model PSFs from WebbPSF \citep{perrin2014}. We then add Poisson noise (due to the galaxies in the image) following the formulation of \cite{pont2016}, where each image is convolved with a kernel to sum fluxes in neighboring pixels. We then add background noise to represent the actual CEERS exposure times (see Table \ref{tab:ETCsetup}), which was estimated using the exposure time calculator (ETC) system developed for JWST \citep{pont2016}. 
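The convolution and noise steps just described can be sketched as follows. The image, PSF, and sky level here are toy placeholders: the real pipeline uses WebbPSF model PSFs and ETC-derived backgrounds, and its Poisson step follows the \cite{pont2016} kernel formulation rather than this direct sampling.

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(1)

# Toy pristine image [e-/s] and a Gaussian stand-in for the WebbPSF model PSF.
pristine = np.zeros((128, 128))
pristine[60:68, 60:68] = 5.0
g = np.exp(-0.5 * (np.arange(-10, 11) / 2.0) ** 2)
psf = np.outer(g, g)
psf /= psf.sum()

t_exp = 2855.98   # CEERS exposure time [s] from Table 1
sky = 0.2         # assumed background level [e-/s/pix]

# 1) Convolve the pristine image with the PSF.
blurred = fftconvolve(pristine, psf, mode="same")

# 2) Sample Poisson noise from source + background counts over the exposure.
lam = np.clip((blurred + sky) * t_exp, 0.0, None)   # guard against FFT round-off
counts = rng.poisson(lam).astype(float)

# 3) Convert back to rate units and subtract the known mean background.
noisy = counts / t_exp - sky
```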
Last, the Python package \texttt{photutils} is used to estimate the average background in each image, which is then subtracted from the images, resulting in a final set of CEERS-like, background-subtracted images for each of the six NIRCam filters. Figure \ref{fig:stamps} shows examples of simulated mergers at each step of the image modification process. \begin{table*}[t] \centering \begin{tabular}{lccccc} \hline \hline & \texttt{DETECT\_MINAREA} & \texttt{DETECT\_THRESH} & \texttt{ANALYSIS\_THRESH} & \texttt{DEBLEND\_NTHRESH} & \texttt{DEBLEND\_MINCONT} \\ & cold / hot & cold / hot & cold / hot & cold / hot & cold / hot \\ \hline CEERS & 10 / 15 & 4.5 / 1.7 & 5.0 / 0.8 & 32 / 64 & 0.01 / 0.001 \\ Effectively Noiseless & 10 / 15 & 4.5 / 1.7 & 5.0 / 0.8 & 32 / 64 & 0.001 / 0.0001\\ \hline \end{tabular} \caption{\texttt{Source Extractor} parameters for each set of images.} \label{tab:se} \end{table*} We similarly create a set of effectively noiseless images, using an extremely long exposure time of 11 days in the ETC (see Table \ref{tab:ETCsetup}), so that we can compare measurements from our CEERS-like images to those from the noiseless ones. This step is necessary because a purely noiseless set of images causes the \texttt{Galapagos-2} code (see \S \ref{sec:params}) to output unusually large errors, and therefore true noiseless images cannot be used for our analysis. Examples of nearly noiseless galaxies are shown in column E of Figure \ref{fig:stamps}. Finally, the headers of each image are amended to include the keywords required by \texttt{Galapagos-2}. All of our final CEERS-like and noiseless simulated NIRCam images are available to the public via \href{https://ceers.github.io/releases.html}{https://ceers.github.io/releases.html}. \section{Quantitative Morphology Parameters} \label{sec:params} This work makes use of several different morphology parameters. 
Parametric measurements assume that the galaxy's light distribution follows a specific mathematical profile, such as the S\'ersic profile, specified by the S\'ersic index $n$ \citep{ser1963} and other parameters. Nonparametric measurements do not assume an underlying mathematical form, but rather are statistical measures of the light distribution in a galaxy (e.g., the $CAS$ parameters; \citealt{ber2000,con2000,con2003}). S\'ersic parameters were calculated using the IDL program \texttt{Galapagos-2} from the \texttt{MegaMorph} Project\footnote{\href{https://www.nottingham.ac.uk/astronomy/megamorph/}{https://www.nottingham.ac.uk/astronomy/megamorph/}\label{fnote:mm}}. \texttt{MegaMorph} is a project designed to improve astronomers' ability to measure the structure of galaxies via parametric methods while making full use of modern multiwavelength imaging surveys \citep{bam2011,hau2013,vika2013}. Using multiwavelength information allows one to constrain fit parameters that vary smoothly as a function of wavelength, which produces more physically consistent models. Under the \texttt{MegaMorph} Project, \texttt{Galapagos}\footnote{\href{https://borishaeussler.github.io/galapagos\_v1/home.html}{https://borishaeussler.github.io/galapagos\_v1/home.html}} \citep{bar2012} was modified to create \texttt{Galapagos-2}, and \texttt{Galfit}\footnote{\href{https://users.obs.carnegiescience.edu/peng/work/galfit/galfit.html}{https://users.obs.carnegiescience.edu/peng/work/galfit/galfit.html}} \citep{peng2002,peng2010} was modified to create \texttt{GalfitM}. \texttt{GalfitM} is a least-squares fitting algorithm that finds the optimum solution to the S\'ersic fit for a galaxy \citep{peng2002,peng2010,peng2012}. \texttt{Galapagos} (Galaxy Analysis over Large Areas: Parameter Assessment by \texttt{Galfit}-ting Objects from \texttt{SExtractor}) is essentially an \texttt{IDL} wrapper routine that allows \texttt{GalfitM} to be used for large survey images. 
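For reference, the S\'ersic profile that these codes fit has a closed form that is easy to evaluate directly; a minimal sketch with illustrative parameter values (not fitted to any galaxy):

```python
import numpy as np
from scipy.special import gammaincinv

def sersic(r, i_e, r_e, n):
    """Sersic profile I(r) = I_e * exp(-b_n * ((r / r_e)**(1/n) - 1)),
    with b_n chosen so that r_e is the half-light (effective) radius."""
    b_n = gammaincinv(2.0 * n, 0.5)
    return i_e * np.exp(-b_n * ((r / r_e) ** (1.0 / n) - 1.0))

r = np.linspace(0.1, 10.0, 100)   # radius, arbitrary units
disk = sersic(r, 1.0, 3.0, 1.0)   # n = 1: exponential disk
bulge = sersic(r, 1.0, 3.0, 4.0)  # n = 4: de Vaucouleurs-like bulge
```

Higher $n$ concentrates more of the light toward the center at fixed effective radius, which is what makes the fitted index a useful bulge diagnostic.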
In addition to the final output catalog with the S\'ersic fit parameters, \texttt{Galapagos-2} also outputs the original stamp, the \texttt{GalfitM} model, and the residual image for each galaxy detected in the survey images. \texttt{Galapagos-2}/\texttt{GalfitM} can also do bulge-disk decomposition and output separate S\'ersic parameters for both galaxy components. \texttt{Galapagos-2} uses the \texttt{Source Extractor} program for object detection \citep{ber1996}. \texttt{Galapagos-2} first runs \texttt{Source Extractor} in ``cold'' mode to deblend nearby galaxies, then in ``hot'' mode to detect faint galaxies. The ``cold'' and ``hot'' catalogues are then combined. Table \ref{tab:se} lists our ``cold'' and ``hot'' mode parameters for both sets of images. The Python package \texttt{statmorph}\footnote{\href{https://statmorph.readthedocs.io/en/latest/}{https://statmorph.readthedocs.io/en/latest/}} was used for calculating nonparametric morphology measurements as well as single S\'ersic fits \citep{rod2019}. The inputs to \texttt{statmorph} are the science image, the segmentation map created by \texttt{Source Extractor}, the PSF, and the gain. While \texttt{statmorph} can handle either single galaxy images or survey images, it cannot handle multiple images at once to make full use of multiwavelength data. The morphology measurements from \texttt{statmorph} include: concentration ($C$), asymmetry ($A$), and clumpiness/smoothness ($S$) \citep{ber2000,con2000,con2003}; the Gini coefficient ($G$) and the moment of light ($M_{20}$) \citep{abra2003,lotz2004}; multimode ($M$), intensity ($I$), and deviation ($D$) \citep{free2013}; and outer asymmetry ($A_O$) and shape asymmetry ($A_S$) \citep{wen2014,paw2016}, as well as various parameters such as radii (e.g., $r_{20}$, $r_{50}$, $r_{80}$, $r_{\textrm{half}}$, $r_{\textrm{petro}}$) and signal-to-noise per pixel. We also make use of the residual images output by \texttt{GalfitM}. 
Residual images have been shown to have the potential to highlight asymmetric or unusual structures not captured by the S\'ersic model \citep[e.g.,][]{man2019}. We run \texttt{statmorph} on all residual images to obtain residual morphology measurements. Since \texttt{statmorph} requires that images have positive flux, which was not true for all residual images, we add an offset of $+1$ to every pixel of all residual images. This addition does affect \texttt{statmorph}'s measurements, but will be consistent across all residual images, both mock CEERS and nearly noiseless. \section{Merger Identification} \label{sec:analysis} As described in \S \ref{sec:intro}, machine learning techniques are potentially more advantageous for high redshift merger identification compared to classical techniques, due to their ability to exploit complex multidimensional information. Here, we choose to explore the random forest technique \citep{ho1995,brei2001}, which we describe in \S \ref{sec:RFexp}. Prior to implementing random forests, we first prepare the morphology catalog that will be the features input to the forests (\S \ref{sec:catalog}) and define the merger and non-merger classes that will be the labels input to the forests (\S \ref{sec:mergerdef}). We also show an example of the performance of a classical method using our mock CEERS dataset, which further illustrates the need for machine learning merger identification (\S \ref{sec:gm20}). \subsection{Catalog Creation} \label{sec:catalog} The IllustrisTNG merger history catalog \citep{rod2015,nel2019} contains IllustrisTNG snapshot numbers for each galaxy, as well as snapshot numbers for the most recent and next merger events, for both major and minor mergers. We convert the snapshot numbers to ages of the universe in Gyr, then calculate the difference in time between the galaxy of interest and the last and next merger events. 
The resulting merger history catalog therefore contains the time since the last merger (both major and minor) and the time until the next (both major and minor). After running \texttt{Galapagos-2} and \texttt{statmorph}, we combine these catalogs with the merger history catalog using \texttt{match\_coordinates\_sky()} from the \texttt{astropy} Python package. The final master catalog contains 109,395 galaxies, although only $\sim$70,000 are detected by \texttt{Source Extractor} in the simulated CEERS images and subsequently have morphology measurements. The final catalog therefore contains columns for intrinsic information such as redshift and merger timescales, as well as the \texttt{Galapagos-2} and \texttt{statmorph} measurements. For our analysis, we restrict our mock CEERS dataset to have \texttt{statmorph} signal-to-noise per pixel (S/N$_{F115W}$) $> 3$, as well as \texttt{Flag}$_{\texttt{Galfit}}$ $=2.0$ which indicates successful \texttt{Galfit} measurements. We also restrict our dataset to only galaxies with $0.5\leq z \leq4.0$ to match the redshift range chosen in \cite{sny2019}. Figures \ref{fig:mvz} and \ref{fig:sfrm} illustrate some basic properties of galaxies in the full original IllustrisTNG dataset compared to the galaxies detected in our mock CEERS-like images. Figure \ref{fig:mvz} shows that the full redshift range extends from $z = 0.5$ up to 10, with only a few galaxies existing in the highest redshift bins. It also shows that the stellar masses in our $z < 4$ mock CEERS span a range from $\log(M_{\star} / M_{\odot}) \sim 5 - 12$, but there are far fewer low mass objects in the mock CEERS set than in the original IllustrisTNG set. All of the lowest mass galaxies (both original IllustrisTNG and mock CEERS) are in the lower redshift bins. Figure \ref{fig:sfrm} shows star formation rate as a function of stellar mass, divided into the redshift bins used in this work. 
The IllustrisTNG data (and mock CEERS data) follows the main sequence fits of \cite{whit2014} and \cite{sch2015}, with a notable lack of starbursts lying above the main sequence. \begin{table*} \centering \begin{tabular}{lccccc} \hline \hline Redshift Bin & Dataset & Number of Objects & Number of Mergers & Training Set Number & Test Set Number \\ & & & (major $+$ minor) & \\ \hline 0.5 - 1.0 & CEERS & 9450 & 276 & 6332 & 3119 \\ & EN Set 1 & 9553 & 264 & 6400 & 3153 \\ & EN Set 2 & 7588 & 203 & 5083 & 2505 \\ \hline 1.0 - 1.5 & CEERS & 9227 & 618 & 6182 & 3045 \\ & EN Set 1 & 10937 & 679 & 7327 & 3610 \\ & EN Set 2 & 7748 & 498 & 5191 & 2557 \\ \hline 1.5 - 2.0 & CEERS & 7009 & 827 & 4696 & 2313 \\ & EN Set 1 & 18808 & 3007 & 12601 & 6207 \\ & EN Set 2 & 5939 & 671 & 3979 & 1960 \\ \hline 2.0 - 2.5 & CEERS & 6144 & 1242 & 4116 & 2028 \\ & EN Set 1 & 9395 & 1922 & 6294 & 3101 \\ & EN Set 2 & 5349 & 1086 & 3583 & 1766 \\ \hline 2.5 - 3.0 & CEERS & 3809 & 1174 & 2552 & 1257 \\ & EN Set 1 & 5726 & 1785 & 3836 & 1890 \\ & EN Set 2 & 3317 & 1008 & 2222 & 1095 \\ \hline 3.0 - 3.5 & CEERS & 2199 & 729 & 1473 & 726 \\ & EN Set 1 & 3087 & 1128 & 2068 & 1019 \\ & EN Set 2 & 1920 & 628 & 1286 & 634 \\ \hline 3.5 - 4.0 & CEERS & 2553 & 923 & 1710 & 843 \\ & EN Set 1 & 3470 & 1401 & 2324 & 1146 \\ & EN Set 2 & 2262 & 830 & 1515 & 747 \\ \hline \end{tabular} \caption{Dataset sizes for each redshift bin. Major mergers are defined to have a mass ratio less than 4:1 and minor mergers are defined to have a mass ratio between 4:1 and 10:1. The merger timescale is $\pm 0.25$ Gyr.} \label{tab:data} \end{table*} For comparison with the effectively noiseless images, we make two slightly different datasets. The first dataset (EN Set 1) used \texttt{statmorph} S/N$_{F115W}$ $> 3$ and \texttt{Flag}$_{\texttt{Galfit}}$ $=2.0$ cuts based on the effectively noiseless data, which captures fainter objects not seen in the mock CEERS dataset. 
The second dataset (EN Set 2) used \texttt{statmorph} S/N$_{F115W}$ $> 3$ and \texttt{Flag}$_{\texttt{Galfit}}$ $=2.0$ cuts based on the mock CEERS data; however, the effectively noiseless morphology measurements were still used as inputs to the forests. This was done to directly compare the same objects in both the CEERS-like and effectively noiseless images. Table \ref{tab:data} lists the sizes of each dataset used in this work. Note that there are fewer objects in EN Set 2 than in the mock CEERS set. Since the CEERS-like images and the nearly noiseless images are different images with different noise properties, \texttt{Source Extractor} will not make the exact same detections in both, and may over-deblend or under-deblend objects in one set of images but not the other. Additionally, \texttt{Galapagos-2} and \texttt{statmorph} may flag an object with ``bad" measurements in one set but not the other. Therefore, we are unable to perfectly match galaxies in the nearly noiseless catalog to those in the mock CEERS catalog, and lose some galaxies due to the aforementioned issues. \subsection{Merger Definition} \label{sec:mergerdef} We create labels for the random forest algorithm using the time since and time until a major and minor merger. Following \cite{sny2019}, we combine major and minor mergers to increase our training set size. For a binary classification scheme, galaxies that had never experienced a merger or never will experience a merger (denoted as $-1.0$ in the final catalog) were labeled as Class 0 (``non-merger"). Also following \cite{sny2019}, we choose a timescale window of 500 Myr ($\pm250$ Myr) for our merger class definition since the lifetime of merger features will likely not be longer. Therefore, galaxies that experienced a merger more than 250 Myr ago and will experience a merger more than 250 Myr in the future are also assigned to Class 0. Galaxies that experienced a merger within 250 Myr, past or future, were assigned to Class 1 (``merger"). 
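The class definition above reduces to a simple rule on the two timescales. A minimal sketch of the labeling logic (the $-1.0$ sentinel follows the catalog convention described above; variable names are illustrative):

```python
NEVER = -1.0   # catalog sentinel: no merger in that direction
WINDOW = 0.25  # Gyr; half-width of the 500 Myr merger window

def merger_label(t_since, t_until):
    """Return 1 ("merger") if a merger occurred or will occur
    within +/-250 Myr, else 0 ("non-merger").
    t_since / t_until are in Gyr, or NEVER if no such event."""
    past = (t_since != NEVER) and (t_since <= WINDOW)
    future = (t_until != NEVER) and (t_until <= WINDOW)
    return 1 if (past or future) else 0

labels = [merger_label(0.1, NEVER),    # recent merger
          merger_label(NEVER, 0.2),    # imminent merger
          merger_label(0.8, 0.6),      # outside the window
          merger_label(NEVER, NEVER)]  # never merges
# labels -> [1, 1, 0, 0]
```
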
As a check, we shift the merger definition to include windows from 100 Myr to 500 Myr in the past or future (to include more pre- and post-mergers), but do not see a significant improvement in the performance of the forests. We also test using a three-class classification scheme with ``non-merger," ``past merger," and ``future merger," but the forests perform poorly in these trials, most likely due to low numbers in each merger class. Figure \ref{fig:mhist} shows the four merger timescales -- past major and future major (top panel) and past minor and future minor (bottom panel) -- for each galaxy in our mock CEERS set compared to the full IllustrisTNG dataset. The red shaded region shows that the selected $\pm 250$ Myr window spans a relatively narrow range of the full timescale distributions. \subsection{Performance of Classical Methods} \label{sec:gm20} We compare the merger classification performance of machine learning techniques with the performance of classical methods, such as the $G-M_{20}$ parameter space \citep{lotz2004, lotz2008}, in order to judge if machine learning provides any improvements. Figure \ref{fig:g_v_m20} shows the F277W observed (rest-frame optical) $G-M_{20}$ parameter space for objects in the mock CEERS $3.0 < z < 3.5$ redshift bin. The merger discriminating line is \begin{equation} \label{eq:gm20} G > -0.14 M_{20} + 0.33, \end{equation} as defined by \cite{lotz2008}. True mergers, according to the merger definition in \S \ref{sec:mergerdef}, occupy the same space as non-mergers. In this redshift bin, the fraction of correctly classified mergers, according to Equation \ref{eq:gm20}, is only $\sim 19\%$ since most true mergers lie below the merger discriminating line. This is to be expected since this method is very sensitive to merger stage, and is best at selecting mergers just after first passage \citep{Lotz2008b}. Of the predicted mergers (objects above the merger discriminating line), only $\sim 48\%$ are actually true mergers. 
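Applying Equation \ref{eq:gm20} is a one-line test; a sketch with illustrative $(G, M_{20})$ values (not measurements from our catalog):

```python
def gm20_merger_candidate(G, M20):
    """Lotz et al. (2008) G-M20 cut: flag a merger candidate
    when G > -0.14 * M20 + 0.33."""
    return G > -0.14 * M20 + 0.33

# A point above the discriminating line is flagged ...
above = gm20_merger_candidate(G=0.65, M20=-1.5)   # True
# ... while a point below it is not.
below = gm20_merger_candidate(G=0.45, M20=-1.8)   # False
```
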
If we choose the F356W filter for our observed filter, the results are the same. Across all redshift bins, the number of correctly classified mergers ranges from $\sim 19\% - 23\%$. The number of predicted mergers that are actually true mergers ranges from only $\sim 4\%$ at the lowest redshift bin to $\sim 50\%$ at the highest redshift bin, a consequence of the increasing number of mergers at higher redshifts. These results illustrate the poor performance of classical methods for identifying mergers at a range of stages at high redshift, and motivate the use of machine learning techniques for improving the level of completeness. \subsection{Random Forest Experiments} \label{sec:RFexp} The random forest (RF) algorithm is a supervised classification algorithm consisting of many decision trees \citep{ho1995,brei2001}. A single decision tree is a flowchart-like diagram where each split (or ``node") in the tree represents a decision made based on the input features. The ``terminal nodes" at the end of the tree represent the possible classifications. The algorithm uses an ensemble of decision trees to minimize overfitting from any one tree. We choose to use random forests due to their simplicity and because \cite{sny2019} demonstrated that random forests show some promise at high-z galaxy merger classification tasks. Table \ref{tab:data} shows our dataset sizes after cleaning the dataset and defining the merger class. We split the data into training and test sets, with a training fraction of 0.67, where the ratio of objects is preserved for each class. We use \texttt{BalancedRandomForestClassifier()} from Python's \texttt{imblearn} package, which is specifically designed for imbalanced data sets. It works by deliberately undersampling the majority class during training. 
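The undersampling step can be illustrated in isolation. This is a sketch of the idea only, not the \texttt{imblearn} implementation (which draws a balanced bootstrap sample for each tree):

```python
import random

def undersample_majority(X, y, seed=0):
    """Randomly drop class-0 (non-merger) samples until both
    classes are equally represented, mimicking the balancing
    idea used inside a balanced random forest."""
    rng = random.Random(seed)
    minority = [i for i, label in enumerate(y) if label == 1]
    majority = [i for i, label in enumerate(y) if label == 0]
    keep = minority + rng.sample(majority, len(minority))
    return [X[i] for i in keep], [y[i] for i in keep]

# 90 non-mergers and 10 mergers -> balanced 10 vs 10.
X = list(range(100))
y = [0] * 90 + [1] * 10
X_bal, y_bal = undersample_majority(X, y)
```
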
The morphology parameters we feed to the forests are the single S\'ersic index $n$, the two-component S\'ersic indices $n_{bulge}$ and $n_{disk}$, and the non-parametric $A, C, G, I, M_{20}, M, A_{O}, D$, and $S$, in all six filters. We also feed to the forests the non-parametric $A, C, G, I, M_{20}, M, A_{O}, D$, and $S$ as calculated from the residual images, in all filters. We run \texttt{keras-tuner} for hyperparameter optimization, which uses Bayesian optimization to search the parameter space and find the optimal combination of hyperparameters without having to test all possible combinations \citep{omal2019}. Generally, \texttt{keras-tuner} will find near-optimal hyperparameters in less time than other hyperparameter tuning algorithms such as \texttt{GridSearchCV}. The hyperparameters that we let vary are ``max$\_$samples" (0.9 - 0.99 with a step size of 0.01), ``max$\_$features" (1 - 15 with a step size of 1), ``max\_leaf\_nodes" (5 - 55 with a step size of 1), and ``n$\_$estimators" (1000 - 2000 with a step size of 50). We allow \texttt{keras-tuner} to search over the parameter space for 50 trials. For each trial, we provide \texttt{keras-tuner} with a cross-validation (CV) set (CV fraction was 0.2 of the training set) that preserves the ratio of objects for each class. We train seven separate forests for our seven different redshift bins. We explore many different sets of hyperparameters and allowed ranges, and find that generally the forests perform similarly regardless of fine-tuning the hyperparameters. Therefore, although further tuning is possible, we conclude that it is not necessary; the forests most likely will not perform significantly better than reported here. We categorize the output of the random forest into four classes: \begin{itemize} \item True Positives (TP): the number of true mergers correctly classified by the random forest. \item False Positives (FP): the number of non-mergers incorrectly classified as mergers. 
\item True Negatives (TN): the number of correctly classified non-mergers. \item False Negatives (FN): the number of true mergers incorrectly classified as non-mergers. \end{itemize} Therefore the number of RF-selected mergers is TP + FP, and the number of RF-selected non-mergers is TN + FN. The number of true mergers is TP + FN, and the number of true non-mergers is TN + FP. We can judge the performance of the random forests using several metrics: \begin{itemize} \item True Positive Rate (TPR) -- also known as \textit{recall} or \textit{completeness} -- is defined as \begin{equation} TPR = \textrm{recall} = \frac{TP}{TP + FN}. \end{equation} \item False Positive Rate (FPR) -- also known as \textit{fall-out} -- is defined as \begin{equation} FPR = \frac{FP}{FP + TN}. \end{equation} \item Positive Predictive Value (PPV) -- also known as \textit{precision} -- is defined as \begin{equation} PPV = \textrm{precision} = \frac{TP}{TP + FP}. \end{equation} \item F1 Score is the harmonic mean of precision (P) and recall (R), and is defined as \begin{equation} F_{1} = 2\frac{P \times R}{P + R} = \frac{TP}{TP + \frac{1}{2} (FP + FN)}. \end{equation} \end{itemize} Classifiers that perform well have both high precision and high recall (and therefore a high F1 score). Accuracy, defined as (TP + TN)/N where $N$ is the total number of objects, is a biased indicator of performance for imbalanced sets, so we do not consider it here. Figure \ref{fig:cm_3to_3p5} shows a confusion matrix for $3 < z \leq 3.5$ galaxies. This is an example from one of the best random forest trials. This shows that the forest correctly classified $60\%$ of the non-merger class and $63\%$ of the merger class. The confusion matrices for the other redshift bins all look similar to this one, where $58\% - 63\%$ of the non-merger class were correctly classified and $60\% - 64\%$ of the merger class were correctly classified (see the recall values in Figure \ref{fig:metrics_comp}). 
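These metrics are simple functions of the confusion-matrix counts; a minimal sketch with illustrative counts (not values from our forests):

```python
def rf_metrics(TP, FP, TN, FN):
    """Recall (TPR), fall-out (FPR), precision (PPV), and F1
    computed from raw confusion-matrix counts."""
    tpr = TP / (TP + FN)
    fpr = FP / (FP + TN)
    ppv = TP / (TP + FP)
    f1 = TP / (TP + 0.5 * (FP + FN))
    return tpr, fpr, ppv, f1

# Illustrative counts: 60 of 100 true mergers recovered, with
# 40 false alarms among 100 true non-mergers.
tpr, fpr, ppv, f1 = rf_metrics(TP=60, FP=40, TN=60, FN=40)
```
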
The top panel of Figure \ref{fig:roc_pr_curve} shows the corresponding receiver operating characteristic (ROC) curve for this redshift bin, which illustrates the performance of the random forest for different discrimination thresholds. A discrimination threshold is the probability cutoff (default $= 0.5$) used to assign the final classification. The curve of a perfect classifier would consist of two straight lines from (0,0) to (0,1) and from (0,1) to (1,1). The curve of a random classifier consists of a straight line from (0,0) to (1,1). The performance of classifiers can then be judged by how close they are to the upper left corner of the plot. This figure shows that our random forest does better than a random classifier. The bottom panel of Figure \ref{fig:roc_pr_curve} shows the corresponding precision-recall curve for this redshift bin, which in principle is more informative for imbalanced datasets than the ROC curve. For the precision-recall curve, a perfect classifier reaches the upper right-hand corner of the plot at (1,1). The curve of a random classifier is not fixed as it is for the ROC curve; instead, it is determined by the ratio of positives (mergers) to the total number of objects. Therefore a random classifier for a perfectly balanced dataset would lie at $y=0.5$. This plot, like the ROC curve, also shows that our forest performs better than random chance. However, these plots, especially the precision-recall curve, also show that our classifiers are far from perfect. The ROC curves and precision-recall curves for the other redshift bins look similar to those shown here. Figure \ref{fig:metrics_comp} shows how precision, recall, and F1 scores change across redshift for both mergers and non-mergers. For mergers, the classification metrics improve as redshift increases. For non-mergers, the classification metrics generally worsen as redshift increases. This is likely due to the test set becoming more balanced at higher redshifts. 
To test this, we artificially balanced the $z = 0.5 - 1.0$ test set and found that the merger precision score (and therefore F1 score) dramatically increased and the non-merger precision (and therefore F1 score) decreased such that they were more in line with the results of the later redshift bins. This means that the performance of the $z = 0.5 - 1.0$ forest can be improved with respect to the merger class by simply randomly removing non-mergers from the test set. This implies that the better performance of the forests of the later redshift bins is mostly due to the decreasing number of non-mergers, not because the forest was better trained. For the effectively noiseless images, the forests trained and tested on EN Set 1 generally performed worse than the mock CEERS forests. The train and test sets were larger, but this increase was mostly due to the inclusion of faint and probably ambiguous-looking galaxies that were cut from the mock CEERS set. We conclude that the difficulty of classifying these objects probably outweighed any gains from the ability to see fainter structures in the effectively noiseless images. The left panel of Figure \ref{fig:ensets} shows the confusion matrix for this set. The forests trained and tested on EN Set 2 generally performed slightly worse than the mock CEERS forests. In this case, any gains from the effectively noiseless images are probably outweighed by the smaller size of the training and test sets. The right panel of Figure \ref{fig:ensets} shows the confusion matrix for this set. \section{Discussion} \label{sec:discuss} \subsection{Feature Importance} Each forest calculates the importance of the features given to it. The more important a feature is, the more useful it is to the forest for determining the difference between mergers and non-mergers. Figure \ref{fig:feat_import} shows the top five most important features for each redshift bin. 
This figure shows that asymmetry features (e.g., $A$ and $A_{O}$) are most important for low redshift bins while bulge and clump features (e.g., $G$ and $S$) are more important for higher redshift bins. These most important features are calculated from the science images, not the residual images. There also appears to be a dependence on filter. The bluer F115W filter is more useful for low redshift bins, and the redder F444W filter is more useful for higher redshift bins. This seems to indicate that the forests are using the rest-frame optical features to make decisions, even though all filters were available to each forest. Figure \ref{fig:example_grid} shows examples of $3.0 < z < 3.5$ galaxies categorized into true positives (correctly classified mergers), false positives (incorrectly classified non-mergers), true negatives (correctly classified non-mergers), and false negatives (incorrectly classified mergers). The top left hand corner of each stamp shows the probability of the object being a merger as assigned by the random forest. The stamps are arranged in order of decreasing probability, so the horizontal orange line effectively represents the probability threshold between merger and non-merger classifications, which is the default 0.5. The top right hand corner of each stamp shows the F356W Gini statistic for each galaxy, which was the most important feature for the random forest. The bottom left hand corner of each stamp shows the merger timescale for each galaxy. The timescales include both major and minor mergers. A positive timescale indicates the time since a past merger, while a negative timescale indicates the time until a future merger. Galaxies can have both past and future mergers, so whichever timescale is smallest is shown here. 
Recall that the merger cutoff defined in \S \ref{sec:mergerdef} is $\pm250$ Myr, so true positives and false negatives all have a merger timescale $< 0.25$ Gyr, while false positives and true negatives all have a merger timescale $> 0.25$ Gyr. Finally, the segmentation map outlines are color-coded by merger type (major or minor). The distribution of merger probabilities ranges from about 0.3 to 0.7, while the distribution of the Gini statistic ranges from about 0.4 to 0.65. There is a slight trend where probability increases as Gini increases, which becomes increasingly clear when examining the full test set. This is expected since the F356W Gini statistic was the most important feature to the random forest for this redshift bin, but the correlation is not so strong that Gini can solely be used to determine merger status. There appears to be little to no trend of merger timescale with either Gini or with probability when looking at the full test set. Other interesting insights come from looking at the false negatives and false positives. Many of the false positives in Figure \ref{fig:example_grid} have segmentation maps that appear elongated, either because the segmentation map is contaminated with emission from background or neighboring galaxies, or because the galaxy has persisting remnants of signatures from a merger outside of the chosen time frame, which may have contributed to the ``merger" designation by the forest. The average past and future timescales of true negatives are $0.75 \pm 0.35$ Gyr and $0.66 \pm 0.26$ Gyr, respectively. The average past and future timescales of false positives are $0.66 \pm 0.32$ Gyr and $0.71 \pm 0.43$ Gyr, respectively. Since the timescale distribution of false positives generally matches that of the true negatives, within error, it does not appear that false positives are more likely to be closer in time to having a merger than other non-mergers. 
On the other hand, many of the false negatives appear relatively undisturbed visually, especially the ones with smaller merger probabilities, suggesting that, even though these are true mergers, those mergers have had a relatively minor impact on the morphology. The fractions of minor mergers among the true positives and false negatives are $35.1\%$ and $35.6\%$, respectively. This indicates that false negatives are no more likely to be minor mergers than true positives. \subsection{Thresholding} \label{sec:thresh} The probability threshold used to distinguish between mergers and non-mergers can be adjusted in an attempt to improve the performance of the random forest. There are a number of methods for selecting the optimal threshold based on the trade-off between the TPR and FPR, or precision and recall, including: \begin{itemize} \item Geometric Mean (G-Mean), defined by \begin{equation} \textrm{G-Mean} = \sqrt{TPR \times (1 - FPR)}. \end{equation} \item Youden's J Statistic \citep{you1950}, defined by \begin{equation} \textrm{J} = TPR - FPR. \end{equation} \item Matthews Correlation Coefficient (MCC) \citep{mat1975}, defined by \begin{multline} \textrm{MCC} = \\ \frac{TP \times TN - FP \times FN}{\sqrt{(TP + FP)(TP + FN)(TN + FP)(TN+FN)}}. \end{multline} \item Balance Point, defined by the point at which $TPR = 1 - FPR$. \item F1 Score -- defined in \S \ref{sec:RFexp} -- which is the harmonic mean between precision and recall. \end{itemize} For the $3.0 < z < 3.5$ redshift bin, G-mean, J, and MCC all return the same optimal threshold of 0.518. The balance point returns a threshold of 0.504. All four are very close to the default threshold of 0.5. Figure \ref{fig:roc_pr_curve} highlights the location of these two thresholds on the ROC curve, which are located where the curve of the test set is closest to the upper left hand corner of the plot. 
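The first three criteria can be evaluated at each candidate threshold and the maximizer selected; a sketch with made-up $(TPR, FPR)$ samples along an ROC curve (not our measured values):

```python
import math

def g_mean(tpr, fpr):
    """Geometric mean of sensitivity and specificity."""
    return math.sqrt(tpr * (1.0 - fpr))

def youden_j(tpr, fpr):
    """Youden's J statistic."""
    return tpr - fpr

def mcc(TP, FP, TN, FN):
    """Matthews correlation coefficient from raw counts."""
    num = TP * TN - FP * FN
    den = math.sqrt((TP + FP) * (TP + FN) * (TN + FP) * (TN + FN))
    return num / den if den else 0.0

# Pick the threshold maximizing Youden's J from illustrative
# (threshold, TPR, FPR) samples.
candidates = [(0.3, 0.90, 0.60),
              (0.5, 0.75, 0.30),
              (0.7, 0.40, 0.05)]
best_threshold = max(candidates, key=lambda c: youden_j(c[1], c[2]))[0]
```
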
The optimal threshold based on the F1 score was 0.470, which is highlighted in the precision-recall curve in Figure \ref{fig:roc_pr_curve}. Here, the threshold is located where the curve of the test set is closest to the upper right hand corner of the plot. Use of the G-mean/J/MCC threshold (Figure \ref{fig:threshs}, \textit{top}) or the balance point threshold (Figure \ref{fig:threshs}, \textit{middle}) improves the performance of the forest on the non-merger class (fraction correctly classified = 0.66 or 0.61, respectively, where before it was 0.60), at a cost of poorer performance on the merger class. Use of the F1 score threshold (Figure \ref{fig:threshs}, \textit{bottom}) drastically improves the performance on the merger class (fraction correctly classified = 0.79 where before it was 0.63), but at great cost to the non-merger class (fraction correctly classified is 0.44 where before it was 0.60). This shows that one could optimize the performance of the forest to generate a more complete sample of mergers, but that sample would be highly contaminated. None of the thresholds improve performance on both the merger and non-merger classes. \subsection{Merger Fraction and Merger Rate} Finally, we calculate the merger fraction and merger rate using both mergers selected by the random forest and true mergers based on our merger timescale window of 0.5 Gyr. First, we calculate the fraction of merging galaxies selected by the random forest as $f_{\mathrm{RF}} = N_{RF}/N$, that is, the total number of galaxies (in the test set) selected as mergers by the random forest divided by the total number of galaxies (in the test set) for a given redshift bin. Then we multiply this fraction by $PPV / TPR$ and $<M/N>$ \citep{sny2019}. $PPV / TPR$ corrects for the known incompleteness and purity of the classifier based on the training set. 
$<M/N>$ is the average number of merging events per true merger, and accounts for the fact that some true mergers experience more than one merger during the specified time frame of 0.5 Gyr. Therefore, the actual merger fraction for the RF-selected mergers is: \begin{equation} f_{\mathrm{merger}}(\mathrm{RF}) = \frac{N_{RF}}{N} \frac{PPV}{TPR} <\frac{M}{N}> . \end{equation} Here, $<M/N>$ was calculated using only the true positives in the test set. The merger fraction for the true merging galaxies (from the test set) based on our merger timescale definition is then \begin{equation} f_{\mathrm{merger}}(\mathrm{true}) = \frac{N_{true}}{N} <\frac{M}{N}>, \end{equation} since there is no need to correct for the performance of the random forest classifier. But our intrinsic merger definition also does not account for multiple mergers within our time frame. Here, $<M/N>$ was calculated using the true positives plus the false negatives in the test set. The left panel of Figure \ref{fig:merg_frac_rate} shows our uncorrected random forest merger fraction $f_{\mathrm{RF}}$, the corrected random forest merger fraction $f_{\mathrm{merger}}$(RF), and the true merger fraction $f_{\mathrm{merger}}$(true). This plot reveals that the uncorrected fraction $f_{\mathrm{RF}}$ is overestimated compared to the true merger fraction, but once corrected by $PPV / TPR$, the RF-selected merger fractions and the true merger fractions line up very well. This panel also shows the theoretical Illustris merger fraction \cite[derived from][]{rod2015} and the random forest merger fraction estimated from \cite{sny2019}. Our uncorrected merger fractions line up very closely with the uncorrected merger fractions from \cite{sny2019}. The shape and steep slope of our corrected fraction $f_{\mathrm{merger}}$(RF) and true fraction $f_{\mathrm{merger}}$(true) generally match the theoretical Illustris fraction. 
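As a worked example of these corrections (all numbers are illustrative, not values from our test sets):

```python
def corrected_merger_fraction(n_rf, n_total, ppv, tpr, m_over_n):
    """RF-selected merger fraction corrected for classifier
    purity and completeness (PPV/TPR) and for multiple merger
    events per merging galaxy (<M/N>)."""
    return (n_rf / n_total) * (ppv / tpr) * m_over_n

# Illustrative numbers: the RF flags 300 of 1000 galaxies,
# with precision 0.5, recall 0.6, and <M/N> = 1.1 merger
# events per merging galaxy.
f_rf_uncorr = 300 / 1000                                  # 0.30
f_rf_corr = corrected_merger_fraction(300, 1000, 0.5, 0.6, 1.1)
# Merger rate: fraction divided by the 0.5 Gyr window.
rate = f_rf_corr / 0.5
```
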
However, our $f_{\mathrm{merger}}$(RF) and $f_{\mathrm{merger}}$(true) are underestimated compared to theory and the corrected random forest fractions from \cite{sny2019}. To calculate merger rates, we divide the merger fractions by our merger window timescale of $0.5$ Gyr. The right panel of Figure \ref{fig:merg_frac_rate} shows our merger rates for the corrected RF-selected mergers and for true mergers. We again show the theoretical Illustris merger rates derived from \cite{rod2015} as shown in \cite{sny2019}. This panel shows that our merger rates are underestimated, at roughly 0.5 times the theoretical merger rate (\textit{grey dashed line}). Since both our random forest fractions and rates, and true fractions and rates are underestimated when compared to theory, the issue may lie in our calculation of $<M/N>$. From our merger history catalog, we calculate the timescales for the \textit{most recent} and the \textit{first future} major and minor mergers. However, this does not tell us if a galaxy has experienced, e.g., more than one past major merger or more than one future minor merger within the specified merger window. Therefore our values for $<M/N>$ are almost certainly underestimated, which may account for the discrepancy between our data and the theoretical Illustris curves. \subsection{Comparison to Previous Works} We compare our results to those of \cite{sny2019} and \cite{sharma2021arXiv}, both of which use random forests to classify high redshift merging galaxies. \cite{sny2019} classify mergers in a sample of $0.5 < z < 4.0$ Illustris-1 simulated HST galaxies, with a merger definition of $\pm 250$ Myr, and report a true positive rate of $\sim70\%$ across their redshift range. \cite{sharma2021arXiv} classify mergers in a sample of $0.5 < z < 3$ SPHGal simulated HST images, with a merger definition of $\pm 500$ Myr, and report a true positive rate of $0.95$ for the full data set (see their Figure 3). 
There are a few possible explanations for the better performance of these studies. Both \cite{sny2019} and \cite{sharma2021arXiv} focused on merger classification performance, whereas here we try to optimize performance on both mergers and non-mergers. \cite{sharma2021arXiv} specifically note that their merger fraction is overestimated due to false positives. \cite{sny2019} trained random forests on single snapshots, whereas here we use redshift ranges, so the forests from \cite{sny2019} may be overtrained on a per-snapshot basis. Finally, \cite{sny2019} and \cite{sharma2021arXiv} use Illustris-1 and SPHGal, respectively, to create their simulated images rather than IllustrisTNG as we do here, and there may be differences between the three that make it easier or harder to select mergers with this technique. \section{Summary and Conclusions} \label{sec:con} In this work, we investigate using random forests to classify merging galaxies in simulated CEERS-like and nearly noiseless images, which were constructed from IllustrisTNG and the Santa Cruz SAM. We use the morphology programs \texttt{Galapagos-2} and \texttt{statmorph} to calculate a number of morphology parameters which were then used as inputs to the random forests. We also use IllustrisTNG merger history catalogs to define intrinsic merger labels which were also given to the forests. We train seven random forests for seven different redshift bins, and find the following results: \begin{enumerate} \item The forests correctly classify $\sim$60\% of mock CEERS mergers and non-mergers across all redshift bins. The precision of the merger class increases with redshift while the precision of the non-merger class decreases with redshift. ROC curves and precision-recall curves indicate that the forests perform better than random classifiers. The random forests do not perform better when trained and tested on morphology parameters from nearly noiseless simulated images. 
\item Rest-frame asymmetry features tend to be most important for merger classifications at low redshift, while rest-frame bulge and clump features tend to be more important at higher redshifts. \item False positives tend to appear elongated with potential faint merger signatures, despite being no more likely than true negatives to fall close to the $\pm250$ Myr merger window. False negatives tend to appear undisturbed by their recent mergers. \item Selecting different probability thresholds results in improved performance on the merger class at the cost of worse performance on the non-merger class. \item After correcting for the incompleteness and purity of the forests, we recover the true merger fraction of the mock CEERS dataset very well (using the $\pm 250$ Myr merger definition). The shape and slope of our mock CEERS corrected merger fraction and merger rate, $f_{\mathrm{merger}}$(RF) and $\mathbb{R}_{\mathrm{merger}}$(RF), match the theoretical Illustris predictions. However, our $f_{\mathrm{merger}}$(RF) and $\mathbb{R}_{\mathrm{merger}}$(RF) are underestimated compared to the Illustris predictions. \end{enumerate} Given a sample of CEERS galaxies with unknown merger labels, the results of this work indicate that we could recover a reasonable merger fraction and merger rate. However, it would be difficult to disentangle specific true mergers from misclassified non-mergers. One area of improvement lies with the segmentation map. The morphology parameters described in this work all depend on how galaxies are identified and deblended in the segmentation map, so great care must be taken in how sources are detected and deblended. The impact of source detection on merger identification is an important topic for future exploration. Our findings suggest that we have reached the ceiling on how well random forests are able to identify mergers from these standard morphological parameters.
Further improvement is likely to be gained through training convolutional neural networks (CNNs) to identify mergers directly from the images, which will be the subject of a future paper. \begin{acknowledgments} Support for this work was provided by NASA through grants JWST-ERS-01345.015-A and HST-AR-15802.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. This research is based in part on observations made with the NASA/ESA Hubble Space Telescope obtained from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. The authors acknowledge Research Computing at the Rochester Institute of Technology for providing computational resources and support that have contributed to the research results reported in this publication. \href{https://doi.org/10.34788/0S3G-QD15}{https://doi.org/10.34788/0S3G-QD15} The authors acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing HPC resources that have contributed to the research results reported within this paper. \href{http://www.tacc.utexas.edu}{http://www.tacc.utexas.edu} \end{acknowledgments} \software{Source Extractor \citep{ber1996}, Galapagos-2 \citep{hau2013}, statmorph \citep{rod2019}} \bibliography{rose_paper.bib}{} \bibliographystyle{aasjournal}
Title: Acetaldehyde binding energies: a coupled experimental and theoretical study
Abstract: Acetaldehyde is one of the most common and abundant gaseous interstellar complex organic molecules, found in cold and hot regions of the molecular interstellar medium. Its presence in the gas-phase depends on the chemical formation and destruction routes, and its binding energy (BE) governs whether acetaldehyde remains frozen onto the interstellar dust grains or not. In this work, we report a combined study of the acetaldehyde BE obtained via laboratory TPD (Temperature Programmed Desorption) experiments and theoretical quantum chemical computations. BEs have been measured and computed for pure acetaldehyde ice and for acetaldehyde mixed with both polycrystalline and amorphous water ice. Both calculations and experiments find a BE distribution on amorphous solid water that covers the 4000--6000 K range, when a pre-exponential factor of $1.1\times 10^{18}\,\rm s^{-1}$ is used for the interpretation of the experiments. We discuss in detail the importance of using a consistent couple of BE and pre-exponential factor values when comparing experiments and computations, as well as when introducing them in astrochemical models. Based on the comparison of the acetaldehyde BEs measured and computed in the present work with those of other species, we predict that acetaldehyde is less volatile than formaldehyde, but much more so than water, methanol, ethanol, and formamide. We discuss the astrochemical implications of our findings and how recent astronomical high spatial resolution observations show a chemical differentiation involving acetaldehyde, which can easily be explained by the different BEs of the observed molecules.
https://export.arxiv.org/pdf/2208.08774
\label{firstpage} \pagerange{\pageref{firstpage}--\pageref{lastpage}} \begin{keywords} Astrochemistry --- solid state: volatile --- molecular data --- molecular processes --- ISM: abundances \end{keywords} \section{Introduction} Acetaldehyde (CH$_3$CHO) was one of the first polyatomic molecules discovered in the interstellar medium (ISM) \citep{Gottlieb1973,Fourikis1974}. It is a rather common interstellar molecule, found in both warm and cold environments. For example, it is abundant in hot cores \citep[e.g.][]{Blake1987,Csengeri2019,Law2021} and hot corinos \citep[e.g.][]{Cazaux2003,Manigand2020,Yang2021,Chahine2022}, protostellar molecular shocks \citep[e.g.][]{Lefloch2017,DeSimone2020_acet,codella2020} and young disks \citep[e.g.][]{Codella2018,Lee2019}. Contrary to theoretical expectations, acetaldehyde is also present in cold prestellar cores \citep[e.g.][]{Bacmann2012,vastel2014,Scibelli2020}. Finally, acetaldehyde is detected in comets \citep[][]{Crovisier2004,LeRoy2015,Biver2021}, where the measured relative abundance with respect to methanol is similar to that found in hot corinos \citep[e.g.][]{Bianchi2019,Drozdovskaya2019}. Several experimental and theoretical studies have focused on the acetaldehyde formation routes, both in the gas-phase and on the grain surfaces. \cite{Charnley2004} first proposed that acetaldehyde is synthesised in the gas-phase via the reaction of the ethyl radical (CH$_3$CH$_2$) with atomic oxygen (O). Subsequently, \cite{Skouteris2018} found that acetaldehyde could be a daughter of ethanol (CH$_3$CH$_2$OH), while \cite{Vasyunin2017} proposed a synthesis from methanol (CH$_3$OH) reacting with CH. Finally, publicly available astrochemical reaction networks \citep[KIDA and UMIST:][respectively]{Wakelam2012,mcelroy2013} also report an ionic route via protonated acetaldehyde (CH$_3$CHOH$^+$), which is in turn formed from dimethyl ether (CH$_3$OCH$_3$) reacting with H$^+$.
\cite{vazart2020} reviewed all these reactions from the theoretical and experimental point of view and concluded that only the first two routes are viable (i.e. those involving CH$_3$CH$_2$ and CH$_3$CH$_2$OH, respectively), while the last two (i.e. those involving CH$_3$OH and CH$_3$CHOH$^+$) are inefficient under ISM conditions. Finally, the observed correlation between the derived abundances of acetaldehyde and ethanol in warm and hot sources favors acetaldehyde being the ethanol daughter \citep{vazart2020}. For the grain-surface formation routes, the situation is more debated. Since the end of the last millennium, experiments have found that acetaldehyde is formed in UV-illuminated ices consisting of water, CO, methanol and methane \citep[e.g.][]{Hudson1997,Bennett2005,Oberg2010,MartinDomenech2020} or other components \citep[e.g.][]{Chuang2021}. Triggered by these experiments, \cite{Garrod2006} proposed the formation of acetaldehyde from the combination of the radicals HCO and CH$_3$ on the dust grain icy surfaces, when the grain temperature increases enough to make the two radicals mobile. However, a first study by \cite{Enrique-Romero2016} suggested that the reaction of the two radicals on an amorphous water ice surface does not lead to the formation of acetaldehyde but rather to carbon monoxide (CO) and methane (CH$_4$). Subsequent and more accurate studies confirmed that acetaldehyde formation on amorphous water surfaces may or may not occur and is always in competition with the formation of CO + CH$_4$ \citep[][]{Rimola2018,Enrique-Romero2019,Enrique-Romero2020}. Similar conclusions were reached when considering the reaction on CO-rich ices \citep{Lamberts2019}.
Finally, the most recent theoretical study by \cite{Enrique-Romero2021} found that the efficiency of acetaldehyde formation by the coupling of HCO and CH$_3$ is a strong function of the grain temperature and the diffusion energies of the two radicals, with the possibility of being practically zero if their mobility is relatively large (i.e., a diffusion/BE ratio less than 0.3). In agreement with these theoretical predictions, a very recent and sophisticated study by \cite{Gutierrez-Quintanilla2021} found no formation of acetaldehyde on an amorphous water ice enriched with HCO and CH$_3$. In their experiment, \cite{Gutierrez-Quintanilla2021} trapped radicals, formed via photolysis of methanol ice, in an argon matrix at 14 K and identified them via Electron Paramagnetic Resonance (EPR) spectroscopy. Once liberated from the matrix, the various radicals combined giving rise to several species, but not acetaldehyde. More recently, non-energetic processes on the grain surfaces have been found to possibly produce acetaldehyde in CO-rich ices \citep[e.g.][]{Fedoseev2022}. Adding confusion to this situation, \cite{Hudson2020} noticed that the IR bands on which the identification (frequencies) and quantification (band strengths) of acetaldehyde in the various experiments relied were often incorrect. This work also emphasized that it is almost impossible for astronomical observations to identify frozen acetaldehyde. Whatever the formation mechanism, either in the gas-phase or on the grain surfaces, acetaldehyde freezes onto the cold dust grain mantles and is removed from them only when the dust temperature reaches the acetaldehyde sublimation temperature, which is governed by its binding energy (BE; sometimes also called desorption energy) to the water ice, and its pre-exponential factor (see \S~\ref{sec:discussion}).
Please note that the sublimation temperature can be reached due to both thermal and non-thermal processes, such as the desorption caused by the interaction of the cosmic rays, which permeate the Milky Way, with the dust grains. In the latter case, for example, only a fraction of the grain is heated up and acetaldehyde would be desorbed if the temperature reached is higher than its sublimation temperature. Therefore, in practice, whether acetaldehyde is in the gas-phase (where it can be observed via its rotational lines) or frozen onto the grain mantles (where it has not been detected so far) is completely governed by its BE. Moreover, the BE always enters in an exponential form, so that its accurate estimation is essential to properly assess whether and how much acetaldehyde would be gaseous. So far, there are very few specific studies of the acetaldehyde BE. To the best of our knowledge, \cite{Wakelam2017BE} is the first theoretical study, but it only considered one water molecule to simulate the interstellar water ice. These authors found a BE equal to $\sim$5400 K. Recently, \cite{Corazzi2021} reported an experimental study of the BE on olivine surfaces covered by an ice of pure acetaldehyde or mixed with water. They found that the acetaldehyde BE on a mixture of iced water-acetaldehyde is 3079 K, substantially different from Wakelam et al.'s estimate. In a very recent combined theoretical and experimental study, \cite{Molpeceres2022} obtained, for the desorption of acetaldehyde on non-porous amorphous solid water surfaces, values of 3624 and 3774 K, respectively. These values are relatively similar to Corazzi et al.'s value, but still far from Wakelam et al.'s. There is, therefore, a need to clarify the origin of this difference as well as to improve both the experimental and theoretical estimates.
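To make the exponential sensitivity concrete, the following minimal Python sketch (not from the original work) compares the first-order desorption residence times implied by the two discrepant BE values above, assuming the pre-exponential factor adopted later in this paper and a hypothetical dust temperature of 70 K:

```python
import math

A = 1.1e18     # pre-exponential factor (s^-1), the value adopted in this work
T_DUST = 70.0  # hypothetical dust grain temperature (K)

def residence_time(be, temp):
    """Residence time 1/k for first-order desorption, k = A exp(-BE/T).

    Both BE and T are expressed in kelvin, as in the text.
    """
    return 1.0 / (A * math.exp(-be / temp))

t_corazzi = residence_time(3079.0, T_DUST)  # BE from Corazzi et al.
t_wakelam = residence_time(5400.0, T_DUST)  # BE from Wakelam et al.

# The ~2300 K difference in BE translates into roughly 14 orders of
# magnitude in residence time at this temperature.
print(f"{t_corazzi:.1e} s vs {t_wakelam:.1e} s")
```

At 70 K the lower BE corresponds to a residence time of seconds, the higher one to a time far exceeding molecular-cloud lifetimes, which is precisely why an accurate BE is essential.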
In this work, we combine state-of-the-art experimental (temperature programmed desorption, TPD) and theoretical (quantum chemical computations) techniques with the aim of providing the most accurate estimate possible of the acetaldehyde BE on water ice, simulating the ISM conditions as closely as possible. The article is organised as follows. Sec. \ref{sec:experiments} reports the experiments, Sec. \ref{sec:calculations} the theoretical computations and Sec. \ref{sec:discussion} discusses the results and the comparison of the two methods, as well as the astrophysical implications. \section{Laboratory experiments}\label{sec:experiments} The experiments were performed with the FORMOLISM (FORmation of MOLecules in the InterStellar Medium) set-up. The apparatus is designed to study the reactivity of atoms and molecules on surfaces of astrophysical interest, under temperature and pressure conditions similar to those in the ISM. The practicalities of the experimental setup are described here, but more details are given in previous works \citep{amiaud2007, Congiu2012}. \subsection{Methods} FORMOLISM is composed of an ultra-high vacuum (UHV) stainless steel chamber with a pressure of a few 10$^{-11}$ mbar. The sample holder is located in the center of the main chamber and is thermally connected to a closed-cycle helium cryostat. The temperature of the sample is measured in the range of 10--800 K by a calibrated titanium diode connected to the sample holder. The sample holder consists of a 1 cm diameter copper block (18 mm long), which is covered with a highly oriented pyrolytic graphite (HOPG, ZYA-grade) sample. HOPG is a model of an ordered carbonaceous material, with carbon atoms in a hexagonal lattice, mimicking some aspects of interstellar dust grain analogues.
The HOPG substrate (10 mm diameter, 2 mm thickness) was first dried in an oven at about 100°C for two hours, and then cleaved several times in air using the ``Scotch tape'' method at room temperature to yield a surface with limited defects and step edges \citep{chaabouni2020}. The HOPG was glued directly onto the copper finger and dried in ambient conditions for about an hour before inserting it in the chamber and closing the set-up. Once high vacuum (<10$^{-7}$ mbar) is reached, which takes a few minutes, the sample is dried under vacuum for 3 hours at 350 K. Then the HOPG sample and the whole system are baked out at 100°C for a few days in order to remove any adsorbed contaminants and reach the UHV base pressure. Before starting any experiment, the system is slowly heated up to 720 K to maximise the degassing rate and ensure a very clean sample surface. FORMOLISM is equipped with a quadrupole mass spectrometer (QMS). It allows both the detection of species desorbing from the sample during a TPD (temperature programmed desorption) and, when placed in front of the beam-line, the characterization and calibration of the molecular beams. The CH$_3$CHO (Sigma Aldrich, purity higher than 99.5 \%) molecular beam is prepared in a triply differentially pumped beam-line aimed at the sample holder. It is composed of three vacuum chambers connected by tight diaphragms 3 mm in diameter. The pressure injected in the gas line is about 1.4 mbar. In the expansion chamber (first stage) the pressure is in the 10$^{-5}$ mbar range, while in the main chamber the rise of the pressure is not measurable (<10$^{-11}$ mbar). The CH$_3$CHO on HOPG experiments have been used as reference and preliminary studies before investigating the desorption behaviour of the molecule on two different D$_2$O ice substrates formed on the sample.
In this study, CH$_3$CHO molecules were deposited on HOPG using a single beam-line oriented at 57° relative to the surface of the sample, while D$_2$O ice films were deposited on HOPG using a separate channel and a micro-capillary array doser moved close to the sample holder. This allows the ice deposition rate to be controlled and avoids raising the pressure in the whole chamber. The two different D$_2$O ices are, respectively, compact non-porous amorphous deuterated water (ASW-c) and (poly)crystalline deuterated ice (PCI). ASW-c is formed when the sample is held at 110 K, while PCI layers have been formed by depositing D$_2$O at 110 K, heating to 150 K with a ramp rate of 5 K/min and waiting a few minutes to ensure that crystallization has occurred, as detected by the change in the desorption rate \citep{Speedy1996}. For both ices, about 30 monolayers (ML) have been deposited on HOPG while the pressure in the main chamber never exceeded $5 \times 10^{-8}$ mbar. After the CH$_3$CHO deposition phase at 45 K, we used the TPD technique, warming up the sample and detecting the species desorbing from the surface through mass spectrometry. TPDs were obtained from 45 to 220 K for the CH$_3$CHO on HOPG experiments and only up to 140 K for the CH$_3$CHO on water ice ones, in order to avoid D$_2$O desorption above 150 K. In all cases, a linear heating rate of 12 K/min was used to heat up the sample. Because of the poor thermal contact between the sample holder and the copper block, the temperatures registered during the TPD were re-calibrated by depositing different gases of known BEs (O$_2$, Kr, D$_2$O). The accuracy is about 1 K for temperatures below 125 K and 0.2 K for those above. The actual ramps obtained after calibration are taken into account in the simulations and analyses.
With the QMS, the same molecule can be detected at different m/z values, corresponding to the fragmentation of the parent molecule after electron impact (here at 30 eV). The main fragments under study here are m/z=44 (molecular mass of CH$_3$CHO$^+$) and m/z=29 (HCO$^+$, the main fragment of acetaldehyde). The desorption and degree of crystallization of the D$_2$O ice films have been followed through the signal at m/z=20. \subsection{Results}\label{subsec:exp-results} Experiments on HOPG have been used mainly as a reference, in order to understand the behavior of acetaldehyde adsorption and the deposition time needed to form a monolayer on the substrate. Seven measurements corresponding to increasing deposition times (1, 1.5, 3, 4, 5, 6, 12 minutes) have been carried out on HOPG to study the evolution of the TPDs from a sub-monolayer regime to a multilayer one. Fig. \ref{fig:Famille TPD CH3CHO SUR HOPG} shows only four TPD curves, belonging to four different deposition times using the same first-stage pressure to introduce a constant flux of CH$_3$CHO into the system. There are two main features in the graph: a low temperature peak between 105--110 K, and a second peak centered around 117.5 K. At low deposition times, only the peak at higher temperature appears (black and red curves corresponding to 1.5 and 3 minutes). It initially increases in intensity with coverage but saturates for longer depositions. After 6 minutes of deposition, the low temperature peak clearly appears. The TPD after 12 min deposition shows both peaks equally intense, although the low temperature peak is slightly broader. The high temperature peak is attributed to the sub-monolayer regime and is populated first; when the monolayer is saturated and the multilayer regime starts, the second peak at lower temperature appears. From the TPD experiments on HOPG, a monolayer of molecules on the substrate is believed to form after ca. 4 minutes of deposition.
We define this time to calibrate the rest of the analyses. As HOPG and ASW-c have about the same surface density of sites, the same deposition time is used to characterise the coverage on ice, and to derive the BEs of CH$_3$CHO as a function of the coverage. In order to derive the BEs on HOPG and on the two different ices, the three TPDs for the 4 min depositions on HOPG, ASW-c and PCI have been selected. By proceeding this way, the interaction between a monolayer of deposited CH$_3$CHO and each substrate can be studied. Fig. \ref{fig:Famille TPD CH3CHO SUR PCI} shows the results for one of them, i.e., the experiment of 4 min deposition of CH$_3$CHO on PCI, and the simulation obtained by fitting the data. The procedure is detailed in \cite{Chaabouni2018}. The calculation uses eleven independent TPDs, starting from 4500 K, with an energy gap of 150 K between each TPD (i.e. 4500, 4650, 4800 K...). Each individual TPD is calculated using simple Arrhenius kinetics, via the Polanyi-Wigner equation: \begin{equation} r(T) = -\frac{dN}{dt} = A N^n \exp{\left(-\frac{E_{des}}{T} \right)} \label{eq:eq1} \end{equation} where $r(T)$ is the desorption rate (in ML s$^{-1}$), $N$ is the number density of molecules adsorbed on the surface (ML), and $n$ is the order of the desorption, equal to 1 in our case. $A$ is the pre-exponential factor (s$^{-1}$), $T$ is the temperature of the surface (in K), and $E_{des}$ is the activation energy for desorption (in K). If there is no reorganisation of the surface\footnote{This is 100\% true on graphite, but it is an assumption/approximation in the case of acetaldehyde on water.}, $E_{des}$ is equal to the BE. First order desorption TPD profiles are independent of the surface population, so that each independent BE can be weighted, and finally a distribution of BEs can be found. We have set the pre-exponential factor at $1.1 \times 10^{18}$ s$^{-1}$ (see \S~\ref{sec:discussion}).
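As an illustrative sketch (not the authors' fitting code), the following Python snippet integrates Eq. \ref{eq:eq1} for a single desorption energy along a linear temperature ramp, using the pre-exponential factor and heating rate quoted in the text. For one of the basis energies (4800 K) the resulting single-energy TPD peaks at ${\sim}109$ K, consistent with the temperature range observed experimentally:

```python
import math

# Single-energy TPD sketch: integrate the first-order Polanyi-Wigner
# rate r = A * N * exp(-E_des/T) along a linear ramp T = T0 + beta*t.
A = 1.1e18          # pre-exponential factor (s^-1), as adopted in the text
BETA = 12.0 / 60.0  # heating rate: 12 K/min expressed in K/s

def tpd_curve(e_des, t_start=80.0, t_end=140.0, d_temp=0.001):
    """Return (temperature, rate) lists for an initial coverage of 1 ML."""
    n_ml = 1.0
    dt = d_temp / BETA  # time step corresponding to a d_temp step in K
    temps, rates = [], []
    temp = t_start
    while temp < t_end and n_ml > 1e-12:
        rate = A * n_ml * math.exp(-e_des / temp)  # ML s^-1, n = 1
        temps.append(temp)
        rates.append(rate)
        n_ml = max(n_ml - rate * dt, 0.0)  # forward-Euler depletion
        temp += d_temp
    return temps, rates

temps, rates = tpd_curve(4800.0)         # one BE from the 11-energy basis
t_peak = temps[rates.index(max(rates))]  # desorption peak temperature (K)
print(round(t_peak, 1))                  # ~109 K
```

The actual analysis in the text superposes eleven such single-energy curves with weighted populations to fit the measured desorption flux.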
The arbitrary choice of the binning of the energy distribution (a basis of eleven BEs) has been optimized with the aim of obtaining a good fit to all three experiments under analysis. Fig. \ref{fig:Famille TPD CH3CHO SUR PCI} gives a qualitative idea of the simulation for the case of the PCI substrate. The best values found for the eleven TPDs, i.e. the energy distribution set for each substrate, are reported in Table \ref{tab:my-table}. The values are expressed as a function of the population, in percentage, having that specific BE. We estimate the accuracy of the method to be around a few per cent, mostly due to the signal-to-noise ratio and the presence of a small background noise. \begin{table} \caption{Desorption energy values of the eleven simulated TPDs obtained for three sets of 4 min CH$_3$CHO depositions: on HOPG, ASW-c and PCI (pre-exponential factor: $1.1 \times 10^{18}$ s$^{-1}$).} \label{tab:my-table} \resizebox{\hsize}{!}{% \begin{tabular}{l|l|l|l|} \cline{2-4} & HOPG & ASW-c & PCI \\ \hline \multicolumn{1}{|l|}{E$_{des}$ (K)} & Population (\%) & Population (\%) & Population (\%) \\ \hline \multicolumn{1}{|l|}{4500} & 0 & 0 & < 4 \\ \hline \multicolumn{1}{|l|}{4650} & 10 & 16 & 28 \\ \hline \multicolumn{1}{|l|}{4800} & 0 & 32 & 27 \\ \hline \multicolumn{1}{|l|}{4950} & 0 & 10 & 12 \\ \hline \multicolumn{1}{|l|}{5100} & 34 & 12 & 12 \\ \hline \multicolumn{1}{|l|}{5250} & 66 & 7 & 9 \\ \hline \multicolumn{1}{|l|}{5400} & 0 & 8 & 6 \\ \hline \multicolumn{1}{|l|}{5550} & 0 & 9 & 6 \\ \hline \multicolumn{1}{|l|}{5700} & < 4 & 6 & 8 \\ \hline \multicolumn{1}{|l|}{5850} & < 4 & 7 & < 4 \\ \hline \multicolumn{1}{|l|}{6000} & < 4 & < 4 & < 4 \\ \hline \end{tabular}% } \end{table} The three energy distributions are displayed in Fig. \ref{fig:Energy distribution of 4 min TPD for HOPG, ASW and PCI}. We omit populations below 4 per cent. The main differences due to the different substrates can be observed.
For HOPG, there is not an actual distribution; rather, there are essentially only two main energy values, which are associated with the sub-monolayer regime (at 5100--5250 K). As can be noticed from the 10\% population at 4650 K, for the 4 min depositions the multilayer regime had already slightly started. This is in agreement with the accuracy of the determination of the flux, which is usually estimated at around 15$\%$. The distributions of BEs of CH$_3$CHO on the ices present different characteristics, a signature of the more disordered nature of water ice (even in the PCI case). There is a larger distribution of population covering almost all the possible energy values proposed by the simulation. The ASW-c substrate presents the largest distribution of BEs. The energy distribution histogram associated with the desorption of CH$_3$CHO on PCI presents, instead, a distribution more peaked towards lower energy values (mainly below 4900 K). This is due to the reduced number of combinations of water molecules on the surface. The disorder here is caused by the possibility of having an O or an H in the vicinity of the adsorption sites, even if the collective structure is cubic. On the contrary, ASW-c has a more disordered nature and, accordingly, the adsorption sites may have a more variable number of water molecules strongly interacting with CH$_3$CHO (see below). Finally, we calculate the weighted average for each case to estimate the difference in BEs between the different substrates. We obtain the following results: 5150 K for HOPG (range: 4650--5250 K), 5080 K for ASW-c (range: 4650--5850 K) and 4990 K for PCI (range: 4650--5700 K). This corresponds to what one would expect by looking at the trend in the desorption profiles.
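The weighted averages quoted above follow directly from the populations in Table \ref{tab:my-table}. As a quick check (a minimal Python sketch, using the ASW-c column and omitting the entries listed as < 4 per cent):

```python
# Weighted-average BE for the ASW-c substrate, using the 150 K energy
# basis and the population percentages reproduced from Table 1.
energies = [4650, 4800, 4950, 5100, 5250, 5400, 5550, 5700, 5850]
pops_asw = [16, 32, 10, 12, 7, 8, 9, 6, 7]  # per cent

be_mean = sum(e * p for e, p in zip(energies, pops_asw)) / sum(pops_asw)
print(round(be_mean))  # close to the 5080 K quoted in the text
```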
\section{Quantum chemistry calculations}\label{sec:calculations} Simulations of the adsorption of an acetaldehyde molecule on crystalline and amorphous ice models, as well as on a pure acetaldehyde surface, have been carried out in order to i) gain insight into the adsorption process at the atomic level, and ii) calculate the BEs of the species on the ices for comparison with the TPD experiments. \subsection{Methods} \subsubsection{Computational details} All the calculations were performed with the ab initio CRYSTAL17 code \citep{dovesi2018quantum}. The code can simulate any kind of system, from non-periodic molecules to fully periodic 3D crystalline systems, employing atom-centered Gaussian basis sets for the description of the electronic structure. In this work, we carried out DFT-based static optimizations using the BFGS optimization algorithm, relaxing both the cell parameters and the atomic positions. These calculations were performed with the PBEsol0-3c functional \citep{PBE0sol-3c}, which makes use of an Ahlrichs' polarised valence single-zeta basis set in order to reduce the computational cost. Then, on the PBEsol0-3c optimized geometries, single-point energy calculations were carried out with the B3LYP functional \citep{Becke1993, LYP88} augmented with Grimme's D3(BJ) correction to account for dispersive forces \citep{Grimme2010, Grimme2011}, with the aim of refining the BEs at a higher level of theory. These refinement calculations have been performed using an Ahlrichs' VTZ basis set augmented with a double set of polarization functions. For the sake of accuracy, the calculations carried out with the B3LYP-D3(BJ) method have been corrected for the basis set superposition error (BSSE), adopting the \textit{a posteriori} counterpoise correction. Vibrational harmonic frequencies were calculated at the PBEsol0-3c level using a finite differences method. A partial Hessian approach was used to reduce the computational cost of the calculations.
Thus, the vibrational frequencies were calculated on the optimized geometries only for a fragment of the entire system, which included the acetaldehyde molecule and the closest water molecules interacting with it. From these vibrational frequency calculations, the zero-point energy (ZPE) correction terms were included in the calculation of the BEs. \subsubsection{Water ice and pure acetaldehyde periodic structures} The crystalline and amorphous water ice surface models used in this article have already been described in \cite{Ferrero2020} and are reported in Fig. \ref{fig:ice_surfaces}. Briefly, the crystalline slab model derives from the (010) surface of the proton-ordered P-ice. For this surface, two different unit cells have been used, the 1x1 and the 2x1 (Fig. \ref{fig:ice_surfaces}A and B, respectively), thereby covering both high and low coverage regimes. The amorphous water ice surface was constructed by joining different water clusters from \cite{Shimonishi_2018}; upon imposing periodicity and optimising the structure, the resulting surface model presents a cavity (Fig. \ref{fig:ice_surfaces}C). Since the crystalline structure of pure acetaldehyde is not available yet, in order to simulate a pure acetaldehyde slab we resorted to a crystal structure prediction algorithm. First, the acetaldehyde molecule was optimized in the gas phase with the B3LYP-D3(BJ) method combined with the optimized def2-SVP basis set, using the Gaussian 09 \citep{gaussian0920091} program.
This gas-phase optimized structure was used as input to derive the pure acetaldehyde periodic solid-state structure, which was achieved by using the DFTB$+$ programs (version 20.2.1) \citep{hourahine2020dftb+}, including Self Consistent Charge \citep{elstner1998self} and the D4 dispersion model by Grimme \citep{caldeweyher2019generally, caldeweyher2017extension}, and VASP 5.4.4 \citep{kresse1993ab, kresse1994ab, kresse1996efficiency, kresse1996efficient} using PAW PBE pseudopotentials \citep{kresse1999ultrasoft}. Solid-state structure prediction was performed using the USPEX (version 10.4.1) evolutionary algorithm \citep{lyakhov2013new, oganov2006crystal, oganov2011evolutionary}, with the embedded topology crystal structure generator \citep{bushlanov2019topology}, employing the DFTB$+$ code for geometry optimizations. The first generation of crystal structures was randomly generated, while subsequent generations were obtained by applying the genetic algorithm embedded in USPEX coupled with different variation operators, in particular rot-mutation and lattice mutation. The search space was limited to 1 to 4 molecules per cell. Calculations proceeded for 23 generations and 570 structures, from which USPEX identified the best structure, which contains 2 molecules per cell and has a final volume of 118.77 \AA$^3$. Since periodic DFTB$+$ calculations tend to excessively shrink the unit cell, subsequent USPEX calculations, this time employing VASP, were performed using the best structures found by DFTB$+$ as initial seeds. This proceeded for 28 generations and 723 structures, resulting in the final optimized structure characterized by $a = 4.960$ \AA, $b = 5.655$ \AA, $c = 4.536$ \AA, and $\alpha = 89.55^{\circ}$, $\beta = 89.70^{\circ}$, $\gamma = 127.17^{\circ}$ (with a final volume of 127.17 \AA$^3$). The resulting bulk structure has been optimized at the PBEsol0-3c level and is depicted in Fig. \ref{fig:acetaldeide}A.
Once we obtained the bulk, we cut it along the (010) direction to obtain a 2D periodic slab model, which does not possess a large dipole along the non-periodic direction (\textbf{z} axis; Fig. \ref{fig:acetaldeide}B and C). \subsection{Results}\label{subsec:comp-results} \subsubsection{Adsorption of acetaldehyde on ice structures} We first simulated the acetaldehyde adsorption process by placing the molecule on the 1x1 crystalline surface, where a single hydrogen bond (H-bond) between a dangling hydrogen (dH) of the ice surface and the O atom of acetaldehyde is established. The size of the 1x1 unit cell is very small (it only contains one dH) and accordingly the adsorption of acetaldehyde results in a structure resembling a monolayer coverage situation (see panel A of Fig. \ref{fig:adsorption}), in which lateral interactions between acetaldehyde molecules of adjacent unit cells take place due to the periodic boundary conditions. By using the 2x1 unit cell, such lateral interactions are removed, giving rise to a low coverage regime, in which a single acetaldehyde is adsorbed on the surface (see panel B of Fig. \ref{fig:adsorption}). The obtained ZPE-corrected binding energies, BE(0), are 7253 K and 5194 K for the 1x1 and 2x1 unit cell cases, respectively (reported in Tab. \ref{tab:BEs}). Since the amorphous ice is larger (60 water molecules) and presents a disordered surface morphology, this surface model offers more adsorption sites. We sampled some of them by placing the acetaldehyde molecule in different starting positions and then optimising the structures. Out of a total of 14 optimizations, we obtained 9 different minima, in all of which acetaldehyde establishes H-bond interactions with dH groups of the amorphous surface. The calculated BE(0) values are reported in Table \ref{tab:BEs} and span the ca. 2800--6000 K range. Panels C and D of Fig.
\ref{fig:adsorption} show the optimized structures for the complexes presenting the highest and the lowest BE(0) values, respectively. \begin{table} \caption{Calculated (ZPE corrected) BE(0) (in K) of acetaldehyde on the crystalline and the amorphous ice surface models.} \label{tab:BEs} \begin{tabular}{cc} \hline \multicolumn{1}{l}{\textbf{Crystalline ice}} & \textbf{BE(0) } \\ \hline \multicolumn{1}{l}{1x1 cell} & 7253 \\ \multicolumn{1}{l}{2x1 cell} & 5194 \\ \hline \multicolumn{1}{l} {\textbf{Amorphous ice}} & \\ \hline \multicolumn{1}{l}{Site$_1$} & 5208 \\ \multicolumn{1}{l}{Site$_2$} & 3366 \\ \multicolumn{1}{l}{Site$_3$} & 5700 \\ \multicolumn{1}{l}{Site$_4$} & 4708 \\ \multicolumn{1}{l}{Site$_5$} & 2809 \\ \multicolumn{1}{l}{Site$_6$} & 4028 \\ \multicolumn{1}{l}{Site$_7$} & 3450 \\ \multicolumn{1}{l}{Site$_8$} & 4597 \\ \multicolumn{1}{l}{Site$_9$} & 6038 \\ \hline \end{tabular} \end{table} \subsubsection{Adsorption of acetaldehyde on pure acetaldehyde surface} A single acetaldehyde molecule was adsorbed on the 2x2 supercell of the (010) pure acetaldehyde slab model. Its binding energy BE(0) was computed in the same way as described for the adsorption on the water ice structures. The calculated BE(0) for this case is 2650 K. This value is a lower limit of the BE(0), as the isolated molecule is engaged in very few interactions with the underlying surface. The upper limit of the BE is the cost needed to extract a single acetaldehyde molecule from the external layer of the (010) surface. In this case, the BE(0) is a compromise between the rupture of a sizeable number of intermolecular interactions holding acetaldehyde within the slab and the geometry relaxation of the cavity created in the surface. The binding energy BE(0) was, therefore, calculated by subtracting the energy of the relaxed surface with the cavity \emph{plus} the energy of the extracted acetaldehyde from the energy of the pristine perfect surface. 
The result is BE(0)=5692 K, that is, almost twice as large as the acetaldehyde BE(0) at the clean (010) surface (\emph{vide supra}). \section{Discussion}\label{sec:discussion} \subsection{The importance of the pre-exponential factor}\label{subsec:preexp-factor} Before discussing the chemical physics, or any methodological bias, it is important to be able to compare the values derived by the experimental and theoretical methods, which in the case of desorption is not straightforward. To understand this point, we first recall the framework in which the BE values are obtained in each approach. Unfortunately, the three communities (laboratory, quantum chemistry, and astrochemistry) have their own underlying definitions, which we discuss first. \noindent \textit{The experimental point of view:} Experiments do not directly measure a desorption energy, but a desorbing flux; by applying Eq. \ref{eq:eq1} (see \S ~\ref{sec:experiments}), a BE is derived (which is assumed to be equal to the desorption energy in most cases). However, to do so, one needs to set a value for the pre-exponential factor, since the equation has two parameters: the pre-exponential factor A and the binding energy BE (if the desorption order is set to $n$=1). Here, following the recommendation by \cite{Minissale2022}, we adopt the derivation of the pre-exponential factor based on Transition State Theory (TST). This theory takes into account both the rotational and translational partition functions of the desorbing molecules and, therefore, allows the pre-exponential factor, $\rm {A_{TST}}$, to include the entropic effect associated with the kinetic desorption rate. We briefly describe the salient points of the calculation of $\rm{A_{TST}}$ \citep[more details can be found in][]{tait2005b}. 
We used the following formula: \begin{equation} {\rm A_{TST}}=\frac{k_b T}{h}\frac{q^{\ddagger}}{q_{ads}}= \frac{k_b T}{h} \frac{A}{\Lambda^2} \frac{\sqrt[]{\pi}}{\sigma \,h^3} (8\, \pi^2 \,k_b\, T_{peak})^{3/2} ~\sqrt{I_x\, I_y \,I_z} \label{eq:TST} \end{equation} where $q_{ads}$ and $q^{\ddagger}$ are the single-particle partition functions for the adsorbed (initial) state and the transition state, respectively, calculated at the temperature $T_{peak}$. \textit{A} is the surface area per adsorbed molecule. Here, as in other experimental or computational studies, it is fixed to 10$^{-19}$ m$^2$ (the inverse of the number of surface sites per unit area). $I_x$, $I_y$, and $I_z$ are the principal moments of inertia for the rotation of the particle, obtained by diagonalizing the inertia tensor of the free acetaldehyde. They are fixed to $I_x$ = 52.46, $I_y$ = 46.86, and $I_z$ = 8.78 amu \AA$^2$. The symmetry factor, $\sigma$, is the number of different but indistinguishable rotational configurations of the particle; it is fixed to 1 in the case of acetaldehyde (as it belongs to the C$_s$ symmetry point group). The thermal wavelength of the molecule, $\Lambda$, depends on its mass (44 amu) and on the desorption peak temperature (${T_{peak}}$=115 K), and it is 24.5 pm. By using these parameters, ${\rm A_{TST}}$=$1.07\times 10^{18}$ s$^{-1}$. We emphasize, however, that ${\rm A_{TST}}$ is a function of ${T_{peak}}$. As shown in Fig.\ref{fig:Famille TPD CH3CHO SUR HOPG}, the temperature of the peak depends on the surface coverage. For example, using ${T_{peak}}$=108 K gives ${\rm A_{TST}}$=$8.58\times 10^{17}$ s$^{-1}$, while fixing ${T_{peak}}$=117 K gives ${\rm A_{TST}}$=$1.13\times10^{18}$ s$^{-1}$. In this study, we choose the average value, ${\rm A_{TST}}$=$1.1 \times10^{18}$ s$^{-1}$. 
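As a cross-check, Eq. \ref{eq:TST} can be evaluated directly. The following Python sketch (our own illustration, using CODATA SI constants and the moments of inertia, site area, mass and symmetry factor quoted above) reproduces the quoted values of $\rm A_{TST}$ and of the thermal wavelength:

```python
import math

# Physical constants (SI, CODATA)
kB = 1.380649e-23        # Boltzmann constant, J/K
h = 6.62607015e-34       # Planck constant, J s
amu = 1.66053906660e-27  # atomic mass unit, kg

def A_TST(T_peak, mass_amu=44.0, area=1e-19, sigma=1,
          I_amuA2=(52.46, 46.86, 8.78)):
    """TST pre-exponential factor of Eq. (TST), in s^-1.

    area: surface area per adsorbed molecule (m^2);
    I_amuA2: principal moments of inertia in amu Angstrom^2.
    """
    m = mass_amu * amu
    # thermal de Broglie wavelength of the desorbing molecule
    Lambda = h / math.sqrt(2.0 * math.pi * m * kB * T_peak)
    # convert moments of inertia from amu A^2 to kg m^2
    Ix, Iy, Iz = (I * amu * 1e-20 for I in I_amuA2)
    # rotational partition function of the free (transition-state) molecule
    q_rot = (math.sqrt(math.pi) / (sigma * h**3)
             * (8.0 * math.pi**2 * kB * T_peak)**1.5
             * math.sqrt(Ix * Iy * Iz))
    return kB * T_peak / h * (area / Lambda**2) * q_rot

for T in (108, 115, 117):
    print(f"A_TST({T} K) = {A_TST(T):.3e} s^-1")
# T_peak = 115 K gives ~1.07e18 s^-1; 108 K gives ~8.6e17; 117 K gives ~1.13e18
```

Running it confirms the $\sim T^{7/2}$ sensitivity of $\rm A_{TST}$ to the chosen $T_{peak}$ discussed above.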
\vspace{0.3cm} \noindent \textit{The astrochemical modelling point of view:} Often, the pre-exponential factor is calculated using the harmonic oscillator approximation introduced by \cite{Hasegawa1992}, here called ${\rm A_{HH}}$: \begin{equation} {\rm A_{HH}} = \left( \frac{2~n_s~\rm BE}{\pi^2~m} \right)^{1/2} \label{eq:HH-pre-exponential} \end{equation} where $n_s$ is the surface density of sites and $m$ is the mass of the adsorbed species. Using this approximation, which does not take into account the fact that the adsorbed molecule does not rotate whereas the desorbed one does, we find ${\rm A_{HH}}=1.3\times 10^{12}$ s$^{-1}$ if we choose an arbitrary value of BE=4500 K. In addition, in the \cite{Hasegawa1992} formula, the value of ${\rm A_{HH}}$ depends exclusively on the BE of the adsorbate and is calculated directly from it, so that, in the models, there is a unique parameter, BE, and not the A and BE pair determined in experiments. We stress that the chosen pre-exponential factor A can strongly affect the BE value determined using the TPD technique, since the two are actually coupled (see Eq. \ref{eq:eq1}) and somewhat degenerate. To compare the values obtained with different pre-exponential factors A, we can re-scale the BE values using ${\rm A_{HH}}$ and the following formula: \begin{equation} {\rm BE_{HH}}={\rm BE_{TST}}-T_{peak} ~\ln({\rm A_{TST}/A_{HH}}) \label{eq3} \end{equation} where ${\rm BE_{HH}}$ and ${\rm A_{HH}}$ (${\rm BE_{TST}}$ and ${\rm A_{TST}}$) are the BE and pre-exponential factor pair calculated with the harmonic oscillator approximation of \cite{Hasegawa1992} or with the TST proposed by \cite{tait2005b}. \vspace{0.3cm} \noindent \textit{The theoretical point of view:} A full quantum mechanical (QM) evaluation of the pre-exponential factor can be derived from the statistical mechanical treatment of transition state theory, given the availability of the harmonic frequencies for all the systems involved. 
The QM pre-exponential factor $\nu_{TST}(T)$ is related to the $A_{TST}$ of Eq. \ref{eq:TST} by: \begin{equation}\label{eq:apprx_pre_exponential} \nu_{TST}(T) = A_{TST} \underbrace{\frac{{^\ddagger q_{vib}(M)} {^\ddagger q_{vib}(S)}}{q_{vib}(C)}}_{\textcolor{red}{\mathbf{q_{vib}^{TST}}}}\; . \end{equation} The two expressions therefore coincide when the $q_{vib}^{TST}$ ratio is equal to unity. The application of the full TST treatment to acetaldehyde, limited to one representative case related to the P-ice model (with a BH(0) of 60.3 kJ/mol, image 6 panel A of the paper), gave a ratio between the full TST treatment and the \citet{tait2005b} prefactor close to unity for temperatures $\leq$ 50 K, decreasing by $\sim$1 and 2 orders of magnitude at 100 K and 200 K, respectively. Therefore, considering the desorption temperature of 107 K for acetaldehyde, the \citet{tait2005b} approximation is good to within around one order of magnitude of the value of $\approx$ 10$^{18}$ s$^{-1}$, while being much simpler to apply. Another advantage of adopting Eq. \ref{eq:TST} over the full QM TST one is its explicit dependence on $T_{peak}$. For these reasons, in the following we adopt Eq. \ref{eq:TST} for the calculation of the pre-exponential factor. \vspace{0.3cm} \noindent \textit{Comparison with previous experimental estimates:} We now have a tool to compare BEs derived with different values of the pre-exponential factor. \cite{Corazzi2021} recently reported an experimental value of BE=3100 K for acetaldehyde co-deposited with water on a substrate made of micrometer-sized olivine grains. It obviously differs from the values measured here and is lower than any calculated value. However, these authors used a fixed value of A=10$^{12}$ s$^{-1}$. To be able to compare with our results, we use Eq. 
\ref{eq3} and find that the pair BE=3079 K and A=10$^{12}$ s$^{-1}$ corresponds to the pair BE=4680 K and A=$1.1 \times 10^{18}$ s$^{-1}$, which is exactly what has been found here for the multilayer energy of acetaldehyde. This example shows how important it is to take into account not only the BEs, but also the pre-exponential factor, which allows us to calculate a desorption flux, to reproduce experiments, or to simulate desorption in ISM conditions. \subsection{Comparison of experimental and theoretical results}\label{subsec:comparison} The aim of this section is to compare the experimental and theoretical approaches. We do this separately for the pure acetaldehyde ice and for the crystalline and amorphous water ices. \subsubsection{Pure acetaldehyde ice} When comparing theory with experiments, in general, we compare a static and ``uni-molecular'' calculation with a measurement that is both dynamic and averaged over a large population and a large number of situations. We first consider the case of the pure acetaldehyde ice. Here, experiments provide a BE of 4650 K, while calculations dealing with the ideal model of a molecule above the (010) acetaldehyde surface give a BE of 2650 K. Given the robustness of the measurement \citep[confirmed by][]{Corazzi2021}, one would conclude that what is measured does not correspond to what is calculated. In reality, when a molecular film is heated, it constantly reorganises itself, and the situation of a molecule on top of the surface is not the most energetically stable configuration. The other extreme case corresponds to the calculation of the BE(0) of one acetaldehyde molecule extracted from the surface slab, which is 5692 K (see above). 
We think that this last computed value probably overestimates the experimental BE, as we only considered the process occurring at the rather stable (010) surface, without considering more defective surfaces, from which desorption can occur from kinks and edges, leaving the molecule less engaged with the layers underneath. As a first conclusion, therefore, it can be said that the present calculations provide good upper and lower boundary values for the experimental results, but that the dynamic (in the case of experiments) versus static (in the case of calculations) nature can make any direct comparison somewhat misleading. \subsubsection{CH$_3$CHO desorbing from crystalline water ice} Although we provided two BE values because of the use of two different unit cell sizes (i.e., 1x1 and 2x1 supercells), here, for the sake of comparison with experiments, we take the value of BE=5194 K obtained using the 2x1 supercell, as it is more realistic. In the experiments, we find a distribution of BEs. This is not a bias of the method, as it was possible to derive a unique value of the BE for the HOPG surface. Actually, in experiments, even for a PCI, the surface is disordered. Indeed, the water molecules at the outermost positions of the surface almost randomly alternate dH pointing outside or inside the solid. Therefore, even for crystalline water ice, distributions of BE are usually measured whatever the adsorbate \citep{amiaud2007,Noble2012,Nguyen2018}. The calculations, made on a perfectly regular, periodic crystal, are not able to reproduce this disorder. In the experiments, the largest population is found for BE=4650 K and it corresponds to the BE of an acetaldehyde multilayer. This means that CH$_3$CHO molecules prefer to rearrange themselves, probably creating some sort of small clusters, rather than spreading over the surface. In other words, acetaldehyde does not wet the water ice surface properly. 
The low-coverage values, which correspond more closely to the calculations made for a single molecule, lie between 4950 and 5700 K. This is in excellent agreement with the calculated value of BE=5194 K. We note that the population at 5700 K could also correspond to defects or steps in the poly-crystalline assembly. \subsubsection{CH$_3$CHO desorbing from amorphous water ice} Both experiments and calculations show a broad distribution of BEs. Calculations demonstrate that, compared to the crystalline case, some sites are more energetically favorable (two out of nine), one is about equal, and the other six are less favorable for adsorption. There is no doubt that such less bound sites exist, but in experiments they will not be populated, as molecules reorganise during the heating phase and tend to occupy the most favourable available adsorption sites. This has been well documented experimentally as the ``filling behaviour'' for relatively volatile substances \citep{Kimmel2001,Dulieu2005}. In the experiments, for all adsorbates, the minimum measurable BE is set by the limit of the BE of the multilayer. This is why we observe that half of the population of acetaldehyde desorbs from sites with a BE very close to that of pure CH$_3$CHO films. Indeed, this accumulation at the lower end of the BE distribution corresponds, in the calculations, to the six (out of nine) sites that have a BE lower than 4650 K (the multilayer limit). Once again, we find that the theoretical calculations are in very good agreement with what is obtained experimentally. They reproduce the extent of the distribution (up to 6000 K) and propose a good sampling of sites. Finally, we point out that we do not observe any co-desorption of acetaldehyde with water: all the acetaldehyde desorbs prior to the water. This is consistent with the rather low values of BE calculated for the interaction with water, lower than the multilayer BE measurements of the acetaldehyde film alone. 
This molecule thus appears to have a rather low hydrophilic character. \subsection{Comparison of the acetaldehyde BE with those of other important interstellar molecules}\label{subsec:BE-comparison} The detailed calculations and measurements of the acetaldehyde BEs on water ices show a consistent but complex picture. First, as already reported in other works \citep[e.g.][]{Dulieu2005,Ferrero2020}, the BE on an icy surface is not a single value; there is rather a distribution of BEs caused by the different sites to which acetaldehyde can adsorb. Second, using the BE values without paying attention to their associated pre-exponential factor can lead to inconsistencies, if not mistakes. That being said, most astrochemical models use a single value of BE and the ${\rm A_{HH}}$ pre-exponential factor, recalculated from it following Eq. \ref{eq:HH-pre-exponential}, which is not a guarantee of correctness. Another question is: which single value of BE should be chosen from the observed distribution? To choose a single value in the distribution, one needs to know the nature of the ice, for instance, whether the surface will be fully or partly covered. Acetaldehyde is a frequently observed molecule in the gas phase, but it has not yet been detected in the ice mantles, probably because its concentration is low (but see also the discussion in the Introduction). Therefore, the coverage of this molecule on the surface of icy grains should remain low and the low-coverage side of the distribution shall be preferred. Moreover, the interval of values that we propose overlaps the BE extrapolated by \cite{Wakelam2017BE} (5400 K), which turns out to be correct if one adopts the pre-exponential factor A=$1.1 \times 10^{18}$ s$^{-1}$. However, if the astrochemical model uses the harmonic oscillator approximation and a value of ${\rm A_{HH}} \sim 10^{12}$ s$^{-1}$, a value around 3800 K shall be preferred. 
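The conversion between the two conventions behind these numbers is a direct application of Eqs. \ref{eq:HH-pre-exponential} and \ref{eq3}. A minimal Python sketch of our own (assuming $T_{peak}$=115 K, $n_s = 1/A = 10^{19}$ m$^{-2}$, and the prefactors quoted above):

```python
import math

kB = 1.380649e-23        # J/K
amu = 1.66053906660e-27  # kg

def A_HH(BE_K, n_s=1e19, mass_amu=44.0):
    """Harmonic-oscillator prefactor of Hasegawa et al. (Eq. HH).
    BE_K is the binding energy in kelvin (converted to joules via kB);
    n_s = 1e19 m^-2 is the inverse of the 1e-19 m^2 site area assumed above."""
    m = mass_amu * amu
    return math.sqrt(2.0 * n_s * kB * BE_K / (math.pi**2 * m))

def rescale_BE(BE_TST, T_peak=115.0, A_TST=1.1e18, A_HH_val=1e12):
    """Eq. (3): BE consistent with A_HH, given the (BE, A_TST) pair."""
    return BE_TST - T_peak * math.log(A_TST / A_HH_val)

print(f"A_HH(4500 K) = {A_HH(4500):.2e} s^-1")   # ~1.3e12 s^-1, as in the text
print(f"5400 K (A_TST) -> {rescale_BE(5400):.0f} K (A_HH)")  # ~3800 K
print(f"4680 K (A_TST) -> {rescale_BE(4680):.0f} K (A_HH)")  # ~3080 K (Corazzi)
```

The last line recovers the $\sim$3079 K value of \cite{Corazzi2021} from our multilayer BE, and the second recovers the $\sim$3800 K value recommended above for models that use ${\rm A_{HH}} \sim 10^{12}$ s$^{-1}$.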
The advantage of using the pair BE=5400 K and A=$1.1 \times 10^{18}$ s$^{-1}$ is that one can then compare this BE with that derived from the theoretical calculations and with the already published BE values of other molecules \citep{Minissale2022}. As an example, CH$_3$CHO can be compared to H$_2$CO, which seems to have a similar desorption behaviour, exhibiting no co-desorption with water \citep[][]{Noble2012b}. The values for H$_2$CO are: BE=4117 K (A=$8.29 \times 10^{16}$ s$^{-1}$). On the contrary, both ethanol and methanol exhibit co-desorption with water and have larger BEs, BE=7000 K (A=$3.89 \times 10^{18}$ s$^{-1}$) \citep{Dulieu2022} and BE=6621 K (A=$3.18 \times 10^{17}$ s$^{-1}$) \citep{Bahr2008,Minissale2022}, respectively. Finally, formamide (NH$_2$CHO) has a refractory behaviour with respect to water. It desorbs from bare surfaces at higher temperature because of its high BE=9561 K (A=$3.69 \times 10^{18}$ s$^{-1}$) \citep{Chaabouni2018,Minissale2022}. One obtains substantially the same result when considering the theoretical BEs calculated by \cite{Ferrero2020}, although they are BE distributions. \subsection{Astrochemical implications} From an astrochemical point of view, two points are particularly relevant. The first regards the presence of acetaldehyde in cold ($\sim10$ K) objects \citep[e.g.][]{Bacmann2012,vastel2014,Scibelli2020,Zhou2022}. The relatively large BE (4800--6000 K) makes it difficult to explain the presence of gaseous acetaldehyde if it is formed on grain surfaces, and would rather favor a gas-phase formation route. Of course, this may just shift the problem to the presence of the reactants needed to synthesize acetaldehyde, namely the ethyl radical and/or ethanol. 
Non-thermal mechanisms could be at play, such as cosmic-ray bombardment, which would not be chemically selective \citep{Dartois2019}, whereas chemical desorption initiated by hydrogenation on the grain surface may have different efficiencies depending on the molecule \citep{Minissale2016}. As written in the Introduction, the BE of a given species determines at what temperature the species remains adsorbed or goes into the gas phase. In hot cores/corinos, several species, including acetaldehyde, show a jump in abundance in the region where the dust temperature reaches about 100 K. This is believed to be caused by the sublimation of the frozen water \citep[e.g.][]{Charnley1992,Ceccarelli2000a,Jaber2014}, which is the major constituent of the icy grain mantles \citep[e.g.][]{Boogert2015}. However, the analysis of the emission lines of different species and their spatial distributions sometimes suggests that there is a differentiation in the sublimation of different species \citep[e.g.][]{Manigand2020,Bianchi2022}. The recent study by \cite{Bianchi2022} definitively shows a chemical differentiation between the two hot corinos of the SVS13 protobinary system. Specifically, the analysis of high spatial resolution ALMA observations leads to the conclusion that gaseous acetaldehyde and formamide (NH$_2$CHO) are distributed in an onion-like structure of the hot corino, with formamide becoming abundant in a warmer region than the acetaldehyde region. A natural explanation of this behavior is that species with larger BEs emit lines (mostly) in more compact regions, i.e., the region corresponding to their sublimation temperature, rather than at the water sublimation front, because the large densities ($\geq 10^8$ cm$^{-3}$) make those species freeze out again very quickly. Therefore, given their respective BEs (see \S ~\ref{subsec:BE-comparison}), acetaldehyde emission arises from a more extended and colder region than that of formamide. 
More generally, considering their respective BEs, formaldehyde and acetaldehyde would desorb before water, ethanol and methanol would desorb together with water, while formamide has to wait for higher average temperatures, given its much higher BE. Our new estimates of the acetaldehyde BE, slightly lower than that of water, suggest that acetaldehyde should be found approximately in the regions where water and methanol are also present, which is what has been found so far \citep{Bianchi2022}. On the contrary, formamide definitively has a larger BE range, so that we predict formamide to originate in a hotter region than that of acetaldehyde, which is indeed what is observed in the few cases where a similar analysis has been carried out \citep[e.g.][]{Csengeri2019,Okoda2021,Bianchi2022}. In summary, with the high sensitivity of present facilities such as ALMA and NOEMA, studies revealing a differentiation of the chemical species in hot cores/corinos will very likely become more and more available. We need to be prepared with studies similar to the one reported here, in which the BEs of more complex organic molecules are estimated, so that we can appropriately interpret the astronomical observations. In turn, those observations can help to validate our methods, for which much uncertainty still persists. \section{Conclusions}\label{sec:conclusions} We presented TPD experiments and quantum chemical computations of the acetaldehyde BE on a pure acetaldehyde ice, and on crystalline and amorphous water ices. The main conclusions of this work are the following: \begin{enumerate} \item The experiments indicate that the acetaldehyde BE has a distribution of values. In the acetaldehyde ice, the BE has two main energy values, at about 4650 and 5250 K, with the peak around 5250 K being about ten times more frequent. In crystalline water ice, the BE ranges between about 4600 and 5700 K, with a peak around 4600--4800 K. 
In amorphous water ice, the BE ranges between about 4600 and 5900 K, with a peak around 4800 K. The weighted averages in the three cases are 5150, 5080 and 4990 K, respectively. \item The theoretical calculations give BE values on the different sites of the ice models used. The BEs span 5194 to 7253 K in the case of crystalline ice, and 2809 to 6038 K in the case of amorphous ice. In the pure acetaldehyde ice, two extreme values of 2650 and 5692 K are obtained. \item We discussed and showed the importance of the pre-exponential factor when deriving, comparing and using BEs derived from experiments and theoretical calculations. Remarkably, when the correct pre-exponential factor is used with the derived BE, experiments and theory are in fairly good agreement. \item A comparison of the derived acetaldehyde BEs with those of other important species shows that acetaldehyde would desorb at temperatures lower than those at which water desorbs, and at even lower temperatures with respect to formamide. \item The large acetaldehyde BE challenges the explanation for its gas-phase presence in cold ($\sim 10$ K) astronomical objects, especially if it is formed on the grain surfaces. The problem may be alleviated if it is formed by gas-phase reactions. \item In hot cores/corinos, the measured and computed BE is in agreement with the observations of acetaldehyde originating in regions colder than formamide, whose BE is larger. \end{enumerate} Finally, our study shows the importance of extending the methodology adopted here to other molecules of astrochemical interest, such as ethanol or formamide, which are nowadays routinely observed in astronomical objects and which show a spatial segregation probably due to their different BEs. 
\section*{Acknowledgements} This project has received funding within the European Union’s Horizon 2020 research and innovation programme from the European Research Council (ERC) for the projects ``The Dawn of Organic Chemistry” (DOC), grant agreement No 741002 and ``Quantum Chemistry on Interstellar Grains” (QUANTUMGRAIN), grant agreement No 865657, and from the Marie Sklodowska-Curie for the project ``Astro-Chemical Origins” (ACO), grant agreement No 811312. This work was supported by the Agence Nationale de la recherche (ANR) SIRC project (Grant ANR-SPV202448 2020-2024). CC wishes to thank Eleonora Bianchi and Lorenzo Tinacci for fruitful discussions on the chemical segregation observed in hot corinos and the evaluation of the theoretical binding energy, respectively. This work has also been funded by the European Research Council (ERC) for the Starting Grant "DustOrigin”, held by Prof. Ilse De Looze, University of Ghent, grant agreement ID: 851622. AR is indebted to the \textit{Ramón y Cajal} programme. \section*{Data availability} The data underlying this article are available in Zenodo at \url{https://zenodo.org/communities/aco-astro-chemical-origins/?page=1&size=20} \bibliographystyle{mnras} \bibliography{CC_bibtex} % \bsp % \label{lastpage}
Title: Advanced wavefront sensing and control demonstration with MagAO-X
Abstract: The search for exoplanets is pushing adaptive optics systems on ground-based telescopes to their limits. Currently, we are limited by two sources of noise: the temporal control error and non-common path aberrations. First, the temporal control error of the AO system leads to a strong residual halo. This halo can be reduced by applying predictive control. We will show and described the performance of predictive control with the 2K BMC DM in MagAO-X. After reducing the temporal control error, we can target non-common path wavefront aberrations. During the past year, we have developed a new model-free focal-plane wavefront control technique that can reach deep contrast (<1e-7 at 5 $\lambda$/D) on MagAO-X. We will describe the performance and discuss the on-sky implementation details and how this will push MagAO-X towards imaging planets in reflected light. The new data-driven predictive controller and the focal plane wavefront controller will be tested on-sky in April 2022.
https://export.arxiv.org/pdf/2208.07334
\keywords{high-contrast imaging, high-resolution spectroscopy, exoplanets, adaptive optics} \section{INTRODUCTION} One of the primary goals of the upcoming generation of Giant Segmented Mirror Telescopes (GSMTs) will be the direct imaging of Earth-like exoplanets. This requires large apertures, both for their light-collecting area and for their resolution. Planets like Earth are usually many orders of magnitude fainter than their host star \cite{traub2010exoplanets}, which makes them difficult to detect, especially when they are separated from the star by only several times the diffraction limit. Furthermore, atmospheric turbulence creates large wavefront errors that need to be corrected. High-contrast imaging instruments tackle both problems by removing the influence of the star with extreme adaptive optics and coronagraphs \cite{guyon2018extreme}. Earth-like planets around M dwarf stars, which are the primary targets for the upcoming telescopes, have a contrast that is close to $10^{-8}$\cite{guyon2018wavefront}. Current direct imaging instruments routinely reach post-processed contrast levels of $10^{-4}$ to $10^{-6}$ at angular separations between 0.1 arcsec and 1.0 arcsec \cite{beuzit2019sphere}. This sensitivity is enough to image and characterize massive self-luminous planets \cite{marley2007hotjupiters} that emit the majority of their light in the near-infrared part of the spectrum. Yet even though these instruments are sensitive enough to detect Jupiter-like planets, very few have been discovered. Analyses of direct imaging surveys and radial velocity surveys hint that there is a turnover where the occurrence rate of exoplanets starts to drop \cite{bowler2015gpoccurence, nielsen2019gpies, fernandes2019occurence, wagner2019wideoccurence}. This turnover happens between 1 and 10 AU, which is the expected position of the snow line. The sensitivity closer in to the star has to be improved for both current and future exoplanet science. 
The limitations at small angular separations for the upcoming GSMTs can be divided into three problems: \begin{itemize} \item Time lag in the adaptive optics (AO) system \cite{kasper2012hci,milli2017sphereperformance,cantalloube2019winddrivenhalo}. The correction of the atmosphere is always trying to catch up, because the wavefront that has been measured has already passed through the system. This causes a delay that cannot be corrected anymore. The servo-lag error creates a so-called wind-driven halo that limits the contrast. \item The optics of the coronagraph are not part of the wavefront sensor path. This means that light travels through non-common optical paths, which creates internal instrument aberrations that are not visible to the wavefront sensor. These non-common path aberrations create speckles that leak through the coronagraph. \item The GSMTs will be segmented. The segmentation creates differential piston modes, which are difficult to sense with conventional wavefront sensors. These segment piston modes for the Giant Magellan Telescope, or petal modes for the European Extremely Large Telescope, are very low-order modes, and strong low-order modes create the most coronagraphic leakage. \end{itemize} In this proceeding we summarize the work that has been done with the MagAO-X instrument to tackle these problems. MagAO-X is a new high-contrast imaging instrument for the 6.5 m Magellan Clay Telescope\cite{males2022magaox}. Section 2 describes our approach to predictive control to reduce the wind-driven halo. A new focal-plane wavefront control strategy is shown in Section 3, and Section 4 shows a new approach to measuring and controlling differential piston / petal modes. \section{Predictive control} There are multiple approaches to solving the wind-driven halo problem. The first is to run the AO system at high enough speeds that the atmospheric turbulence is effectively frozen. This requires measuring the wavefront at speeds of several kHz. 
This approach is used at SCExAO and MagAO-X \cite{jovanovic2015scexao, males2020magao}. While running faster can reduce the impact of the wind-driven halo, it does not solve it completely: the system is still running behind even at high speeds. Another approach is to predict how the atmosphere is going to evolve and correct the wavefront errors before they are measured. Predictive control can lead to significant gains in post-processed contrast for high-contrast imaging. The post-processed contrast could be improved by a factor of 100 to 1000 if predictive control is used and the temporal evolution of the atmosphere is predictable \cite{guyon2017eof,males2018lpc,correia2020hcipwfs}. We have recently developed the data-driven subspace predictive control (DDSPC) algorithm \cite{haffert2021data}. This algorithm uses only the wavefront measurements and the past DM commands to determine the new optimal command. The DDSPC algorithm directly uses the closed-loop residuals, without reconstructing the full turbulence. The advantage of this approach is that we sidestep any reconstruction error due to model errors. The approach was implemented on the GPU with the Compute Unified Device Architecture (CUDA) \cite{cuda} to create a real-time adaptive controller. The controller runs at 2 kHz in double precision and at 4 kHz in single precision for MagAO-X, which has 1600 controlled modes. The DDSPC algorithm is called data-driven and model free. The latter is a slight misnomer, because no controller is truly model free; what is meant is that there is no underlying parametric model whose parameters are optimized. The DDSPC algorithm uses an auto-regressive structure as its backbone. This results in the following model structure for a single DM mode or actuator\cite{haffert2021data}, \begin{equation} \yf = A \yp + B \up + C \uf. 
\label{eq:prediction} \end{equation} Here $\yf$ and $\uf$ are the vectors that contain the future $N$ wavefront sensor measurements and DM commands at time step $i$, and $\yp$ and $\up$ are the vectors that contain the $M$ past wavefront sensor measurements and DM commands at time step $i$. This model is completely linear in $A$, $B$ and $C$. Therefore, these matrices can be estimated with a linear least-squares (LLS) approach. The LLS problem is then given by, \begin{equation} y^i_f = \begin{bmatrix} A^i& B^i& C^i& \end{bmatrix} \begin{bmatrix} y^i_p\\ u^i_p\\ u^i_f \end{bmatrix} = \Theta^i \phi^i. \end{equation} Here $\Theta^i$ is the concatenation of all model matrices, and $\phi^i$ is the concatenation of $\yp$, $\up$ and $\uf$. We have chosen to use recursive LLS (RLS) to learn the model, because we want to learn the system dynamics online and have the ability to track changes. The mathematical details of the RLS implementation can be found in \cite{haffert2021data}. With the model in hand, the controller has to be found. The cost function for the controller is the quadratic sum of all future measurements and control commands, \begin{equation} J_i = y_f^{iT}y_f^{i} + \lambda u_f^{iT}u_f^{i} = \begin{bmatrix} y_p^{iT} & u_p^{iT} & u_f^{iT} \end{bmatrix} \begin{bmatrix} A^TA & A^TB & A^TC \\ B^TA & B^TB & B^TC\\ C^TA & C^TB & C^TC \end{bmatrix} \begin{bmatrix} y_p^i \\ u_p^i \\ u_f^i \end{bmatrix} + \lambda u_f^{iT} u_f^i. \end{equation} Here $J_i$ is the cost at iteration $i$ and $\lambda$ is a regularization parameter that determines how much the DM commands are dampened. Setting the gradient of $J_i$ with respect to $u_f^i$ to zero yields the optimal control signal, \begin{equation} u_f^i = -\left(C^{iT} C^i + \lambda I\right)^{-1}\begin{bmatrix} C^{iT}A^i & C^{iT}B^i \end{bmatrix} \begin{bmatrix} y_p^i \\ u_p^i \end{bmatrix} = -K^i \begin{bmatrix} y_p^i \\ u_p^i \end{bmatrix}. \end{equation} Here $K^i$ is the controller at time step $i$. 
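As a minimal numerical sketch of this control law, the optimal future command follows from setting the gradient of the quadratic cost to zero. The small random matrices below are placeholders standing in for the RLS-learned model matrices $A$, $B$ and $C$; the dimensions are arbitrary toy values.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy dimensions for a single mode: M past samples, N future samples.
M, N = 10, 5
A = rng.normal(size=(N, M))  # placeholder for the learned y_p -> y_f map
B = rng.normal(size=(N, M))  # placeholder for the learned u_p -> y_f map
C = rng.normal(size=(N, N))  # placeholder for the learned u_f -> y_f map
lam = 1e-2                   # damping of the DM commands

# Controller gain K = (C^T C + lam I)^{-1} [C^T A  C^T B]
K = np.linalg.solve(C.T @ C + lam * np.eye(N), C.T @ np.hstack([A, B]))

y_p = rng.normal(size=M)     # past wavefront sensor measurements
u_p = rng.normal(size=M)     # past DM commands
u_f = -K @ np.concatenate([y_p, u_p])

# Only the first future command is applied; the optimization is redone
# at every time step (receding-horizon control).
next_command = u_f[0]
```

In practice $K$ is recomputed as the RLS estimates of $A$, $B$ and $C$ evolve, which is what gives the controller its tracking ability.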
The derivation presented here assumes that all measurements and commands are for a single actuator or mode, i.e.\ that there is no cross-coupling between the modes. This allows us to create a distributed controller: the wavefront sensor first reconstructs the modal coefficients, generally with an interaction matrix, and the DDSPC filters are then applied to each mode independently. This decoupling makes the algorithm naturally parallel. However, there is no reason why a model with all modal coupling added back would not work; the same equations govern that model. The only downside is the increased computational complexity. The system is first trained before the algorithm is deployed on realistic disturbances. This is necessary to make sure the algorithm does not get stuck in null spaces of the model. A system identification (SI) approach is used to familiarize the algorithm with the system it is controlling. Here we excite the system with a known disturbance by adding noise to the controller commands. For high-order systems, it is important to construct information-rich signals that persistently excite all relevant frequencies. A popular choice is a random binary signal (RBS). During the learning sequence, a randomly generated binary signal is added to the final computed commands. \subsection{Optical gain tracking} One of the exciting aspects of online optimization of the controller is its ability to track changes in the system. These changes could be either non-stationary turbulence or changes in the instrument itself. The pyramid wavefront sensor (PWFS) \cite{ragazzoni1996pupil} is a non-linear wavefront sensor. One of the associated problems is a reduction of the optical gain when high-order turbulence is present. The relatively small dynamic range of the PWFS saturates due to the high-order uncontrolled wavefront aberrations, which reduces the sensitivity to the low-order controlled modes. 
The reduction of the optical gain is essentially a non-linear effect. Several approaches have been proposed to measure the optical gain and then modify the gain of the controller, but the measurement processes are complicated or rely on accurate knowledge of the turbulence statistics \cite{deo2019telescope, chambouleyron2020pyramid}, which may not always be available. The DDSPC controller can modify its gain during closed-loop operation, which makes it a simple solution to the optical gain problem. A toy model with a single controlled mode is shown in Figure \ref{fig:optical_gain}. The optical gain, or sensor gain, is abruptly changed at 0.5 s. At 1.0 s the optical gain is set back to its initial value. The predictive controller finds the optimal feedback signal within several tens of ms after the first change. After the second optical gain change, the controller finds the optimal gain within several iterations. This shows the power of online learning and tracking: changes in the system and in the disturbance are picked up smoothly by the controller. \subsection{Closing the loop on all modes} The controller is now able to run closed-loop on all the DM modes of MagAO-X on the internal source at high speed. The power spectral density (PSD) before and after control is shown in Figure \ref{fig:clc}. The predictive controller reaches the same rms as the optimized controller. This can be explained by the fact that the integrator already reduces the PSD to a nearly flat PSD, which means that the residuals are pure noise; changing the controller would lead to no additional gain. \section{Focal plane wavefront control with MagAO-X} We have implemented a new approach to focal plane wavefront control for MagAO-X. It is based on pair-wise probing and Electric Field Conjugation (EFC) \cite{give2007broadband}. 
Pair-wise probing with the DM leads to a linear response between the probed images and the electric field: \begin{equation} \Delta I = 4M\begin{bmatrix} \Re{\left\{E\right\}}\\ \Im{\left\{E\right\}} \end{bmatrix}. \end{equation} This system of equations can be inverted as long as there are enough probed images, from which the full electric field can be reconstructed. The EFC controller then tries to cancel the reconstructed electric field by injecting the opposite electric field with the DM. This step uses the fact that there is also a linear relation between the DM modes and the electric field that the modes create, \begin{equation} \Delta E = G\vec{\alpha}. \end{equation} Here $G$ is the transfer matrix that transforms the modal coefficients $\alpha$ into the created electric field $\Delta E$. These two systems of linear equations are solved in succession to remove speckles. The main challenge for this approach is that both the system matrices $M$ and $G$ have to be modelled. Any error in the optical model folds into a wrong reconstruction and control, which limits the achieved contrast \cite{potier2020comparing}. However, both equations can also be combined into a single linear problem, \begin{equation} \Delta I = 4MTG\alpha=F\alpha. \end{equation} Here $T$ is the matrix that separates an electric field into its real and imaginary components. Because there is a linear response between the measurements and the DM, we can now do an empirical calibration; we no longer have to rely on an optical model. The calibration of the interaction matrix $F$ can be done with the classic push-and-pull method. We apply a positive amplitude for a mode that needs to be controlled and measure the pair-wise probed images. This is then repeated with a negative amplitude for the same mode. 
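As a single-pixel numerical sketch of the pair-wise probing relation above: each probe pair yields a difference image that is linear in the real and imaginary parts of the focal-plane field, so the field follows from a least-squares inversion. The probe fields below are invented placeholders, not MagAO-X values, and the rows of $M$ are built from the probe fields as in standard pair-wise probing.

```python
import numpy as np

# True (unknown) focal-plane electric field at one pixel, plus two
# assumed probe fields injected by the DM.
E_true = 0.3 + 0.7j
probes = np.array([1.0 + 0.2j, 0.1 + 1.0j])

# Each probe pair gives Delta I_k = 4 Re{E conj(p_k)},
# i.e. Delta I = 4 M [Re E, Im E]^T with rows M_k = [Re p_k, Im p_k].
M = np.column_stack([probes.real, probes.imag])
delta_I = 4.0 * M @ np.array([E_true.real, E_true.imag])

# Invert the linear system (least squares when more probes are used).
re_im, *_ = np.linalg.lstsq(4.0 * M, delta_I, rcond=None)
E_est = re_im[0] + 1j * re_im[1]
```

On the instrument this inversion is done per focal-plane pixel, and diverse probe shapes are needed so that $M$ is well conditioned everywhere in the dark hole.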
The two sets of pair-wise probed images are subtracted from each other, leading to the double-difference images from which the columns of the interaction matrix are measured, \begin{equation} F_i = \frac{\Delta \Delta I}{2\alpha_i} = \frac{\Delta I (+\alpha_i) - \Delta I (-\alpha_i)}{2\alpha_i}. \end{equation} The actual controller is then created by taking the pseudo-inverse of $F$. We call this approach implicit EFC (iEFC) because the electric field is not explicitly calculated anymore. Figure \ref{fig:darkhole} shows the deepest dark hole that we have made on MagAO-X. We used a Phase Apodized Lyot Coronagraph \cite{por2020phase} in our H$\alpha$ filter to create the dark hole. The contrast is currently limited by tip/tilt jitter, which is most likely created by our telescope simulator. We are currently exploring ways to remove these vibrations. \section{Controlling piston with MagAO-X} The Holographic Dispersed Fringe Sensor (HDFS) is a new sensor that has been developed to measure differential piston on the GMT \cite{haffert2022phasing}. This sensor uses a hologram to combine pairs of segments of the GMT, creating a dispersed fringe for each pair. The piston can then be extracted from these dispersed fringes. An example of the HDFS is shown in Figure~\ref{fig:hdfs}. The HDFS uses different multiplexed gratings on each segment to combine pairs and disperse them at the same time. This is based on the concept of Holographic Aperture Masking \cite{doelman2021first}. The HDFS was tested both in simulation and in the lab with MagAO-X, where we reached less than 50 nm rms on the piston modes \cite{haffert2022phasing, hedglen2022lab}. It is now the second-stage wavefront sensor for the GMT facility AO system \cite{pacheco2022gmt}. \section{Outlook and conclusion} In this proceeding we described several parts of the wavefront control problem that we need to handle if we want to search for Earth-like planets. 
We have implemented these approaches independently from each other in the lab. Each method will now be tested on-sky separately and finally combined. Combining predictive control and focal plane wavefront control should lead to significantly improved sensitivity; we expect to increase the sensitivity by one to two orders of magnitude after post-processing. We are planning an upgrade of the MagAO-X system in our next phase, in which each method will be implemented. We are also designing a high-performance PIAACMC coronagraph that will allow us to do direct imaging at $1\lambda/D$. The combination of advanced wavefront control and a small inner-working-angle coronagraph should help us target Proxima b \cite{males2022magaox}. The results of these experiments will then be fed into the design of GMagAO-X, a concept direct-imaging instrument for the GMT \cite{males2022gmagaox, close2022gmagaoxDM, haffert2022gmagaox,kautz2022gmagaox}. \acknowledgments % Support for this work was provided by NASA through the NASA Hubble Fellowship grant \#HST-HF2-51436.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555. This research made use of HCIPy, an open-source object-oriented framework written in Python for performing end-to-end simulations of high-contrast imaging instruments \cite{por2018high}. Support for this work was provided through the NSF "Cooperative Support Award \#2013059" and the AURA subaward NE0651C to GMTO. \bibliography{report} % \bibliographystyle{spiebib} %
Title: A Volumetric Study of Flux Transfer Events at the Dayside Magnetopause
Abstract: Localized magnetic reconnection at the dayside magnetopause leads to the production of Flux Transfer Events (FTEs). The magnetic field within the FTEs exhibits complex helical flux-rope topologies. Leveraging the Adaptive Mesh Refinement (AMR) strategy, we perform a 3-dimensional magnetohydrodynamic simulation of the magnetosphere of an Earth-like planet and study the evolution of these FTEs. For the first time, we detect and track the FTE structures in 3D and present a complete volumetric picture of FTE evolution. The temporal evolution of thermodynamic quantities within the FTE volumes confirms that continuous reconnection is indeed the dominant cause of active FTE growth, as indicated by the deviation of the P-V curves from an adiabatic profile. An investigation into the magnetic properties of the FTEs shows a rapid decrease in the perpendicular currents within the FTE volume, exhibiting the tendency of internal currents toward being field aligned. An assessment of the validity of the linear force-free flux rope model for such FTEs shows that the structures drift toward a constant-$\alpha$ state, but continuous reconnection inhibits the attainment of a purely linear force-free configuration. Additionally, the flux enclosed by the selected FTEs is computed to range between 0.3-1.5 MWb. The FTE with the highest flux content constitutes $\sim$ 1% of the net dayside open flux. These flux values are further compared against the estimates provided by the linear force-free flux-rope model. For the selected FTEs, the linear force-free model underestimated the flux content by up to 40% owing to the continuous injection of reconnected flux.
https://export.arxiv.org/pdf/2208.14589
\thispagestyle{plain} \newcommand{\btx}{\textsc{Bib}\TeX} \newcommand{\thestyle}{\texttt{\filename}} \begin{center}{\bfseries\Large Reference sheet for \thestyle\ usage}\\ \large(Describing version \fileversion\ from \filedate) \end{center} \begin{quote}\slshape For a more detailed description of the \thestyle\ package, \LaTeX\ the source file \thestyle\texttt{.dtx}. \end{quote} \head{Overview} The \thestyle\ package is a reimplementation of the \LaTeX\ |\cite| command, to work with both author--year and numerical citations. It is compatible with the standard bibliographic style files, such as \texttt{plain.bst}, as well as with those for \texttt{harvard}, \texttt{apalike}, \texttt{chicago}, \texttt{astron}, \texttt{authordate}, and of course \thestyle. \head{Loading} Load with |\usepackage[|\emph{options}|]{|\thestyle|}|. See list of \emph{options} at the end. \head{Replacement bibliography styles} I provide three new \texttt{.bst} files to replace the standard \LaTeX\ numerical ones: \begin{quote}\ttfamily plainnat.bst \qquad abbrvnat.bst \qquad unsrtnat.bst \end{quote} \head{Basic commands} The \thestyle\ package has two basic citation commands, |\citet| and |\citep| for \emph{textual} and \emph{parenthetical} citations, respectively. There also exist the starred versions |\citet*| and |\citep*| that print the full author list, and not just the abbreviated one. All of these may take one or two optional arguments to add some text before and after the citation. \begin{quote} \begin{tabular}{l@{\quad$\Rightarrow$\quad}l} |\citet{jon90}| & Jones et al. (1990)\\ |\citet[chap.~2]{jon90}| & Jones et al. 
(1990, chap.~2)\\[0.5ex] |\citep{jon90}| & (Jones et al., 1990)\\ |\citep[chap.~2]{jon90}| & (Jones et al., 1990, chap.~2)\\ |\citep[see][]{jon90}| & (see Jones et al., 1990)\\ |\citep[see][chap.~2]{jon90}| & (see Jones et al., 1990, chap.~2)\\[0.5ex] |\citet*{jon90}| & Jones, Baker, and Williams (1990)\\ |\citep*{jon90}| & (Jones, Baker, and Williams, 1990) \end{tabular} \end{quote} \head{Multiple citations} Multiple citations may be made by including more than one citation key in the |\cite| command argument. \begin{quote} \begin{tabular}{l@{\quad$\Rightarrow$\quad}l} |\citet{jon90,jam91}| & Jones et al. (1990); James et al. (1991)\\ |\citep{jon90,jam91}| & (Jones et al., 1990; James et al., 1991)\\ |\citep{jon90,jon91}| & (Jones et al., 1990, 1991)\\ |\citep{jon90a,jon90b}| & (Jones et al., 1990a,b) \end{tabular} \end{quote} \head{Numerical mode} These examples are for author--year citation mode. In numerical mode, the results are different. \begin{quote} \begin{tabular}{l@{\quad$\Rightarrow$\quad}l} |\citet{jon90}| & Jones et al. [21]\\ |\citet[chap.~2]{jon90}| & Jones et al. [21, chap.~2]\\[0.5ex] |\citep{jon90}| & [21]\\ |\citep[chap.~2]{jon90}| & [21, chap.~2]\\ |\citep[see][]{jon90}| & [see 21]\\ |\citep[see][chap.~2]{jon90}| & [see 21, chap.~2]\\[0.5ex] |\citep{jon90a,jon90b}| & [21, 32] \end{tabular} \end{quote} \head{Suppressed parentheses} As an alternative form of citation, |\citealt| is the same as |\citet| but \emph{without parentheses}. Similarly, |\citealp| is |\citep| without parentheses. Multiple references, notes, and the starred variants also exist. 
\begin{quote} \begin{tabular}{l@{\quad$\Rightarrow$\quad}l} |\citealt{jon90}| & Jones et al.\ 1990\\ |\citealt*{jon90}| & Jones, Baker, and Williams 1990\\ |\citealp{jon90}| & Jones et al., 1990\\ |\citealp*{jon90}| & Jones, Baker, and Williams, 1990\\ |\citealp{jon90,jam91}| & Jones et al., 1990; James et al., 1991\\ |\citealp[pg.~32]{jon90}| & Jones et al., 1990, pg.~32\\ |\citetext{priv.\ comm.}| & (priv.\ comm.) \end{tabular} \end{quote} The |\citetext| command allows arbitrary text to be placed in the current citation parentheses. This may be used in combination with |\citealp|. \head{Partial citations} In author--year schemes, it is sometimes desirable to be able to refer to the authors without the year, or vice versa. This is provided with the extra commands \begin{quote} \begin{tabular}{l@{\quad$\Rightarrow$\quad}l} |\citeauthor{jon90}| & Jones et al.\\ |\citeauthor*{jon90}| & Jones, Baker, and Williams\\ |\citeyear{jon90}| & 1990\\ |\citeyearpar{jon90}| & (1990) \end{tabular} \end{quote} \head{Forcing upper cased names} If the first author's name contains a \textsl{von} part, such as ``della Robbia'', then |\citet{dRob98}| produces ``della Robbia (1998)'', even at the beginning of a sentence. One can force the first letter to be in upper case with the command |\Citet| instead. Other upper case commands also exist. \begin{quote} \begin{tabular}{rl@{\quad$\Rightarrow$\quad}l} when & |\citet{dRob98}| & della Robbia (1998) \\ then & |\Citet{dRob98}| & Della Robbia (1998) \\ & |\Citep{dRob98}| & (Della Robbia, 1998) \\ & |\Citealt{dRob98}| & Della Robbia 1998 \\ & |\Citealp{dRob98}| & Della Robbia, 1998 \\ & |\Citeauthor{dRob98}| & Della Robbia \end{tabular} \end{quote} These commands also exist in starred versions for full author names. \head{Citation aliasing} Sometimes one wants to refer to a reference with a special designation, rather than by the authors, i.e. as Paper~I, Paper~II. 
Such aliases can be defined and used, textual and/or parenthetical with: \begin{quote} \begin{tabular}{lcl} |\defcitealias{jon90}{Paper~I}|\\ |\citetalias{jon90}| & $\Rightarrow$ & Paper~I\\ |\citepalias{jon90}| & $\Rightarrow$ & (Paper~I) \end{tabular} \end{quote} These citation commands function much like |\citet| and |\citep|: they may take multiple keys in the argument, may contain notes, and are marked as hyperlinks. \head{Selecting citation style and punctuation} Use the command |\bibpunct| with one optional and 6 mandatory arguments: \begin{enumerate} \item the opening bracket symbol, default = ( \item the closing bracket symbol, default = ) \item the punctuation between multiple citations, default = ; \item the letter `n' for numerical style, or `s' for numerical superscript style, any other letter for author--year, default = author--year; \item the punctuation that comes between the author names and the year \item the punctuation that comes between years or numbers when common author lists are suppressed (default = ,); \end{enumerate} The optional argument is the character preceding a post-note, default is a comma plus space. In redefining this character, one must include a space if one is wanted. Example~1, |\bibpunct{[}{]}{,}{a}{}{;}| changes the output of \begin{quote} |\citep{jon90,jon91,jam92}| \end{quote} into [Jones et al. 1990; 1991, James et al. 1992]. Example~2, |\bibpunct[; ]{(}{)}{,}{a}{}{;}| changes the output of \begin{quote} |\citep[and references therein]{jon90}| \end{quote} into (Jones et al. 1990; and references therein). \head{Other formatting options} Redefine |\bibsection| to the desired sectioning command for introducing the list of references. This is normally |\section*| or |\chapter*|. Define |\bibpreamble| to be any text that is to be printed after the heading but before the actual list of references. Define |\bibfont| to be a font declaration, e.g.\ |\small| to apply to the list of references. 
Define |\citenumfont| to be a font declaration or command like |\itshape| or |\textit|. Redefine |\bibnumfmt| as a command with an argument to format the numbers in the list of references. The default definition is |[#1]|. The indentation after the first line of each reference is given by |\bibhang|; change this with the |\setlength| command. The vertical spacing between references is set by |\bibsep|; change this with the |\setlength| command. \head{Automatic indexing of citations} If one wishes to have the citations entered in the \texttt{.idx} indexing file, it is only necessary to issue |\citeindextrue| at any point in the document. All following |\cite| commands, of all variations, then insert the corresponding entry to that file. With |\citeindexfalse|, these entries will no longer be made. \head{Use with \texttt{chapterbib} package} The \thestyle\ package is compatible with the \texttt{chapterbib} package which makes it possible to have several bibliographies in one document. The package makes use of the |\include| command, and each |\include|d file has its own bibliography. The order in which the \texttt{chapterbib} and \thestyle\ packages are loaded is unimportant. The \texttt{chapterbib} package provides an option \texttt{sectionbib} that puts the bibliography in a |\section*| instead of |\chapter*|, something that makes sense if there is a bibliography in each chapter. This option will not work when \thestyle\ is also loaded; instead, add the option to \thestyle. Every |\include|d file must contain its own |\bibliography| command where the bibliography is to appear. The database files listed as arguments to this command can be different in each file, of course. However, what is not so obvious, is that each file must also contain a |\bibliographystyle| command, \emph{preferably with the same style argument}. 
\head{Sorting and compressing citations} Do not use the \texttt{cite} package with \thestyle; rather use one of the options \texttt{sort} or \texttt{sort\&compress}. These also work with author--year citations, making multiple citations appear in their order in the reference list. \head{Long author list on first citation} Use option \texttt{longnamesfirst} to have first citation automatically give the full list of authors. Suppress this for certain citations with |\shortcites{|\emph{key-list}|}|, given before the first citation. \head{Local configuration} Any local recoding or definitions can be put in \thestyle\texttt{.cfg} which is read in after the main package file. \head{Options that can be added to \texttt{\char`\\ usepackage}} \begin{description} \item[\ttfamily round] (default) for round parentheses; \item[\ttfamily square] for square brackets; \item[\ttfamily curly] for curly braces; \item[\ttfamily angle] for angle brackets; \item[\ttfamily colon] (default) to separate multiple citations with colons; \item[\ttfamily comma] to use commas as separators; \item[\ttfamily authoryear] (default) for author--year citations; \item[\ttfamily numbers] for numerical citations; \item[\ttfamily super] for superscripted numerical citations, as in \textsl{Nature}; \item[\ttfamily sort] orders multiple citations into the sequence in which they appear in the list of references; \item[\ttfamily sort\&compress] as \texttt{sort} but in addition multiple numerical citations are compressed if possible (as 3--6, 15); \item[\ttfamily longnamesfirst] makes the first citation of any reference the equivalent of the starred variant (full author list) and subsequent citations normal (abbreviated list); \item[\ttfamily sectionbib] redefines |\thebibliography| to issue |\section*| instead of |\chapter*|; valid only for classes with a |\chapter| command; to be used with the \texttt{chapterbib} package; \item[\ttfamily nonamebreak] keeps all the authors' names in a citation on one line; 
causes overfull hboxes but helps with some \texttt{hyperref} problems. \end{description}
Title: The Visible Integral-field Spectrograph eXtreme (VIS-X): high-resolution spectroscopy with MagAO-X
Abstract: The MagAO-X system is a new adaptive optics system for the Magellan Clay 6.5m telescope. MagAO-X has been designed to provide extreme adaptive optics (ExAO) performance in the visible. VIS-X is an integral-field spectrograph specifically designed for MagAO-X, and it will cover the optical spectral range (450 - 900 nm) at high spectral (R=15,000) and high spatial resolution (7 mas spaxels) over a 0.525 arcsecond field of view. VIS-X will be used to observe accreting protoplanets such as PDS70 b and c. End-to-end simulations show that the combination of MagAO-X with VIS-X is 100 times more sensitive to accreting protoplanets than any other instrument to date. VIS-X can resolve the planetary accretion lines, and can therefore constrain the accretion process. The instrument is scheduled to have first light in Fall 2021. We present lab measurements characterizing the spectrograph and its post-processing performance.
https://export.arxiv.org/pdf/2208.02720
\keywords{high-contrast imaging, high-resolution spectroscopy, exoplanets, adaptive optics} \section{INTRODUCTION} Young stars form in dense clouds that collapse under their own gravity. The angular momentum present at the beginning of this process forces the material to settle into a disk around the young star. Planets form inside these disks, either through instabilities in the disk that cause the formation of dense clumps or through pebble accretion. Multiple pathways to the formation of planets have been proposed, and there is no clear answer yet on how much each process contributes. A thorough understanding of the formation and evolution of exoplanets will allow us to gain an understanding of not only the origin of Earth, but possibly of life. To this end we will need to characterize exoplanets in all stages of their evolution, from young to old. However, the most common characterization technique utilizes the transit method, which suffers from astrophysical limitations such as constraints on the orbital geometry or astrophysical noise due to e.g. star spots or circumstellar material. Direct imaging plays an important role in overcoming these observational limitations. For both young and old systems, the influence of the star, and of possible circumstellar material, can be significantly reduced by spatially resolving the planet from its environment, allowing detailed characterization of the planet. The past few years have seen a large step in the capabilities of direct imaging instruments. Instruments such as SPHERE \cite{beuzit2019sphere} or MagAO \cite{close2014into} have observed the environments around many young stars in search of giant proto-planets: planets that are still accreting material from their birth environment. Young proto-planets sweep up the gas and dust within the proto-planetary disk. The accretion of gas releases a large amount of energy when the gas falls onto the planet or its circum-planetary disk. 
Most of this energy is released in specific line emission such as the hydrogen emission lines \cite{aoyama2020spectral}. These signatures are therefore among the strongest signposts of accreting gas giants. Several instruments contain sets of narrowband imaging filters that image the emission lines and the nearby continuum \cite{close2014discovery}. Recent observations have shown that medium- to high-resolution spectroscopy is ideal for observing accreting proto-planets \cite{haffert2019pds70, xie2020searching}. However, no direct imaging instrument has the capability of visible-light high-resolution integral-field spectroscopy. The Magellan Adaptive Optics eXtreme (MagAO-X) system is a new adaptive optics system for the Magellan Clay 6.5m telescope at Las Campanas Observatory (LCO). MagAO-X has been designed to provide extreme adaptive optics (ExAO) performance in the visible. It will ultimately deliver Strehl ratios of 90\% at 0.9 $\mu$m and nearly 80\% at H$\alpha$ (Males et al., 2018). The performance of MagAO-X in the visible is comparable to what other direct imaging instruments achieve in H or K band, making MagAO-X the ideal instrument to push exoplanet characterization into the visible range. However, MagAO-X does not have any spectroscopic capability. The Visible Integral-field Spectrograph eXtreme (VIS-X) is a spectrograph for MagAO-X that will cover the optical spectral range at high spectral and high spatial resolution. Section 2 expands on the primary science case and the instrumental requirements. Section 3 then discusses the instrument design and the first lab results. \section{Primary science case for VIS-X: Accreting proto-planets} The accretion process of sub-stellar companions is a key piece of information that can be used to discriminate between the different formation processes. The ability to accrete gas at all, and the actual mass accretion rate, will allow us to discriminate between formation pathways \cite{stamatellos2015properties}. 
Gas accretion onto massive planets is thought to be a very energetic process, and the emitted accretion luminosity can become comparable to the total internal luminosity of the planet \cite{mordasini2017characterization}. Therefore, visible-light High-Contrast Imaging (HCI) is a promising approach to detect these young protoplanets, because it provides access to strong accretion tracers such as H$\alpha$. The huge potential of H$\alpha$ imaging has been demonstrated by recent detections of actively accreting companions with HST/WFC (Zhou et al. 2014), Magellan/MagAO \cite{close2014discovery, wu2017alma, wagner2018pds70}, and more recently VLT/MUSE \cite{haffert2019pds70, xie2020searching, eriksson2020strong}. Haffert et al. 2019 show that integral-field spectroscopy is a very powerful technique to unambiguously detect proto-planets. Conventional high-contrast imaging techniques, such as Angular Differential Imaging (ADI) \cite{marois2006adi}, often lead to point-like features that are caused either by residual instrumental artifacts or by the presence of non-symmetric circumstellar disks \cite{ligi2018investigation}. With high-resolution integral-field spectroscopy we can eliminate the star and the circum-stellar disk by removing a scaled stellar spectrum from each spatial pixel (spaxel). The signal from the circum-stellar disk is also removed during this procedure, because the light from the disk in the visible range consists mainly of reflected star light and is therefore spectrally identical to the stellar spectrum. The emission lines from accreting planets are very narrow \cite{aoyama2020spectral}. Marleau et al. in prep show that the H$\alpha$ emission line starts to be resolved around a resolving power of 15,000 (20 km/s). This implies that the optimal SNR can be achieved when the resolving power of the instrument is about 15,000. 
If the resolving power is increased even further, the light from the proto-planet is smeared out over more detector pixels, which increases the detector noise contribution. The signal-to-noise ratio (SNR) of the H$\alpha$ measurement is, \begin{equation} \mathrm{SNR} = \frac{T F_{\mathrm{H\alpha},P}}{\sqrt{T (F_{\mathrm{Phot},S} + F_{\mathrm{H\alpha},S}) C(\theta_P) + N\sigma_D^2 }}. \end{equation} Here $T$ is the throughput from the top of the atmosphere to the camera, $F_{\mathrm{H\alpha},P}$ and $F_{\mathrm{H\alpha},S}$ are the H$\alpha$ fluxes of the planet and star, $F_{\mathrm{Phot},S}$ is the stellar photospheric emission, $C(\theta_P)$ is the contrast of the observations at angular separation $\theta_P$, $N$ is the number of pixels that are used to sample an unresolved emission line and $\sigma_D$ is the detector noise (read noise + dark current). Under favourable conditions the H$\alpha$ line of the star is separated from the H$\alpha$ line of the planet due to an intrinsic radial velocity difference. At high enough spectral resolution the stellar H$\alpha$ flux will not contribute at all at the velocity position of the planet. The flux of the stellar continuum is proportional to the bandwidth of a single spectral slice. Taking this into account, we arrive at the following SNR for the H$\alpha$ detection, \begin{equation} \mathrm{SNR} = \frac{T F_{\mathrm{H\alpha},P}}{\sqrt{T \langle F_{\mathrm{Phot}, S} \rangle \delta \lambda C(\theta_P) + N\sigma_D^2 }}. \end{equation} Here the photospheric flux of the star has been replaced by the average flux density times the bandwidth of a single spectral channel. The bandwidth is $\delta \lambda = \lambda / R$, where $R$ is the resolving power of the spectrograph. In the regions where photon noise dominates, the SNR is \begin{equation} \mathrm{SNR} = \frac{F_{\mathrm{H\alpha},P} \sqrt{TR}}{\sqrt{ \langle F_{\mathrm{Phot}, S} \rangle \lambda C(\theta_P)}}. 
\end{equation} This shows that the expected SNR scales with $\sqrt{R}$, under the assumption that the spectral line is unresolved. A high-resolution spectrograph at $R=15,000$ can have a large gain in sensitivity compared to narrowband imaging ($R=100$ for MagAO, and $R=120$ or $R\sim650$ for the broadband and narrowband H$\alpha$ filters in SPHERE, respectively). A rough order of magnitude in SNR improvement is $\sqrt{15000 / 100}\approx12.2$. This shows that significant gains can be made by using high-resolution integral-field spectroscopy for accreting proto-planets. \subsection{Post-processing gain} High-resolution integral-field spectroscopy offers several advantages for emission-line imaging. The first and foremost advantage of HRS is that it is easier to disentangle the light of the emission lines from the neighboring continuum. Only two images are taken with the classic approach of Dual Band Imaging (DBI). In post-processing, the continuum image is magnified by $\lambda_1 / \lambda_2$ to take care of the chromatic scaling due to diffraction. For accurate subtraction of one channel from the other, the total flux has to be scaled. This is usually achieved by measuring the total flux within the Airy core. After the flux correction, the images can be subtracted from each other, \begin{equation} \delta I = I(\lambda_1) - a I(\lambda_2) - b. \end{equation} Here $I(\lambda_i)$ is the observed image at spectral channel $i$, $a$ is the linear scale parameter, and $b$ is the background. This approach can be used to gain contrast; however, the gain is limited because this model assumes that the diffraction pattern and speckles can be modeled by a global linear scaling. This approach would work for a system without any wavefront aberration. However, any amount of wavefront aberration will introduce non-linear chromatic behavior in the focal plane speckles. This is especially true when amplitude errors are also present.
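The $\sqrt{R}$ scaling derived above can be checked with a few lines of Python. This is a minimal sketch; the resolving powers are the ones quoted in the text, and all function names are illustrative:

```python
import math

# Resolving powers quoted in the text
R_magao_nb = 100      # MagAO narrowband imaging
R_sphere_nb = 650     # SPHERE narrowband H-alpha filter
R_hrs = 15000         # high-resolution spectrograph

def snr_gain(r_high, r_low):
    """Photon-noise-limited SNR gain for an unresolved line: SNR scales as sqrt(R)."""
    return math.sqrt(r_high / r_low)

print(f"gain over MagAO  (R=100): {snr_gain(R_hrs, R_magao_nb):.1f}")
print(f"gain over SPHERE (R=650): {snr_gain(R_hrs, R_sphere_nb):.1f}")
```

The first gain factor is the $\sqrt{15000/100}\approx12.2$ order-of-magnitude estimate given above.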
The pupil plane electric field can be represented by, \begin{equation} E_p = A(1+g) e^{i \frac{2\pi}{\lambda} \delta}. \end{equation} Here $E_p$ is the pupil plane electric field, $A$ the pupil function, $g$ the amplitude aberrations and $\delta$ the wavefront error. For high-contrast imaging instruments $\delta$ is usually small and the exponential can be expanded into its Taylor series, \begin{equation} E_p = A(1+g) e^{i \frac{2\pi}{\lambda} \delta} \approx A(1+g)\left(1 + \frac{2\pi i}{\lambda} \delta \right). \end{equation} The propagation from the input pupil plane through the optical system to the final science focal plane is represented by the linear operator $C$. With this operator in hand, the focal plane electric field is $E_f = CE_p$. This representation holds for any type of IFS implementation, such as micro-lens array based IFS's or dual band imagers. Detectors can not measure the electric field directly; they measure the intensity. Writing $P = A(1+g)$ for the aberrated pupil, the final intensity that we measure is, \begin{equation} I_p = \left|CP +CP\frac{2\pi i}{\lambda}\delta\right|^2 = |CP|^2 + \left|CP\frac{2\pi i}{\lambda}\delta\right|^2 + 2\Re\left\{CP \left(CP \frac{2\pi i}{\lambda}\delta\right)^{\dagger}\right\}. \end{equation} From this it is clear that even in the small-aberration regime, there are terms that have a different chromatic behavior. The aberration-free term has no chromatic scaling except for the chromatic magnification due to diffraction, while the other two terms each scale differently with wavelength. In this example, the wavefront error was achromatic and the higher-order terms have been neglected. Such a simple example already shows why there is no global linear relation between two different spectral channels, and also explains why DBI will not remove all aberrations. More spectral channels have to be measured to accurately model the chromatic behavior of the stellar speckles. An IFS provides such an opportunity. However, having many spectral channels is not the only requirement.
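This argument can be illustrated numerically. The sketch below (assuming a circular aperture, an achromatic 20 nm rms random phase screen, and simple FFT propagation; all numbers are illustrative) forms focal-plane images at two nearby wavelengths and fits the single best global scale factor, as in DBI. The residual does not vanish:

```python
import numpy as np

# Illustrative pupil: circular aperture on a 256x256 grid
n = 256
x = np.linspace(-1, 1, n)
xx, yy = np.meshgrid(x, x)
aperture = (xx**2 + yy**2 <= 1.0).astype(float)

rng = np.random.default_rng(0)
# Achromatic optical path difference (meters), 20 nm rms random screen
opd = 20e-9 * rng.standard_normal((n, n))

def psf(wavelength):
    """Fraunhofer propagation: focal-plane intensity for a given wavelength."""
    field = aperture * np.exp(2j * np.pi * opd / wavelength)
    focal = np.fft.fftshift(np.fft.fft2(field))
    return np.abs(focal)**2

i1 = psf(650e-9)
i2 = psf(660e-9)

# Best single global scale factor a (least squares), as assumed by DBI
a = np.sum(i1 * i2) / np.sum(i2**2)
residual = np.linalg.norm(i1 - a * i2) / np.linalg.norm(i1)
print(f"fractional residual after global scaling: {residual:.3e}")
```

Note that on the FFT grid the focal plane is naturally sampled in units of $\lambda/D$, so the chromatic magnification is already factored out: the remaining residual comes purely from the chromatic phase terms discussed above.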
Chromatic scaling due to diffraction happens on spectral resolving powers of $R=N$, with $R$ the resolving power and $N$ the field of view in units of $\lambda/D$. Typical instruments have a field of view of $\approx100 \lambda/D$. The instrument should therefore have $R\gg100$ to accurately measure the speckles. However, most if not all direct imaging IFS have low spectral resolving power, e.g. SPHERE-IFS has 50, CHARIS/SCEXAO has 20-80 and ALES at the LBT has 40. Speckle removal with higher-resolution spectroscopy has already proven itself to work better than DBI\cite{hoeijmakers2018atomic}. The post-processing of HR-IFS data uses the assumption that diffraction and instrumental effects happen at low spectral resolution and can therefore be modeled by low-order polynomials. This means that the spectrum at every pixel can be described by a reference stellar spectrum multiplied by a low-order polynomial. There are many spectral channels available for each spatial pixel (spaxel) in the IFU, therefore a different polynomial model can be estimated for each spaxel. This is described by, \begin{equation} I_j(\lambda) = \sum_i a_{ij} \phi_i(\lambda) S(\lambda). \end{equation} A Linear Least-Squares (LLS) solution can be found for the polynomial coefficients because the model is linear in the coefficients. A good choice of low-order polynomials are Chebyshev or Legendre polynomials. These are orthogonal and create robust matrices for the matrix inversion step. The results of the post-processing gain are shown in Figure \ref{fig:example}. The high-resolution observations are well below $10^{-8}$, which is more than sufficient to detect accreting proto-planets. The DBI mode can have strong residuals, depending on the exact chromatic behavior of the speckles. The DBI method is limited to $>10^{-6}$ at the inner regions for the smallest wavefront errors. This simulation only considered a single phase screen in the pupil.
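The per-spaxel model $I_j(\lambda) = \sum_i a_{ij}\,\phi_i(\lambda)\,S(\lambda)$ can be sketched as follows. All data here are synthetic, and the Legendre basis with a fourth-order model is one of the choices named above, not the instrument pipeline itself:

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(1)
n_chan = 400
lam = np.linspace(-1, 1, n_chan)          # spectral axis mapped to [-1, 1]
stellar = 1.0 + 0.3 * np.cos(6 * lam)     # mock reference stellar spectrum S(lambda)

# Design matrix: Legendre polynomials phi_i(lambda) times the stellar spectrum
order = 3
basis = np.stack([legendre.Legendre.basis(i)(lam) for i in range(order + 1)],
                 axis=1)                  # shape (n_chan, order + 1)
design = basis * stellar[:, None]

# Synthetic spaxel: smooth chromatic modulation of the stellar spectrum plus noise
true_mod = 0.8 + 0.1 * lam - 0.05 * lam**2
spaxel = true_mod * stellar + 1e-3 * rng.standard_normal(n_chan)

# Linear least-squares solution for the coefficients a_ij of this spaxel
coeffs, *_ = np.linalg.lstsq(design, spaxel, rcond=None)
model = design @ coeffs
residual = spaxel - model   # narrow planetary emission lines would survive here
print(f"rms residual: {np.std(residual):.2e}")
```

Because the smooth modulation is fitted out, any narrow emission line that is present in the spaxel but not in the reference spectrum remains in the residual, which is what makes the planet detectable.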
Real systems are expected to contain stronger and more complex chromatic behavior, which will push the post-processed contrast to the $0.5\lambda$ curve. \subsubsection{First end-to-end simulations} End-to-end simulations of an $R=15000$ H$\alpha$ spectrograph coupled to MagAO-X have been performed to investigate the gain in performance. We have simulated a system with 50 actuators across the pupil driven by an unmodulated pyramid wavefront sensor. The target was an 8th magnitude star. Median atmospheric conditions of the Las Campanas site have been assumed ($v=15$ m/s, $r_0=0.16$ m at $\lambda=550$ nm). A separate static phase screen with 50 nm rms has been used to simulate non-common path errors. The sensitivity curves are shown in Figure \ref{fig:sensitivity}. The derived contrast curves show that VIS-X will indeed add roughly a factor of 10 to the sensitivity of MagAO-X. MagAO-X itself will already be more sensitive than any other instrument because it has been optimized to operate as an xAO system in the visible. VIS-X shows the largest gain in the inner few $\lambda/D$, exactly where we expect to find the most planets \cite{close2020separation}. This demonstrates the benefit of high-resolution IFU observations of the H$\alpha$ emission line. Additionally, due to the higher resolution it may become possible to study the line shapes and derive the physical state of the accretion process. \section{VIS-X DESIGN AND FIRST MEASUREMENTS} Due to constraints on detector real estate there is a trade-off between spatial sampling, field of view, spectral bandwidth and spectral resolution. Maximizing the field of view and spectral resolving power requires a large-format detector. In the past few years there has been a significant amount of progress in the quality of back-illuminated CMOS detector technology.
SONY released the imx455 sensor in 2019 with 9600x6422 pixels, a quantum efficiency close to 80 percent at H$\alpha$, a peak efficiency close to 90 percent (at 550 nm) and $\sim$1 electron read noise at the highest gain setting. With water-based cooling it is possible to reduce the dark current to below 0.001 electron/s/pixel. These properties make this an ideal sensor for visible integral-field spectroscopy. With an internal UA seed grant, we developed a prototype micro-lens array (MLA) based integral-field spectrograph that can operate in a narrowband (6 nm) around H$\alpha$ using the new imx455 sensor. MLA-based IFUs are in use in all direct imaging instruments and are considered a mature technology, and therefore a low-risk design. The prototype has been designed to deliver a resolving power of R$\sim15000$ at H$\alpha$, with a fixed spectral bandwidth of 5 nm. The prototype has a limited field of view of 0.5'' in diameter. Figure \ref{fig:layout} shows a schematic of the spectrograph within the available space envelope of MagAO-X. We use two spherical mirrors as an achromatic relay that magnifies the F/69 beam of MagAO-X to sample the PSF with 3 spaxels per $\lambda/D$ at H$\alpha$ with the MLA. This will keep us Nyquist sampled down to H$\gamma$ (434 nm). On-sky experience with the MUSE IFU at the VLT showed us that well sampled LSFs are critical for accurate post-processing\cite{xie2020searching}. The relay itself has a theoretical wavefront rms of $\lambda/100$ because we are working with slow beams (F/69 to F/870). The performance is therefore mainly determined by the manufacturing quality of the relay mirrors. We found that $\lambda$/4 mirrors are sufficient for our purpose, and lab measurements confirmed that there is no indication of degradation of the PSF after the relay; see Figure \ref{fig:extracted_psf} for an extracted PSF. The extracted PSF contains roughly $\lambda$/10 rms defocus, which is well within the correction range of the NCPA DM of MagAO-X.
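The quoted sampling can be checked from the beam geometry: the number of spaxels per $\lambda/D$ is $\lambda\,F\#/p$, with $F\#$ the output focal ratio and $p$ the micro-lens pitch. A quick check, assuming the F/870 output beam and the 192 $\mu$m pitch given in the text:

```python
pitch = 192e-6   # m, micro-lens pitch (from the text)
fnum = 870       # output focal ratio of the magnified beam (from the text)

def spaxels_per_lod(wavelength):
    """Number of spaxels across one lambda/D resolution element."""
    return wavelength * fnum / pitch

print(f"H-alpha (656 nm): {spaxels_per_lod(656e-9):.2f} spaxels per lambda/D")
print(f"H-gamma (434 nm): {spaxels_per_lod(434e-9):.2f} spaxels per lambda/D")
```

This reproduces the 3 spaxels per $\lambda/D$ at H$\alpha$ and confirms that the sampling is still essentially Nyquist (2 spaxels per $\lambda/D$) at H$\gamma$.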
The PSF is sampled by a micro-lens array with a 192 $\mu$m pitch and a 3.17 mm focal length (F/16.5). The spectrograph's backend is kept as simple as possible by using a first-order layout with identical lenses (Thorlabs TTL200MP) for the camera and collimator, both having a focal length of 200 mm. The current design has diffraction-limited performance over $\pm$0.25 arcseconds on-sky, but the performance rapidly degrades outside this small field of view. The monochromatic PSFs of the spaxels can be seen in Figure \ref{fig:psflets}. \section{Conclusion} This manuscript has described the rationale and design of a new IFU, VIS-X, for MagAO-X. Its main science focus is accreting proto-planets, observed in H$\alpha$ at high spectral resolution. We expect that VIS-X will provide a gain in sensitivity of a factor of 100 compared to other direct imaging instruments. This will enable us to search for fainter proto-planets at smaller angular separations. VIS-X has its first light scheduled for Fall 2021. \acknowledgments % The authors acknowledge funding from the Lucas/San Diego Astronomy Association Junior Faculty Award to build the VIS-X spectrograph. Support for this work was provided by NASA through the NASA Hubble Fellowship grant \#HST-HF2-51436.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555. This research made use of HCIPy, an open-source object-oriented framework written in Python for performing end-to-end simulations of high-contrast imaging instruments \cite{por2018high}. \bibliography{report} % \bibliographystyle{spiebib} %
Title: GRB 210619B optical afterglow polarization
Abstract: We report on the follow-up of the extremely bright gamma-ray burst GRB 210619B with optical polarimetry. We conducted optopolarimetric observations of the optical afterglow of GRB 210619B in the SDSS-r band in the time window ~ 5967 - 8245 seconds after the burst, using the RoboPol instrument at the Skinakas observatory. We find signs of variability of the polarization degree as well as the polarization angle during the time of observations. We also note a significant rise in polarization value and a significant change in the polarization angle towards the end of our observations. This is the first time such behavior is observed in this timescale.
https://export.arxiv.org/pdf/2208.13821
\title{GRB 210619B optical afterglow polarization} \author{N. Mandarakas \inst{1,2} \and D. Blinov\inst{1,2} \and D. R. Aguilera-Dena\inst{1} \and S. Romanopoulos\inst{1,2} \and V. Pavlidou\inst{1,2} \and K.~Tassis\inst{1,2} \and J.~Antoniadis\inst{1,3} \and S. Kiehlmann\inst{1,2} \and A.~Lychoudis\inst{2} \and L. F. Tsemperof Kataivatis\inst{2} } \institute{Institute of Astrophysics, Foundation for Research and Technology - Hellas, Voutes, 70013 Heraklion, Greece\\ \email{nmandarakas@physics.uoc.gr} \and Department of Physics, University of Crete, 70013, Heraklion, Greece \and Max-Planck-Institut f\"{u}r Radioastronomie, Auf dem H\"{u}gel 69, DE-53121 Bonn, Germany } \date{Received XXX; accepted March 25, 1821} \abstract {} {We report on the follow-up of the extremely bright long gamma-ray burst GRB~210619B with optical polarimetry.} {We conducted optopolarimetric observations of the optical afterglow of GRB~210619B in the SDSS-r band in the time window $\sim 5967 - 8245$ seconds after the burst, using the RoboPol instrument at the Skinakas observatory.} {We report a $5\,\sigma$ detection of polarization $P=1.5\pm0.3\%$ at polarization angle $EVPA=8\pm6^\circ$. We find that during our observations the polarization is likely constant. These values are corrected for polarization induced by the interstellar medium of the Milky Way, and host-induced polarization is likely negligible. Thus, the polarization we quote is intrinsic to the GRB afterglow.} {} \keywords{} \section{Introduction} Gamma-ray bursts (GRBs) are the most energetic electromagnetic astrophysical phenomena known today. They are typically discovered and classified by their characteristic fast-rising, luminous emission of gamma-ray photons, the so-called prompt emission phase, which is typically followed by multiband emission, known as the afterglow.
Depending on the time during which most gamma-ray photons are detected, GRBs are divided into two categories: short GRBs, which typically last $<2$ seconds, and long GRBs, which last $\sim30$ seconds \citep{Kouveliotou1993}. Recently, short GRBs have been observed to form in association with kilonovae and gravitational wave emission, suggesting that they originate in compact object mergers \citep[e.g.][]{Narayan1992,Abbott2017B,Abbott2017A,Makhathini2021}. Long GRBs are theorized to occur after the collapse of a massive star into a black hole \citep[e.g.][]{Woosley1993}, or a magnetar \citep[e.g.][]{Usov1994}, and are often associated with supernovae \citep[e.g.][]{Hjorth2003}. GRB afterglows are usually attributed to the interaction between an ultra-relativistic jet that is launched during the formation of the compact object, and the circumburst medium. The observed multiwavelength synchrotron emission is thought to be due to the propagation of two shocks, a forward shock and a reverse shock, with the latter dominating at early times \citep{Piran1999,Piran2004}. If an ordered magnetic field is present in the ejecta, the reverse shock emission can be highly polarized \citep[e.g.][]{Granot2003}. The polarization of the forward shock depends on the morphology and intensity of the circumburst magnetic field \citep{Uehara2012}. The exact emission mechanisms, geometry, and physical properties of the emission region in GRBs are currently not well understood. Polarimetric observations of GRBs, both in the prompt phase and during the afterglow, can potentially reveal some of their unknown properties.
Polarization in the prompt phase of GRBs in the $\gamma$-ray band is expected to arise due to synchrotron radiation \citep[e.g.][]{Granot2003,Waxman2003,Metzger2011}, inverse Compton scattering \citep[e.g.][]{Shaviv1995}, or fragmented fireballs \citep[e.g.][]{Lazzati2009}, and the expected levels of polarization are of the order of tens of percent, depending on the model, while observations are in agreement with the predictions (see \cite{Covino2016} for a review). However, recent results by \cite{Kole2020} reveal cases where the polarization of the prompt phase in the 50-500 keV energy range can be as low as zero. There have been several polarimetric measurements of optical GRB afterglows. Most late GRB optical afterglows display polarization degree values of a few percent \citep{Covino2016}. However, early-time optical afterglows systematically display higher values of polarization (tens of percent), which has been attributed to ordered magnetic fields within the jets, strong enough not to be distorted by the reverse shock \citep{Deng2017}. Whether magnetic fields around GRBs are mostly ordered or random can have an observable effect on their polarization, related mostly to the variations of the polarization angle in time \citep{Teboul2021}. There are some notable examples of single-epoch optopolarimetric measurements of GRB afterglows. For example, \cite{Steele2009} measured the polarization of the early afterglow of GRB~090102 at $10.1\pm1.3\%$. \cite{Uehara2012} presented a polarization measurement of GRB~091208B, 149–706 seconds after trigger, at the level of $10.4\pm2.5\%$. Time-resolved measurements could provide better insights into the processes following a GRB. For example, \cite{Mundell2013} observed the optopolarimetric evolution of GRB~120308A and reported a polarization degree of $28\pm4\%$, decreasing to $16\substack{+5 \\ -4}\%$, with the polarization angle remaining constant during the observations.
Recently, \cite{Shrestha2022a} followed GRB~191016A 3987–7687 seconds after the burst and reported evidence of polarization in all phases, with a peak of $14.6\pm7.2\%$, which coincides with the start of the flattening of the light curve. \cite{Shrestha2022} present time-resolved optopolarimetric measurements of several GRB afterglows, finding evidence of polarization in two of them, one of which is GRB~191016A as discussed above. \subsection{GRB~210619B} In this paper we report on the optical afterglow polarization of the long gamma-ray burst GRB~210619B, $\sim 5967-8245$ seconds post-burst. On June 19, 2021, at 23:59:25 UT, the \textit{Swift} Burst Alert Telescope (BAT) triggered on and located GRB~210619B \citep{Davanzo2021}. The GRB was also detected by \textit{GECAM} \citep{Zhao2021}, the \textit{Konus-Wind} experiment \citep{Svinkin2021} and the \textit{Fermi} Gamma-Ray Burst Monitor \citep{Poolakkil2021}. \cite{Postigo2021} observed the optical afterglow with OSIRIS on the 10.4 m GTC telescope at the Roque de los Muchachos Observatory and measured its redshift to be $z=1.937$. They also detected an intervening system at $z=1.095$. \cite{Atteia2021} noted the possibility of a lensed visible afterglow in the following months due to this intervening system. \cite{Oganesyan2021} combined multi-filter optical observations together with X-ray and $\gamma$-ray data to model the emission of the GRB. They disentangled the contributions of the reverse and forward shocks and argued that the GRB multiwavelength emission is produced by a narrow, highly magnetized jet propagating in a sparse environment, with an approximate jet-break time at $\sim 10^4$ seconds. Comparison between the optical and X-ray data shows evidence of a secondary component of radiation in the jet wings.
\section{Observations and data reduction} \subsection{GRB observations}\label{grb_obs} We conducted optical polarimetric observations of the afterglow of GRB~210619B with the RoboPol instrument, which is mounted on the 1.3 m telescope at the Skinakas Observatory in Crete\footnote{https://skinakas.physics.uoc.gr/}. RoboPol is a four-channel polarimeter without rotating parts that can measure the linear Stokes parameters $I$, $q=Q/I$, and $u=U/I$ in a single exposure \citep{Ramaprakash2019}. The instrument splits the incoming light into four beams, corresponding to four orthogonal polarization directions, and projects four spots on the CCD in a cross shape for each of the imaged sources. It is optimized for a single source placed in the center of the field of view by using a mask in the telescope focal plane, which is designed to significantly lower the background and prevent overlapping between sources. Observations with RoboPol are performed using an automated pipeline \citep{King2014b}. In the case of targets of opportunity, like GRBs, the automated system informs the observers and provides the coordinates of the event. For example, RoboPol was used to observe the optical afterglow of GRB~131030A, where \cite{King2014a} measured a constant linear polarization value of $p=2.1\pm1.6\%$ throughout the observations. Following the trigger of GRB~210619B, regular observations were interrupted and the telescope was pointed towards the GRB location. We began taking exposures in the SDSS-r band at 1:37:12.00 UT, June 20, 2021, $\sim5867$ seconds after the trigger ($\sim1998$ seconds after the burst in the source frame). At the time of observations, it was already morning astronomical twilight. Therefore, we observed the GRB afterglow by taking a series of 200-second exposures until the background was too high to allow for more observations. This resulted in eleven 200-second exposures.
For the first five of them we were able to confirm that they are likely not affected by the polarization of the morning sky (see Sec.~\ref{sec:results}). Data reduction and calibration were performed using the standard RoboPol pipeline \citep{King2014b,Panopoulou2015,Blinov2021} and the RoboPol instrument model. \subsection{Interstellar polarization} The observed polarization of any object is the result of the Milky Way interstellar polarization (ISP) added to the intrinsic one. To estimate and correct for the ISP, we observed three field stars in the mask of RoboPol. ISP is produced by the same dust that is responsible for extinction; therefore, stars that are more affected by extinction are expected to provide a better estimate of the polarization fraction induced by the interstellar medium (ISM). We used the 3D dust map compiled by \cite{Green2019} to probe the Galactic extinction at different distances along the line of sight of the GRB. We chose bright stars in the field of the GRB that are expected to have maximum extinction (Fig.~\ref{fig:Egr}), and therefore their polarization should reflect, as accurately as possible, the polarization induced by the ISM, provided they are intrinsically zero-polarized. This is a fair assumption, since most stars do not have intrinsic polarization, with few exceptions, such as magnetic stars, evolved stars or stars surrounded by a dusty disk \citep{Fadeyev2007,Clarke2010}. In fact, if a peculiar star were present, it would show up as an outlier from the rest in the $q-u$ plot (Fig.~\ref{fig:ISP}). The extinction $E(B-V)$ towards the line of sight of the GRB is $0.12-0.14\,$mag (depending on the choice of equation for unit conversion between \cite{Green2019} units and $E(B-V)$ - see \cite{dustmaps}). The upper limit for the polarization induced by dust alignment in the ISM is $13\%\times E(B-V)$ \citep{Panopoulou2019,Planck2020}.
Based on this estimate, the expected maximum ISM-induced polarization in the line of sight is $1.6-1.8\%$. However, field stars in the same line of sight have an average polarization value of $P=0.26\pm0.05\%$. This could be due to the presence of multiple dust clouds in the line of sight which are permeated by magnetic fields with different orientations. This configuration would align dust grains in different directions in each cloud and thus give rise to depolarization of the light of distant stars \citep[e.g.][]{Tassis2018}. The steps seen in the reddening plot in Fig.~\ref{fig:Egr} hint at such a scenario. Another way to estimate the ISP is to use the polarized thermal emission map provided by \cite{Planck2015XIX}. Dust grains absorb optical light, which is preferentially polarized parallel to their major axis. Therefore, in the case where dust grains are aligned (e.g. due to the presence of a magnetic field), light would appear preferentially polarized in the direction perpendicular to the dust grains. The light absorbed by dust is re-emitted in the far-infrared, with a polarization along the major axes of the grains. It follows from this that thermal emission is expected to be polarized in the plane perpendicular to the optical polarization produced by the ISM. \cite{Planck2015XXI} studied the polarization of 206 stars in the sub-millimeter and optical ranges and provided the correlation of the Stokes parameters $q=Q/I$ and $u=U/I$ between the two bands. The correlation is clear and robust. Thus, the optical ISP can be directly inferred from the submillimeter measurements of \cite{Planck2015XIX} on the same line of sight. We derive the optical ISP using Planck data for the region centered on the location of the GRB with two different resolutions, 30$\arcmin$ and 15$\arcmin$. We show the measurements of the field stars together with the optical ISP derived from the Planck measurements in Fig.~\ref{fig:ISP}.
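The upper-limit arithmetic above is straightforward to verify, using the $13\%\times E(B-V)$ relation and the extinction range quoted in the text:

```python
# Maximum ISM-induced polarization: P_max = 13% x E(B-V)
# (relation from Panopoulou & Lenz 2019 and Planck 2020, as cited in the text)
for ebv in (0.12, 0.14):
    p_max = 0.13 * ebv * 100  # in percent
    print(f"E(B-V) = {ebv:.2f} mag -> P_max = {p_max:.1f}%")
```

This reproduces the $1.6-1.8\%$ range stated above, which the measured field-star average of $0.26\pm0.05\%$ falls well below.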
For comparison, we also plot the weighted mean of the first five GRB measurements on the same plot, not corrected for the ISM contribution (see Sect.~\ref{sec:results} for more details). Information on the ISP and the mean GRB measurements is presented in Table~\ref{table:fs}. The weighted mean of the field stars and the ISP value obtained from the Planck 15$\arcmin$ measurement are consistent with each other. Since we only have three field stars for the ISP estimation, we decided to use the Planck value, because the Planck beam averages over a dust column with a larger cross section, which gives a higher SNR. The field stars served as an additional confirmation of the validity of the Planck data in the line of sight of interest. We favor the 15$\arcmin$ over the 30$\arcmin$ resolution, since it gives information about the ISP on more localized scales and should better reflect the true value of the ISP in the line of sight of the GRB. Finally, \cite{Skalidis2019} have shown that using Planck submillimeter polarization data on such scales to infer the optical ISP is a reasonable choice that provides accurate and high-SNR measurements. When the object is extragalactic, as GRBs are, the host galaxy ISM could also potentially polarize the observed light. Host-induced polarization is proportional to the amount of dust present around the location of the burst and can be probed by the reddening of the GRB. In our case, \cite{Oganesyan2021} find that the host galaxy reddening is negligible, and therefore the host-induced polarization should be negligible as well. The final measurement and uncertainty of Stokes $q=Q/I$ for each of the exposures are simply: \begin{equation} q = q^{measured} - q^{instrumental} -q^{ISP} \end{equation} \begin{equation} \sigma_q = \sqrt{\left(\sigma_q^{measured}\right)^2 + \left(\sigma_q^{instrumental}\right)^2 + \left(\sigma_q^{ISP}\right)^2} \end{equation} and similarly for Stokes $u=U/I$.
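The correction above is a simple subtraction with quadrature error propagation. A minimal sketch (the numerical values in the usage example are illustrative placeholders, not the measured values):

```python
import math

def correct_stokes(q_meas, sq_meas, q_inst, sq_inst, q_isp, sq_isp):
    """Subtract instrumental and ISP contributions from a measured Stokes
    parameter and add the uncertainties in quadrature."""
    q = q_meas - q_inst - q_isp
    sq = math.sqrt(sq_meas**2 + sq_inst**2 + sq_isp**2)
    return q, sq

# Illustrative placeholder numbers (percent), applied identically to q and u
q, sq = correct_stokes(1.9, 0.30, 0.05, 0.02, 0.25, 0.02)
print(f"q = {q:.2f} +/- {sq:.2f} %")  # -> q = 1.60 +/- 0.30 %
```

Note that when the instrumental and ISP uncertainties are much smaller than the measurement uncertainty, as in the example, the final error budget is dominated by the measurement itself.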
\begin{table*} \caption{Field stars polarization corrected for instrumental polarization and their weighted mean together with the optical ISP derived from Planck using two different resolutions, 30$\arcmin$ and 15$\arcmin$, as well as the weighted mean and errors on the mean of the first five GRB measurements.} \label{table:fs} \centering \begin{tabular}{c c c c l S[table-format=-2.2,table-figures-uncertainty=2] S[table-format=-2.2,table-figures-uncertainty=2]} % \hline\hline star ID & RA & Dec & Dist. & r & $Q/I$ & $U/I$ \\ & deg & deg & kpc & mag & \% & \% \\ \hline Field Star 1 & 319.7258658 & 33.8492013 & 4.3 & 13.85 & 0.34\pm0.15 & 0.11\pm0.05 \\ Field Star 2 & 319.7152985 & 33.8603076 & 4.2 & 13.90 & 0.30\pm0.10 & 0.08\pm0.16 \\ Field Star 3 & 319.7351566 & 33.8638350 & 3.2 & 15.7 & 0.25\pm0.06 & -0.12\pm0.11 \\ \hline Weighted Mean & -- & -- & -- & -- & 0.25\pm0.04 & 0.07\pm0.13 \\ \hline Planck 30$\arcmin$ & 319.71831 & 33.85044 & -- & -- & 0.16\pm0.02 & 0.19\pm0.02 \\ Planck 15$\arcmin$ & 319.71831 & 33.85044 & -- & -- & 0.25\pm0.02 & 0.11\pm0.02 \\ \hline GRB & 319.71831 & 33.85044 & -- & -- & 1.50\pm0.31 & 0.46\pm0.28 \\ \hline \end{tabular} \end{table*} \section{Results \& Discussion}\label{sec:results} We present the time evolution of the degree of polarization ($P$) and the Electric Vector Position Angle (EVPA) of our observations in Fig.~\ref{fig:results}. Since polarization is a nonnegative quantity, measurements are biased toward higher values, especially those with low signal-to-noise ratio \citep[e.g.][]{Vaillancourt2006}. We present in Table~\ref{table:p_evpa} both the raw measurements and the debiased values for the polarization. Debiasing was performed according to \cite{Plaszczynski2014}. All the provided values in Table~\ref{table:p_evpa} and Fig.~\ref{fig:results} are corrected for instrumental polarization and ISP. 
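The debiasing step can be sketched with the modified asymptotic (MAS) estimator, which is our reading of the prescription in \cite{Plaszczynski2014}: $p_{\mathrm{MAS}} = p - \sigma_p^2\left[1 - e^{-p^2/\sigma_p^2}\right]/(2p)$. Applied to the first exposure of the time series ($P=0.9\pm0.7\%$), it reproduces the tabulated debiased value of $0.7\%$ after rounding:

```python
import math

def debias_mas(p, sigma):
    """Modified asymptotic (MAS) polarization debiasing estimator,
    following our reading of Plaszczynski et al. (2014)."""
    if p == 0:
        return 0.0
    return p - sigma**2 * (1.0 - math.exp(-(p / sigma)**2)) / (2.0 * p)

# First exposure: P = 0.9 +/- 0.7 %  ->  about 0.68 %, i.e. 0.7 % after rounding
print(f"{debias_mas(0.9, 0.7):.2f}")
```

The correction shrinks rapidly with increasing signal-to-noise ratio: for the second exposure ($1.6\pm0.7\%$) the estimator returns about $1.45\%$, again matching the tabulated $1.4\%$ after rounding.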
Since the observations were made during morning twilight, we considered the possibility that the rising background affected our measurements (the morning sky is highly polarized). The stars in the RoboPol field cannot be measured with sufficient confidence to compare how their polarization changes with time, which would have allowed us to investigate this scenario directly. For this reason, we ran simulations as follows: we made four fake images for each of our exposures by replacing the GRB source in our images with a mock source with polarization $P=2\%$, and a different value of EVPA for each of the four images: $0, 45, 90, 135^\circ$. The FWHM and intensity of the source in each frame matched the FWHM and intensity of the corresponding real exposure. Then, for each of the fake frames, we conducted the analysis in the same way as for the real observations. We present the output of the simulations in Fig.~\ref{fig:sims}. It becomes obvious that the later observations in all cases tend to be farther away from the real values than the earlier ones. Especially in the simulation with input $EVPA=0^\circ$ (top left of Fig.~\ref{fig:sims}), the measurements of EVPA tend to drift towards the background value. A similar drift seems to be apparent in the last measurements of the simulation with input $EVPA=45^\circ$ (top right of Fig.~\ref{fig:sims}). Finally, the later polarization values of all simulations seem to be, on average, less accurate than the first ones. Based on the above, we conclude that the first five measurements are likely the only ones not affected by the polarized sky, since they are the only ones unaffected by the background in all simulations. We present here all of our measurements, yet we highlight that the last six measurements of the time series are probably seriously affected by the high background.
Although RoboPol allows for background estimation for each of the four spots of the source, to account for occasions when the background is polarized, in these particular images the source is fairly faint, the background high and the sky highly polarized ($\sim40\%$). Therefore, a minor deviation of the background estimate from its true value is enough to give rise to the observed behavior. We note that for the first five measurements of the simulation, the scatter in the measured values is similar to the one in the observations. Therefore, we postulate that the polarization and EVPA of the GRB are likely constant throughout these measurements. By combining them we get the debiased values $P=1.5\pm0.3\%$, $EVPA=8\pm6^\circ$. Unlike the previous GRB observed with RoboPol at Skinakas \citep{King2014a}, where the polarization of GRB~131030A was found to be consistent with the interstellar polarization, we obtain a high-confidence ($5\,\sigma$) detection of intrinsic GRB afterglow polarization. The degree of polarization measured in this work for GRB~210619B is rather typical for optical GRB afterglows on such timescales \citep[e.g.][]{Covino2016}. According to the modeling of \cite{Oganesyan2021}, at the time of our observations the optical emission was dominated by the contribution of the forward shock. This level of linear polarization agrees with the long-held idea that it arises primarily from synchrotron radiation. There are several different models that agree with this level of polarization. However, observations of the temporal evolution of GRB polarization over a longer time period are more appropriate to constrain and test the theory (see \cite{Gill2021} for a recent review). \begin{table*} \caption{Measurements of the degree of polarization and EVPA of the GRB. 
Time corresponds to the median time in the observer's frame of each 200 second exposure.} \label{table:p_evpa} \centering \begin{tabular}{S[table-format=4.0] S[table-format=4.0] S[table-format=-1.1,table-figures-uncertainty=2] S[table-format=-1.1,table-figures-uncertainty=2] S[table-format=-2.0,table-figures-uncertainty=2] S[table-format=-2.0,table-figures-uncertainty=2]} \hline\hline \multicolumn{2}{c}{Time since GRB (seconds)} & P & P$_{debiased}$ & {EVPA} & {EVPA}$_{debiased}$\\ {Observer's frame} & {Source's frame} & \% & \% & $^\circ$ & $^\circ$ \\ \hline \\ \multicolumn{6}{c}{Likely not affected by the background}\\ \hline 5967 & 2032 & 0.9\pm0.7 & 0.7\pm0.7 &37 \pm 21 &37 \pm 29 \\ 6177 & 2103 & 1.6\pm0.7 & 1.4\pm0.7 &-6 \pm 13 &-6 \pm 15 \\ 6537 & 2226 & 2.3\pm0.7 & 2.2\pm0.7 &22 \pm 9 &22 \pm 10 \\ 6751 & 2299 & 2.7\pm0.8 & 2.6\pm0.8 &2 \pm 8 &2 \pm 8 \\ 6964 & 2371 & 1.7\pm0.8 & 1.5\pm0.8 &14 \pm 14 &14 \pm 16 \\ \hline \multicolumn{2}{c}{Average} & 1.6\pm0.3 & 1.5\pm0.3 &8 \pm 5 &8 \pm 6\\ \hline \\ \multicolumn{6}{c}{Likely affected by the background}\\ \hline 7178 & 2444 & 4.1\pm0.9 & 4.0\pm0.9 &-9 \pm 7 &-9 \pm 7 \\ 7391 & 2517 & 1.1\pm1.0 & 0.8\pm1.0 &5 \pm 27 &5 \pm 36 \\ 7604 & 2589 & 2.5\pm1.1 & 2.3\pm1.1 &-31 \pm 12 &-31 \pm 14 \\ 7818 & 2662 & 3.4\pm1.5 & 3.1\pm1.5 &-17 \pm 13 &-17 \pm 15 \\ 8031 & 2735 & 5.6\pm1.7 & 5.3\pm1.7 &2 \pm 8 &2 \pm 10 \\ 8245 & 2807 & 1.0\pm2.0 & 0.6\pm2.0 &-23 \pm 56 &-23 \pm 52 \\ \hline \end{tabular} \end{table*} \begin{acknowledgements} We thank the anonymous referee and the editor Sergio Campana for providing useful comments that helped improve this manuscript. N.M, D.B., K.T., and K.K. acknowledge support from the European Research Council (ERC) under the European Union Horizon 2020 research and innovation program under grant agreement No 771282. K.T. 
acknowledges support from the Foundation of Research and Technology - Hellas Synergy Grants Program through the project POLAR, jointly implemented by the Institute of Astrophysics and the Institute of Computer Science. V.P. and S.R. were supported by the Hellenic Foundation for Research and Innovation (H.F.R.I.) under the "First Call for H.F.R.I. Research Projects to support Faculty members and Researchers and the procurement of high-cost research equipment grant" (Project 1552 CIRCE). V. P. acknowledges support from the Foundation of Research and Technology - Hellas Synergy Grants Program through project MagMASim, jointly implemented by the Institute of Astrophysics and the Institute of Applied and Computational Mathematics. D. R. A.-D. and J. A. acknowledge support by the Stavros Niarchos Foundation (SNF) and the Hellenic Foundation for Research and Innovation (H.F.R.I.) under the 2nd Call of the “Science and Society” Action “Always strive for excellence – Theodoros Papazoglou” (Project Number: 01431). \end{acknowledgements} \bibliographystyle{aa} \bibliography{bibliography}
Title: CONCERTO: Readout and control electronics
Abstract: The CONCERTO spectral-imaging instrument was installed at the Atacama Pathfinder EXperiment (APEX) 12-meter telescope in April 2021. It has been designed to look at radiation emitted by ionised carbon atoms, [CII], and use the "intensity mapping" technique to set the first constraints on the power spectrum of dusty star-forming galaxies. The instrument features two arrays of 2152 pixels constituted of Lumped Element Kinetic Inductance Detectors (LEKID) operated at cryogenic temperatures, cold optics and a fast Fourier Transform Spectrometer (FTS). To read out and operate the instrument, a new electronic system hosted in five microTCA crates and composed of twelve readout boards and two control boards was designed and commissioned. The architecture and performance are presented in this paper.
https://export.arxiv.org/pdf/2208.07629
\flushbottom \section{Introduction} CONCERTO is a millimeter-wave low-spectral-resolution imaging spectrometer with an instantaneous field of view of 18.6\,arcmin diameter operating in the range 130--310\,GHz \cite{Lagache2020,monfardini2021concerto}. It is installed at the Atacama Pathfinder EXperiment (APEX) 12-meter telescope located at 5100\,m above sea level on the Chajnantor plateau. The primary goal of CONCERTO is to observe the radiation emitted by ionised carbon atoms using the \emph{intensity mapping} technique to set the first constraints on the power spectrum of dusty star-forming galaxies. Furthermore, it will open a new window towards large-mapping-speed, low-resolution spectroscopy for the study of galaxy clusters via the Sunyaev-Zel'dovich (SZ) effect \cite{sz}. The instrument is composed of a dilution cryostat operating at 60\,mK, which features two arrays of 2152 pixels made of Lumped Element Kinetic Inductance Detectors (LEKID) \cite{Day2003,Baselmans2012} and cold optics. To minimize the number of cryostat feedthroughs from the sensors to the warm electronics, each pixel array is read out through only six transmission lines coupled to the LEKIDs. This amounts to about 360 detectors per feed-line. To accommodate the unavoidable fabrication dispersion, which can cause KID self-resonant frequencies to overlap, an average frequency separation of 2.5\,MHz was set by design between each LEKID of a given feedline. In the past few years we have put some effort into investigating a post-processing (trimming) technique that allows the resonance positions to be adjusted individually. Despite the good results \cite{shu_disp}, we concluded that this approach is impractical for our Aluminium thin films, in particular for an instrument with a planned lifetime of the order of several years. This leads to the requirement of a 1\,GHz-bandwidth readout electronics dedicated to each feedline. 
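As a quick sanity check of these numbers (a back-of-the-envelope sketch using only the figures quoted above):

```python
# Readout-bandwidth requirement implied by the array layout
pixels_per_array = 2152
feedlines_per_array = 6
tone_spacing_hz = 2.5e6  # design separation between neighbouring LEKIDs

detectors_per_line = pixels_per_array / feedlines_per_array  # ~359
occupied_band_hz = detectors_per_line * tone_spacing_hz      # ~0.9 GHz
```

About 359 tones spaced by 2.5 MHz occupy roughly 0.9 GHz, which is what drives the 1 GHz readout-bandwidth requirement per feedline.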
The cryostat is coupled to a room-temperature Martin-Puplett Interferometer (MpI) \cite{MARTIN1970105} that is designed to reach a spectral resolution of up to 1.2\,GHz. To avoid atmospheric drifts during a single interferogram, this fast Fourier Transform Spectrometer (FTS) is designed to complete four full interferograms per second and per pixel. Thus, to properly sample the interferograms, the readout electronics must sample the LEKIDs at a rate of about 4\,kHz and the MpI movements must be controlled synchronously with the acquisition system. The last system constraints are the available room at APEX, the reduced heat dissipation at high altitude due to lower convection, and the requirement for easy remote operation and maintenance. Indeed, the cabin drastically limits the available space for the electronics: only $\rm 600 \times 200 \times 300\,mm^3$ was available for the entire CONCERTO readout, housekeeping and control of the C-cabin elements. Therefore, most of the warm electronics are directly integrated in the instrument chassis. This paper describes the readout and control electronics used for CONCERTO. \section{Electronics system description} The setup required to instrument a single resonator line is composed of a dedicated readout board operated at room temperature, which injects an excitation frequency comb into the cryostat and measures the modified returning signal. The frequency comb has each of its tones tuned to the LEKID self-resonant frequencies and is generated in the 1.5--2.5\,GHz range. In the cryostat, the excitation signal is fed down to the low-temperature stages. In order to attenuate the thermal noise coming from the warmer stages, we have installed a -20\,dB attenuator at 4\,K. A further (distributed) attenuation of around -6\,dB is provided by the stainless steel (lossy) injection cables running from 4\,K down to base temperature. 
This ensures that the injected thermal noise is lower than the cold amplifier input noise. NbTi superconducting coaxial cables transfer the LEKID output signal with virtually no losses to the 4\,K stage. The signal is first amplified with a Low Noise Amplifier (LNA) installed at the same 4\,K stage\footnote{Arizona State University 0.5-3\,GHz; Gain: 30\,dB. Noise T=5\,K. Input 1\,dB Compression (minimum): -36\,dBm. Typical Power diss. 10\,mW.}, and then again with a second LNA at the 50\,K stage\footnote{MITEQ AFS3-02000400-08-CR-4 2-4\,GHz. Gain: 30\,dB. Noise T < 50\,K. Output 1\,dB Compression (minimum): +5\,dBm. Typical power diss. 50\,mW.}, for a total gain of roughly +60\,dB. Stainless steel cables link the 50\,K stage and the room-temperature output. The overall electrical gain of each radio-frequency line to and from the room-temperature electronics has been measured and is confirmed to be about +10\,dB. Fixed attenuators at 300\,K are used to finely adjust the overall power to the available dynamic range of the ADC. The core principle of the readout system is similar to the one used in the NIKA2 experiment \cite{Bourrion_2016}. Indeed, the excitation frequency comb is generated at baseband in the electronics using coordinate rotation digital computer (CORDIC) generators, and the returning frequency comb is down-converted and analyzed by channelized Digital Down Converters (DDC) that provide In-phase (I) and Quadrature (Q) components. These components are used to compute each tone's amplitude and phase. Due to the limited space available at APEX, a large part of the warm electronics is directly integrated in the instrument chassis (see \cite{Lagache2020} for details on the instrumental setup). 
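The down-conversion principle can be illustrated with a toy single-tone DDC in Python. This is a sketch only: the firmware's CIC filtering is replaced by a plain average over one decimation period, and the tone frequency, amplitude and phase are arbitrary values.

```python
import numpy as np

fs = 2.0e9        # ADC rate, 2 GSamples/s
f_tone = 123.0e6  # one baseband comb tone (arbitrary choice)
n = 1 << 16       # one decimation period

t = np.arange(n) / fs
amp, phase = 0.7, 0.3
meas = amp * np.cos(2 * np.pi * f_tone * t + phase)  # returning signal

# Digital down-conversion: mix with the local tone, then average
# (a crude stand-in for the CIC decimation filter)
lo = np.exp(-2j * np.pi * f_tone * t)
iq = 2 * np.mean(meas * lo)  # complex I + jQ

amp_est, phase_est = np.abs(iq), np.angle(iq)
```

The recovered I,Q pair directly encodes the tone amplitude and phase, which is what the firmware ships to the acquisition computer for each resonator.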
CONCERTO needs (i) twelve KID readout boards to instrument the low and high frequency arrays, each composed of six resonator lines, (ii) one board dedicated to Martin-Puplett Interferometer management, named Motor Controller and Martin-Puplett Monitor (MCMPM), and (iii) one board devoted to the Cryostat Positioning System (CPS). These boards were designed in the Advanced Mezzanine Card (AMC) format and are hosted in five compact micro Telecom Computer Architecture (microTCA) crates (Vadatech VT899). Each crate has dimensions of $\rm 127\,mm \times 311\,mm \times 260\,mm$. As shown in figure~\ref{crateFig}, four crates are used to specifically host the KID readout electronics. They are organized in two crates of three boards per array. Furthermore, one crate is used to host the MCMPM and CPS boards. Each crate features one central clocking and synchronization board (CCSB) mounted on a MicroTCA Carrier Hub (MCH) and one 600\,W power supply module. The MCH, which is required by the microTCA specification, is in charge of managing the crate (slot activation for hot-plug, power supply monitoring, fan speed control, sensor monitoring). It hosts a Gigabit Ethernet (GbE) switch function for communicating with each slot and the CCSB. The CCSB collects and distributes the reference clock and the demodulated Inter-Range Instrumentation Group standard B (IRIG-B) signal provided by the clock and timing box (CTB) via the crate backplane. The CTB is composed of two different Printed Circuit Boards (PCB). It fans out the 10\,MHz reference clock provided by a FS725 Rubidium frequency standard (Stanford Research Systems) and the demodulated IRIG-B signal. The latter is produced, before fan-out, by the CTB, which uses the modulated IRIG-B-AM signal supplied by the GPS (FS740 from Stanford Research Systems). A single RF synthesizer (Holzworth HSM2001A) is used to provide the Local Oscillator (LO) to each KID\_READOUT board, thanks to a commercial RF passive splitter. 
The RF synthesizer uses the Rubidium 10\,MHz as a reference clock. One modulation output from one of the KID\_READOUT boards is connected to the synthesizer in order to control the LO frequency, which permits the LEKID detectors to be tuned and calibrated. Tuning is the process of finding the optimal excitation frequencies for each LEKID \cite{bounmy2022}, while the calibration is used to determine the LEKID response to a known frequency shift. All readout boards are operated with the same reference clock and started simultaneously using a Pulse Per Second (PPS) signal, which is extracted from the IRIG-B frame received by the CCSB and then distributed to all AMC boards. \section{KID readout board} \subsection{Hardware description} The KID\_READOUT board is composed of two parts: the digital carrier part and the radio-frequency front-end part. The module fits in a full-size double-width AMC format (see figure~\ref{readoutHwFig}). The main components of the carrier are a large Field Programmable Gate Array (FPGA) (Xilinx XCKU060FFVA1156-2), a 12-bit Analog to Digital Converter (ADC) (Texas Instruments ADC12D1000) and a dual 16-bit Digital to Analog Converter (DAC) (Analog Devices AD9136). All converters are operated at 2\,GSamples/s. The FPGA is the central component of the carrier board and is connected to the ADC, the DAC and the crate back-panel. The dual DAC and the ADC analog parts are connected directly to the front-end board via a dedicated high-performance connector (SAMTEC \mbox{QSE-040-01-F-D-A}). The FPGA generates the excitation comb digital signal sent to the DAC and analyzes the returning signal sampled by the ADC. It is also in charge of interfacing the acquisition with the Gigabit Ethernet link through the MCH and with the clocking and timing system via the CCSB. The FPGA is configured at boot time from a dedicated flash memory that hosts the firmware. 
This flash memory can be reprogrammed \emph{in-situ} through the Ethernet link, which allows remote firmware upgrades. The main components are all clocked by a dedicated clocking circuitry referenced to the 10\,MHz clock distributed by the CCSB through the back-panel. Thus all readout boards are perfectly synchronous and use a very stable timing reference. As shown in figure~\ref{readoutHwFig}, a large part of the carrier board is devoted to local power supply generation. Indeed, 15 different voltages must be generated to separate analog from digital supplies and to optimize the power consumption: five for the FPGA, two for the ADC, five for the DAC, one for the RF part and two for the clocking part. A Module Management Controller (MMC) is implemented to comply with the microTCA standard. It is composed of an 8-bit micro-controller and hosts a dedicated firmware. Aside from the mandatory functionalities required by the standard, it provides board health monitoring, such as current and voltage monitoring of the local power supplies, board temperatures (four distributed LM35B probes) and power supply sequencing. Connected to the main board, a mezzanine contains the front-end analog shaping electronics, as shown in figure~\ref{schMezzFig}. First, a low-pass filter stage, composed of the SMC component LP0BA0790A7TR250 from AVX, suppresses the sampling frequency of the DAC and rejects the image response. The low-pass filters have a cutoff frequency of about 950\,MHz. The filtered differential IQ signals are mixed by an IQ mixer (ADL5375) that up-converts frequencies from the DC-1\,GHz baseband to the LEKID matrix frequency range of 1.5\,GHz-2.5\,GHz, thanks to the 1.5\,GHz Local Oscillator. Then, the up-converted signal (EXC\_OUT) goes through the cryostat, probes the state of the detectors (MEAS) and is down-converted at the input of the mezzanine by a mixer (AD8342) using the same LO. 
The resulting signal is then ready to be digitized after low-pass filtering. \subsection{Firmware description} An overview of the firmware is shown in figure~\ref{readoutFwFig}. The centerpiece is the acquisition Finite State Machine (ACQ\_FSM), which collects the 400 I,Q data pairs generated by the band managers and tags them with precise timing thanks to the IRIG-B receiver. One data frame holds two 32-bit words (I,Q data) per monitored LEKID resonator (400 in total), the timing information in two 32-bit words, the frame number in one 32-bit word and eleven 32-bit ADC/DAC monitoring words (peak values observed). Thus, it contains 3256 bytes of data in total. The collected data are stored in a buffer (DAQ\_FIFO), where they wait to be shipped to the acquisition computer through Ethernet. The IRIG-B receiver uses the demodulated signal and the reference clock provided through the microTCA backplane to extract the date and time from the serial frame, as well as the Pulse Per Second (PPS) pulse. This timing functionality is extended with a sub-counter incremented by the 10\,MHz reference clock and reset on every PPS; hence a theoretical timing resolution of 100\,ns can be reached. In practice the resolution is limited to about 4\,\textmu s because of the analog demodulation technique used in the CTB. However, this is sufficient to synchronize the data, which are sampled at a frequency of about 4\,kHz. To synchronize the CONCERTO readout and control electronics, the acquisition and data recording are armed by software and started on the following PPS received. As a result, all boards embedded in the microTCA crates begin data production simultaneously and share the same frame numbering. Then, when the astronomical observation is about to start, the external modulation sequence (EXT\_modulator) is programmed to start at a given frame number, which is the same start number programmed in the MCMPM for initiating the movement of the mirror of the MpI. 
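The time-tagging scheme described above amounts to the following minimal sketch (the function name is illustrative, not a firmware identifier):

```python
F_REF = 10e6  # 10 MHz reference clock driving the sub-second counter

def frame_time(irig_seconds, subcount):
    """Frame timestamp: IRIG-B whole seconds plus the sub-second
    counter, which is reset on each PPS and incremented at 10 MHz,
    hence the theoretical 100 ns resolution."""
    return irig_seconds + subcount / F_REF
```

Since the counter ticks at 10 MHz, each count is worth 100 ns, well below the ~4 us resolution actually achieved and far below the ~250 us frame period it has to resolve.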
The EXT\_modulator element is made of an FSM coupled to a memory table holding the modulation values to apply at each sampling step. The memory table is large enough to hold all values for the slowest MpI operating frequency, which is 1\,Hz, i.e. 4096 steps of 1/(4\,kHz) each. The modulation values are transferred via the I2C protocol to the 10-bit DAC (AD5311) installed on the RF front-end board. Both the KID\_READOUT boards and the MCMPM board possess this memory table, so the modulation, which is used for tuning \cite{bounmy2022} and scientific calibration \cite{modulationFasano}, does not interfere with the interferometry measurement. For example, among the 4096 steps, only the first 64 are modulation samples, leaving the rest for observation purposes. These modulated data samples can be used to adjust each tone's probing frequency and as a reference response of the readout electronics to a known frequency shift of the LEKIDs \cite{bounmy2022}. The communication between the acquisition computer and the FPGA is achieved via Ethernet through a UDP layer. On the one hand, this layer allows us to set and monitor the board via the IPBUS protocol \cite{Larrea_2015}, a secured communication channel that offers an easy parallel interface on the user side. On the other hand, it is used to send a dedicated UDP datagram (jumbo frame) every time at least one full acquisition frame is stored in the DAQ\_FIFO. The use of large datagrams maximizes the throughput and reduces the software overhead required to manage the data taking. The average data rate due to the readout of the two arrays is about 150\,MB/s. To read out the 400 LEKID resonators of a transmission line over a bandwidth of 1\,GHz, the firmware features ten band managers that are each in charge of instrumenting a 100\,MHz slice of the bandwidth. 
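The frame-size and data-rate figures quoted above can be cross-checked with the numbers given in the text:

```python
# One acquisition frame: 400 I,Q pairs + timing + frame number + monitors
words_per_frame = 400 * 2 + 2 + 1 + 11   # 32-bit words
frame_bytes = 4 * words_per_frame        # expected: 3256 bytes

frame_rate_hz = 250e6 / 2**16            # ~3.815 kHz ("about 4 kHz")
n_boards = 12                            # both arrays
rate_MB_s = frame_bytes * frame_rate_hz * n_boards / 1e6
```

Twelve boards each shipping a 3256-byte frame at ~3.8 kHz gives roughly 149 MB/s, consistent with the quoted ~150 MB/s.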
The motivations for this configuration were a more optimal polyphase filter implementation and a lower operating frequency (250\,MHz) for the LEKID processors, as was the case in \cite{Bourrion_2016}. As shown in figure~\ref{bandMgrFig}, each band manager contains 40 tone managers. Each of these constructs the excitation signal thanks to a 10-bit CORDIC \cite{Volder1959TheCT} generator coupled to a phase accumulator, and analyzes the returning signal with a Digital Down Converter (DDC) \cite{Lhning2000DigitalDC}. The filters used in the DDC are Cascaded Integrator-Comb filters whose decimation period is selected to be the same as one CORDIC phase accumulation ($2^{16}$ 250\,MHz clock cycles). This yields an I,Q sampling rate of about 4\,kHz. The tone manager operates in the 250\,MHz clock domain; therefore the excitation signal must be up-converted to 2\,GSamples per second and frequency-shifted into its 100\,MHz band, while the measurement signal must undergo the opposite processing. On the excitation path, each sine and cosine signal can be attenuated with a digital attenuator in steps of $1/16$. The 40 resulting sine and cosine signals are then numerically summed up to construct the partial excitation comb, which is then modified by a digital gain (16-bit tuning word). The resulting signals are monitored by an amplitude peak detector to ensure that no clipping occurs. These peak values (max\_I, max\_Q) are stored in the acquisition frame. A clipping situation can be mitigated either by reloading the tone frequencies, given that each phase is quasi-random and an unlucky all-in-phase summation would then be removed, or by attenuating the gain for the concerned frequency band. Finally, the up-conversion operation is performed in each band manager. As depicted in figure~\ref{bandGenFig}, each sub-band is generated at 250\,MSps and then up-converted at 2\,GSps into its appropriate frequency band. 
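The clipping-mitigation argument, namely that quasi-random phases keep the comb peak well below the all-in-phase worst case, can be illustrated numerically. The frequencies and phases below are placeholders, not actual tone plans:

```python
import numpy as np

rng = np.random.default_rng(0)
n_tones, n_samples = 40, 4096
t = np.arange(n_samples)

freqs = rng.uniform(0.01, 0.49, n_tones)     # normalized tone frequencies
phases = rng.uniform(0, 2 * np.pi, n_tones)  # quasi-random accumulator phases

# Comb with random phases vs. the pathological all-in-phase comb
comb = np.cos(2 * np.pi * freqs[:, None] * t + phases[:, None]).sum(axis=0)
worst = np.cos(2 * np.pi * freqs[:, None] * t).sum(axis=0)  # all aligned at t=0

peak, worst_peak = np.abs(comb).max(), np.abs(worst).max()
```

With aligned phases the 40 unit tones add coherently to a peak of 40, while random phases typically keep the peak a factor of a few lower, which is why reloading the tone frequencies (and hence re-randomizing the phases) removes an unlucky summation.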
Each up-converter is composed of an up-sampler and a Numerically Controlled Oscillator (NCO) ensuring the appropriate frequency band-shifting. The ten band excitation signals are then numerically summed and fed to the dual DAC. The signal (fakeAdc 1,2,3, ... in Fig.~\ref{readoutFwFig}) used on the measurement path by each bandManager is provided by the adequate output of the polyphase filter, as shown in figure~\ref{readoutFwFig}. This filter takes the eight phases of the measurement signal digitized at 2\,GSps and performs the ten required frequency shifts and decimations to down-convert the full spectrum into ten equally spaced sub-spectra, each having a usable bandwidth of 100\,MHz. \subsection{Resource usage} A significant effort was made to optimize the overall firmware design for the experiment. In particular, the polyphase filter and the up-converters were studied and designed to use the minimum number of DSPs while providing adequate performance for the experiment. Consequently, the firmware uses about 214k (64.6\%) Configurable Logic Block (CLB) Look-Up Tables (LUT), 345k (52.1\%) CLB Flip-flops (FF) and 1973 (71.5\%) Digital Signal Processors (DSP) of the available resources. The detailed allocation of resources is shown in table~\ref{resourceTable}. \begin{table}[hbtp] \centering \includegraphics[angle=0,width=0.95\textwidth]{kid_readout_resources.pdf} \caption{Table summarizing the FPGA resource usage of the KID READOUT board. The various firmware components and their relative contribution to the total are shown. The overall FPGA resource consumption with respect to the maximum available is also given. } \label{resourceTable} \end{table} From the table, it can be seen that the highest resource users are the band managers, each of which uses about 8\% of the CLB resources and about 4\% of the DSPs. In terms of DSP usage, the largest users are the ten up-converters, with a total of 680 DSPs, followed by the polyphase filter with 418 DSPs. 
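The quoted utilisation percentages are consistent with the resources of the Kintex UltraScale XCKU060 named earlier. The totals below are the datasheet figures as we recall them and should be treated as approximate:

```python
# XCKU060 totals (datasheet values): CLB LUTs, CLB flip-flops, DSP slices
totals = {"LUT": 331_680, "FF": 663_360, "DSP": 2_760}
used = {"LUT": 214_000, "FF": 345_000, "DSP": 1_973}  # from the resource table

pct = {k: 100 * used[k] / totals[k] for k in used}  # utilisation in percent
```

The computed percentages land within a few tenths of a percent of the quoted 64.6%, 52.1% and 71.5%.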
\subsection{Readout electronics performances} \subsubsection{Power consumption} One KID\_readout board uses about 37\,W in total to instrument 400 KIDs over the total bandwidth of 1\,GHz. Four crates, each equipped with three readout boards, are required to read out the two arrays of 2152 pixels. Within each crate, the power consumption of one MCH+CCSB is about 12\,W and each fan tray uses 17\,W. Hence, each equipped crate uses 157\,W of power from the 12\,V back-plane distributed supply. Accounting for the power module efficiency of about 90\%, each crate requires about 175\,W from the main power supply. In the end, the total power required for the four readout crates is about 700\,W, i.e. below 0.17\,W per KID. To ensure safe operation, various temperatures are constantly monitored on the experimental site, thanks to the features provided by the KID readout boards and the microTCA crates. Indeed, each KID readout board is equipped with sensors probing two PCB temperatures plus three others for the FPGA, DAC and ADC die temperatures, while the microTCA crates measure the air intake and exhaust temperatures. These temperatures are plotted in figure~\ref{Temperatures} as a function of the board number. We can see that the maximum temperature elevation is about 29\,°C, for the FPGA and ADC components. \subsubsection{System transfer function} As a first characterisation of the electronics, an excitation signal composed of 400 tones evenly spaced in frequency was generated by the KID\_readout board and measured with a Vector Network Analyzer (Rhode \& Schwarz ZNL6). The Local Oscillator (LO) input was set at 1.2\,GHz; the measured spectrum can be seen in figure~\ref{spectrumSnapshots}. The good side-band rejection (-30\,dB), as well as the flatness (<2\,dB) of the spectrum, can be observed. In a second stage, using the same LO frequency, the excitation output of the readout board was looped back into the measurement input with an external cable. 
Then, using a direct ADC signal recording feature embedded in the FPGA firmware, two snapshots of the ADC output were recorded at 2\,GSamples/s, each featuring 65536 samples. The first snapshot was recorded with no excitation signal produced, while the second featured the 400 tones evenly spaced in frequency as previously. For each snapshot, a Fast Fourier Transform (FFT) was computed (see figure~\ref{fftADCplots}). In the spectrum of the ADC response with no excitation signal produced, we can see the harmonics of the 250\,MHz (reference clock used by the board) and 500\,MHz (ADC dual data rate clock frequency) signals, and two peaks at 400\,MHz and 800\,MHz due to the inter-modulation products between the LO and the sampling frequency. The spectrum of the ADC response featuring the excitation signal is highly similar to the one recorded directly at the excitation output (see figure~\ref{spectrumSnapshots}), aside from the end of the bandwidth (above 950\,MHz), where the drop in response is much more pronounced. This can be explained by the additional low-pass filter embedded on the mezzanine board (figure~\ref{schMezzFig}) used to filter the ADC analog input. \subsubsection{System noise} To assess the system noise, the same frequency comb featuring 400 tones evenly spaced in frequency was used and 655360 IQ data points were recorded for each tone. As previously, the excitation output of the readout board was looped back into the measurement input with an external cable and the LO was set at 1.2\,GHz. The power spectral densities of the amplitudes and phases were computed for one tone in each band (see figure~\ref{PSDspectrum}). These plots show that the system noise floor is reached at around 100\,Hz and that it is of the order of -100\,dBc/Hz. We also observe two unwanted peaks, at 763\,Hz and 1527\,Hz. These are not yet explained, but they are sufficiently far from the scientific signal, which lies in the 110\,Hz to 280\,Hz range. 
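A band-averaged noise level in dBc/Hz can be computed from a recorded time series as in this sketch. The data here are synthetic white amplitude noise on a unit carrier, with a level chosen to land near the measured -100 dBc/Hz floor; this is not the real loop-back data:

```python
import numpy as np

fs = 250e6 / 2**16  # I,Q sampling rate, ~3.815 kHz
n = 1 << 16
rng = np.random.default_rng(1)

# Synthetic amplitude time series for one tone: unit carrier + white noise
x = 1.0 + 4e-4 * rng.standard_normal(n)

# One-sided periodogram normalized to the carrier power (units of 1/Hz,
# i.e. dBc/Hz once converted to dB)
spec = np.fft.rfft(x - x.mean()) / n
psd = 2 * np.abs(spec) ** 2 / (fs / n)
freqs = np.fft.rfftfreq(n, 1 / fs)

# Average over the scientifically relevant 110-280 Hz band
band = (freqs >= 110) & (freqs <= 280)
avg_dBc = 10 * np.log10(psd[band].mean())
```

Averaging over the ~3000 frequency bins in the 110-280 Hz band beats down the scatter of the single-bin periodogram estimates to well under 1 dB.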
Consequently, to obtain an overview of the phase and amplitude system noise in this area of interest over the full electronics bandwidth, the average noise of each tone between 110\,Hz and 280\,Hz is plotted in figure~\ref{noiseDist}. We can see that the noise level stays around -100\,dBc/Hz over the full bandwidth and experiences a significant rise at the end of the spectrum (above 950\,MHz); this is consistent with the fact that the transmission response drops in this area (see figure~\ref{fftADCplots}). The peaks observed at 62.5\,MHz and 250\,MHz are respectively sub-products of the Ethernet reference clock (125\,MHz) and of the ADC and DAC reference clock (250\,MHz). \section{Motor Controller and Martin-Puplett Monitor (MCMPM) board} The Martin-Puplett Interferometer linear motor drivers and mirror position monitors are managed by a single dedicated board called the Motor Controller and Martin-Puplett Monitor (MCMPM) board (see figure~\ref{mcmpmGenFig}). The two main requirements for this board are (i) to always move the mirror and its counter-weight simultaneously and in opposite directions to dampen the vibrations to a minimum and (ii) to drive the motors and monitor the positions synchronously with respect to the KID readout electronics. The board was designed to be hosted in a microTCA crate. On the back-end side, through the back-panel connector, it communicates via Ethernet (IPBus) with the DAQ software and receives the 10\,MHz reference clock and the PPS signal. On the front-end side, it is connected to two linear motor drivers and to three position-measuring devices via serial links. Two of these are used to measure the mirror position and its possible deformations, while the third is used to monitor vibrations of the polariser membrane, which change the effective optical path difference \cite{monfardini2021concerto}. The interface between the back-end and front-end is ensured by an XC7K70TFB676-1 Xilinx FPGA. 
To avoid parasitic signals and unwanted smearing in the acquired interferograms, and hence to ease the data analysis, the motor movements and the various position measurements must be done synchronously with the data acquisition \cite{fasano_spie}. Consequently, the two stepper drivers (LinMot E1450) are driven with an incremental code (A/B) generated by the MCMPM. At the same time, the mirror top and bottom positions are measured with LASER measuring devices (Greyline ODS 120) that are triggered and read out by the MCMPM via a serial communication link (RS232 protocol). The linear motor driver parameters are configured through Ethernet with a driving software. An overview of the firmware is shown in figure~\ref{mcmpmFwOverviewFig}. On the right side, the interfaces with the position monitoring devices and the linear motor drivers are shown. In the center sits the central Finite State Machine (FSM), which is in charge of synchronously acquiring the data. On the left side, the buffer used to smooth the data flow sent toward the Ethernet (IPBus) and its content are shown. Thanks to the 10\,MHz reference clock, the DAQ\_sync block produces the enable signal used to perform the acquisition cycles at the same rate as the KID data readout, i.e. at $\rm \dfrac{250\,MHz}{2^{16}}\sim4\,kHz$. However, the position measuring devices are limited to a maximum acquisition rate of 2\,kHz, so the LASER\_interfaces are triggered at half the acquisition rate. Consequently, to up-sample the laser positions, each measured position is duplicated into two consecutive data blocks, as those are generated at about 4\,kHz in the firmware. The acquisition process, and hence the frame numbering, is started on the subsequent PPS received after the electronics is armed by software. Using the same methodology for all electronic boards involved in the acquisition provides a simple way to start the acquisition synchronously from a GPS receiver. 
In parallel, the decoded IRIG-B date and time code, along with the sub-second counter, is inserted in the data block by the DAQ FSM. This measurement, along with the frame number, permits (i) uniquely time-tagging each data block and (ii) checking the delay between the various readout electronics, which must remain constant. Indeed, the same IRIG-B signal is distributed to the five crates; this precise timing information allows the detection of an abnormal condition or drift in the DAQ and control system. The linear motor controllers (LINMOT0/1) are further detailed in figure~\ref{mcmpmSyncDriveFig}. The core component of the controller is a large memory (32768 elements) that holds the step frequencies to generate, along with the direction of the movement. The memory depth was chosen to allow slow mirror frequencies, i.e. as low as 0.1\,Hz. As illustrated in the two plots below the block diagram shown in figure~\ref{mcmpmSyncDriveFig}, the motor speed as a function of time can be derived from the mirror position as a function of time. The number of steps to produce per time interval is computed using the linear motor conversion factors, the time interval being the duration of one acquisition cycle. Finally, the motor step curve profiles are computed in advance and loaded into the MCMPM. Given that the mirror and its counter-weight must be moved simultaneously and in opposite directions to dampen the vibrations to a minimum, the curves are computed to be symmetric. The curve computation also uses the optical zero-path-difference offset and the maximum achievable acceleration (limitation) as input parameters. At each time interval, the frequency word (named increment) provided by the memory is used by a 16-bit accumulator operating at 250\,MHz, whose Most Significant Bit (MSB) is used to generate the steps needed by the motor power drivers. 
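Step generation from the 16-bit accumulator can be sketched as follows. In this illustration a step is counted on each MSB rising edge; the actual counting convention of the power drivers is not specified here and may differ by a factor of two from this choice:

```python
import numpy as np

CLK = 250e6  # accumulator clock (Hz)

def msb_rising_rate(increment, n_cycles=1 << 20):
    """Simulate the 16-bit phase accumulator and count MSB rising
    edges to estimate the generated step frequency."""
    acc = (increment * np.arange(n_cycles, dtype=np.uint64)) % (1 << 16)
    msb = (acc >> 15) & 1
    edges = np.count_nonzero((msb[1:] == 1) & (msb[:-1] == 0))
    return edges * CLK / n_cycles
```

The simulated step rate scales linearly with the increment word, which is the essential property exploited by the pre-computed speed tables.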
The step frequency is given by the following formula: $\rm f_{step}=\dfrac{250\,MHz}{2^{16}}\times \dfrac{increment}{2}$. Per the manufacturer's specification, the motor power drivers accept a maximum step frequency of 2\,MHz; the increment width is therefore limited to 8 bits, which allows the step frequency to be tuned between 0\,Hz and 1953\,kHz with a resolution of 7.6\,kHz. The memory readout pointer is driven by a Finite State Machine, which starts its operation when armed by software and upon receipt of a frame number matching the configured start number. This latter synchronization is required because the acquisition must already be running before the mirror movements start, and because the interferometer cycle must begin just after the 3-point frequency modulation executed at the start of each scan. Indeed, this frequency modulation is mandatory before each interferogram acquisition, as it calibrates the KID response to a known frequency shift \cite{modulationFasano}. The frequency modulation is injected by the KID readout board, which is synchronized with the same technique as described here. The firmware described above uses a small fraction of the available FPGA resources: 6805 LUTs (16.6\%), 6953 FFs (8.5\%) and 36 block RAMs (26\%). The block RAMs are mostly used for the IPBus protocol, as multi-buffering is required to allow lost-packet recovery (16 block RAMs), and for the two speed tables (9 block RAMs each). \section{Conclusion} In this paper we have presented a dedicated KID readout and control electronics for the CONCERTO experiment installed at the APEX 12-m telescope. These electronics ensure the KID readout and the synchronous operation and monitoring of the CONCERTO MPI. They consist of a total of five microTCA crates: four crates (three boards each) for KID readout and one crate for control and monitoring of the instrument. With respect to the previous KID readout electronics used in the NIKA and NIKA2 experiments, they represent a major step forward. 
It ensures similar noise properties with a much higher sampling rate (4\,kHz instead of tens of Hz), a doubled bandwidth (1\,GHz instead of 500\,MHz) and a more compact design with lower power consumption (35\,W per readout board). Furthermore, by using a single DAC, it provides a homogeneous readout of the 400 tones, avoiding the uncorrelated sub-band noise contribution observed in the previous version featuring five DACs \cite{adam_phd}. In addition, it allows us to synchronously and accurately control the MPI motors, as well as to monitor the position of the MPI mirrors and the vibrations of the MPI main polariser, which are critical for the scientific measurements. Finally, these electronics have been successfully operated in real conditions (regular scientific CONCERTO observations have been carried out since July 2021) \cite{monfardini2021concerto,catalano_2022}. Thus, we conclude that the CONCERTO electronics are fully operational and fulfill the technical and scientific requirements. \section*{Acknowledgments} This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 788212), from the Excellence Initiative of Aix-Marseille University-A*Midex, a French "Investissements d’Avenir" programme, and LabEx FOCUS (France). \bibliography{report_sample_bibtex} \bibliographystyle{JHEP}
Title: Detection of new pulsars at the frequency 111 MHz
Abstract: A pulsar search has been started at the LPA LPI radio telescope at a frequency of 111~MHz. The first results, for a search covering right ascensions $0^h - 24^h$ and declinations $+21^{\circ}$ to $+42^{\circ}$, are presented in this paper. Data with 100 ms sampling and 6 frequency channels were used. In total, 34 pulsars were found. Seventeen of them had previously been observed at the LPA LPI radio telescope, and ten known pulsars had not previously been observed there. Seven new pulsars were discovered.
https://export.arxiv.org/pdf/2208.08839
\label{firstpage} \pagerange{\pageref{firstpage}--\pageref{lastpage}} \section{Introduction} Since the discovery of pulsars in 1967 (\citeauthor{Hewish1968}, \citeyear{Hewish1968}), searches for new pulsars have been regularly conducted in both the northern and southern hemispheres. As a rule, the search for pulsars is carried out under "lanterns": the plane of the Galaxy, globular clusters, and supernova remnants. Only a few surveys cover areas exceeding one steradian. Among the first surveys covering a significant part of the sky, we can note the surveys at Molonglo (\citeauthor{Large1971} (\citeyear{Large1971}), \citeauthor{Manchester1978} (\citeyear{Manchester1978})), Jodrell Bank (\citeauthor{Davies1972} (\citeyear{Davies1972}), \citeauthor{Davies1973} (\citeyear{Davies1973})) and Green Bank (\citeauthor{Damashek1978} \citeyear{Damashek1978}). Among the latest, still ongoing surveys, we note AO327 on the 300-meter radio telescope at Arecibo (\citeauthor{Deneva2013} \citeyear{Deneva2013}), GBNCC on the 100-meter radio telescope in Green Bank (\citeauthor{Boyles2013} \citeyear{Boyles2013}), HTRU-North on the 100-meter telescope in Effelsberg (\citeauthor{Barr2013} \citeyear{Barr2013}), HTRU on the 64-meter Parkes telescope (\citeauthor{Keith2010} \citeyear{Keith2010}), and LOTAAS on the LOFAR aperture synthesis system (\citeauthor{Coenen2013} \citeyear{Coenen2013}). In these surveys, years of observations are required to cover most of the celestial hemisphere. The total observation time is reduced, as a rule, by using a short integrated accumulation time in any given direction. Repeated observations of selected areas are not practiced. 
At the same time, it is known that pulsars are objects with strong variability, caused both by external factors (interstellar scintillation; see, for example, the review by \citeauthor{Rickett1977} \citeyear{Rickett1977}) and by internal ones (flaring pulsars, e.g. PSR J0946+0951 (\citeauthor{Vitkevich1969} \citeyear{Vitkevich1969})). Therefore, with regular observations of the entire celestial sphere, relatively strong new pulsars can be detected even in previously studied areas. On the Large Phased Array (LPA) radio telescope of the Lebedev Physical Institute (LPI), a daily sky survey is conducted in test mode, covering the full range of right ascension and $50^{\circ}$ in declination. Brief reports on the discovery of 7 new pulsars in this survey were published in \citeauthor{Tyulbashev2015a} (\citeyear{Tyulbashev2015a}) and \citeauthor{Tyulbashev2015b} (\citeyear{Tyulbashev2015b}). In this paper, the details of the search for pulsars are given. \section{Observations and processing of observations} The LPA LPI, which is a phased array, was upgraded in 2012. In the course of its modernization, the first-stage low-noise amplifiers were replaced, the Butler phasing matrices were replaced, and new cable systems were laid. As a result of this work, two independent beam-forming systems appeared on the radio telescope (LPA1 and LPA2). The first phasing system is based on the old beam-forming system. It forms 16 beams that can be shifted by half the width of the radiation pattern, so that in two days of observations it becomes possible to cover the sky with the beams of the LPA LPI at the 0.8 power level. This system of 16 beams spans about $8^{\circ}$ in declination and can be switched so that observations are provided at declinations from $-27^{\circ}$ to $+88^{\circ}$. On LPA1, observations of pulsars are carried out, among other things, with specialized "pulsar" receivers. The maximum effective area of this system is $20,000 \pm 2000$ sq. m. 
The second beam-forming system is constructed in a special way: 128 non-switchable beams have been created in it. The beams overlap at the 0.4 power level. This beam system allows observation of sources with declinations from $-8^{\circ}$ to $+55^{\circ}$. The effective area of LPA2, reduced to the zenith, is $47,000 \pm 2500$ sq. m. Both LPA1 and LPA2 have a full frequency band of 2.5~MHz. A multi-channel digital receiver has been made for LPA2, allowing the registration of the signal from 96 beams of the LPA. These beams cover declinations from $-8^{\circ}$ to $+42^{\circ}$. A special system has been developed to calibrate the power of incoming signals. It provides measurement of the main parameters of the radio telescope: the noise temperature of the system and the effective area of the antenna, as well as checks of the operability of the distributed amplification system and its individual elements. The input of the low-noise amplifiers switches between the antenna and the calibration noise generator. The noise generator produces two levels of the calibration signal, corresponding to the noise temperature of the matched load and the noise temperature of the switched-on generator. The noise temperature of the matched load is equal to the ambient temperature. The temperature of the noise signal of the calibration generator is 2400 K and depends only weakly on the ambient temperature: the measured changes in the step height do not exceed $\pm 3\%$ when the ambient temperature changes from $-15^{\circ}$ to $+43^{\circ}$ Celsius. The multichannel digital radiometer consists of two recorders: industrial computers with multichannel receiving and recording modules that provide signal registration for 96 beams of the radio telescope. Each recorder includes six 8-channel digital signal processing modules. Four paired TI~ADS62P29 ADCs are installed at the input of each module. 
Digital processing is performed using an EP3SL780C3 programmable logic device (PLD; Stratix III, Altera). The processing employs direct digitization of the signal, digital filtering, frequency down-conversion and spectral analysis. The digitization frequency is 230 MHz, and the band of the recorded signal in each channel is 2.5 MHz with a central frequency of 110.25 MHz. The PLD resources make it possible to implement on one chip 8 independent video converters, filtering of high-frequency and low-frequency signals, spectral analysis and the processing of 8 independent data streams. The recorder module can record the signal power to a hard disk in 32 or 6 spectral channels, with a frequency resolution of 78 or 415 kHz, respectively. The time resolution of the signal is 12.5 ms or 100 ms. Since July 2014, simultaneous data recording has been carried out both with 100~ms sampling in 6 frequency channels ("short data") and with 12.5~ms sampling in 32 frequency channels ("long data"), at hourly intervals. The time service is checked at the beginning of each observation hour. The digitization of the first point is started with an accuracy better than 5 ms. The accuracy of channel polling within an hour is determined by the accuracy of the quartz oscillator of the digital receiver. The estimated maximum possible time discrepancy within the hourly interval is $\pm 25$ ms ($\pm$ two points of primary "long data") and $\pm 100$~ms ($\pm$ one point of primary "short data"). To search for pulsars, a program has been written in Qt/C++, distributed under the GPL V3.0 (\citeauthor{TyulbashevV2015} \citeyear{TyulbashevV2015}), consisting of two blocks. The first block performs a direct summation of periods with a search over dispersion measures (DM). All detected periodic signals with signal-to-noise ratio $S/N \ge 4$ are recorded in the base directory. In the second block, the detected signals are analyzed. 
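The channel resolutions quoted above follow directly from the 2.5 MHz recorded band; a two-line check:

```python
# Consistency check of the recorder's two output modes (numbers from the text).
band_hz = 2.5e6                 # recorded band per beam (Hz)

res_32 = band_hz / 32           # 78.125 kHz, quoted as 78 kHz ("long data")
res_6 = band_hz / 6             # ~416.7 kHz, close to the quoted 415 kHz
```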
The program compares the catalogues for all processed days by a number of parameters and performs additional digital filtering. For example, it can track the repeatability of periodic signals from day to day, check the approximate coincidence of the maxima of the accumulated pulses when folding at twice the period, identify candidate pulsars with S/N greater than a specified value, remove known pulsars from the catalogues, and construct dynamic spectra. There are other digital filters in the program as well. The full passage of a source through the meridian of the LPA LPI takes $425\,{\rm s}/\cos(\delta)$, where $\delta$ is the declination of the source. The maximum sensitivity in a given direction is achieved at the moment when the source crosses the meridian. When processing observations, the region $\pm 1.5-2$ min around the moment of meridian crossing is usually used. Therefore, in order to achieve maximum sensitivity, three-minute intervals with a shift of 1.5 minutes were processed during the search. Such a shift ensures that, in at least one of the processed recordings, the pulsar passes through the top of the antenna radiation pattern, where the maximum sensitivity is achieved. 
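Evaluating the transit-time formula at the survey's declination limits shows why overlapping 3-minute windows suffice to catch the beam peak:

```python
import math

# Transit time of a source through the LPA meridian beam, from the formula
# in the text, evaluated near the survey's declination limits (~21 and ~42 deg).
def transit_time_s(decl_deg):
    return 425.0 / math.cos(math.radians(decl_deg))

t_low = transit_time_s(21.0)    # ~455 s at the lower declination limit
t_high = transit_time_s(42.0)   # ~572 s at the upper declination limit
```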
\section{Pulsar detection} The sensitivity when detecting extremely weak pulsars, in terms of flux density, is determined by the well-known equation: $$S_{min} = \frac{S/N_{min} T_{sys}}{G \sqrt{n_p \Delta t \Delta \nu_{MHz}}} \sqrt{\frac{W_e}{P-W_e}}\, \rm (mJy),$$ where the expected sensitivity ($S_{min}$) when searching for pulsars with fully steerable antennas is determined by a given $S/N_{min}$ (in practice $S/N_{min} = 6-8$), the system temperature ($T_{sys} = T_b + T_r$, where $T_b$ is the background temperature and $T_r$ is the receiver temperature), the gain $G$ related through a normalization coefficient to the effective area of the antenna, the total accumulation time ($\Delta t$), the frequency band ($\Delta\nu$), the number of observed linear polarizations ($n_p$), and the ratio of the pulsar pulse duration ($W_e$) to the pulsar period ($P$). In the case of the LPA, which is an antenna array, a number of additional factors affecting the sensitivity must be taken into account: a) the sensitivity of the antenna decreases with zenith distance; b) the total observation time is limited by the transit time of the source across the meridian, and the maximum sensitivity at the top of the radiation pattern of the LPA LPI is maintained for only about 3 minutes; c) the positions of the beams are fixed in declination, so if the coordinates of the source do not coincide with the coordinates of the beam center, the sensitivity decreases. Let us evaluate the sensitivity of the LPA LPI when searching for pulsars. It depends, first of all, on the temperature of the Galactic background and on how close to the center of the antenna array pattern the pulsar passes when crossing the celestial meridian. 
Assuming a background temperature of 1500 K in the plane of the Galaxy and 500 K outside the plane (estimated from the radio isophotes at 178 MHz of \citeauthor{Tartle1962} (\citeyear{Tartle1962}), recalculated to 111 MHz with a spectral index $\alpha = -2.55$, where $T_b\sim \nu^{\alpha}$), and considering a pulsar falling exactly at the center of a beam or exactly between two beams, it is possible to estimate the best ($S_{best}$) and worst ($S_{worst}$) expected sensitivities according to the formula given above, for the case when the sampling interval is shorter than the pulse duration of the pulsar. The following parameters were used for the estimates: $S/N=6$, $G=17$ K/Jy, $n_p=1$, $\Delta\nu =2.5$~MHz, $\Delta t = 180$~s, $W_e=0.1P$, $T_r= 300$~K. Table~\ref{tab:tab1} gives estimates of the sensitivity of the LPA for the best, worst and expected typical cases. \begin{table} \centering \caption{The expected sensitivity when searching for pulsars on LPA LPI in the direction of the zenith.} \label{tab:tab1} \begin{tabular}{cp{1cm}p{1cm}p{1cm}} \hline Region & $S_{best}$ (mJy) & $S_{worst}$ (mJy) & $S_{typical}$ (mJy)\\ \hline The plane of the Galaxy & 9.9 & 24.5 & 15-20 \\ \hline Outside the plane of the Galaxy & 4.4 & 10.8 & 6-8 \\ \hline \end{tabular} \end{table} The pulsar search surveys currently being conducted (see the links in the Introduction) use observation frequencies from 111~MHz to 1400~MHz. Therefore, in order to compare the sensitivities of different surveys, the sensitivity estimates from the original works must be rescaled to a frequency of 111~MHz, taking into account the expected spectral index of the pulsar spectrum. All other things being equal, the sensitivity when searching for pulsars in the Galactic plane is 2-3 times worse than outside the plane, due to the difference in background temperatures in and out of the Galactic plane. 
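The "best" column of Table 1 can be reproduced directly from the radiometer equation with the parameters listed above; the interpretation of the "worst" column as the 0.4 beam-overlap level is our reading, not stated explicitly in the text.

```python
import math

# Reproducing the "best" sensitivities of Table 1 from the radiometer
# equation in the text (zenith pointing, pulsar at the beam centre).
def s_min(T_sys, snr=6.0, G=17.0, n_p=1, dt=180.0, dnu_mhz=2.5, duty=0.1):
    """S_min = (S/N * T_sys) / (G * sqrt(n_p * dt * dnu)) * sqrt(W_e/(P - W_e))."""
    return snr * T_sys / (G * math.sqrt(n_p * dt * dnu_mhz)) \
        * math.sqrt(duty / (1.0 - duty))

T_r = 300.0                            # receiver temperature (K)
best_plane = s_min(1500.0 + T_r)       # ~10 mJy, cf. 9.9 mJy in Table 1
best_out = s_min(500.0 + T_r)          # ~4.4 mJy, as in Table 1

# Our assumption for the "worst" column: between two beams the gain drops
# to the 0.4 overlap level, degrading the sensitivity by a factor 1/0.4.
worst_plane = best_plane / 0.4         # ~25 mJy, cf. 24.5 mJy in Table 1
```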
On the other hand, it is in the plane of the Galaxy that the vast majority of known pulsars are located. Therefore, in some of the surveys listed in the Introduction, a large integrated accumulation time was used when observing the Galactic plane and a small accumulation time outside the plane, so the sensitivity in these surveys is, conversely, higher for the Galactic plane and lower for observations outside it. \begin{table*} \centering \caption{Comparison of surveys currently being conducted to search for pulsars} \label{tab:tab2} \begin{tabular}{lllllll} \hline Telescope (dimensions) & $\nu$~(MHz) & $\Delta \nu$~(MHz) & $\Delta t$~(s) & $S_{min}$(mJy) & $S_{111}$(mJy) & links \\ \hline Effelsberg (100m)$^1$ & 1360 & 240 & 90/1500 & 0.17/0.05 & 28.7/8.4 & \citeauthor{Barr2013} (\citeyear{Barr2013}) \\ Parkes (64m)$^1$ & 1352 & 340 & 270/4300 & 0.61/0.2 & 101.8/33.4 & \citeauthor{Keith2010} (\citeyear{Keith2010}) \\ Green Bank (100m)$^2$ & 350 & 50 & 140 & 0.6/3.9(1.34) & 7.3/47(16.6)& \citeauthor{Boyles2013} (\citeyear{Boyles2013}) \\ Arecibo (300m)$^3$ & 327 & 57 & 60 & 0.3/?? & 2.6/?? & \citeauthor{Deneva2013} (\citeyear{Deneva2013}) \\ Pushchino (200$\times$400m) & 111 & 2.5 &180 & & 7/18 & this paper \\ \hline \multicolumn{7}{|p{17cm}|}{Notes to Table 2: 1) According to \citeauthor{Barr2013} (\citeyear{Barr2013}), the sensitivities of the Effelsberg and Parkes surveys are the same, but the sensitivity estimates taken from the original papers differ by almost a factor of 4. Our estimates of the expected sensitivity of the Parkes pulsar search practically coincide with the sensitivity estimates for the pulsar search on the 100-meter telescope in Effelsberg; 2) The sensitivity estimate of the Green Bank antenna for observations in the Galactic plane is given based on a temperature of 300 K (\citeauthor{Boyles2013}, \citeyear{Boyles2013}). Apparently, this is an extreme case. 
In parentheses in Table~\ref{tab:tab2}, the sensitivity value is given that is obtained assuming the temperature in the plane of the Galaxy is 90~K, i.e. 3 times higher than the temperature outside the plane of the Galaxy; 3) In the observations at Arecibo, the plane of the Galaxy was excluded (galactic latitudes $|b| > 5^{\circ}$ were taken). The sensitivity for Arecibo is taken from Figure 2 ("Mock") of \citeauthor{Deneva2013} (\citeyear{Deneva2013}).}\\ \hline \end{tabular} \end{table*} Table~\ref{tab:tab2} summarizes information on all large surveys currently being conducted. The first column gives the radio telescope on which the pulsar search observations are made. Columns 2 and 3 show the frequency at which the survey is conducted and the total band of observations. Column 4 gives the accumulation time. Different accumulation times for high, medium and low galactic latitudes were used for the surveys at Parkes and Effelsberg; the table shows the accumulation time for high ($|b|> 15^o$) and, after the "/"\, sign, low ($|b| < 3.5^o$) galactic latitudes. Column 5 shows estimates of the expected best sensitivity in the survey outside the plane and, after the "/"\, sign, in the plane of the Galaxy. The column contains the sensitivity estimates given in the original papers. If no sensitivity estimate was given in a paper, the sensitivity was calculated from the values given in that work or taken from the corresponding figures. Column 6 shows the results of rescaling the survey sensitivities to a frequency of 111~MHz, assuming that the spectral index of all pulsars is equal to two. In the original works on pulsar surveys, the sensitivity estimates were given under the assumption of different $S/N_{min}$ and different ratios of pulse duration to period. 
In order to be able to compare the surveys, the sensitivity estimates were first recalculated under the assumptions $S/N_{min}=6$ and $W_e=0.1P$, and then the expected minimum flux density was rescaled to a frequency of 111~MHz. Finally, column 7 contains references to the pulsar surveys from which the sensitivities given in column 5 were taken, or the numbers on the basis of which we made the sensitivity estimates. Table~\ref{tab:tab2} shows that the expected sensitivity of the LPA is inferior to that of the 300-meter Arecibo telescope for observations outside the Galactic plane. It is also inferior to the sensitivity of the 100-meter telescope in Effelsberg and, apparently, the 64-meter Parkes telescope for observations in the plane of the Galaxy ($|b| < 3.5^{\circ}$). In all other cases, the expected sensitivity when searching for pulsars on the LPA should be comparable to or better than in the other surveys considered. For a test search for pulsars, the declination range $21^{\circ}23^\prime - 42^{\circ}08^\prime$ was taken. A total of 24 days of observations were processed. Every day, approximately 400,000 periodic signals were detected in the data, from which the digital filters select several hundred objects that were then analyzed manually. During the search, a direct summation of periods was carried out over periods from 0.5 s to 15 s, with a search over DM in the range 0-200 pc/cm$^3$. "Short data" were used for the search. A pulsar was considered found if it was detected on at least 3 days, at least once with $S/N>6$. According to the ATNF catalogue (https://www.atnf.csiro.au/research/pulsar), 77 pulsars with a period of more than 500~ms and DM up to 200~pc/cm$^3$ fall into the survey area. The "blind" search found 27 known pulsars. Data on them are given in Table~\ref{tab:tab3}. Column 1 shows the name of the source in the J2000 convention; in parentheses is the name in the B1950 convention, if it is used for this pulsar. 
Columns 2 and 3 contain the period from the ATNF catalogue and our estimates of the pulsar periods. Columns 4 and 5 list estimates of the pulsar flux density at frequencies of 102.5~MHz (\citeauthor{Malofeev2000} \citeyear{Malofeev2000}) and 400~MHz (ATNF). Column~6 gives a rough estimate of the expected pulsar flux density at a frequency of 111 MHz. The estimate was based on the flux density at 102.5~MHz if such measurements existed, and on the flux density at 400~MHz otherwise. In the recalculation, a spectral index $\alpha = 2$ was assumed. If there is an asterisk after the pulsar name, there is a comment on this source in the notes after Table~\ref{tab:tab3}. Column 7 shows the half-width of the pulsar pulse taken from ATNF. Column 8 shows how many times the pulsar was detected in the 24 processed days. \begin{table*} \centering \caption{Known pulsars detected during the "blind" search.} \label{tab:tab3} \begin{tabular}{llllllll} \hline Name & $P_{ATNF}$(s) & $P_{survey}$(s) & $S_{102}$(mJy) & $S_{400}$(mJy)& $S_{111}$(mJy) & $W_{50}$(ms) & N \\ \hline PSR J0048+3412 (B0045+33) & 1.21709 & 1.2171 & 88 & 2.3 & 75 & 21.7 & 17 \\ PSR J0323+3944 (B0320+39) & 3.03207 & 3.0326 & 230 & 10.8 & 196 & 42.7 & 23 \\ PSR J0528+2200 (B0525+21) & 3.74554 & 3.7443 & 100 & 57 & 85 & 185.5 & 21 \\ PSR J0611+3016* & 1.41209 & 1.4120 & & 1.4 & 18.9 & & 22 \\ PSR J0613+3731 & 0.61920 & 0.6190 & & 1.6 & 21 & 11 & 18 \\ PSR J0754+3231 (B0751+32) & 1.44235 & 1.4422 & 49 & 8 & 42 & 12.2 & 7 \\ PSR J0826+2637 (B0823+26) & 0.53066 & 0.5306 & 620 & 73 & 529 & 5.8 & 18 \\ PSR J0943+4109 & 2.22949 & 2.2302 & & 8.6 & 112 & & 11 \\ PSR J1238+2152 & 1.11859 & 1.1181 & 60 & 2 & 51 & & 17 \\ PSR J1239+2453 (B1237+25) & 1.38245 & 1.3822 & 260 & 110 & 222 & 51.1 & 23 \\ PSR J1532+2745 (B1530+27) & 1.12484 & 1.1246 & 94 & 13 & 80 & 25.7 & 19 \\ PSR J1741+2758 & 1.36074 & 1.3608 & 30 & 3 & 26 & 7 & 7 \\ PSR J1758+3030 & 0.94726 & 0.9472 & 60 & 8.9 & 51 & 27 & 23 \\ PSR J1813+4013 (B1811+40) & 0.93109 & 0.9311 & & 8 & 104 & 12.2 & 22 \\ PSR J1821+4145* & 1.26179 & 1.2620 & & 2.6 & 35 & & 20 \\ PSR J1907+4002 (B1905+39) & 1.23576 & 1.2355 & & 23 & 299 & 58.5 & 24 \\ PSR J1921+2153 (B1919+21) & 1.33730 & 1.3371 & 1900 & 57 & 1620 & 30.9 & 24 \\ PSR J2018+2839* (B2016+28)& 0.55795 & 0.5580 & 260 & 314 & 222 & 14.9 & 24 \\ PSR J2055+2209 (B2053+21) & 0.81518 & 0.8152 & & 9 & 117 & 16.9 & 16 \\ PSR J2113+2754 (B2110+27) & 1.20285 & 1.2030 & 130 & 18 & 111 & 13 & 24 \\ PSR J2139+2242 & 1.08351 & 1.0835 & 30 & & 26 & 91 & 22 \\ PSR J2157+4017 (B2154+40) & 1.52527 & 1.5250 & 200 & 105 & 171 & 38.6 & 21 \\ PSR J2207+40 & 0.63699 & 0.6370 & & 3.8 & 49 & & 22 \\ PSR J2227+3036 & 0.84241 & 0.8423 & & 2.4 & 31 & & 17 \\ PSR J2234+2114 & 1.35875 & 1.3587 & 35 & 2.6 & 30 & 43 & 17 \\ PSR J2305+3100 (B2303+30) & 1.57589 & 1.5758 & & 24 & 312 & 17.4 & 23 \\ PSR J2317+2149 (B2315+21) & 1.44465 & 1.4445 & 100 & 15 & 86 & 20.2 & 24 \\ \hline \multicolumn{8}{|p{17cm}|}{Notes to Table 3: PSR J0611+3016 was observed in a 430 MHz survey made at Arecibo (\citeauthor{Camilo1996} \citeyear{Camilo1996}). PSR J1821+4145 was observed in a 350~MHz survey done in Green Bank (\citeauthor{Stovall2014} \citeyear{Stovall2014}). PSR J2018+2839: the estimate of the flux density at 111 MHz is formal, since the spectrum between the frequencies of 102.5 and 400 MHz is inverted.}\\ \hline \end{tabular} \end{table*} Table~\ref{tab:tab3} shows that the accuracy of the period determination for most of the detected pulsars is better than one unit in the third decimal place. This corresponds to the expected accuracy, determined by the 180-second record length on which the search was conducted. The real sensitivity obtained during the processing of the monitoring data is difficult to estimate from Table~\ref{tab:tab3}. Consider column 6 of Table~\ref{tab:tab3}. 
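The extrapolation used for column 6 of Table 3 can be reproduced in a few lines:

```python
# Extrapolating catalogue flux densities to 111 MHz with S ~ nu^-2
# (alpha = 2), as done for column 6 of Table 3.
def s_111(s_ref_mjy, nu_ref_mhz, alpha=2.0):
    return s_ref_mjy * (nu_ref_mhz / 111.0) ** alpha

# From 102.5 MHz, preferred when available (PSR J0048+3412: 88 mJy -> 75 mJy):
s_a = round(s_111(88.0, 102.5))     # 75
# From 400 MHz when no low-frequency measurement exists
# (PSR J0943+4109: 8.6 mJy -> 112 mJy):
s_b = round(s_111(8.6, 400.0))      # 112
```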
Weak detected pulsars lying outside the Galactic plane have an expected flux density of 20-30 mJy. Weak pulsars lying in the Galactic plane have an expected flux density of 80-100 mJy. Both flux density estimates are 3 times greater than the expected sensitivity limit of the survey. Thus, the practically confirmed sensitivity is several times worse than expected. In addition to the known pulsars, 7 new pulsars missing from the ATNF catalogue were discovered. A pulsar was considered newly discovered if: a) the $S/N$ was greater than 6 on at least one of the processed days; b) average profiles of approximately equal height are visible in the recordings folded at twice the period; c) the pulsar is detected in the recordings on at least 3 days out of the 24 processed; d) the dependence of the observed S/N on DM has a pronounced maximum. Table~\ref{tab:tab4} shows the characteristics of these pulsars. The first column contains the name of the source in the J2000 convention. Columns 2 and 3 give the coordinates of the pulsar in right ascension and declination. The accuracy of the coordinates of pulsars J0146+3104, J0928+3037, J1242+3938 and J1721+3524 is $\pm 40^s$ in right ascension and $\pm 10^\prime$ in declination. Pulsars J0220+3622, J0303+2248 and J0421+3240 have corresponding accuracies of $\pm 60^s$ and $\pm 15^\prime$. Columns 4-6 list the characteristics of the pulsar: period, DM and half-width of the average profile. Column 7 shows how many times the pulsar was found during the 24 processed days. 
\begin{table*} \centering \caption{Characteristics of new pulsars} \label{tab:tab4} \begin{tabular}{lllllll} \hline Name & $\alpha_{2000}$ & $\delta_{2000}$ & P(s) & DM(pc/cm$^3$) & $W_{0.5}$(ms) & N \\ \hline J0146+3104 &$01^h46^m15^s$ & $31^o04^\prime$ & 0.9381 & 24-26 & 20 & 7 \\ J0220+3622 &$02^h20^m50^s$ & $36^o22^\prime$ & 1.0297 & 30-50 & 220 & 8 \\ J0303+2248 &$03^h03^m00^s$ & $22^o48^\prime$ & 1.207 & 15-25 & 50 & 4 \\ J0421+3240 &$04^h21^m30^s$ & $32^o40^\prime$ & 0.9005 & 60-90 & 400 & 4 \\ J0928+3037 &$09^h28^m43^s$ & $30^o37^\prime$ & 2.0919 & 20-24 & 50 & 16 \\ J1242+3938 &$12^h42^m34^s$ & $39^o38^\prime$ & 1.3100 & 25-27 & 35 & 14 \\ J1721+3524 &$17^h21^m57^s$ & $35^o24^\prime$ & 0.8219 & 19-25 & 60 & 18 \\ \hline \end{tabular} \end{table*} The data in Table~\ref{tab:tab4} are preliminary. Currently, observations are being carried out to refine the parameters (DM and period) of the detected objects on an installation made specifically for the study of pulsars. Pulsars J0146+3104, J0220+3622, J0421+3240, J1242+3938 and J1721+3524 have already been confirmed by observations on LPA1. For them, the period has been refined to at least the seventh decimal place (\citeauthor{Malofeev2015} \citeyear{Malofeev2015}). We draw special attention to pulsars J0220+3622 and J0421+3240: these pulsars have broad average profiles that can occupy about half of the period. For the new pulsars, Fig.~1 shows the dynamic spectra, average profiles and the dependence of $S/N$ on $DM$ for one day of observations. "Long data" were used to produce these figures; the dynamic spectra and average profiles are plotted with a doubled period. The pulsars found are weak, so averaging was carried out within the dynamic spectrum over frequency and/or time (see the comments to the figure). The minimum frequencies within the observation band correspond to the upper part of the dynamic spectrum. 
The maximum frequencies correspond to the lower part of the spectrum. \section{Discussion of the results and conclusions} Table~\ref{tab:tab1} gives a theoretical estimate of the expected sensitivity of the survey when searching for pulsars on the LPA LPI. The processing of a real survey shows that the sensitivity is noticeably worse than the calculated one. There are a number of factors which, once taken into account, will allow us to approach the calculated sensitivity. Firstly, the maximum sensitivity when searching for pulsars is achieved when the sampling interval is equal to the pulse duration of the pulsar. If the sampling in the raw data is finer than the expected pulse duration, additional averaging can be performed within the expected pulsar period to obtain the maximum S/N. Table~\ref{tab:tab3} shows that approximately 80\% of all detected known pulsars have an average-profile half-width of less than 30~ms; therefore, with our sampling of 100~ms, the loss in S/N when processing these pulsars was a factor of 1.5 or more. These S/N losses primarily affect the detection of the weakest pulsars. They can be avoided by processing the "long data": a sampling of 12.5~ms is sufficient to obtain the maximum S/N for the vast majority of known second-period pulsars and, most likely, will be sufficient when searching for new pulsars. Secondly, in the meter wavelength range, scintillation of radio sources caused by the interplanetary plasma and by the Earth's ionosphere is observed. According to earlier studies (\citeauthor{Artyukh1982} \citeyear{Artyukh1982}), the median value of the confusion effect of scintillating radio sources at 102 MHz is 0.14~Jy, which is close to the fluctuation sensitivity of LPA2. Scintillation of compact radio sources is constantly observed in the real primary recordings. 
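The S/N penalty quoted above for 100 ms sampling ("a factor of 1.5 or more") can be illustrated with a standard pulse-dilution argument; the square-root scaling is our approximation, not a formula from the paper.

```python
import math

# Rough matched-filter estimate (our sketch) of the S/N penalty when the
# sampling interval exceeds the pulse width: the pulse energy is diluted
# over one ~t_samp bin, so S/N degrades roughly as sqrt(t_samp / W).
def snr_loss(t_samp_ms, w_ms):
    return math.sqrt(t_samp_ms / w_ms) if t_samp_ms > w_ms else 1.0

loss_short = snr_loss(100.0, 30.0)   # ~1.8 for a 30 ms pulse at 100 ms sampling
loss_long = snr_loss(12.5, 30.0)     # 1.0: "long data" sampling avoids the loss
```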
Early studies at the LPA LPI showed that the sky density of detectable compact (scintillating) radio sources is about 1 source per square degree (\citeauthor{Artyukh1996} \citeyear{Artyukh1996}), which is comparable to the size of the LPA radiation pattern. Since scintillation is a random process, it increases the width of the noise track and thereby reduces the sensitivity of the survey when searching for pulsars. Note also that the characteristic scintillation time in the meter wavelength range is about 0.5~s, so there is a problem of subtracting the background signal from the recording. The region of sky affected by scintillation is wide: observations under the "Space Weather" program on the LPA show that the zone of increased scintillation, at the current sensitivity of the LPA2 antenna, can extend for 12-18 hours depending on the time of year (\citeauthor{Chashei2015} \citeyear{Chashei2015}). Ionospheric scintillation, excluding magnetic storms, occupies about one hour in the morning and in the evening. The characteristic time of ionospheric scintillation starts from a few seconds (\citeauthor{Chashei2006} \citeyear{Chashei2006}), which, as for interplanetary scintillation, can lead to problems in subtracting the background from the recording. Therefore, to achieve the guaranteed maximum sensitivity when searching for pulsars, it makes sense to select only 5-6 night hours in the recordings. Thirdly, during the search, about a third of the known pulsars ($P>0.5$~s) in the investigated area were detected. About another third of the known pulsars, not detected during the search, are most likely too weak to be detected: the expected flux density of these pulsars, extrapolated from the ATNF flux density estimates, may be less than 20-30 mJy at a frequency of 111~MHz. 
These are the pulsars J0540+3207, J0546+2441, J0555+3948, J0947+2742*, J1503+2111, J1720+2150, J1746+2245, J1746+2540, J1900+3053, J1903+2225, J1912+2525, J1913+3732, J1929+2121, J1931+3035, J1937+2950, J1939+2449, J1946+2224, J1949+2306, J1953+2732, J2007+3120, J2010+2845, J2015+2524, J2036+2835, J2151+2315*, J2155+2813*. The pulsars listed above, with the exception of those marked with '*', are located in the plane of the Galaxy, where the sensitivity of the survey is worst. Some of the pulsars are X-ray pulsars or RRATs: J1308+2127, J1605+3249, J2225+3530. Nevertheless, there remain approximately 20-25 known pulsars which, by all indications, should have been detected during the search but were not. In particular, some of these pulsars were previously observed with the LPA at a frequency of 102.5 MHz: J0417+3545 ($S_{102}=46$~mJy), J0927+2347* ($S_{102}=30$~mJy), J0943+2256* ($S_{102}=77$~mJy), J1649+2533 ($S_{102}=60$~mJy), J1652+2651 ($S_{102}=40$~mJy), J1920+2650 ($S_{102}=30$~mJy), J1948+3540 ($S_{102}=60$~mJy), J2002+3217 ($S_{102}=80$~mJy), J2002+4050 ($S_{102}=170$~mJy), J2008+2513 ($S_{102}=60$~mJy), J2030+2228 ($S_{102}=22$~mJy), J2037+3621 ($S_{102}=34$~mJy), J2212+2933* ($S_{102}=50$~mJy), J2307+2225* ($S_{102}=30$~mJy) \cite{Malofeev2000}. Pulsars marked with '*' are located at large distances from the plane of the Galaxy. Approximately half of the pulsars listed in this paragraph were found either at $S/N<6$, or in fewer than three sessions, or only at multiple harmonics, and would therefore have been rejected in a search for new pulsars. Methodological work to improve the search program is currently under way.
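Comparing the 102.5 MHz flux densities above with expectations at 111 MHz implicitly assumes a power-law spectrum; a sketch of the extrapolation (the spectral index value is an assumption for illustration, not a measured value for these pulsars):

```python
def extrapolate_flux_mjy(s_ref_mjy, nu_ref_mhz=102.5, nu_mhz=111.0, alpha=2.0):
    """Power-law extrapolation S(nu) = S_ref * (nu / nu_ref)**(-alpha)."""
    return s_ref_mjy * (nu_mhz / nu_ref_mhz) ** (-alpha)

# J0417+3545, S_102 = 46 mJy, for an assumed spectral index of 2.0:
print(round(extrapolate_flux_mjy(46.0), 1))  # ~39 mJy at 111 MHz
```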
There are a number of other factors that are worth mentioning: 1) at low declinations there is a large amount of industrial interference coming from the south; 2) in spring and early summer, many recordings are lost because of thunderstorms; 3) the 111~MHz band in which the observations are carried out is not protected on a primary or secondary basis, so interference, apparently associated with mobile services, regularly appears in it. All these factors lead to the fact that approximately 20-25\% of all recordings cannot be processed. Concluding the paper, we note that the main advantages of the LPA are its large effective area, and therefore high sensitivity, the possibility of simultaneous observations in many beams, and the possibility of daily monitoring. In other current pulsar search surveys (see Table~\ref{tab:tab2}), high sensitivity is achieved through a wide frequency band and through the decrease of the system temperature ($T_{sys}$) with increasing observing frequency; daily monitoring of the entire sky is excluded in these surveys. The search for pulsars in the daily monitoring data of LPA LPI is especially advantageous for detecting rare objects: flaring pulsars, in which long periods of relative quiescence are replaced by a significant increase in the observed flux density; pulsars of the RRAT type; pulsars of the Geminga type; pulsars with nulling; and pulsars with giant pulses. We note separately nearby pulsars, whose observed flux density can change significantly from day to day owing to interstellar scintillation, as well as pulsars with very steep spectra ($\alpha > 2.5$). In conclusion, we note that 7 new pulsars were detected when processing half of the available declination area using the "short data".
Taking into account the apparent loss of sensitivity in the search, the nature of which is not yet entirely clear, and the fact that the "long data" will eventually be processed, we can expect the detection of at least several dozen new pulsars in the monitoring data of LPA LPI. \section*{Acknowledgments} We express our gratitude to V.M. Malofeev for a preliminary reading of the manuscript and a number of comments that made it possible to improve its text, as well as to L.B. Potapova and G.E. Tyulbasheva for their help in preparing the article and its figures. This work was supported by the program of the Division of Physical Sciences of the Russian Academy of Sciences “Transient and explosive processes in astrophysics.” \bsp % \label{lastpage}
Title: Quantitative spectroscopy of B-type supergiants
Abstract: Context. B-type supergiants are versatile tools to address various astrophysical topics, ranging from stellar atmospheres over stellar and galactic evolution to the cosmic distance scale. Aims. A hybrid non-LTE approach - line-blanketed model atmospheres computed under the assumption of local thermodynamic equilibrium (LTE) in combination with line formation calculations that account for deviations from LTE - is tested for quantitative analyses of B-type supergiants with masses $M<30 M_{\odot}$, characterising a sample of 14 Galactic objects. Methods. Hydrostatic plane-parallel atmospheric structures and synthetic spectra computed with Kurucz's Atlas12 code together with the non-LTE line-formation codes Detail/Surface are compared to results from full non-LTE calculations with Tlusty, and the effects of turbulent pressure on the models are investigated. High-resolution spectra are analysed for atmospheric parameters, using Stark-broadened hydrogen lines and multiple metal ionisation equilibria, and for elemental abundances. Fundamental stellar parameters are derived by considering stellar evolution tracks and Gaia EDR3 parallaxes. Interstellar reddening towards the target stars is determined by matching model spectral energy distributions to observed ones. Results. Our hybrid non-LTE approach turns out to be equivalent to hydrostatic full non-LTE modelling for the deeper photospheric layers of the B-type supergiants considered. Turbulent pressure can become relevant for microturbulent velocities larger than 10 km s$^{-1}$. High precision and accuracy is achieved for all derived parameters by bringing multiple indicators to agreement simultaneously. Abundances for chemical species (He, C, N, O, Ne, Mg, Al, Si, S, Ar, Fe) are derived with uncertainties of 0.05 to 0.10 dex. The derived ratios N/C vs. N/O tightly follow the predictions from Geneva stellar evolution models.
https://export.arxiv.org/pdf/2208.02692
\title{Quantitative spectroscopy of B-type supergiants} \author{D. We{\ss}mayer\inst{1} \and N. Przybilla\inst{1} \and K. Butler\inst{2} } \institute{Institut f\"ur Astro- und Teilchenphysik, Universit\"at Innsbruck, Technikerstr. 25/8, 6020 Innsbruck, Austria\\ \email{david.wessmayer@uibk.ac.at ; norbert.przybilla@uibk.ac.at} \and LMU M\"unchen, Universit\"atssternwarte, Scheinerstr. 1, 81679 M\"unchen, Germany } \date{Received ; accepted } \abstract {B-type supergiants are versatile tools to address a number of highly-relevant astrophysical topics, ranging from stellar atmospheres over stellar and galactic evolution to the characterisation of interstellar sightlines and to the cosmic distance scale.} {A hybrid non-LTE (local thermodynamic equilibrium) approach -- involving line-blanketed model atmospheres computed under the assumption of LTE in combination with line formation calculations that account for deviations from LTE -- is tested for quantitative analyses of B-type supergiants of mass up to about 30\,$M_\sun$, characterising a sample of 14 Galactic objects in a comprehensive way.} {Hydrostatic plane-parallel atmospheric structures and synthetic spectra computed with Kurucz's {\sc Atlas12} code together with the non-LTE line-formation codes {\sc Detail/Surface} are compared to results from full non-LTE calculations with {\sc Tlusty}, and the effects of turbulent pressure on the models are investigated. High-resolution spectra at signal-to-noise ratio\,$>$\,130 are analysed for atmospheric parameters, using Stark-broadened hydrogen lines and multiple metal ionisation equilibria, and for elemental abundances. Fundamental stellar parameters are derived by considering stellar evolution tracks and Gaia early data release 3 (EDR3) parallaxes. 
Interstellar reddening and the reddening law along the sight lines towards the target stars are determined by matching model spectral energy distributions to observed ones.} {Our hybrid non-LTE approach turns out to be equivalent to hydrostatic full non-LTE modelling for the deeper photospheric layers of the B-type supergiants under consideration, where most lines of the optical spectrum are formed. Turbulent pressure can become relevant for microturbulent velocities larger than 10\,km\,s$^{-1}$. The changes in the atmospheric density structure affect many diagnostic lines, implying systematic changes in atmospheric parameters, for instance an increase in surface gravities by up to 0.05\,dex. A high precision and accuracy is achieved for all derived parameters by bringing multiple indicators to agreement simultaneously. Effective temperatures are determined to 2-3\% uncertainty, surface gravities to better than 0.07\,dex, masses to about 5\%, radii to about 10\%, luminosities to better than 25\%, and spectroscopic distances to 10\% uncertainty typically. Abundances for chemical species that are accessible from the optical spectra (He, C, N, O, Ne, Mg, Al, Si, S, Ar, and Fe) are derived with uncertainties of 0.05 to 0.10\,dex (1$\sigma$ standard deviations). The observed spectra are reproduced well by the model spectra. The derived N/C versus N/O ratios tightly follow the predictions from Geneva stellar evolution models that account for rotation, and spectroscopic and Gaia EDR3 distances are closely matched. Finally, the methodology is tested for analyses of intermediate-resolution spectra of extragalactic~B-type~supergiants. 
} {} \keywords{Stars: abundances -- Stars: atmospheres -- Stars: early-type -- Stars: evolution -- Stars: fundamental parameters -- supergiants } \section{Introduction} Massive stars are drivers of the evolution of galaxies as they are crucial contributors to the energy and momentum budget of the interstellar medium (ISM), and they are sources of nucleosynthesis products \citep[e.g.][]{Matteucci08}. This is because of their ionising radiation, their strong stellar winds, and their final fate in supernova explosions and -- under certain circumstances -- as $\gamma$-ray bursts. Multiple facets of the evolution of single and binary massive stars are largely understood, though many details have yet to be resolved \citep[e.g.][]{MaMe12,Langer12,Sanaetal12}, with several independent grids of evolutionary models being available \citep[e.g.][]{Brottetal11,Ekstroemetal12,LiCh18,Szecsietal22}. Improvements in our understanding of galactic and massive star evolution are driven by observational constraints, either qualitatively by consideration of new aspects, or quantitatively by a reduction in observational uncertainties (which is of interest here). The evolution of massive stars in the upper Hertzsprung-Rus\-sell diagram (HRD) splits overall into two domains, connected to the Humphreys-Davidson limit \citep{HuDa79}. Stars more massive than $\sim$40\,$M_\sun$ remain blue objects throughout their entire life because strong stellar winds and probably pulsational instabilities lead to the loss of their envelopes. These early and mid-O dwarfs and giants on the main sequence (MS) evolve into early B-type hypergiants \citep[e.g.][]{Clarketal12,Herreroetal22} and supergiants of luminosity class Ia, which constitute one of the more frequently populated regions of post-MS evolution in the HRD \citep[e.g.][]{Castroetal14}.
In this evolutionary stage they are among the visually brightest stars in star-forming galaxies, in addition to their high energy output at UV wavelengths, and they are likely to become luminous blue variables (LBVs) at more advanced evolutionary stages, and finally Wolf-Rayet (WR) stars. Realistic quantitative spectroscopy of these objects requires hydrodynamical stellar atmosphere models that account for deviations from local thermodynamic equilibrium (non-LTE) and metal-line blanketing \citep{HiMi98,Pauldrachetal01,Graefeneretal02,Pulsetal05}. The less massive stars (i.e.~$M$\,$\lesssim$\,30\,$M_\sun$) with bolometric magnitudes larger than about $-$9.5 to $-$10\,mag evolve into red supergiants (RSGs) with extended hydrogen-rich envelopes at least once during their lifetime. They become B-type supergiants of luminosity classes Ib and Iab, and at the lower limit of the massive star regime at $\sim$8-9\,$M_\sun$ also bright giants (luminosity class II), when they evolve from late-O and early-B dwarfs on the MS on their way towards the RSG stage. Alternatively, some B-type supergiants may be post-RSG objects like Sk $-69$\degr\,202, the precursor of SN~1987A \citep{Westetal87}, with possible evolutionary channels provided by binary \citep{Podsiadlowski92} as well as single star evolution \citep{Hirschietal04}. Such objects should be rare. While signatures of mass-loss are still present in the spectra of these lower-luminosity B-type supergiants, in the optical part of the spectrum they are restricted to a few spectral lines like H$\alpha$. The photospheric spectrum on the other hand is formed under conditions close to hydrostatic equilibrium, so that hydrostatic line-blanketed non-LTE model atmospheres \citep{HuLa95} may be employed for their quantitative analysis. This was confirmed by a comparison of hydrostatic and hydrodynamic non-LTE model atmospheres for luminous early B-type supergiants by \citet{Duftonetal05}.
Galactic B-type supergiants have been investigated for a long time, starting with the early work on the B1\,Ib star $\zeta$~Per by \citet{Cayrel58} based on photographic plate spectra and the studies of \citet{Dufton72,Dufton79} using improved LTE model atmospheres. The advent of spectroscopy with CCD detectors facilitated the first spectral atlas of Galactic B-type supergiants to be obtained at optical wavelengths and spectral line behaviours to be investigated qualitatively over the entire spectral-type range \citep{Lennonetal92,Lennonetal93}. This dataset was later employed for the first larger-scale investigation of atmospheric parameters and the chemical abundances of B-type supergiants, employing plane-parallel and hydrostatic non-LTE atmospheres composed of hydrogen and helium, plus subsequent line-formation computations for the metals \citep{McErleanetal99}. A main focus was the abundances of carbon, nitrogen, and oxygen as tracers for the presence of CN(O)-processed material in the atmospheres, modified from initial standard values due to evolutionary processes. The Lennon et al. spectra were also utilised to derive stellar wind parameters for Galactic B-type supergiants by \citet{Kudritzkietal99} to constrain the wind momentum-luminosity relationship \citep{Pulsetal96} for distance measurements of this kind of object, employing hydrodynamical H+He model atmospheres in non-LTE. At about the same time, some B-type supergiants were employed in the derivation of Galactic abundance gradients, using a differential pure LTE analysis \citep{Smarttetal01a}. Studies with sophisticated line-blanketed non-LTE model atmospheres followed, concentrating on the derivation of atmospheric, stellar wind, and fundamental parameters, often employing observational data of higher resolving power and wider wavelength coverage than in the earlier work \citep{Crowther_etal_06,Lefeveretal07,MaPu08,Searle_etal_08,Hauckeetal18}.
Elemental abundances were discussed in some of these works, focusing again on carbon, nitrogen, and oxygen as tracers for mixing of the atmospheric layers with nuclear-processed material from the stellar core. Models predict very tight correlations for the surface CNO abundances independent of single or binary star evolution \citep{Przybillaetal10,Maederetal14}. More comprehensive chemical information (C, N, O, Mg, and Si) was derived by use of line-blanketed hydrostatic non-LTE model atmospheres for B-type supergiants in Galactic open clusters by \citet{Hunteretal09}, while \citet{Fraser_etal_10} studied atmospheric parameters, nitrogen abundances, and rotational and macroturbulent velocities based on high-resolution spectra. Similar work with an extended observational database was later conducted by \citet{IACOBIII}. On the cool end of the B-type supergiants and towards the early A-type supergiants a sample of objects was analysed by \citet{FiPr12}, using techniques very similar to those employed in the present work \citep{Przybillaetal06}. The enormous luminosities of B-type supergiants make their spectroscopy feasible at distances beyond the Milky Way. Objects in the Magellanic Clouds were therefore intensely studied, concentrating initially on the more metal-poor Small Magellanic Cloud \citep[SMC,][]{Trundleetal04,Leeetal05,Duftonetal05}, where surface enrichments with nuclear-processed matter due to rotational mixing were predicted to be stronger \citep[e.g.][]{MaMe01,georgy_etal_13}. Only later was attention turned towards the Large Magellanic Cloud \citep[LMC,][]{Hunteretal09,McEvoyetal15,Urbanejaetal17}. Even earlier, first studies of B-type supergiants were undertaken for more distant galaxies of the Local Group, based on intermediate-resolution spectra and aiming at the determination of stellar parameters and elemental abundances.
Work on M31, based on the \citet{McErleanetal99} approach or LTE techniques \citep{Smarttetal01b,Trundleetal02} and on NGC6822 \citep{Muschieloketal99} was followed by studies of M33 supergiants using hydrodynamical non-LTE atmospheres \citep{Urbanejaetal05b,Uetal09}, aiming at the derivation of abundance and metallicity gradients, and distances. Metallicities and distances \citep[derived via application of the Flux-weighted Gravity-Luminosity Relationship, FGLR,][]{Kudritzkietal03,Kudritzkietal08} were also the focus of studies of the Local Group dwarf irregular galaxies IC1613 \citep{Bresolinetal07,Bergeretal18} and WLM \citep{Bresolinetal06,Urbanejaetal08}. B-type supergiants in galaxies beyond the Local Group have also been studied, investigating not only stellar parameters, metallicities and metallicity gradients, but also interstellar reddening in these galaxies, distances, and the galaxy mass-metallicity relationship \citep[e.g.][]{Lequeuxetal79,Tremontietal04,Maiolinoetal08}, which is a key to the study of galaxy evolution. Objects in NGC300 \citep{Bresolinetal02,Bresolinetal04,Urbanejaetal03,Urbanejaetal05a} and NGC55 \citep{Castroetal12,Kudritzkietal16} in the Sculptor filament of galaxies were investigated, and in NGC3109 \citep{Evansetal07,Hoseketal14}, a member of the nearby Antlia-Sextans group. At even larger distances of about 3.5, 4.5, and 6.5\,Mpc, respectively, B-type supergiants were analysed in the grand design spiral galaxy M81 \citep{Kudritzkietal12}, in the barred spiral galaxy M83 \citep{Bresolinetal16}, and in the field spiral galaxy NGC3621 \citep{Kudritzkietal14}, all based on spectra obtained with 8-10\,m-class telescopes. 
In addition to their usefulness for stellar studies, B-type supergiants are frequently employed as background stars for studies of diffuse interstellar bands (DIBs) because they facilitate sight lines to be covered to large distances and provide continuous spectra with relatively few intrinsic stellar spectral features. B-type supergiants are therefore not only employed to cover interstellar sight lines in the Milky Way \citep[e.g.][]{Coxetal17,Ebenbichleretal22}, but also as tracers of DIBs in other galaxies such as the Magellanic Clouds \citep{Coxetal06,Coxetal07} and M31 \citep{Cordineretal08,Cordineretal11}. \begin{table*} \caption{B-type supergiant sample.} \label{tab:spectra_info} \centering {\small \setlength{\tabcolsep}{1mm} \begin{tabular}{lllllcrrrccr} \hline\hline ID\# & Object & Sp. T.\tablefootmark{a} & Sp. T.\tablefootmark{b} & OB Assoc.\tablefootmark{c} & & $V$ \tablefootmark{d} & $B-V$\tablefootmark{d} & $U-B$\tablefootmark{d} & Date of Obs. & $T_{\mathrm{exp}}$ & $S/N$ \\ & & & \multicolumn{1}{l}{} & & & mag & mag & mag & YYYY-MM-DD & s & \\ \hline \multicolumn{3}{l}{FOCES $R$\,=\,40\,000}\\[1mm] 1 & \object{HD 7902} & B6 Ib & B6 Ia & NGC\,457 & & 6.988$\pm$0.023 & 0.414$\pm$0.009 & $-$0.380$\pm$0.004 & 2001-09-27 & 896 & 320 \\ 2 & \object{HD 14818} & B2 Ia & B2 Ia & Per OB1 & & 6.253$\pm$0.016 & 0.301$\pm$0.007 & $-$0.613$\pm$0.009 & 2005-09-22 & 3$\times$1200 & 320 \\ 3 & \object{HD 25914} & B5 Ia & B6 Ia & Cam OB3 & & 7.99 & 0.6 & $-$0.28 & 2005-09-25 & 2700 & 180 \\ 4 & \object{HD 36371} & B4 Ib & B5 Ia & Aur OB1 & & 4.766$\pm$0.014 & 0.345$\pm$0.013 & $-$0.445$\pm$0.015 & 2001-09-30 & 240 & 480 \\ 5 & \object{HD 183143} & B6 Ia & B7 Ia\tablefootmark{e} & Field & & 6.839$\pm$0.017 & 1.185$\pm$0.018 & 0.165$\pm$0.031 & 2001-09-25 & 900 & 220 \\ 6 & \object{HD 184943} & B8 Ia/Iab & B8 Iab & Vul OB1 & & 8.184$\pm$0.016 & 0.725$\pm$0.009 & $-$0.073$\pm$0.011 & 2005-09-25 & 1800 & 130 \\ 7 & \object{HD 191243} & B6 Ib & B5 Ib & Cyg OB3 & & 
6.111$\pm$0.022 & 0.151$\pm$0.014 & $-$0.447$\pm$0.034 & 2005-09-21 & 900 & 350 \\ 8 & \object{HD 199478} & B8 Ia & B8 Ia & NGC\,6991 & & 5.679$\pm$0.018 & 0.461$\pm$0.017 & $-$0.341$\pm$0.028 & 2001-09-26 & 1200\,+\,3$\times$600 & 240 \\[2mm] \multicolumn{3}{l}{FEROS $R$\,=\,48\,000}\\[1mm] 9 & \object{HD 51309} & B3 Ib/II & B3 Ib\tablefootmark{f} & Field & & 4.380$\pm$0.014 & $-$0.064$\pm$0.008 & $-$0.704$\pm$0.018 & 2011-12-09 & 2$\times$45 & 440 \\ 10 & \object{HD 111990} & B1/B2 Ib & B2 Iab & Cen OB1 & & 6.792$\pm$0.015 & 0.242$\pm$0.006 & $-$0.579$\pm$0.008 & 2013-08-17 & 300 & 260 \\ 11 & \object{HD 119646} & B1 Ib/II & B1.5 Ib & Field & & 6.602$\pm$0.020 & 0.118$\pm$0.007 & $-$0.685$\pm$0.022 & 2005-04-23 & 400\,+\,410 & 490 \\ 12 & \object{HD 125288} & B5 Ib/II & B5 II & Field & & 4.336$\pm$0.013 & 0.115$\pm$0.007 & $-$0.444$\pm$0.007 & 2013-08-20 & 240 & 410 \\ 13 & \object{HD 159110} & B4 Ib & B2 II & Field & & 7.578$\pm$0.009 & $-$0.022$\pm$0.010 & $-$0.685$\pm$0.012 & 2005-04-23 & 2$\times$1000 & 410 \\ 14 & \object{HD 164353} & B5 I/Ib & B5 Ib\tablefootmark{e} & Coll\,359 & & 3.961$\pm$0.019 & 0.023$\pm$0.012 & $-$0.606$\pm$0.019 & 2013-08-20 & 135\,+\,180 & 540 \\ \hline \end{tabular} \tablefoot{ \tablefoottext{a}{adopted from SIMBAD} \tablefoottext{b}{this work} \tablefoottext{c}{\cite{humphreys78}} \tablefoottext{d}{\cite{Mermilliod97}} \tablefoottext{e}{\cite{GrCo09}} \tablefoottext{f}{Walborn's B-type standards \citep{GrCo09}} }} \end{table*} Overall, B-type supergiants show enormous potential as versatile tools to address multiple astrophysical topics of high relevance. The present paper addresses the quantitative spectroscopy of B-type supergiants based on a hybrid non-LTE approach -- combining hydrostatic line-blanketed LTE atmospheres with subsequent non-LTE line formation --, applying state-of-the-art model atoms. The paper is organised as follows: observations and the data reduction are summarised in Sect.~\ref{section:observations}. 
The hybrid non-LTE modelling approach is introduced and comparisons to full non-LTE model atmospheres are made in Sect.~\ref{section:model_atmospheres}. Then, details of the analysis methodology are discussed in Sect.~\ref{section:spectral_analysis}. Section~\ref{section:results} presents all results from the B-type supergiant sample analysis and the suitability of the method for quantitative analyses at intermediate spectral resolution is investigated in Sect.~\ref{section:intermediateR}, in preparation for extragalactic studies applying the hybrid non-LTE approach. Conclusions are drawn in Sect.~\ref{section:conclusions}. An example of a detailed comparison of a tailored model with an observed spectrum is given in Appendix~\ref{section:appendixA}. \section{Observations and data reduction}\label{section:observations} High-resolution spectra of 14 Galactic B-type supergiants at high signal-to-noise ratio $S/N$ constitute the observational basis for the present work. The spectral range B1.5 to B8 at luminosity classes II, Ib, Iab, and Ia is covered, extending previous work on late B and early A-type supergiants of similar masses and luminosities \citep{Przybillaetal06,SchPr08,FiPr12} towards higher temperatures. Basic information on the star sample and the observing log are summarised in Table~\ref{tab:spectra_info}. An internal ID number is given, the Henry-Draper catalogue designation, the spectral type, and an OB association or open cluster membership is indicated. Spectral type information from the SIMBAD database\footnote{\url{http://simbad.u-strasbg.fr/simbad/sim-fid}} is summarised, as well as from a re-determination in the present work, based on anchor points of the Morgan-Keenan system and Walborn's B-type standards \citep{GrCo09}. The luminosity class determination was based on Balmer line appearance, in particular concentrating on H$\alpha$. 
Moreover, photometric data in the Johnson system are given in Table~\ref{tab:spectra_info}, the $V$ magnitude and the $B-V$ and $U-B$ colours. The observing log provides the date of observation, exposure times, and the $S/N$ of the final spectrum, measured around 5585\,{\AA}. The raw spectra were obtained with two instruments. Objects in the northern hemisphere were observed with the Fibre Optics Echelle Cassegrain Spectrograph \citep[FOCES,][]{Pfeifferetal98} on the Calar Alto 2.2\,m telescope in two observing runs in 2001 and 2005. The spectra cover a wavelength range from 3860 to 9580\,{\AA} at a resolving power $R$\,=\,$\lambda/\Delta\lambda$\,$\approx$\,40\,000, with 2 pixels covering a $\Delta\lambda$ resolution element. A median filter was applied to the raw images to remove effects of bad pixels and cosmics in an initial step. Then, the FOCES semi-automatic pipeline \citep{Pfeifferetal98} was employed for the data reduction, performing subtraction of bias and dark current, flatfielding, wavelength calibration using Th-Ar exposures, and rectification and merging of the echelle orders. A major advantage of the FOCES design was that the order tilt was much more homogeneous than in similar spectrographs. This facilitated a more robust continuum rectification than is usually feasible, even in the case of broad features like the hydrogen Balmer lines, which can span more than one echelle order \citep[for a discussion see][]{Korn02}. Two objects had multiple exposures taken consecutively that were combined to increase the $S/N$ of the final spectrum. Objects of the southern hemisphere were observed with the Fiberfed Extended Range Optical Spectrograph \citep[FEROS,][]{Kauferetal99} on the European Southern Observatory (ESO)/Max-Planck Society 2.2\,m telescope in La Silla, Chile. FEROS provides a resolving power of $R$\,$\approx$\,48\,000, with a 2.2-pixel resolution element. 
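The quoted resolving powers translate directly into wavelength units via $R=\lambda/\Delta\lambda$; a small sketch (the wavelength is chosen for illustration):

```python
def resolution_element_a(wavelength_a: float, r: float) -> float:
    """Width of a resolution element, d_lambda = lambda / R, in Angstrom."""
    return wavelength_a / r

# FOCES: R ~ 40000 with 2 pixels per resolution element, near H-alpha:
dl = resolution_element_a(6563.0, 40000.0)
print(f"d_lambda = {dl:.3f} A, pixel = {dl / 2.0:.3f} A")
```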
The reduced Phase 3 spectra were downloaded from the ESO Science Portal\footnote{\url{https://archive.eso.org/scienceportal/home}}. Continuum normalisation was achieved by dividing the spectra by a spline function fitted to carefully selected continuum windows. Only the $\sim$3800 to 9000\,{\AA} range of the full wavelength coverage of FEROS satisfied our quality criteria for the quantitative analysis. Examples of the analysed spectra can be seen in Fig.~\ref{fig:spect_lum_showcase}, to demonstrate the data quality achieved for the present work. The figure focuses on three diagnostic wavelength regions: the window around H$\delta$ with \ion{Si}{ii} and \ion{Si}{iv} lines plus several \ion{He}{i} and \ion{O}{ii} lines, among others; the region of the \ion{Si}{iii} triplet plus numerous \ion{O}{ii} lines; and the wind-affected H$\alpha$ line with the adjacent strong \ion{C}{ii} doublet. We note how the density of spectral lines increases towards the earlier spectral types. We also note that objects of luminosity class Ib show nearly symmetric H$\alpha$ absorption, that is, a negligible stellar wind, whereas the wind gives rise to pronounced H$\alpha$ emission at luminosity~class~Ia; H$\delta$ remains essentially symmetric at the luminosities covered here. \begin{table} \caption{IUE spectrophotometry used in the present work.} \label{table:IUE_data} \centering \resizebox{\linewidth}{!}{\small\begin{tabular}{llcccc} \hline\hline ID\,\# & Object & SW & Date & LW & Date \\ \hline 2 & HD14818 & P18658 & 1982-11-26 & R14722 & 1982-11-26 \\ 5 & HD183143 & P06550 & 1979-09-18 & R05637 & 1979-09-20 \\ 8 & HD199478 & P07596 & 1980-01-07 & R06573 & 1980-01-07 \\ 9 & HD51309 & P13936 & 1981-05-09 & R10551 & 1981-05-09 \\ 10 & HD111990 & ... & ...
& P13362 & 1988-06-05 \\ 12 & HD125288 & P19460 & 1983-03-14 & R15489 & 1983-03-14 \\ 13 & HD159110 & P45210 & 1992-07-22 & P21218 & 1991-09-11 \\ 14 & HD164353 & P10172 & 1980-09-18 & R08836 & 1980-09-18\\\hline \end{tabular}} \end{table} Besides the optical spectra, additional archival photometric data and UV spectrophotometry were collected to establish the objects' spectral energy distributions (SEDs). For all analysed objects, optical photometry in the Johnson $U$, $B$, and $V$ bands by \cite{Mermilliod97} was adopted, together with $J$, $H$, and $K$ magnitudes from the Two Micron All Sky Survey \citep[2MASS,][]{2MASS2006} and $W1$ to $W4$ IR-photometric data from the Wide-field Infrared Survey Explorer (WISE) mission, from the ALLWISE data release \citep{ALLWISE_vizier}. For a thorough comparison in the ultraviolet wavelength range, spectrophotometry taken by the International Ultraviolet Explorer (IUE) was preferred in our analysis. The designation and observation date for each IUE-spectrum used in the analysis are given in Table~\ref{table:IUE_data}. For both short-wavelength (SW, $\lambda\lambda$1150-1978\,{\AA}) and long-wavelength data (LW, $\lambda\lambda$1851-3347\,{\AA}) low-resolution spectrophotometry taken with the large aperture was favoured; in cases where only high-resolution data were available, the spectra were artificially degraded in resolution for the analysis. Whenever possible, SW and LW data observed close in time were employed. For several of the stars of our sample, IUE data were either unavailable or inconsistent with the optical and infrared photometry (possibly because of a misalignment of the aperture). In these cases photometric measurements by the Astronomical Netherlands Satellite \citep[ANS,][]{wesseliusetal82} or from the Belgian/UK Ultraviolet Sky Survey Telescope \citep[S2/68,][]{Thompson95} on board the European Space Research Organisation (ESRO) TD1 satellite were used.
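The continuum normalisation procedure described earlier (division by a spline fitted through selected continuum windows) can be sketched as follows; the window placement and the synthetic spectrum are invented for illustration, and this is not the actual pipeline used for the FEROS data:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def normalise(wave, flux, windows):
    """Divide a spectrum by a cubic spline fitted through the mean flux
    in hand-picked continuum windows (list of (lo, hi) in Angstrom)."""
    anchors_w, anchors_f = [], []
    for lo, hi in windows:
        sel = (wave >= lo) & (wave <= hi)
        anchors_w.append(wave[sel].mean())
        anchors_f.append(flux[sel].mean())
    cont = CubicSpline(anchors_w, anchors_f)(wave)
    return flux / cont

# Synthetic example: a sloped continuum with one absorption line near H-delta.
wave = np.linspace(4000.0, 4200.0, 2001)
cont_true = 1e-12 * (1.0 + 0.001 * (wave - 4000.0))
line = 1.0 - 0.5 * np.exp(-0.5 * ((wave - 4101.7) / 2.0) ** 2)
flux = cont_true * line
norm = normalise(wave, flux,
                 [(4000, 4020), (4060, 4080), (4130, 4150), (4180, 4200)])
print(round(float(norm[0]), 3), round(float(norm.min()), 2))
```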
\section{Model atmospheres and spectrum synthesis}\label{section:model_atmospheres} Our methodology for the analysis of B-type supergiants is based on a hybrid non-LTE approach of calculating static, plane-parallel line-blanketed LTE model atmospheres, which serve as the basis of non-LTE line formation computations. The basic approach was outlined by \citet{Przybillaetal06} where its potential to accurately reproduce all relevant spectral features of late B- and early A-type (BA-type) supergiants was shown. This methodology was validated in a direct comparison with full non-LTE hydrodynamic line-blanketed model atmospheres \citep{NiSi11} and was used to derive high-precision atmospheric and fundamental stellar parameters and abundances for many chemical species in early B-type MS stars \citep{NiPr12,NiPr14,Irrgangetal14}. Moreover, the hybrid approach is applicable to analyses of a wide range of other B-type stars, such as subdwarf B-stars \citep{Przybillaetal06b,Schaffenrothetal21}, MS Bp \citep{Przybillaetal08a}, He-strong stars \citep{Przybillaetal16,Przybillaetal21}, and supergiant extreme helium stars \citep{Kupferetal17}. In the following, we therefore briefly recap the basic principles and the model codes and will concentrate on new aspects relevant for the present work. 
\begin{table} \caption{Model atoms for non-LTE calculations with {\sc Detail}.} \label{table:modelatoms} \centering {\small \begin{tabular}{llll} \hline\hline Ion & Terms & Transitions & Reference \\ \hline H & 20 & 190 & {[}1{]} \\ He\,{\sc i} & 29+6 & 162 & {[}2{]} \\ C\,{\sc ii/iii} & 68/70 & 425/373 & {[}3{]} \\ N\,{\sc i/ii} & 89/77 & 668/462 & {[}4{]} \\ O\,{\sc i/ii} & 51/176+2 & 243/2559 & {[}5{]} \\ Ne\,{\sc i/ii} & 153/78 & 952/992 & {[}6{]} \\ Mg\,{\sc ii} & 37 & 236 & {[}7{]} \\ Al\,{\sc ii/iii} & 54+6/46+1 & 378/272 & {[}8{]} \\ Si\,{\sc ii/iii/iv} & 52+3/68+4/33+2 & 357/572/242 & {[}9{]} \\ S\,{\sc ii/iii} & 78/21 & 302/34 & {[}10{]} \\ \ion{Ar}{ii} & 56 & 596 & {[}11{]} \\ Fe\,{\sc ii/iii/iv} & 265/60+46/65+70 & 2887/2446/2094 & {[}12{]}\\\hline \end{tabular} \tablebib{[1] \cite{PrBu04}; [2] \cite{przybilla05}; [3]~\cite{NiPr06}, \cite{NiPr08}; [4] \cite{PrBu01}; [5] \cite{Przybillaetal00}, Przybilla \& Butler (in prep.); [6]~\cite{MoBu08}; [7] \cite{Przybillaetal01a}; [8] Przybilla (in prep.); [9] Przybilla \& Butler (in prep.); [10] \citet{Vranckenetal96}, updated; [11] Butler (in prep.); [12] \cite{Becker98}, \cite{Moreletal06}. }} \end{table} \subsection{Models and programmes} The LTE line-blanketed model atmospheres used in this work were calculated with the \,{\sc Atlas12} code \citep{kurucz05}, which assumes plane-parallel geometry, a stationary and hydrostatic stratification, and chemical homogeneity. In contrast to the previous version {\sc Atlas9} \citep[which is still required to provide converged starting models]{kurucz93} it does not rely on pretabulated opacity distribution functions (ODFs), but evaluates the opacities via opacity sampling (OS). The code thus facilitates model atmospheres to be calculated for freely specified input abundances and microturbulent velocities, including the turbulent pressure in the hydrostatic equation for a self-consistent solution. 
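The turbulent pressure term mentioned above can be made explicit. In a common parameterisation (the exact prefactor implemented in {\sc Atlas12} may differ), the hydrostatic equation is solved for the total pressure,
\begin{equation}
\frac{\mathrm{d}P_\mathrm{tot}}{\mathrm{d}r}=-\rho g,\qquad P_\mathrm{tot}=P_\mathrm{gas}+P_\mathrm{rad}+P_\mathrm{turb},\qquad P_\mathrm{turb}=\tfrac{1}{2}\,\rho\,\xi^{2},
\end{equation}
so that for microturbulent velocities $\xi$ approaching the sonic regime the turbulent term becomes comparable to the gas pressure and the density stratification is modified accordingly.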
The LTE model atmospheres were then used to compute non-LTE level population densities with an updated and extended version of {\sc Detail} \citep{giddings81}, by solving the coupled radiative transfer and statistical equilibrium equations adopting the accelerated lambda iteration scheme of \cite{RyHu91}, and considering line blocking based on Kurucz' OS scheme. State-of-the-art model atoms were employed, as summarised in Table~\ref{table:modelatoms}. There, for each chemical species, the considered ions are listed together with the number of explicit terms (+ superlevels), the number of radiative bound-bound transitions, and the data references. All model atoms are completed by the ground term of the next higher ionisation stage, which is not indicated in the table. While most of the model atoms were used previously in other studies, a new model atom for \ion{O}{ii} was employed here for the first time; we describe it briefly. Level energies were adopted from \citet{Martinetal93} and combined into 176 LS-coupled terms up to principal quantum number $n$\,=\,8, while the levels for $n$\,=\,9 were combined into two superlevels, one each for the doublet and quartet spin systems. Oscillator strengths and photoionisation cross-sections were for the most part adopted from the Opacity Project \citep[OP, e.g.][]{Seatonetal94}, with improved data for several transitions taken from \citet{Wieseetal96}, and supplemented by Kurucz' recently computed oscillator strengths\footnote{\url{http://kurucz.harvard.edu/atoms.html}} for missing transitions. Electron impact-excitation data for a large number of transitions were available from the ab-initio calculations of \citet{Tayal07} and \citet{Maoetal20}. Missing data were provided by use of Van Regemorter's formula \citep{vanRegemorter62} for radiatively permitted transitions or Allen's formula \citep{Allen73} for forbidden ones. All collisional ionisation data were provided via the Seaton formula \citep{Seaton62}, using OP photoionisation threshold cross-sections or hydrogenic values.
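As an aside, the bookkeeping behind such compact model atoms, combining fine-structure levels into LS-coupled terms or superlevels, reduces to summing statistical weights and forming $g$-weighted mean energies. The following is a minimal sketch, with hypothetical level energies (not data from the actual \ion{O}{ii} model atom):

```python
# Illustrative sketch (not from the codes used in this work): packing
# fine-structure levels into an LS-coupled term, as done when constructing
# compact model atoms. The energies below are invented example values (cm^-1).

def pack_term(levels):
    """Combine fine-structure levels (J, E) into one term.

    The combined statistical weight is g = sum(2J+1); the term energy is
    the g-weighted mean of the level energies.
    """
    g_tot = sum(int(round(2 * J + 1)) for J, _ in levels)
    e_mean = sum((2 * J + 1) * E for J, E in levels) / g_tot
    return g_tot, e_mean

# Example: a quartet P term (J = 1/2, 3/2, 5/2) with made-up energies
levels = [(0.5, 119_873.0), (1.5, 120_000.1), (2.5, 120_082.9)]
g, E = pack_term(levels)
print(g, round(E, 1))   # g = 2 + 4 + 6 = 12
```

Superlevels extend the same idea to whole groups of terms, which keeps the statistical-equilibrium system small without losing the total statistical weight.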
Finally, synthetic spectra based on the non-LTE population numbers were calculated using an updated and extended version of {\sc Surface} \citep{BuGi85}, employing refined fine-structure transition data and line-broadening theories. For \ion{O}{ii}, oscillator strengths from \citet{Wieseetal96} and Kurucz were replaced by data computed with the multi-configuration Hartree-Fock method by \citet{FFT04} \citep[as for other elements and ions, also accounting for data from][]{FFTI06}, which was decisive in achieving the close match with observations. For both {\sc Detail} and {\sc Surface}, an occupation probability formalism \citep{HuMi88} -- as realised by \citet{Hubenyetal94} -- was employed for hydrogen, in order to facilitate a better modelling of the series limits. Grids of synthetic spectra were calculated with {\sc Atlas12}, {\sc Detail}, and {\sc Surface} -- abbreviated as {\sc Ads} in the following -- for the entire parameter space of B-type supergiants. For the primary analysis of Balmer lines and ionisation equilibria of all metals, effective temperatures $T_\mathrm{eff}$ were varied from 11\,000 to 23\,000\,K in steps of 500 to 700\,K, logarithmic surface gravities $\log g$ from 1.70 to 3.70 (in cgs units) in steps of 0.2\,dex, and elemental abundances in steps of 0.2\,dex, centred on cosmic abundance standard values \citep{NiPr12,Przybillaetal08b,Przybillaetal13}. For nitrogen, much higher values, up to 1\,dex above the standard, were covered because of the expected enrichment. Microturbulent velocities $\xi$ were varied with increments of 4\,km\,s$^{-1}$ initially and refined later to as low as 1\,km\,s$^{-1}$. The analysis was carried out -- depending on the convergence of the model atmospheres -- on grids ranging from 0 up to 16\,km\,s$^{-1}$ in microturbulence, that is, subsonic velocities. We employed the Spectral Plotting and Analysis Suite ({\sc Spas}, \citealt{Hirsch09}) to compare the synthetic and observed spectra.
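The last step of such a grid-based analysis, minimising $\chi^2$ between the observed spectrum and models interpolated on the grid, can be illustrated with a toy downhill simplex (Nelder-Mead) minimisation. This is a minimal hand-rolled sketch on an invented quadratic $\chi^2$ surface in normalised parameters, not the actual fitting code used in this work:

```python
# Toy downhill simplex (Nelder-Mead) minimisation of a chi^2 surface.
# The chi^2 below is a made-up quadratic with its minimum at (1.3, -0.7),
# standing in for normalised (T_eff, log g) fit parameters.

def nelder_mead(f, x0, step=0.5, tol=1e-10, max_iter=1000):
    n = len(x0)
    # initial simplex: start point plus one offset point per dimension
    simplex = [list(x0)]
    for i in range(n):
        p = list(x0)
        p[i] += step
        simplex.append(p)
    for _ in range(max_iter):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        if f(worst) - f(best) < tol:
            break
        centroid = [sum(p[i] for p in simplex[:-1]) / n for i in range(n)]
        refl = [2 * centroid[i] - worst[i] for i in range(n)]          # reflection
        if f(refl) < f(best):
            expa = [3 * centroid[i] - 2 * worst[i] for i in range(n)]  # expansion
            simplex[-1] = expa if f(expa) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):
            simplex[-1] = refl
        else:
            contr = [0.5 * (centroid[i] + worst[i]) for i in range(n)]  # contraction
            if f(contr) < f(worst):
                simplex[-1] = contr
            else:  # shrink all points towards the best one
                simplex = [best] + [[0.5 * (p[i] + best[i]) for i in range(n)]
                                    for p in simplex[1:]]
    return min(simplex, key=f)

def chi2(x):  # hypothetical chi^2 landscape in normalised parameter units
    return (x[0] - 1.3) ** 2 + 2.0 * (x[1] + 0.7) ** 2

x_min = nelder_mead(chi2, [0.0, 0.0])
print([round(v, 3) for v in x_min])   # close to [1.3, -0.7]
```

The simplex method needs no derivatives of $\chi^2$, which is why it suits interpolated grids where gradients are not available analytically.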
The programme allows instrumental, (radial-tangential) macroturbulent, and rotational broadening to be applied flexibly to the models, and can be used to interpolate to the actual parameters with bi-cubic splines and to fit up to three different parameters on the pre-calculated grid. To this end, it employs the downhill simplex algorithm \citep{NeMe65} to find minima in the $\chi^2$-landscape. \subsection{Comparison with full non-LTE models} Non-LTE effects gain in importance for more intense radiation fields (i.e. with increasing $T_\mathrm{eff}$) and reduced collision rates (i.e. with decreasing particle density). The atmospheric structures of B-type supergiants are therefore likely to be subject to non-LTE effects. Our hybrid non-LTE approach can only be successful if solutions from full non-LTE modelling are closely recovered. Figure~\ref{fig:tlusty_atlas12_structure_20k250v10} compares an {\sc Atlas12} structure with a full non-LTE model atmosphere for solar metallicity \citep{GrSa98}, adopted from the BSTAR grid \citep{LaHu07} that was computed using the {\sc Tlusty} code \citep{HuLa95}, for model parameters $T_\mathrm{eff}$\,=\,20\,000\,K, $\log g$\,=\,2.50, and $\xi$\,=\,10\,km\,s$^{-1}$. The temperature $T$ and electron density $n_\mathrm{e}$ stratifications are shown as a function of Rosseland optical depth $\tau_\mathrm{R}$. Line-formation depths for some of the strongest diagnostic spectral features in the optical spectrum are also indicated, with the bulk of the metal lines being formed towards the inner boundary of this region. We note that the metallicity of the {\sc Atlas12} model \citep[computed explicitly for abundances according to][]{GrSa98} was reduced by 0.2\,dex in order to account empirically for non-LTE effects on the line blanketing.
At the same metallicity, supergiant atmospheres in LTE and non-LTE show different temperature gradients, owing to the different amounts of backwarming from line blanketing and blocking. Empirically, a reduction of the metallicity of the LTE atmospheres by 0.2\,dex can compensate for these differences (see Fig.~\ref{fig:tlusty_atlas12_structure_20k250v10}). The necessity for such an adjustment also follows on observational grounds. In order to reproduce the observed spectral lines, the real temperature gradient in the stellar atmosphere has to be matched by the model, as the different formation depths of the entire ensemble of lines, from the line cores to the wings near the continuum-forming layers, map the temperature (and density) structure of the atmosphere in detail. Achieving a match between observation and model as shown in the figures in Appendix~\ref{section:appendixA} requires the reproduction of the actual atmospheric structure by the model. Reproducing the observed SEDs, in particular in the cases where IUE spectrophotometry is available, also requires the reduction of the overall {\sc Atlas} atmosphere's metallicity; otherwise the line absorption in the UV is stronger than observed. Thus, the empirical metallicity adjustment mimics non-LTE effects on the line opacity. The 0.2\,dex metallicity reduction can be applied globally to LTE supergiant models covering the range of effective temperatures investigated here, and its effect only diminishes for models towards the main sequence. It can even be extended to early B-type supergiants, as tested for a model with $T_\mathrm{eff}$\,=\,27\,000\,K, $\log g$\,=\,3.00, and $\xi$\,=\,10\,km\,s$^{-1}$.
In all cases, the agreement of the adapted {\sc Atlas12} stratifications with the {\sc Tlusty} structures is good throughout the photospheric line-formation depths, with differences of less than 2\% in $T$ and 8\% in $n_\mathrm{e}$; larger deviations occur only for $\log \tau_\mathrm{R}$\,$<$\,$-$2, where effects of the mass outflow would start to lead to departures from hydrostatic equilibrium in a real B-type supergiant atmosphere anyway. The comparison of the {\sc Tlusty} and {\sc Detail} non-LTE SEDs for the above parameters is shown in Fig.~\ref{fig:tlusty_atlas12_sed_20k250v10}. The agreement longward of the Lyman limit is excellent overall, with the differences amounting to only a few percent. Also, the hydrogen series limits (see the insets in Fig.~\ref{fig:tlusty_atlas12_sed_20k250v10}) resemble each other closely because the same occupation probability formalism is employed in both codes. Larger differences occur at wavelengths below the Lyman limit, and in particular below the \ion{He}{i} ionisation limit (locations indicated in Fig.~\ref{fig:tlusty_atlas12_sed_20k250v10}), where {\sc Tlusty} predicts (significantly) higher ionising fluxes. The atmospheric layers that emit this extreme-ultraviolet radiation are located in the outermost regions of the model atmosphere; consequently, the differences are not relevant for the photospheric lines investigated here. Moreover, as these layers are not in hydrostatic equilibrium in real B-type supergiant atmospheres, the predictive power of both models presented here is limited and would be better investigated with hydrodynamical stellar atmosphere models. A further comparison of profiles for a selection of diagnostic hydrogen Balmer and \ion{He}{i} lines, as calculated by {\sc Tlusty/Synspec} and {\sc Ads} for the above atmospheric parameters, is shown in Fig.~\ref{fig:tlusty_atlas12_HHE_20k250v10}.
The match between the two non-LTE synthetic spectra for these two chemical species is excellent, except for some fine details. These concern the line cores of the Balmer lines, with the differences diminishing towards the higher series members, and some of the forbidden components of the \ion{He}{i} lines, which are explained by the use of different broadening tables. However, the corresponding LTE model shows much weaker lines throughout, with the equivalent widths differing by factors of up to two to three, and the differences increasing towards the red. Pure LTE modelling is therefore inapplicable for quantitative analyses of B-type supergiants. We note that the panels in Fig.~\ref{fig:tlusty_atlas12_HHE_20k250v10} that show the Balmer lines cover a wider wavelength range and also depict spectral lines from other chemical species. While most of these cases show only moderate differences between the two non-LTE models, some lines are noticeably discrepant. However, a straightforward comparison of these should not be made, because -- in contrast to H and He, with their rather well-established atomic input data -- most of the differences stem from the different atomic data used and the different assumptions made in the construction of the model atoms employed in the two approaches. A detailed discussion of these aspects for the case of OB-type main-sequence stars was presented by \citet{Przybillaetal11}, and we do not repeat it here. The basic conclusion is that the ability of different models to reproduce observations in a consistent way is decisive. \subsection{Turbulent pressure}\label{subsection:turb_pressure} The {\sc Atlas12} code allows the effects of turbulent motions with velocity $\varv_{\mathrm{turb}}$ (i.e. the microturbulent velocity) to be taken into account in the model atmosphere computations.
An additional turbulent pressure $P_{\mathrm{turb}}$ term is considered in the hydrostatic equilibrium equation, in the form of \begin{equation}\label{eq:turb_press} P_{\mathrm{turb}} = \frac{\rho \varv_{\mathrm{turb}}^2}{2}, \end{equation} where $\rho$ is the atmospheric density. This additional term increases in importance for stars approaching the Eddington limit, because of the diminishing r\^ole of the gas pressure, and for increasing $\varv_\mathrm{turb}$. Since it is possible to enable and disable turbulent pressure in the model specification of {\sc Atlas12}, we can directly compare the effects of this term on the atmospheric structure and the synthetic spectra, while keeping all other parameters fixed. As a test case, we chose the sample star HD~14818, at $T_\mathrm{eff}$\,=\,18\,600\,K, $\log g$\,=\,2.45, and a derived high luminosity, $\log L/L_\sun$\,=\,5.41. We expected the impact on the model atmospheric structure to be close to maximal because of its large $\xi$\,=\,14\,km\,s$^{-1}$. Figure~\ref{fig:structure_turbulence} visualises the run of the temperature $T$ (upper panel) and the logarithmic electron density $n_\mathrm{e}$ (lower panel) as a function of the logarithmic Rosseland optical depth $\tau_\mathrm{R}$ in the model atmosphere of HD~14818, for the two cases of turbulent pressure switched on and off. While the temperature hardly changes, with a maximum difference of about 50\,K (being higher in the model with turbulent pressure), the electron density is noticeably lower for $\log \tau_\mathrm{R}$\,$<$\,0 when turbulent pressure is considered, because of the more extended atmosphere. Here, the absolute difference is about 0.12\,dex in $\log n_\mathrm{e}$ at $\log \tau_\mathrm{R}$\,=\,$-1$. Figure~\ref{fig:turbulent_pressure_lines} shows the effects of turbulent pressure on various spectral line profiles, comparing models with and without the term for otherwise identical parameters.
It can be seen that, for the fitted lines of hydrogen (H$\delta$ and H$\varepsilon$), the decreased density in the atmosphere with turbulent pressure corresponds to reduced pressure broadening of the Balmer line wings. Conversely, the model without turbulent pressure resembles a model with increased pressure broadening, corresponding to an increase in surface gravity of about $\Delta \log g \approx 0.05$\,dex. A systematic effect can also be detected in lines of helium and some metal lines (\ion{Si}{ii}, \ion{C}{ii}, and \ion{S}{ii}), which mostly show enhanced line strength for models without turbulent pressure (the exception being the sulphur line at 4253\,{\AA})\footnote{We note that the panels depicting H$\delta$, \ion{Si}{ii} $\lambda$4130 and \ion{He}{i} $\lambda$4120 show lines of (and blends with) \ion{O}{ii} that systematically suggest a lower oxygen abundance. Originating from two multiplets sharing the same lower energy term, these lines appear too strong throughout the sample stars and were therefore excluded from the further quantitative analysis.}. The effect stems from a shift in the ionisation balance, yielding a higher degree of ionisation in the model that accounts for turbulent pressure. This amounts to a reduction of the equivalent widths of individual \ion{Si}{ii} lines by $\sim$15 to 25\% for the example of HD~14818 (the equivalent width of \ion{Si}{ii} $\lambda$4130 in Fig.~\ref{fig:turbulent_pressure_lines} is e.g. reduced by 16\%), while the lines of the main ionisation stage \ion{Si}{iii} remain essentially unchanged, and the \ion{Si}{iv} lines experience a slight strengthening by $<$5\% in equivalent width. This impacts the atmospheric parameter and abundance determination to a small, but systematic, degree. Turbulent pressure is therefore considered in all analyses in the present work.
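To get a feeling for the size of the term in Eq.~(\ref{eq:turb_press}), note that the ratio of turbulent to gas pressure, $P_\mathrm{turb}/P_\mathrm{gas}=\mu m_\mathrm{H}\varv_\mathrm{turb}^2/(2kT)$, is independent of the density. The sketch below evaluates it for parameters loosely inspired by HD~14818 and an assumed mean molecular weight for a mostly ionised mixture; the numbers are illustrative only, not results from the model atmospheres:

```python
# Rough, illustrative estimate of P_turb / P_gas, with
# P_turb = rho * v_turb^2 / 2  and  P_gas = rho * k * T / (mu * m_H);
# the density rho cancels in the ratio.
K_B = 1.380649e-23   # J/K, Boltzmann constant
M_H = 1.6735575e-27  # kg, mass of the hydrogen atom

def pturb_over_pgas(t_kelvin, xi_kms, mu):
    xi = xi_kms * 1e3  # km/s -> m/s
    return mu * M_H * xi**2 / (2 * K_B * t_kelvin)

# T_eff = 18600 K and xi = 14 km/s as for HD 14818 in the text;
# mu ~ 0.7 is an assumed mean molecular weight for an ionised H+He mixture.
ratio = pturb_over_pgas(18_600.0, 14.0, 0.7)
print(round(ratio, 2))
```

For these illustrative values the ratio is a sizeable fraction of unity, which is consistent with the text: for luminous supergiants with large $\xi$, near-sonic turbulent velocities make the term dynamically relevant, whereas for MS stars with small $\xi$ it is negligible.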
\section{Spectral analysis}\label{section:spectral_analysis} \subsection{Atmospheric parameter and abundance determination} The basic atmospheric parameters were determined via an analysis of the spectral features of multiple ionisation stages of seven different chemical species (C, N, O, Ne, Al, Si, and Fe), as well as of the neutral helium lines and the hydrogen Balmer lines. These parameters, namely the effective temperature $T_{\mathrm{eff}}$, surface gravity $\log g$, helium number fraction $y$, microturbulent velocity $\xi$, projected rotational velocity $\varv \sin i$, and macroturbulence $\zeta$, as well as the elemental abundances $\varepsilon\left(X\right)$\,=\,$\log\left(X/\mathrm{H}\right)$\,+\,12, were derived on the basis of spectrum synthesis, aiming at the reproduction of the detailed line profiles of features spanning the entire observed visual to near-infrared spectra. An iterative approach was employed to overcome ambiguities due to strong parameter correlations, until all parameters were constrained in a consistent way and a single global solution for the synthetic spectrum was found that closely matches the entire observed spectrum. \subsubsection{Effective temperature and surface gravity} To begin the analysis, an initial guess on the basis of the spectral type and the shape and strength of the Balmer lines suffices for an estimation to within $\Delta T_{\mathrm{eff}}$\,$<$\,1500\,K and $\Delta \log g$\,$<$\,0.4\,dex. Ambiguities in these two parameters arise due to their counteracting nature: in the regime of B-type supergiants, the Balmer lines grow weaker with increasing $T_{\mathrm{eff}}$ as hydrogen becomes increasingly ionised, while they grow stronger with increasing surface gravity due to pressure broadening. Hence, multiple combinations of $T_{\mathrm{eff}}$ and $\log g$ fit the observations, which means that Balmer-line fitting alone is insufficient for a thorough analysis.
The problem is solved by constraining $T_\mathrm{eff}$ and $\log g$ independently using multiple ionisation equilibria of the studied elements, that is, by requiring that lines from the different ionisation stages of a chemical element are reproduced at the same elemental abundance value (within the mutual uncertainties). Table~\ref{tab:ionisation_equillibria} summarises which ionisation balances were employed for the analysis of the sample stars, sorted from highest to lowest $T_\mathrm{eff}$. Dots indicate that lines from the respective ionisation stage were analysed; the blue boxes then frame the achieved ionisation balances. Some combinations were useful throughout the entire $T_\mathrm{eff}$-range, for example \ion{O}{i/ii} or \ion{Si}{ii/iii}, while some ionisation stages appear only towards the highest $T_\mathrm{eff}$-values, like \ion{C}{iii}, \ion{Ne}{ii}, or \ion{Si}{iv}, and others, such as \ion{N}{i}, \ion{Al}{ii}, or \ion{Fe}{ii}, are then no longer visible. Four to seven ionisation balances were matched simultaneously per star, with the tightest constraints occurring when three consecutive ionisation stages could be employed, as in the case of \ion{Si}{ii/iii/iv}. Overall, the ionisation balances are more sensitive to $T_\mathrm{eff}$-variations, while the Balmer lines are more sensitive to $\log g$-variations. The finally adopted values of effective temperature and surface gravity, and their uncertainties, were then calculated as the arithmetic mean and standard deviation of the values implied by the individual indicators. In most of the sample objects, H$\alpha$ (see Fig.~\ref{fig:spect_lum_showcase} for examples) had to be excluded from the analysis because of line asymmetries or the occurrence of emission due to the stellar wind. In the most luminous stars, H$\beta$, H$\gamma$, and even H$\delta$ may show signs of influence from the stellar wind, and they were then also omitted from the fitting process.
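Schematically, establishing one ionisation balance amounts to finding the $T_\mathrm{eff}$ at which the abundances inferred from two ionisation stages of the same element agree. A toy sketch with invented linear abundance trends (not fitted values from this work; $T_\mathrm{eff}$ in kK):

```python
# Schematic ionisation-balance temperature determination: the abundance
# inferred from each ionisation stage depends on the assumed T_eff, and the
# adopted T_eff is where the two curves cross. The linear trends below are
# invented for illustration only.

def eps_lower_stage(t):   # abundance inferred from the lower stage (rises with T_eff)
    return 7.2 + 0.16 * (t - 20.0)

def eps_upper_stage(t):   # abundance inferred from the upper stage (falls with T_eff)
    return 7.7 - 0.08 * (t - 20.0)

def balance_teff(lo=15.0, hi=25.0, tol=1e-6):
    # bisection on the abundance difference between the two stages
    f = lambda t: eps_lower_stage(t) - eps_upper_stage(t)
    assert f(lo) < 0 < f(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

t_bal = balance_teff()
print(round(t_bal, 2))   # ~22.08 kK for these made-up trends
```

With several such balances per star, the adopted $T_\mathrm{eff}$ and its uncertainty follow as the mean and standard deviation of the individual crossing points, as described above.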
\begin{table} \setlength{\tabcolsep}{2pt} \caption{Ionisation balances used for the atmospheric parameter determination.} \label{tab:ionisation_equillibria} \centering \resizebox{20.5cm}{!}{\begin{tabular}{lclllllll} \hline\hline ID\,\# & $T_{\mathrm{eff}}$ & \ion{C}{ii/iii} & \ion{N}{i/ii} & \ion{O}{i/ii} & \ion{Ne}{i/ii} & \ion{Al}{ii/iii} & \ion{Si}{ii/iii/iv} & \ion{Fe}{ii/iii} \\ & kK\\ \hline 11 & 19.7 & \tikz\draw[black,fill=black] (0,0) circle (.5ex);~~~~ & \tikzmark{top left 3}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 3} & \tikzmark{top left 16}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 16} & \tikzmark{top left 27}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 27} & ~~~~\tikz\draw[black,fill=black] (0,0) circle (.5ex); & \tikzmark{top left 41}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 41} & ~~~~\tikz\draw[black,fill=black] (0,0) circle (.5ex); \\ 13 & 19.5 & \tikzmark{top left 1}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 1} & \tikzmark{top left 2}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 2} & \tikzmark{top left 15}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 15} & \tikz\draw[black,fill=black] (0,0) circle (.5ex);~~~~ & \tikzmark{top left 29}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 29} & \tikzmark{top left 40}\tikz\draw[black,fill=black] (0,0) circle 
(.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 40} & \tikzmark{top left 54}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 54} \\ 2 & 18.6 & \tikz\draw[black,fill=black] (0,0) circle (.5ex);~~~~ & ~~~~\tikz\draw[black,fill=black] (0,0) circle (.5ex); & \tikzmark{top left 3}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 3} & \tikzmark{top left 28}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 28} & ~~~~\tikz\draw[black,fill=black] (0,0) circle (.5ex); & \tikzmark{top left 42}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 42} & ~~~~\tikz\draw[black,fill=black] (0,0) circle (.5ex); \\ 10 & 17.2 & \tikz\draw[black,fill=black] (0,0) circle (.5ex);~~~~ & ~~~~\tikz\draw[black,fill=black] (0,0) circle (.5ex); & \tikzmark{top left 17}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 17} & \tikz\draw[black,fill=black] (0,0) circle (.5ex);~~~~ & ~~~~\tikz\draw[black,fill=black] (0,0) circle (.5ex); & \tikzmark{top left 43}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 43} & \tikzmark{top left 55}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 55} \\ 9 & 15.6 & \tikz\draw[black,fill=black] (0,0) circle (.5ex);~~~~ & \tikzmark{top left 5}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 5} & \tikzmark{top 
left 18}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 18} & \tikz\draw[black,fill=black] (0,0) circle (.5ex);~~~~ & \tikzmark{top left 30}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 30} & \tikzmark{top left 44}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 44} & \tikzmark{top left 56}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 56} \\ 14 & 14.7 & \tikz\draw[black,fill=black] (0,0) circle (.5ex);~~~~ & \tikzmark{top left 6}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 6} & \tikzmark{top left 19}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 19} & \tikz\draw[black,fill=black] (0,0) circle (.5ex);~~~~ & \tikzmark{top left 4}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 4} & \tikzmark{top left 45}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 45} & \tikzmark{top left 57}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 57} \\ 4 & 14.6 & \tikz\draw[black,fill=black] (0,0) circle (.5ex);~~~~ & \tikzmark{top left 7}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 7} & \tikzmark{top left 20}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 20} & \tikz\draw[black,fill=black] (0,0) circle (.5ex);~~~~ & 
\tikzmark{top left 31}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 31} & \tikzmark{top left 46}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 46} & \tikzmark{top left 58}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 58} \\ 1 & 14.1 & \tikz\draw[black,fill=black] (0,0) circle (.5ex);~~~~ & \tikzmark{top left 8}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 8} & \tikzmark{top left 21}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 21} & \tikz\draw[black,fill=black] (0,0) circle (.5ex);~~~~ & \tikzmark{top left 32}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 32} & \tikzmark{top left 47}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 47} & \tikzmark{top left 59}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 59} \\ 7 & 14.0 & \tikz\draw[black,fill=black] (0,0) circle (.5ex);~~~~ & \tikzmark{top left 9}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 9} & \tikzmark{top left 22}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 22} & \tikz\draw[black,fill=black] (0,0) circle (.5ex);~~~~ & \tikzmark{top left 33}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 33} & \tikzmark{top left 
48}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 48} & \tikzmark{top left 60}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 60} \\ 12 & 13.7 & \tikz\draw[black,fill=black] (0,0) circle (.5ex);~~~~ & \tikzmark{top left 10}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 10} & \tikzmark{top left 23}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 23} & \tikz\draw[black,fill=black] (0,0) circle (.5ex);~~~~ & \tikzmark{top left 34}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 34} & \tikzmark{top left 49}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 49} & \tikzmark{top left 61}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 61} \\ 3 & 13.6 & \tikz\draw[black,fill=black] (0,0) circle (.5ex);~~~~ & \tikzmark{top left 11}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 11} & \tikzmark{top left 24}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 24} & \tikz\draw[black,fill=black] (0,0) circle (.5ex);~~~~ & \tikzmark{top left 35}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 35} & \tikzmark{top left 50}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 50} & \tikzmark{top left 62}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) 
circle (.5ex);\tikzmark{bottom right 62} \\ 5 & 12.8 & \tikz\draw[black,fill=black] (0,0) circle (.5ex);~~~~ & \tikzmark{top left 12}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 12} &\tikzmark{top left 66}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 66}& \tikz\draw[black,fill=black] (0,0) circle (.5ex);~~~~ & \tikzmark{top left 36}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 36} & \tikzmark{top left 51}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 51} & \tikzmark{top left 63}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 63} \\ 8 & 12.7 & \tikz\draw[black,fill=black] (0,0) circle (.5ex);~~~~ & \tikzmark{top left 13}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 13} & \tikzmark{top left 25}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 25} & \tikz\draw[black,fill=black] (0,0) circle (.5ex);~~~~ & \tikzmark{top left 37}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 37} & \tikzmark{top left 52}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 52} & \tikzmark{top left 64}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 64} \\ 6 & 11.9 & \tikz\draw[black,fill=black] (0,0) circle (.5ex);~~~~ & \tikzmark{top left 14}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle 
(.5ex);\tikzmark{bottom right 14} & \tikzmark{top left 67}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 67}& \tikz\draw[black,fill=black] (0,0) circle (.5ex);~~~~ & \tikzmark{top left 38}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 38} & \tikzmark{top left 53}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 53} & \tikzmark{top left 65}\tikz\draw[black,fill=black] (0,0) circle (.5ex);~~\tikz\draw[black,fill=black] (0,0) circle (.5ex);\tikzmark{bottom right 65}\\ \hline \end{tabular} \DrawBox[thick,blue]{top left 1}{bottom right 1} \DrawBox[thick,blue]{top left 2}{bottom right 2} \DrawBox[thick,blue]{top left 3}{bottom right 3} \DrawBox[thick,blue]{top left 4}{bottom right 4} \DrawBox[thick,blue]{top left 5}{bottom right 5} \DrawBox[thick,blue]{top left 6}{bottom right 6} \DrawBox[thick,blue]{top left 7}{bottom right 7} \DrawBox[thick,blue]{top left 8}{bottom right 8} \DrawBox[thick,blue]{top left 9}{bottom right 9} \DrawBox[thick,blue]{top left 10}{bottom right 10} \DrawBox[thick,blue]{top left 11}{bottom right 11} \DrawBox[thick,blue]{top left 12}{bottom right 12} \DrawBox[thick,blue]{top left 13}{bottom right 13} \DrawBox[thick,blue]{top left 14}{bottom right 14} \DrawBox[thick,blue]{top left 15}{bottom right 15} \DrawBox[thick,blue]{top left 16}{bottom right 16} \DrawBox[thick,blue]{top left 17}{bottom right 17} \DrawBox[thick,blue]{top left 18}{bottom right 18} \DrawBox[thick,blue]{top left 19}{bottom right 19} \DrawBox[thick,blue]{top left 20}{bottom right 20} \DrawBox[thick,blue]{top left 21}{bottom right 21} \DrawBox[thick,blue]{top left 22}{bottom right 22} \DrawBox[thick,blue]{top left 23}{bottom right 23} \DrawBox[thick,blue]{top left 24}{bottom right 24} \DrawBox[thick,blue]{top left 25}{bottom right 25} \DrawBox[thick,blue]{top 
left 27}{bottom right 27} \DrawBox[thick,blue]{top left 28}{bottom right 28} \DrawBox[thick,blue]{top left 29}{bottom right 29} \DrawBox[thick,blue]{top left 30}{bottom right 30} \DrawBox[thick,blue]{top left 31}{bottom right 31} \DrawBox[thick,blue]{top left 32}{bottom right 32} \DrawBox[thick,blue]{top left 33}{bottom right 33} \DrawBox[thick,blue]{top left 34}{bottom right 34} \DrawBox[thick,blue]{top left 35}{bottom right 35} \DrawBox[thick,blue]{top left 36}{bottom right 36} \DrawBox[thick,blue]{top left 37}{bottom right 37} \DrawBox[thick,blue]{top left 38}{bottom right 38} \DrawBox[thick,blue]{top left 40}{bottom right 40} \DrawBox[thick,blue]{top left 41}{bottom right 41} \DrawBox[thick,blue]{top left 42}{bottom right 42} \DrawBox[thick,blue]{top left 43}{bottom right 43} \DrawBox[thick,blue]{top left 44}{bottom right 44} \DrawBox[thick,blue]{top left 45}{bottom right 45} \DrawBox[thick,blue]{top left 46}{bottom right 46} \DrawBox[thick,blue]{top left 47}{bottom right 47} \DrawBox[thick,blue]{top left 48}{bottom right 48} \DrawBox[thick,blue]{top left 49}{bottom right 49} \DrawBox[thick,blue]{top left 50}{bottom right 50} \DrawBox[thick,blue]{top left 51}{bottom right 51} \DrawBox[thick,blue]{top left 52}{bottom right 52} \DrawBox[thick,blue]{top left 53}{bottom right 53} \DrawBox[thick,blue]{top left 54}{bottom right 54} \DrawBox[thick,blue]{top left 55}{bottom right 55} \DrawBox[thick,blue]{top left 56}{bottom right 56} \DrawBox[thick,blue]{top left 57}{bottom right 57} \DrawBox[thick,blue]{top left 58}{bottom right 58} \DrawBox[thick,blue]{top left 59}{bottom right 59} \DrawBox[thick,blue]{top left 60}{bottom right 60} \DrawBox[thick,blue]{top left 61}{bottom right 61} \DrawBox[thick,blue]{top left 62}{bottom right 62} \DrawBox[thick,blue]{top left 63}{bottom right 63} \DrawBox[thick,blue]{top left 64}{bottom right 64} \DrawBox[thick,blue]{top left 65}{bottom right 65} \DrawBox[thick,blue]{top left 66}{bottom right 66} \DrawBox[thick,blue]{top left 
67}{bottom right 67}} \end{table} \subsubsection{Helium abundance}\label{section:helium_fitting} In the first step of the iterative procedure, the helium number density was set to the cosmic value of $y$\,=\,0.089 \citep{NiPr12} in order to derive a satisfactory estimate for effective temperature and surface gravity. With these values, fitting of the weakest helium lines permitted a refined helium abundance estimate with relative uncertainties of $\delta y$\,$\coloneqq$\,$\Delta y/y$\,$\approx$\,5--15\% to be derived. Stronger lines were generally excluded from the analysis, as they are less sensitive to abundance changes. Constraining the helium abundance is important not only in itself: it strongly influences the molecular weight of the atmospheric elemental mixture and thereby changes the density and pressure stratification of the atmosphere, leading to changes in the derived surface gravity of up to $\Delta\log g$\,=\,0.05\,dex. Further changes of the helium abundance in the following steps of the iteration scheme (i.e. after correcting $T_{\mathrm{eff}}$, $\log g$, and $\xi$) were considered, but were found to lie within the uncertainties of the first determination. \subsubsection{Microturbulence} Turbulent flows of matter on scales smaller than unit optical depth can influence the shape and strength of spectral lines. They are parameterised as an ad hoc microturbulence broadening parameter $\xi$ (measured in km\,s$^{-1}$) in addition to thermal broadening. Since this microturbulent velocity directly influences the broadening and therefore also the strength of the fitted metal lines, an incorrect value will lead to offsets in ionisation balances and consequently to inaccurate estimates of effective temperature. In fact, because of the pressure of the turbulent matter flows, microturbulence can also change the density structure of the atmosphere noticeably (see Sect.~\ref{subsection:turb_pressure}), affecting the surface gravity determination.
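Two of the statements above can be checked with a back-of-the-envelope calculation: the influence of the helium fraction on the mean molecular weight, and the (sub)sonic character of microturbulent velocities of $\xi$\,$\approx$\,14\,km\,s$^{-1}$ in a hot photosphere. A minimal sketch, assuming a pure H+He mixture (metals ignored, full ionisation for the sound-speed estimate); the numbers are illustrative only:

```python
import math

# Back-of-the-envelope checks (H + He only, metals ignored):
# (i)  mean molecular weight as a function of the helium fraction y,
# (ii) adiabatic sound speed, to see that xi ~ 14 km/s remains subsonic
#      in a hot B-supergiant photosphere (full ionisation assumed).
K_B, M_H = 1.3807e-16, 1.6726e-24  # Boltzmann constant (erg/K), H mass (g)

def mu_neutral(y):
    """Mean molecular weight of a neutral H + He gas, y = n(He)/n(H)."""
    return (1.0 + 4.0 * y) / (1.0 + y)

def mu_ionised(y):
    """Fully ionised H + He gas, free electrons included."""
    return (1.0 + 4.0 * y) / (2.0 + 3.0 * y)

def sound_speed(teff, mu):
    """Adiabatic sound speed in km/s for an ideal monatomic gas."""
    return math.sqrt(5.0 / 3.0 * K_B * teff / (mu * M_H)) / 1.0e5

# Teff of the hottest large-xi star (ID#2, 18.6 kK) and the cosmic y:
cs = sound_speed(18600.0, mu_ionised(0.089))
print(f"mu = {mu_ionised(0.089):.3f}, c_s = {cs:.1f} km/s")  # xi = 14 < c_s
```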
The appropriate value for $\xi$ can be found by enforcing the criterion that the abundances derived from various spectral lines of a given element are independent of the strength of the spectral lines. For the analysis of our sample, we measured the equivalent widths $W_{\lambda}$ of several trustworthy lines of \ion{N}{i/ii} and \ion{Si}{ii/iii/iv} as auxiliary quantities by direct integration of the observed spectral lines and compared their fitted abundances (from spectrum synthesis) for multiple values of $\xi$. This procedure is shown in Fig.~\ref{fig:microturbulence_hd93840}: as $\xi$ increases, the equivalent widths of the strong lines are affected more markedly by microturbulent broadening such that the abundance values necessary to fit them are reduced. The correct value for $\xi$ is found when the fitted individual line abundances of a given element no longer correlate with their $W_{\lambda}$ and the line-to-line abundance scatter is minimised. Uncertainties of the equivalent widths are of the order of the symbol size. In the given sample plot, the determinations are consistent with a microturbulent velocity of $\xi$\,$\approx$\,14\,km\,s$^{-1}$. The value so derived was then checked for consistency with multiple lines of \ion{C}{ii} and \ion{Mg}{ii} in later steps of the atmospheric parameter iteration, and with \ion{Fe}{ii/iii} lines in a further inspection. Corrections of $\Delta \xi$\,$\approx$\,1--2\,km\,s$^{-1}$ with respect to the initial value were implied in some cases, such that a final value was obtained that is fully consistent with the available indicators from several chemical species and ions. \subsubsection{Projected rotational velocity and macroturbulence} Since the parameters of projected rotational velocity $\varv \sin i$ and macroturbulent velocity $\zeta$ affect neither the model atmosphere structure nor the line formation, their derivation does not require time-expensive grid calculations.
By convolution of the synthetic spectrum with the corresponding rotational and macroturbulent broadening functions \citep[realised here by a radial-tangential model,][]{Gray75} and by fitting weak metal lines of the observed spectra, we can find well-fitting values to within about 10\% uncertainty for both $\varv \sin i$ and $\zeta$. As has been pointed out in previous studies, different pairs of values of these parameters can lead to similarly satisfactory fits to individual lines \citep[e.g.][]{Ryansetal02,FiPr12,SiHe14}. However, this ambiguity of solutions may be minimised using suitable line blends; see, for example, Fig.~11 of \citet{Przybillaetal06} or Fig.~5 of \citet{FiPr12}. The existence of non-rotational broadening in B-type supergiants is well established. Physically, surface motions due to a sub-surface convection zone and stellar pulsations, among other phenomena, are subsumed in the macroturbulence parameter \citep{IACOBIII}. \subsubsection{Elemental abundances} Having derived a consistent solution for all primary atmospheric parameters, abundances of all investigated elements and ions were once more determined in a final step, allowing a single synthetic spectrum to be fitted consistently to the observed spectrum. The final abundances of the individual elements were then computed as the arithmetic mean of the entire sample of fitted lines and the respective uncertainty as the $1\sigma$ standard deviation. For the specification of the abundance of an element $X$, the customary logarithmic scale normalised to 12 was chosen, such that $\varepsilon(X)$\,=\,$\log(X/H)$\,+\,12. A selected sub-sample of lines in one of the analysed spectra is shown in Fig.~\ref{fig:sample_plot} compared to the best-fitting model. The simultaneous reproduction of the Balmer lines, and of the helium and metal lines of different ionisation stages regardless of individual strength, demonstrates the consistency of the derived solution.
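The $\varepsilon(X)$ scale and the metallicity by mass $Z$ quoted in Table~\ref{tab:abundances} are linked by summing the number fractions $10^{\varepsilon(X)-12}$ weighted by atomic mass. A minimal sketch with the tabulated values for HD~7902 (ID\#1) and $y$\,=\,0.089; atomic weights are rounded, and the simple H+He+metals normalisation is our assumption of the standard mass-fraction definition:

```python
# Metallicity by mass from the logarithmic abundances eps(X) = log(X/H) + 12,
# illustrated with the tabulated values for HD 7902 (ID#1) and y = n(He)/n(H)
# = 0.089. Reproduces the tabulated Z = 0.014 under the standard definition.
eps = {'C': 8.25, 'N': 8.27, 'O': 8.75, 'Ne': 7.96, 'Mg': 7.51,
       'Al': 6.44, 'Si': 7.54, 'S': 6.96, 'Ar': 6.41, 'Fe': 7.59}
weight = {'C': 12.011, 'N': 14.007, 'O': 15.999, 'Ne': 20.180, 'Mg': 24.305,
          'Al': 26.982, 'Si': 28.085, 'S': 32.06, 'Ar': 39.948, 'Fe': 55.845}
y_he = 0.089

metal_mass = sum(10.0**(eps[x] - 12.0) * weight[x] for x in eps)  # per H atom
total_mass = 1.008 + y_he * 4.003 + metal_mass                    # H + He + Z
Z = metal_mass / total_mass
print(f"Z = {Z:.3f}")
```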
\subsection{Stellar mass and age}\label{section:masses_radii_method} The derivation of the spectroscopically accessible atmospheric parameters (in conjunction with the photometric data) allowed the determination of stellar masses. Effective temperature and surface gravity alone suffice to derive the initial, or zero-age main-sequence stellar mass $M_{\mathrm{ZAMS}}$. For this, we define the spectroscopic luminosity $\mathscr{L}$ as \begin{equation}\label{eq:spectroscopic_luminosity} \mathscr{L}/\mathscr{L}_{\odot} = \frac{\left(T_{\mathrm{eff}}/T_{\mathrm{eff},\odot}\right)^4}{\left(g/g_{\odot}\right)}\,, \end{equation} \citep{LaKu14}, where the values for the solar effective temperature and surface gravity are $T_{\mathrm{eff},\odot}$\,=\,5777\,K and $\log g_{\odot}$\,=\,4.44. Figure~\ref{fig:stellar_evolution} shows the sample stars in a 'spectroscopic' Hertzsprung-Russell diagram (sHRD) with tracks of stellar evolution models according to \cite{Ekstroemetal12}, which trace the spectroscopic luminosity as a function of logarithmic effective temperature. Interpolating on this grid of rotating models, we derived $M_{\mathrm{ZAMS}}$. Under the assumption of a normal, single-star, red-ward evolution in the HRD we then interpolated the model tracks in effective temperature to estimate the objects' current masses $M_{\mathrm{evol}}$ in their respective evolved state. Depending on the initial mass of the objects, the evolved stars have lost more than 2\,$M_{\odot}$ through their stellar winds at the high-mass limit of the sample, and negligibly little ($<$\,0.05\,$M_{\odot}$) at the low-mass limit. From a comparison of rotating versus non-rotating models, it is apparent that the derived masses depend on the initial rotational velocity. Specifically, we find that values of $M_{\mathrm{evol}}$ -- as derived from non-rotating models -- are larger by up to 7\,$M_{\odot}$ at the high-mass limit and by about 1\,$M_\odot$ at the low-mass limit of our sample.
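Equation~(\ref{eq:spectroscopic_luminosity}) needs only the two spectroscopic quantities. A minimal sketch evaluating it in logarithmic form for the parameters of ID\#1 (Table~\ref{tab:stellar_parameters}); note that $\mathscr{L}$ is distinct from the true luminosity $L$ listed there:

```python
import math

# Spectroscopic luminosity from effective temperature and surface gravity
# alone, log10(L_spec/L_spec_sun) = 4 log(Teff/Teff_sun) - (log g - log g_sun),
# with the solar reference values quoted in the text.
TEFF_SUN, LOGG_SUN = 5777.0, 4.44

def log_spec_luminosity(teff, logg):
    """log10 of the spectroscopic luminosity in solar units."""
    return 4.0 * math.log10(teff / TEFF_SUN) - (logg - LOGG_SUN)

# Example: HD 7902 (ID#1), Teff = 14.1 kK, log g = 2.13 (cgs).
# This is the sHRD ordinate, not the true log L/Lsun of the table.
print(f"{log_spec_luminosity(14100.0, 2.13):.2f}")
```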
The initial rotational velocities of the sample stars on the ZAMS are unknown. However, mass loss (and therefore angular momentum loss) and the expansion to supergiant dimensions lead to a strong reduction of the rotational velocities, for example from about 300\,km\,s$^{-1}$ on the ZAMS to about 50\,km\,s$^{-1}$ for the rotating models of \citet{Ekstroemetal12} at the boundary spanned by the stars ID\#2 to \#10 to \#12 in Fig.~\ref{fig:stellar_evolution}. As our sample stars show $\varv \sin i$-values between about 20 and 50\,km\,s$^{-1}$, one would expect them to stem from stars with roughly average rotation on the ZAMS, and we can exclude initially very slow rotators with confidence, as they would be seen near zero $\varv \sin i$~at the supergiant stage. We note, however, that the predictive power of the stellar evolution models needs to be treated with caution because of remaining uncertainties of the models. We therefore would like to stress that our results were obtained under the stated assumptions, and some small-scale systematic errors are likely to be present in the fundamental stellar parameters, but these cannot be quantified at the current time. Ages of the stars were derived from interpolation in the isochrones for the (rotating) stellar evolution models of \citet{Ekstroemetal12}. Again, a normal red-ward evolution as single stars was assumed for the sample objects. \subsection{SED fitting}\label{section:sed_fitting} To fit the multi-band photometry and UV-data, synthetic {\sc Atlas9}\footnote{The {\sc Atlas9} starting models are equivalent to {\sc Atlas12} models for the purpose of SED fitting, as the temperature structures are practically identical for nearly scaled-solar abundances (as realised here). Moreover, they can readily be employed for the comparison without requiring adjustment to the low-resolution observations.} model SEDs of all sample objects were reddened according to the mean extinction law of \cite{fitzpatrick99}.
Observed magnitudes were converted to fluxes using the zero points and reference fluxes of the SVO Filter Profile Service\footnote{\url{http://svo2.cab.inta-csic.es/theory/fps/}} \citep{svoI,svoII}. Models were then fitted for the two-parameter solution by \cite{fitzpatrick99} -- total-to-selective extinction $R_V$\,=\,$A_V/E\left(B-V\right)$ and colour excess $E\left(B-V\right)$ -- in order to match the observations. The visual extinction $A_V$ is then simply the product of the two parameters. A different approach had to be employed for HD~183143 because of a highly anomalous reddening law; see \citet{Ebenbichleretal22} for details. This method of examining the SED of our final solution generally worked very well and produced a high-precision characterisation of the interstellar medium along the sight lines towards the sample objects. In addition to being consistent with small uncertainties (see Sect.~\ref{section:sightlines}), the method can detect unusual features in the extinction curve, such as excess radiation in the WISE pass bands, hinting at black-body radiation contributions to the stellar SED. Specifically, this process can detect anomalies in the composition of the interstellar medium along these sight lines, as in the case of HD~183143 mentioned above.
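The two-parameter fit can be sketched as a grid search that reddens a model SED for trial $(R_V, E(B-V))$ pairs and keeps the best $\chi^2$ match. The power-law extinction curve below is a crude placeholder for the \cite{fitzpatrick99} law (whose shape itself depends on $R_V$), and the 'observed' fluxes are synthetic, so this is purely illustrative:

```python
import numpy as np

# Sketch of the two-parameter extinction fit: redden a model SED for trial
# (R_V, E(B-V)) pairs and keep the best chi^2 match to "observed" fluxes.
# The placeholder law below stands in for the Fitzpatrick (1999) curve.
wave = np.array([0.44, 0.55, 1.25, 2.2])   # B, V, J, K central wavelengths (micron)

def a_lambda(wave, rv, ebv):
    """Placeholder law: A_lam/E(B-V) = R_V + k(lam), with k(B)=1, k(V)=0."""
    k = 2.971 * ((0.55 / wave)**1.3 - 1.0)
    return ebv * (rv + k)

def redden(flux, wave, rv, ebv):
    return flux * 10.0**(-0.4 * a_lambda(wave, rv, ebv))

model = np.array([4.0, 3.0, 0.8, 0.2])               # arbitrary model fluxes
observed = redden(model, wave, rv=3.16, ebv=0.58)    # synthetic "data"

grid_rv = np.arange(2.6, 4.01, 0.02)
grid_ebv = np.arange(0.3, 0.91, 0.01)
chi2 = [(rv, ebv, np.sum((redden(model, wave, rv, ebv) - observed)**2))
        for rv in grid_rv for ebv in grid_ebv]
best = min(chi2, key=lambda t: t[2])
print(f"best R_V = {best[0]:.2f}, E(B-V) = {best[1]:.2f}")
```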
\subsection{Spectroscopic distance}\label{sec:spec_dist_method} Having derived spectroscopic and fundamental parameters, as well as a precise reddening law for all sample objects, spectroscopic distances $d_{\mathrm{spec}}$ were calculated using an expression by \cite{Ramspecketal2001} \begin{equation}\label{eq:spec_dist} d_{\mathrm{spec}} = 7.11 \times 10^4 \sqrt{H_{\nu}~M_{\mathrm{evol}}~10^{0.4 m_{V_0}-\log g}}, \end{equation} where $H_\nu$ denotes the Eddington flux, given in units of erg\,cm$^{-2}$\,s$^{-1}$\,Hz$^{-1}$ at 550 nm, $M_{\mathrm{evol}}$ the evolutionary mass in units of $M_{\odot}$, $m_{V_0}$\,=\,$m_V-A_V$ the dereddened Johnson $V$ magnitude in mag, and $\log g$ the logarithmic surface gravity in cgs units. Equation \ref{eq:spec_dist} utilises the Vega flux calibration according to \cite{Heberetal84} and provides distances in units of pc. We have to stress once more that our $M_\mathrm{evol}$-values were derived under the assumption of the overall applicability of the evolution tracks for rotating stars by \citet{Ekstroemetal12}. As we have argued in Sect.~\ref{section:masses_radii_method}, the true initial rotational velocities of the sample stars are unknown; therefore, some additional systematic uncertainty applies to the spectroscopic distances according to Eq.~\ref{eq:spec_dist} that may either increase or decrease the derived value. These spectroscopic distances $d_{\mathrm{spec}}$ may be compared to distances $d_{\mathrm{Gaia}}$ derived from Gaia early data release 3 (EDR3) parallaxes \citep{Gaia2016,Gaia2020}. One potential issue is a mismatch of the Gaia distance with the spectroscopic distance because of a biased evolutionary mass; however, the effect is only of order $\propto$\,$M_\mathrm{evol}^{1/2}$. Conversely, such a comparison can uncover potentially undetected problems with the spectroscopic analysis.
For instance, widely diverging distance estimates can hint at an incorrect value for the surface gravity, as this parameter contributes most of the uncertainty to the equation. It can, however, also uncover an unusual evolutionary development of the object in question (see Sect.~\ref{section:spectroscopic_distances}). Gaia EDR3 parallaxes may also be affected by bias, such as increased uncertainties for the five brightest supergiants of the sample with Gaia $G$ magnitude smaller than 6, or for objects with a large renormalised unit weight error (RUWE), like HD~51309 (ID\#9), HD~125288 (ID\#12), and HD~164353 (ID\#14), which have a RUWE of about 2 -- all other objects have RUWE-values around 1. \subsection{Bolometric correction, luminosity and radius}\label{section:bolometric_correction} For the calculation of the bolometric correction $B.C.$, we defined the bolometric magnitude $m_{\mathrm{bol}}$ for each star by direct integration of its {\sc Atlas9} model SED over all wavelengths, with the integration constant chosen such that a solar {\sc Atlas9} model satisfies $M_{\mathrm{bol},\odot}$\,=\,4.74 \citep[see][]{Besseletal98}. The $B.C.$ was then calculated as the difference between $m_{\mathrm{bol}}$ and the synthetic $m_V$. In order to determine the stellar luminosity $L$, the absolute $V$-band magnitude $M_V$ was calculated from the observed apparent magnitude $m_V$ \citep{Mermilliod97} utilising the derived spectroscopic distances $d_{\mathrm{spec}}$ (Sect.~\ref{sec:spec_dist_method}), as well as the total-to-selective extinction $R_V$, and colour excess $E(B-V)$ (Sect.~\ref{section:sed_fitting}) in the distance modulus. Correction of $M_V$ by $B.C.$ yielded the absolute bolometric magnitude $M_{\mathrm{bol}}$, from which $L$ was derived using the above value for $M_{\mathrm{bol},\odot}$. The effective temperature and luminosity were finally utilised to determine the stellar radius $R$ by application of the Stefan-Boltzmann law.
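The chain from Eq.~(\ref{eq:spec_dist}) through the distance modulus to the Stefan-Boltzmann radius can be sketched compactly. The $H_\nu$ value below is an illustrative placeholder (not a tabulated model flux), so only the $\sqrt{M_{\mathrm{evol}}}$ scaling of $d_{\mathrm{spec}}$ is demonstrated; the radius step is checked against the tabulated values for ID\#1 in Table~\ref{tab:stellar_parameters}:

```python
import math

# Sketch of the distance -> luminosity -> radius chain: spectroscopic
# distance, distance modulus with B.C., and the Stefan-Boltzmann law.
# Solar reference values as quoted in the text.
M_BOL_SUN, TEFF_SUN = 4.74, 5777.0

def d_spec(h_nu, m_evol, m_v0, logg):
    """Spectroscopic distance in pc (Ramspeck et al. 2001 calibration);
    h_nu is the Eddington flux at 550 nm in erg cm^-2 s^-1 Hz^-1."""
    return 7.11e4 * math.sqrt(h_nu * m_evol * 10.0**(0.4 * m_v0 - logg))

def log_luminosity(m_v0, d_pc, bc):
    """log10(L/Lsun) from dereddened V magnitude, distance and B.C."""
    M_bol = m_v0 - 5.0 * math.log10(d_pc / 10.0) + bc
    return (M_BOL_SUN - M_bol) / 2.5

def radius(log_l, teff):
    """R/Rsun via the Stefan-Boltzmann law, L = 4 pi R^2 sigma Teff^4."""
    return math.sqrt(10.0**log_l) * (TEFF_SUN / teff)**2

# d_spec scales as sqrt(M_evol): doubling the mass increases d by sqrt(2).
# (h_nu below is an illustrative placeholder, not a tabulated model flux.)
d1 = d_spec(1.0e-4, 19.2, 5.0, 2.13)
d2 = d_spec(1.0e-4, 2 * 19.2, 5.0, 2.13)

# Consistency with the tabulated ID#1 values: log L/Lsun = 5.16 and
# Teff = 14.1 kK give R ~ 64 Rsun, as listed.
print(f"{d2 / d1:.3f}, R = {radius(5.16, 14100.0):.0f} Rsun")
```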
\begin{table*} \caption{Stellar parameters of the sample objects.} \label{tab:stellar_parameters} \centering {\small \setlength{\tabcolsep}{1.6mm} \begin{tabular}{rlr@{\hspace{0.1mm}}rrcrrrlccrrrrrr} \hline\hline ID\# & Object & & $T_{\mathrm{eff}}$ & $\log g$ & $y$ & $\xi$ & $\varv \sin i$ & $\zeta$ & $R_V$ & $E\left(B-V\right)$ & $B.C.$ & $M_{\mathrm{evol}}$ & $R$ & $\log L/L_\sun$ & $\log \tau_\mathrm{evol}$ & $d_{\mathrm{spec}}$ & $d_{\mathrm{Gaia}}$\tablefootmark{a} \\ \cline{7-9} & & & kK & (cgs) & & \multicolumn{3}{c}{$\mathrm{km\,s}^{-1}$} & & mag & mag & $M_{\odot}$ & $R_{\odot}$ & & yr & pc & pc \\ \hline 1 & HD 7902 & & 14.1 & 2.13 & 0.089 & 9 & 36 & 35 & 3.16 & 0.58 & $-0.999$ & 19.2 & 64 & 5.16 & 6.98 & 2900 & 2487 \\ & & $\pm$ & 0.2 & 0.05 & 0.006 & 2 & 5 & 5 & 0.1 & 0.03 & & 0.8 & 7 & 0.09 & 0.3 & 280 & $^{110}_{80}$ \\ 2 & HD 14818 & & 18.6 & 2.45 & 0.095 & 14 & 48 & 40 & 3.09 & 0.56 & $-1.645$ & 23.6 & 49 & 5.41 & 6.89 & 2150 & 2121 \\ & & $\pm$ & 0.3 & 0.07 & 0.008 & 2 & 6 & 5 & 0.1 & 0.03 & & 1.9 & 6 & 0.11 & 0.03 & 250 & $^{150}_{110}$ \\ 3 & HD 25914 & & 13.6 & 1.81 & 0.092 & 11 & 35 & 40 & 2.97 & 0.76 & $-0.943$ & 25.7 & 106 & 5.54 & 6.86 & 6030 & 5431 \\ & & $\pm$ & 0.2 & 0.06 & 0.004 & 2 & 5 & 5 & 0.1 & 0.03 & & 1.6 & 13 & 0.10 & 0.03 & 640 & $^{790}_{440}$ \\ 4 & HD 36371 & & 14.6 & 2.11 & 0.086 & 11 & 36 & 35 & 3.35 & 0.52 & $-1.081$ & 21.1 & 68 & 5.28 & 6.94 & 1200 & 1214 \\ & & $\pm$ & 0.3 & 0.06 & 0.005 & 2 & 5 & 5 & 0.1 & 0.03 & & 1.2 & 8 & 0.10 & 0.03 & 130 & $^{380}_{220}$ \\ 5 & HD 183143 & & 12.8 & 1.76 & 0.099 & 7 & 37 & 27 & 3.3 & 1.22 & $-0.793$ & 24.2 & 109 & 5.46 & 6.88 & 1530 & 2168 \\ & & $\pm$ & 0.2 & 0.05 & 0.005 & 2 & 5 & 5 & 0.1 & 0.03 & & 1.4 & 15 & 0.11 & 0.03 & 170 & $^{120}_{120}$ \\ 6 & HD 184943 & & 11.9 & 1.88 & 0.099 & 9 & 35 & 25 & 2.97 & 0.84 & $-0.600$ & 17.7 & 82 & 5.07 & 7.02 & 4040 & 4090 \\ & & $\pm$ & 0.2 & 0.05 & 0.002 & 2 & 6 & 5 & 0.1 & 0.03 & & 0.8 & 10 & 0.10 & 0.04 & 400 & $^{240}_{240}$ \\ 7 & HD 
191243 & & 14.0 & 2.64 & 0.087 & 8 & 27 & 25 & 2.88 & 0.33 & $-0.972$ & 11.0 & 27 & 4.39 & 7.32 & 1220 & 1205 \\ & & $\pm$ & 0.3 & 0.06 & 0.011 & 2 & 6 & 5 & 0.1 & 0.03 & & 0.5 & 3 & 0.09 & 0.05 & 120 & $^{30}_{30}$ \\ 8 & HD 199478 & & 12.7 & 1.76 & 0.107 & 8 & 40 & 40 & 3.03 & 0.62 & $-0.783$ & 24.0 & 111 & 5.46 & 6.88 & 2440 & 2423 \\ & & $\pm$ & 0.2 & 0.05 & 0.004 & 2 & 6 & 5 & 0.1 & 0.03 & & 1.3 & 12 & 0.09 & 0.03 & 230 & $^{230}_{220}$ \\ 9 & HD 51309 & & 15.6 & 2.59 & 0.081 & 10 & 30 & 35 & 3.08 & 0.11 & $-1.236$ & 13.7 & 32 & 4.72 & 7.17 & 950 & 1108 \\ & & $\pm$ & 0.4 & 0.05 & 0.003 & 2 & 6 & 5 & 0.1 & 0.03 & & 0.5 & 4 & 0.09 & 0.04 & 90 & $^{410}_{210}$ \\ 10 & HD 111990 & & 17.2 & 2.62 & 0.089 & 12 & 40 & 40 & 3.3 & 0.45 & $-1.464$ & 16.1 & 33 & 4.94 & 7.07 & 1940 & 2418 \\ & & $\pm$ & 0.3 & 0.05 & 0.003 & 2 & 6 & 5 & 0.1 & 0.03 & & 0.7 & 4 & 0.09 & 0.04 & 180 & $^{160}_{180}$ \\ 11 & HD 119646 & & 19.7 & 2.9 & 0.100 & 14 & 37 & 40 & 3.53 & 0.34 & $-1.781$ & 15.4 & 23 & 4.87 & 7.10 & 1620 & 1721 \\ & & $\pm$ & 0.2 & 0.07 & 0.009 & 2 & 6 & 5 & 0.1 & 0.03 & & 1.2 & 3 & 0.11 & 0.05 & 190 & $^{80}_{70}$ \\ 12 & HD 125288 & & 13.7 & 2.77 & 0.094 & 6 & 23 & 30 & 3.65 & 0.30 & $-0.934$ & 9.3 & 21 & 4.14 & 7.46 & 390 & 438 \\ & & $\pm$ & 0.3 & 0.05 & 0.007 & 2 & 4 & 5 & 0.1 & 0.03 & & 0.3 & 2 & 0.09 & 0.05 & 40 & $^{50}_{30}$ \\ 13 & HD 159110 & & 19.5 & 3.42 & 0.095 & 3 & 17 & 15 & 3.3 & 0.22 & $-1.826$ & 9.3 & 10 & 4.11 & 7.45 & 1290 & 1362 \\ & & $\pm$ & 0.3 & 0.05 & 0.001 & 2 & 4 & 5 & 0.1 & 0.03 & & 0.3 & 1 & 0.09 & 0.05 & 120 & $^{80}_{70}$ \\ 14 & HD 164353 & & 14.7 & 2.57 & 0.091 & 8 & 20 & 32 & 3.61 & 0.19 & $-1.081$ & 12.6 & 31 & 4.60 & 7.22 & 620 & 797 \\ & & $\pm$ & 0.3 & 0.05 & 0.004 & 2 & 4 & 5 & 0.1 & 0.03 & & 0.4 & 4 & 0.09 & 0.04 & 60 & $^{200}_{130}$ \\ \hline \end{tabular} \tablefoot{Uncertainties are 1$\sigma$-values, except where noted otherwise. 
\tablefoottext{a}{\cite{Gaia2016,Gaia2020} - distances and uncertainties correspond to 'photogeometric distances' and associated $14^{\mathrm{th}}$ and $86^{\mathrm{th}}$ confidence percentiles \citep{Bailer-Jones_etal_2021}.} }} \end{table*} \section{Results}\label{section:results} The results of the analysis of the sample stars are summarised in Table~\ref{tab:stellar_parameters}. The parameters listed are: internal identification number, HD-designation, effective temperature, surface gravity, surface helium abundance, microturbulent, projected rotational and macroturbulent velocities, total-to-selective extinction parameter, colour excess, bolometric correction, evolutionary mass, radius, luminosity, evolutionary age, spectroscopic and Gaia EDR3 distances \citep[probabilistic estimations of 'photogeometric' distances,][]{Bailer-Jones_etal_2021}. The respective uncertainties, given in the line below the observed values, denote $1\sigma$-intervals. \subsection{Atmospheric and fundamental stellar parameters}\label{section:stellar_parameters} For the effective temperature and surface gravity, the uncertainties roughly match the values derived in previous work analysing BA-type supergiants with a similar analysis approach as employed here \citep{FiPr12}, with $\delta T_{\mathrm{eff}}$\,$\approx$\,1--3\% and $\Delta \log g$\,$\approx$\,0.05\,dex. Abundances of helium were fitted with uncertainties of $\delta y$\,$\approx$\,5--15\% owing to the line-to-line scatter from the weakest \ion{He}{i} features analysed, that is those most sensitive to abundance variations. The uncertainty of the microturbulent velocity is generally limited by the size of the grid used during the fitting process. Though observed values of $\xi$ were in some cases inspected on scales of 1\,km\,s$^{-1}$, a conservative estimate of $\Delta \xi$\,$\approx$\,2\,km\,s$^{-1}$ is adopted throughout. 
Projected rotational velocities show relative uncertainties amounting to typically $\delta \varv \sin i$\,$\approx$\,10--15\% owing to the degeneracy in the joint derivation with macroturbulent velocities. For the macroturbulence $\zeta$, the uncertainty was estimated at 5\,km\,s$^{-1}$. A similar but weaker ambiguity in deduced values is generally present in the estimation of the total-to-selective extinction parameter and the colour excess. The fitting procedure described in Sect.~\ref{section:sed_fitting} produces error margins of the order of $\Delta R_V$\,$\approx$\,0.1 and $\Delta E(B-V)$\,$\approx$\,0.03\,mag. Although the exact margins in this derivation depend on the available data (in particular in the UV-range), these values generally represent the typical uncertainties for the entire sample. For the derivation of the $B.C.$~no detailed analysis of uncertainties was conducted, though variation of parameters in input models hinted at an uncertainty range of $\Delta B.C.$\,$\approx$\,0.04--0.06\,mag for both hotter and cooler sample stars. The evolved mass $M_{\mathrm{evol}}$ was derived from the ZAMS mass estimates by tracing the mass loss along the evolution tracks of \cite{Ekstroemetal12} (see Sect.~\ref{section:mass_estimates_and_discrepancy}), such that the uncertainties of the evolved masses are assumed to be identical to the ZAMS mass uncertainty of about $\delta M_{\mathrm{ZAMS}}$\,=\,$\delta M_{\mathrm{evol}}$\,$\approx$\,5\%. Radii of sample objects show relative errors of about $\delta R$\,$\approx$\,10\%, stemming largely from the associated uncertainty in luminosity, which amounts to typically $\Delta \log L/L_{\odot}$\,$\approx$\,0.1\,dex.
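Since $R \propto \sqrt{L}/T_{\mathrm{eff}}^2$, the quoted $\delta R$ follows from straightforward Gaussian propagation; a minimal sketch, taking $\Delta\log L$\,=\,0.1\,dex from above and a 2\% temperature uncertainty (the upper end of the range quoted earlier) as illustrative inputs:

```python
import math

# Propagating luminosity and temperature uncertainties into the radius:
# R ~ sqrt(L)/Teff^2, so dR/R = sqrt((ln10/2 * dlogL)^2 + (2 dT/T)^2).
# Inputs below are the typical uncertainties quoted in the text.
def rel_radius_error(dlog_l, rel_teff):
    return math.hypot(0.5 * math.log(10.0) * dlog_l, 2.0 * rel_teff)

print(f"{rel_radius_error(0.10, 0.02):.0%}")  # dlogL ~ 0.1 dex, dT/T ~ 2%
```

The result, of order 10\%, is dominated by the luminosity term, consistent with the statement above.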
The distances derived in this work show consistent relative uncertainties of $\delta d_{\mathrm{spec}}$\,$\approx$\,10\%, matching the sample mean relative difference between the deduced values and those derived from parallactic distances by the Gaia mission (see Sect.~\ref{section:spectroscopic_distances}). The fundamental parameters can be expected to be subject to a small amount of additional systematic error because a set of stellar evolution models for a particular rotational velocity were adopted, see the discussion in Sect.~\ref{section:masses_radii_method}. \begin{table*} \caption{Metal abundances $\varepsilon (X)$\,=\,$\log (X/\mathrm{H})+12$ and metallicity $Z$ (by mass) of the sample objects.} \label{tab:abundances} \centering {\small \setlength{\tabcolsep}{1.5mm} \begin{tabular}{lll@{\hspace{0.1mm}}llllllllllc} \hline\hline ID\# & Object& & C & N & O & Ne & Mg & Al & Si & S & Ar & Fe & $Z$ \\ \hline 1 & HD 7902 & & 8.25 (9) & 8.27 (27) & 8.75 (17) & 7.96 (12) & 7.51 (8) & 6.44 (5) & 7.54 (10) & 6.96 (13) & 6.41 (4) & 7.59 (24) & 0.014 \\ & & $\pm$ & 0.04 & 0.06 & 0.05 & 0.05 & 0.07 & 0.09 & 0.09 & 0.07 & 0.07 & 0.11 & 0.002 \\ 2 & HD 14818 & & 8.00 (9) & 8.33 (24) & 8.50 (22) & 8.08 (6) & 7.49 (4) & 6.17 (5) & 7.62 (8) & 6.94 (5) & 6.54 (4) & 7.37 (17) & 0.011 \\ & & $\pm$ & 0.05 & 0.09 & 0.06 & 0.05 & 0.11 & 0.05 & 0.07 & 0.03 & 0.05 & 0.07 & 0.002 \\ 3 & HD 25914 & & 8.09 (8) & 8.22 (25) & 8.58 (18) & 7.96 (11) & 7.28 (3) & 6.19 (4) & 7.29 (9) & 6.72 (10) & 6.38 (3) & 7.50 (22) & 0.010 \\ & & $\pm$ & 0.09 & 0.09 & 0.05 & 0.06 & 0.11 & 0.06 & 0.08 & 0.09 & 0.06 & 0.07 & 0.002 \\ 4 & HD 36371 & & 8.10 (11) & 8.33 (27) & 8.57 (26) & 8.06 (14) & 7.40 (5) & 6.34 (3) & 7.44 (10) & 6.96 (14) & 6.34 (7) & 7.48 (26) & 0.012 \\ & & $\pm$ & 0.08 & 0.06 & 0.07 & 0.05 & 0.07 & 0.02 & 0.08 & 0.09 & 0.05 & 0.11 & 0.002 \\ 5 & HD 183143 & & 8.31 (9) & 8.69 (26) & 8.78 (12) & 8.09 (12) & 7.69 (6) & 6.44 (6) & 7.58 (7) & 7.10 (10) & 6.56 (2) & 7.66 (26) & 0.018 \\ & & 
$\pm$ & 0.07 & 0.06 & 0.05 & 0.07 & 0.09 & 0.06 & 0.06 & 0.08 & 0.08 & 0.10 & 0.002 \\ 6 & HD 184943 & & 8.43 (6) & 8.63 (19) & 8.84 (8) & 8.02 (12) & 7.63 (4) & 6.47 (7) & 7.66 (7) & 7.05 (12) & 6.63 (2) & 7.75 (16) & 0.019 \\ & & $\pm$ & 0.04 & 0.06 & 0.06 & 0.05 & 0.01 & 0.08 & 0.08 & 0.07 & 0.12 & 0.10 & 0.002 \\ 7 & HD 191243 & & 8.28 (9) & 8.24 (35) & 8.72 (27) & 7.94 (14) & 7.57 (9) & 6.34 (4) & 7.52 (10) & 6.98 (13) & 6.39 (7) & 7.62 (31) & 0.014 \\ & & $\pm$ & 0.08 & 0.08 & 0.08 & 0.05 & 0.05 & 0.13 & 0.06 & 0.07 & 0.07 & 0.09 & 0.002 \\ 8 & HD 199478 & & 8.20 (6) & 8.63 (26) & 8.74 (15) & 7.99 (12) & 7.64 (5) & 6.40 (4) & 7.58 (7) & 7.11 (13) & 6.54 (5) & 7.76 (23) & 0.016 \\ & & $\pm$ & 0.05 & 0.06 & 0.08 & 0.05 & 0.08 & 0.09 & 0.06 & 0.08 & 0.09 & 0.09 & 0.002 \\ 9 & HD 51309 & & 8.29 (12) & 8.23 (27) & 8.71 (24) & 8.02 (11) & 7.52 (8) & 6.52 (3) & 7.56 (9) & 7.03 (13) & 6.41 (9) & 7.55 (28) & 0.014 \\ & & $\pm$ & 0.06 & 0.05 & 0.07 & 0.05 & 0.09 & 0.10 & 0.06 & 0.08 & 0.09 & 0.16 & 0.002 \\ 10 & HD 111990 & & 8.13 (13) & 8.21 (31) & 8.65 (17) & 8.06 (12) & 7.50 (3) & 6.17 (6) & 7.50 (7) & 7.07 (11) & 6.44 (10) & 7.49 (22) & 0.012 \\ & & $\pm$ & 0.06 & 0.06 & 0.08 & 0.07 & 0.06 & 0.11 & 0.05 & 0.09 & 0.08 & 0.07 & 0.002 \\ 11 & HD 119646 & & 8.24 (15) & 8.02 (24) & 8.65 (16) & 8.15 (11) & 7.51 (5) & 6.22 (5) & 7.54 (7) & 7.01 (4) & 6.47 (7) & 7.41 (19) & 0.012 \\ & & $\pm$ & 0.08 & 0.06 & 0.05 & 0.04 & 0.06 & 0.08 & 0.03 & 0.09 & 0.05 & 0.06 & 0.002 \\ 12 & HD 125288 & & 8.35 (8) & 8.50 (29) & 8.80 (20) & 8.06 (12) & 7.54 (10) & 6.26 (6) & 7.62 (11) & 7.11 (13) & 6.52 (10) & 7.60 (34) & 0.017 \\ & & $\pm$ & 0.07 & 0.07 & 0.06 & 0.06 & 0.09 & 0.07 & 0.09 & 0.05 & 0.08 & 0.08 & 0.002 \\ 13 & HD 159110 & & 8.53 (19) & 7.92 (35) & 8.85 (22) & 8.10 (21) & 7.49 (10) & 6.34 (5) & 7.54 (7) & 7.20 (16) & 6.54 (16) & 7.55 (21) & 0.016 \\ & & $\pm$ & 0.06 & 0.05 & 0.07 & 0.07 & 0.06 & 0.04 & 0.10 & 0.09 & 0.06 & 0.08 & 0.002 \\ 14 & HD 164353 & & 8.31 (10) & 8.37 
(33) & 8.81 (20) & 8.05 (19) & 7.51 (7) & 6.32 (4) & 7.65 (10) & 7.11 (12) & 6.47 (12) & 7.63 (32) & 0.016 \\ & & $\pm$ & 0.04 & 0.06 & 0.06 & 0.06 & 0.05 & 0.04 & 0.10 & 0.07 & 0.08 & 0.11 & 0.002 \\ \hline & CAS~$^{a,b}$ & & 8.35 & 7.79 & 8.76 & 8.09 & 7.56 & 6.30 & 7.50 & 7.14 & 6.50 & 7.52 & 0.014\\ & & $\pm$ & 0.04 & 0.04 & 0.05 & 0.05 & 0.05 & 0.07 & 0.06 & 0.06 & 0.08 & 0.03 & 0.002\\ \hline \end{tabular} \tablefoot{Uncertainties are 1$\sigma$-values from the line-to-line scatter. Numbers in parentheses quantify the analysed lines.\\ $^{(a)}$~\citet{NiPr12}~~~$^{(b)}$~\citet{Przybillaetal13}}} \end{table*} \subsection{Comparison with previous analyses}\label{section:comparison_previous_analyses} Many of our sample stars were analysed in previous studies that employed full non-LTE model atmospheres. For comparison, data from the following studies were considered:\\ {\sc i)} \cite{MaPu08} employed {\sc Fastwind} for their analyses. They utilised hydrogen, helium, and \ion{Si}{ii/iii/iv} lines to derive temperature, surface gravity, and microturbulence iteratively. The derivation of projected rotational velocities was based on the analysis of the shape of the Fourier transform (FT) of absorption lines \citep{Gray75, SiHe07}. Two objects are in common.\\ {\sc ii)} \cite{Searle_etal_08} used the stellar atmosphere codes {\sc Tlusty} and {\sc Cmfgen} \citep{HiMi98}. To estimate the temperature, the diagnostic silicon lines of \ion{Si}{iv} 4089 and \ion{Si}{iii} 4552--4574\,{\AA} were used in supergiants of spectral types B0 to B2 and \ion{Si}{ii} 4128--4130 and \ion{Si}{iii} 4552--4574\,{\AA} for B2.5 to B5 supergiants. The luminosity was then constrained by inferred values of the absolute visual magnitude $M_V$ and corrected if necessary. Surface gravity $\log g$ was determined by fitting H$\gamma$ and H$\delta$. The microturbulent velocity was determined by analysing the \ion{Si}{iii} triplet lines. 
Three objects are in common.\\ {\sc iii)} \cite{Fraser_etal_10} used the hydrostatic line-blanketed non-LTE codes {\sc Tlusty} and {\sc Synspec} \citep{hubeny_88,HuLa95} that consider plane-parallel geometry. Effective temperatures were estimated on the basis of silicon ionisation equilibria and surface gravities from a fit of the H$\gamma$ and H$\delta$ lines. For the determination of microturbulence, they relied solely on the analysis of the \ion{Si}{iii} triplet at 4552--4574\,{\AA}, while projected rotational velocities were derived using the FT method. Six objects are shared.\\ {\sc iv)} \cite{IACOBIII} used the hydrodynamic line-blanketed non-LTE code {\sc Fastwind} \citep{Santolaya-Rey97,Pulsetal05} that accounts for spherical geometry, following the spectroscopic analysis strategy described by \cite{Castroetal12}. They analysed H$\beta$, H$\gamma$, H$\delta$, multiple lines of \ion{He}{i}, as well as the silicon multiplets \ion{Si}{ii} 4128--4130\,{\AA}, \ion{Si}{iii} 4552--4574\,{\AA}, and \ion{Si}{iv}\,4116\,{\AA} to derive $T_\mathrm{eff}$ and $\log g$. For the derivation of projected rotational velocities they used the {\sc iacob-broad} tool \citep{SiHe14} on lines of O, Si, Mg, and C, depending on the spectral type of the star. Eight objects are common to the present work. Furthermore, a sample of 25 O9.5--B3 Galactic supergiants was analysed by \citet{Crowther_etal_06} based on {\sc Cmfgen} models. We do not compare with this paper since it has only one object (HD~14818) in common with the present work. Figure~\ref{fig:comparison_combined}, panel a, shows a comparison of the effective temperatures from the literature $T_{\mathrm{eff}}^{\mathrm{lit}}$ with those derived here.
An overall good correlation is found with a mean relative difference $\delta T_{\mathrm{eff}}^{\mathrm{comp}}$\,=\,$\frac{1}{N-1}\sum^{N}_{i=1} (T_{\mathrm{eff},i} - T_{\mathrm{eff},i}^{\mathrm{lit}})/T_{\mathrm{eff},i}^{\mathrm{lit}}$ of less than 1\%, and a standard deviation of 5\%. Two objects, HD~119646 and HD~183143 (marked with open symbols), deviate by $\sim$12\% from the present-work $T_\mathrm{eff}$, one towards higher and one towards lower values. Some objects are present in two or more of the cited studies, in which case the objects are depicted as diamonds. For such objects the values never scatter by more than about 1000\,K. The comparison of surface gravities is shown graphically in Fig.~\ref{fig:comparison_combined}, panel b. Again, values occurring in more than one study are depicted as diamonds, but in this comparison, some objects show a larger scatter among the various studies. For HD~164353 estimates range from $\log g$\,=\,2.46 in \cite{IACOBIII} and $\log g$\,=\,2.50 in \cite{Fraser_etal_10} to $\log g$\,=\,2.75 in \cite{Searle_etal_08}; for HD~191243 \cite{IACOBIII} derive $\log g$\,=\,2.45, while \cite{MaPu08} find $\log g$\,=\,2.61 and \cite{Searle_etal_08} again give the largest value of $\log g$\,=\,2.75, that is, differences of up to a factor of two in $g$. Treating the literature values as one complete comparison set, a systematic offset towards higher $\log g$ values in the present work may be noticed. One may speculate that this trend is due to the inclusion of $P_{\mathrm{turb}}$ in our analysis, resulting in a systematic increase of $\log g$. However, we have demonstrated that the effect of this term in the model is limited to $\sim$0.05\,dex even in objects with large microturbulent velocities close to the Eddington limit (see Sect.~\ref{subsection:turb_pressure}).
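The comparison statistic above can be sketched numerically. The temperature pairs below are purely hypothetical placeholders (the actual values are those plotted in Fig.~\ref{fig:comparison_combined}); a plain sample mean and sample standard deviation are used in this sketch.

```python
import numpy as np

# Hypothetical T_eff pairs for illustration only (not the paper's values).
teff_this = np.array([19800.0, 17500.0, 15200.0, 14100.0, 12700.0])  # present work [K]
teff_lit  = np.array([20100.0, 17300.0, 15600.0, 13900.0, 12500.0])  # literature [K]

rel = (teff_this - teff_lit) / teff_lit   # relative differences
mean_rel = rel.mean()                     # mean relative difference
scatter = rel.std(ddof=1)                 # sample standard deviation (scatter)
```

For these placeholder numbers the mean offset is far below 1% while the scatter is at the percent level, the same qualitative behaviour as reported for the real comparison.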
In light of the large uncertainties of the literature values, small number statistics, and general differences between the trends in the different comparison studies, we cannot draw definite conclusions on the origin of these differences. Figure~\ref{fig:comparison_combined}, panel c shows the comparison of microturbulent velocities. Both the correlation between literature and present values and the agreement among literature values from different studies are poor. In the case of HD~191243, a maximum difference of $\Delta \xi$\,=\,13\,km\,s$^{-1}$ is found between \cite{MaPu08} and our work on the one hand and \cite{Searle_etal_08} on the other. Overall, our assessments of microturbulent velocity are systematically lower by $\sim$10\,km\,s$^{-1}$ than the literature values, with the exception of \citet{MaPu08}. While our microturbulent velocities remain subsonic, literature values are often found to be supersonic. Such systematically lower microturbulent velocities were also found in previous work on B-type main-sequence stars \citep{NiPr12}, where a broad variety of microturbulence indicators was employed instead of the usual reliance on the \ion{Si}{iii} 4552--4574\,{\AA} triplet alone. We note that microturbulent velocities were not provided by \citet{IACOBIII}. Finally, a comparison of projected rotational velocities is shown in Fig.~\ref{fig:comparison_combined}, panel d. Good agreement is achieved overall, though there are some small-scale differences between the compared works. Values by \citet{IACOBIII} agree very well with ours, showing little to no offset and small scatter, while values derived in this work are systematically larger by $\sim$4\,km\,s$^{-1}$ in comparison with data of \cite{Fraser_etal_10}. The only significant outlier is the $\varv \sin i$-value of HD~191243 in the work by \cite{MaPu08}, which is likely spurious, given the good agreement of the corresponding value of \cite{IACOBIII} with ours.
\subsection{Elemental abundances and stellar metallicity}\label{section:abundances_and_metallicity} The mean abundances of all the metal species studied here (which constitute the ten most abundant elements besides hydrogen and helium) along with their uncertainties and the number of analysed lines are summarised in Table \ref{tab:abundances}. In addition, the resulting metallicities of the sample stars are shown. For a conservative estimate of the error margins the $1\sigma$ sample standard deviation of individual line abundances was chosen, as tests based on the statistical uncertainties of single lines resulted in unreasonably low margins. In general, these statistical uncertainties range from $\sim$0.05--0.10\,dex, and rarely exceed the latter value. The number of lines analysed per species and object is as low as two in only a few cases and usually much larger. Standard errors of the mean therefore amount to typically 0.02--0.03\,dex for the elemental abundances in each star. Metal mass fractions $Z$ ('metallicities') of the sample stars were calculated from the available metal abundances and are indicated in the last column of Table~\ref{tab:abundances}. As these cover the ten most abundant metal species, they should be representative of the sum of all metals. The systematic uncertainties depend primarily on the quality of the respective model atoms and on the uncertainties in effective temperatures, surface gravities, and microturbulent velocities, see for example the discussions by \citet{Przybillaetal00,Przybillaetal01a,Przybillaetal01b} and \citet{PrBu01}. Given the experience gained in these works, we expect the systematic uncertainties of the elemental abundances to amount to $\sim$0.1\,dex. The derivation of abundances for all chemical species that show spectral lines in the optical allowed global synthetic spectra to be calculated, that is, one model spectrum based on the derived atmospheric parameters and abundances per star.
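The conversion from number abundances to a metal mass fraction $Z$ can be sketched as follows. The element-to-value mapping for the CAS reference row of Table~\ref{tab:abundances} is an assumption here (C, N, O, Ne, Mg, Al, Si, S, Ar, Fe), and the helium mass fraction $Y$\,=\,0.276 is taken from the text; the abundances are on the usual scale $\varepsilon = \log(N_X/N_\mathrm{H}) + 12$.

```python
# Sketch: metal mass fraction Z from number abundances eps = log(N_X/N_H) + 12.
# Element identification for the CAS row is an assumption of this example.
masses = {'C': 12.011, 'N': 14.007, 'O': 15.999, 'Ne': 20.180,
          'Mg': 24.305, 'Al': 26.982, 'Si': 28.085, 'S': 32.06,
          'Ar': 39.948, 'Fe': 55.845}
eps = {'C': 8.35, 'N': 7.79, 'O': 8.76, 'Ne': 8.09, 'Mg': 7.56,
       'Al': 6.30, 'Si': 7.50, 'S': 7.14, 'Ar': 6.50, 'Fe': 7.52}

def metallicity(eps, Y=0.276):
    m_H = 1.008 * 10**12                               # hydrogen reference mass
    m_Z = sum(masses[el] * 10**eps[el] for el in eps)  # summed metal mass
    # with Y fixed, X + Z = 1 - Y, distributed according to the mass ratio
    return (1.0 - Y) * m_Z / (m_H + m_Z)

print(round(metallicity(eps), 3))   # 0.014, matching the tabulated CAS entry
```

The recovered value of 0.014 agrees with the CAS metallicity quoted in the table, which supports the assumed element mapping.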
This also includes the blended features that were excluded from the chemical analysis. As can be expected from the small abundance uncertainties, the reproduction of the observed spectra by the global synthetic spectrum is excellent overall, as shown for the exemplary case of HD~164353 in Appendix~\ref{section:appendixA}, Figs.~\ref{fig:HD164353_3900_4500} to \ref{fig:HD164353_8100_8700}. Apart from some occasional very weak features, for instance of \ion{S}{ii} where the model atom would need to be extended to include more energy levels, all important stellar spectral lines are included in the spectrum synthesis. Noticeable omissions are several interstellar ('IS') atomic features, such as the Ca H and K lines, the Na D lines, a \ion{K}{i} resonance line\footnote{Only the \ion{K}{i} $\lambda$7698.9\,{\AA} line is clearly visible in this case, while the other fine-structure component \ion{K}{i} $\lambda$7664.9\,{\AA} overlaps with a saturated telluric O$_2$ line \citep[see e.g.][]{Kimeswengeretal21}, which depends on the radial velocity of the target star.}, the diffuse interstellar bands (DIBs), and the telluric absorption features typically due to O$_2$ and H$_2$O bands that occur with increasing frequency towards the near-IR. Some residual problems remain for a few stellar lines, for example the mismatch of the H$\alpha$ Doppler core, which is likely caused by the (weak) stellar wind in this object and not accounted for by the present modelling approach. The widths of the two strongest \ion{He}{i} lines $\lambda$5875 and 6678\,{\AA} are not perfectly matched. It would certainly be worthwhile to investigate this further as the widths of all other helium lines are reproduced well, but this is beyond the scope of the present paper. A few metal lines also show somewhat larger deviations, such as the \ion{C}{ii} $\lambda$6578/82\,{\AA} doublet, which may hint at the possibility that the model atom may need to be improved with respect to these lines. 
However, in view of the overall solution these are minor details; the few discrepant features were not considered in the analysis. Previous work on abundances of early B-type stars in the solar neighbourhood (distances out to $\sim$400\,pc from the Sun) has found chemical homogeneity, establishing a present-day cosmic abundance standard \citep[CAS,][]{NiPr12,Przybillaetal13}, see Table~\ref{tab:abundances}. Such a comparison of abundances between the present sample stars and the CAS is inappropriate here because of the widely different distances of the sample objects from the Galactic centre (see Sect.~\ref{section:spectroscopic_distances}), for example $\sim$7\,kpc for HD~184943 versus $\sim$13\,kpc for HD~25914. For the same reason, an important test to verify the independence of abundances of atmospheric parameters such as $T_\mathrm{eff}$ and $\log g$ that could be made by \citet{NiPr12}, cannot be repeated here. We note, however, that the supergiants closest to the Sun in the sample, HD~125288 and HD~164353, are consistent with the CAS values within the mutual uncertainties, but they show overall larger abundances. In particular, the surface abundances of nitrogen show clear indications of mixing of the atmospheres with CN-processed material from the stellar cores. \subsection{Signatures of mixing with CNO-processed material}\label{section:CNO} Different physical mechanisms can lead to mixing of CNO-cycled matter from the stellar core to the surface of rotating stars. Examples are meridional circulation or shear mixing due to differential rotation \citep[e.g.][]{MaMe12,Langer12}, further modified by the presence of magnetic fields. As a consequence, ratios of the surface carbon, nitrogen, and oxygen mass fractions, and the helium mass fractions are expected to appear in relatively narrow regions in diagnostic diagrams \citep{Przybillaetal10,Maederetal14} as shown in Fig.~\ref{fig:cnoy}.
All ratios were normalised to the initial values so as to make the comparison to the evolution tracks easier -- the observations were normalised relative to CAS abundances (see Table~\ref{tab:abundances}, $Y_{\mathrm{ini}}$\,=\,0.276), the models to their respective (solar) initial values. As the $N/C$ versus~$N/O$ plot shows little dependence on the initial stellar masses, rotation velocities, and nature of the mixing processes up to relative enrichment of $N/O$ by a factor of about four, it constitutes an ideal quality test for observational results \citep{Maederetal14}. The CNO signatures of the present sample supergiants closely follow both the path predicted by models and the observational data of \citet{Przybillaetal10} and \citet{NiPr12}. This gives confidence that systematic errors in the present atmospheric parameters are indeed small. The star HD~159110 (ID\,\#13) appears at CAS initial values for CNO abundances, while HD~119646 (ID\,\#11) exhibits CNO abundances consistent with mixing signatures on the main sequence (i.e. relative enrichment of $(N/O)/(N/O)_\mathrm{ini}$\,$\lesssim$\,3). The majority of the sample stars are noticeably enriched in CN-processed matter, with enhancement almost reaching the high values observed for some of the more evolved BA-type supergiants. For most of the analysed objects, helium abundances are slightly lower than predicted by the models while being consistent with the initial helium abundance within the 1$\sigma$ uncertainties. Three of the sample objects (ID\#9, \#11, and in particular \#13) deviate somewhat from this value while they are expected to show no modification. This is true, however, only when the statistical uncertainties, which are displayed in Fig.~\ref{fig:cnoy}, are considered. Potential systematic errors also need to be taken into account, and we emphasise that the offset of ID\#13 from the initial value corresponds to only 0.02\,dex.
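The normalisation underlying the $N/C$ versus $N/O$ diagram can be sketched compactly: in ratios of mass fractions normalised to the initial (CAS) values, the atomic-mass factors cancel, so the logarithmic number abundances suffice. The CAS reference values and the mixed surface abundances below are illustrative assumptions, not values from Table~\ref{tab:abundances}.

```python
# Sketch: normalised surface ratios for the N/C vs. N/O diagnostic diagram.
def norm_ratio(eps_a, eps_b, eps_a_ini, eps_b_ini):
    """(A/B) / (A/B)_ini from abundances eps = log(N_X/N_H) + 12."""
    return 10**((eps_a - eps_b) - (eps_a_ini - eps_b_ini))

# Assumed CAS initial values: C 8.33, N 7.79, O 8.76 (illustrative).
eps_C_ini, eps_N_ini, eps_O_ini = 8.33, 7.79, 8.76
# Hypothetical mixed surface abundances of a supergiant:
eps_C, eps_N, eps_O = 8.20, 8.15, 8.70

n_over_c = norm_ratio(eps_N, eps_C, eps_N_ini, eps_C_ini)  # ~3.1
n_over_o = norm_ratio(eps_N, eps_O, eps_N_ini, eps_O_ini)  # ~2.6
```

An unmixed star returns exactly 1.0 in both coordinates, that is, it sits at the origin of the normalised diagram.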
Such differences are much smaller than the symbol sizes in the upper panel of Fig.~\ref{fig:cnoy}, stressing the enormous changes in mostly nitrogen (and to a lesser extent carbon and oxygen) abundances compared to the enrichment of helium, which is difficult to determine. The highest helium enrichment found in our sample stars is about 15\% above the initial value. This differs from the BA-type supergiants, which show larger enhancement values. We note that different helium lines were analysed by \citet{Przybillaetal10}, as many of the stars are cooler than the present sample stars. However, an investigation of the cause of these differences is beyond the scope of the present paper. \subsection{Spectroscopic distances}\label{section:spectroscopic_distances} The spectroscopic distances derived in this work depend on several parameters, deduced both from the quantitative spectral analysis and from fundamental parameters inferred on the basis of stellar evolution models, spectral synthesis codes, and photometric data (see Eqn.~\ref{eq:spec_dist}). As already mentioned in brief in Sect.~\ref{sec:spec_dist_method}, the comparison with independent distance estimations (e.g. Gaia EDR3) can provide valuable insight into systematic problems in the derivation of important parameters for the entire sample. It can, however, also highlight individual sample objects that may have undergone exceptional evolutionary pathways. Binarity (with and without associated mass transfer) and post-asymptotic giant branch evolutionary histories can leave signatures detectable in this approach. Regardless of whether a star has evolved 'normally' or not, it may be stated that the primary sources of uncertainty are evolutionary mass $M_{\mathrm{evol}}$ and surface gravity $\log g$, so that potential offsets in distances most probably stem from systematically biased parameters.
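The dependence of a spectroscopic distance on $M_{\mathrm{evol}}$ and $\log g$ can be sketched generically (the paper's Eqn.~\ref{eq:spec_dist} may be parametrised differently): the radius follows from $R = \sqrt{GM/g}$, the luminosity from $L = 4\pi R^2 \sigma T_\mathrm{eff}^4$, and the distance from the distance modulus built with the dereddened apparent magnitude and a bolometric correction. All constants are cgs; the argument values in the note are illustrative.

```python
import math

# Generic spectroscopic-distance sketch (not necessarily the paper's exact form).
G, SIGMA = 6.674e-8, 5.670e-5                 # gravitational const., Stefan-Boltzmann
M_SUN, L_SUN, M_BOL_SUN = 1.989e33, 3.828e33, 4.74

def spec_distance_pc(m_evol, logg, teff, mv0, bc):
    """m_evol in M_sun; mv0 = V - A_V (dereddened); returns distance in pc."""
    radius = math.sqrt(G * m_evol * M_SUN / 10**logg)   # R = sqrt(G M / g)
    lum = 4.0 * math.pi * radius**2 * SIGMA * teff**4   # L = 4 pi R^2 sigma T^4
    m_bol = M_BOL_SUN - 2.5 * math.log10(lum / L_SUN)   # absolute bolometric magnitude
    abs_v = m_bol - bc                                  # M_V = M_bol - BC
    return 10**(0.2 * (mv0 - abs_v + 5.0))              # distance modulus -> pc
```

For example, $M$\,=\,20\,$M_\odot$, $\log g$\,=\,2.5, and $T_\mathrm{eff}$\,=\,18\,000\,K give $\log L/L_\odot \approx 5.2$ and $M_\mathrm{bol} \approx -8.3$, plausible for a B supergiant; note how a higher $\log g$ at fixed mass shrinks the radius and hence the inferred distance.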
Figure~\ref{fig:distances} shows a direct comparison (upper panel) and relative difference (lower panel) of our spectroscopic distances $d_{\mathrm{spec}}$ and distances $d_{\mathrm{Gaia}}$ derived from Gaia EDR3 parallaxes. Specifically, $d_{\mathrm{Gaia}}$ is the 'photogeometric' Bayesian estimation of distance by \cite{Bailer-Jones_etal_2021}, which in addition to Gaia EDR3 parallaxes also takes into account the object's colour and apparent magnitude to achieve yet higher accuracy. In the direct comparison, we see a good agreement of the two distance determinations for the individual objects. The relative differences display a small mean offset of $\mu_s$\,=\,$-$6\% with a sample standard deviation of $\sigma_s$\,=\,12\%, showing the excellent agreement between the distances. Even though most objects lie within about 2.5\,kpc from the Sun, the relationship does not seem to degrade noticeably at larger distances, as can be seen for the cases of HD~184943 (ID\,\#6, $d_{\mathrm{spec}}$\,=\,4\,kpc) and HD~25914 (ID\,\#3, $d_{\mathrm{spec}}$\,=\,6\,kpc). Two of our sample stars, HD~7902 (ID\,\#1) and HD~183143 (ID\,\#5), depart somewhat from the mean relationship. While we cannot offer a robust explanation for the discrepancy in distance of either of these objects, we note that both are evolved stars towards the upper mass limit of our sample. Small-scale systematic errors in mass estimates, as discussed in Sect.~\ref{section:masses_radii_method}, are maximised in this region. For ID\#1 a mass reduced by 1\,$M_\odot$ would be sufficient to reach agreement within the mutual 1$\sigma$-uncertainties of the two distances. Maximum systematic effects would be needed for ID\#5 in this picture, requiring an initially non-rotating single star, but at the same time it is one of the two stars with the largest CNO mixing signature in the sample. This could possibly be interpreted in terms of a binary history.
However, a further discussion of this is not warranted by the information available. Considering that the offset $\mu_s$ of the relative differences is of the order of $-0.5\sigma_s$, we may conclude that the line of regression is compatible with an offset of zero. It may on the other hand reflect some unaccounted-for low-level systematics, which, however, have no significant impact on the basic conclusions of the present work. The distribution of the sample stars in the Galactic disk is depicted in Fig.~\ref{fig:galactic_plane}. The sample objects span Galactocentric distances in the range of $R_g$\,=\,7--13\,kpc \citep[calculated for a distance of the Sun to the Galactic centre of 8.178\,kpc,][]{GravityCollaboration19}, while the range of elevations above and below the Galactic plane is fairly small, typically within 200\,pc. Only our outermost supergiant, HD~25914 (ID\#3), is located $\sim$0.4\,kpc above the Galactic plane. While the arrangement of the objects along the spiral arms is not immediately obvious, a closer inspection using the spiral arm delineation by \citet{Xuetal21} shows that stars with IDs\,\#10, 11, and 13 are located in the Carina-Sagittarius Arm, \#6, 7, 8, 9, 12, and 14 in the Local Arm, \#1, 2, and 4 are associated with the Perseus Arm and \#3 is situated~in~the~Outer~Arm. Star \#5 lies in the Carina-Sagittarius Arm if $d_\mathrm{Gaia}$ is adopted, or between that and the Local Arm if $d_\mathrm{spec}$ is considered. \subsection{Sight lines -- reddening law}\label{section:sightlines} For the high-precision determination of the interstellar sight lines to the sample objects, the model {\sc Atlas9} SEDs were fitted to photometric observations in various bands as well as to UV spectrophotometry from the IUE satellite.
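The quoted Galactocentric distances and elevations follow from heliocentric distances and Galactic coordinates by simple in-plane trigonometry; a minimal sketch, using $R_0$\,=\,8.178\,kpc as in the text:

```python
import math

# Sketch: Galactocentric distance from Galactic longitude/latitude (l, b)
# and heliocentric distance d, projected onto the Galactic plane.
R0 = 8.178  # kpc, Sun-Galactic centre distance adopted in the text

def galactocentric_kpc(l_deg, b_deg, d_kpc):
    l, b = math.radians(l_deg), math.radians(b_deg)
    x = d_kpc * math.cos(b)                      # in-plane projection of d
    return math.sqrt(R0**2 + x**2 - 2.0 * R0 * x * math.cos(l))

def elevation_kpc(b_deg, d_kpc):
    return d_kpc * math.sin(math.radians(b_deg)) # height above the plane
```

A star 1 kpc away towards the Galactic centre ($l$\,=\,$b$\,=\,0$^{\circ}$) thus sits at $R_g$\,=\,7.178\,kpc, one towards the anticentre at 9.178\,kpc.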
Figure~\ref{fig:sed_fits} summarises, by way of example, the result of this fitting process for three of the 14 sample stars, ordered from top to bottom by increasing values of the colour excess $E(B-V)$, to give an impression of the quality of the fits. For most objects, sufficient constraining observations were available for comparison, resulting in very small associated uncertainties in both $R_V$ and $E(B-V)$. Values for $R_V$ vary between 2.9 and 3.6, mostly concentrating around the typical ISM value of 3.1, and reddening values vary typically between 0.1 and 0.8\,mag, see Table~\ref{tab:stellar_parameters} for a summary of the results. The peculiar case of HD~183143 was already briefly discussed in Sect.~\ref{section:sed_fitting}. \subsection{Evolutionary status}\label{section:mass_estimates_and_discrepancy} The evolutionary status of the sample stars can be derived by comparison to stellar evolution tracks. Two complementary diagnostic diagrams may be employed for this, the spectroscopic HRD \citep[sHRD, $\log(\mathscr{L}/\mathscr{L}_{\odot})$ versus $\log T_\mathrm{eff}$, introduced by][]{LaKu14} and the HRD ($\log L/L_\odot$ versus $\log T_\mathrm{eff}$). The sHRD is based only on observed atmospheric parameters (like the Kiel diagram, $\log g$ versus $\log T_\mathrm{eff}$, not shown here), while the HRD requires knowledge of the distance and corrections for interstellar extinction to be taken into account. Both were derived in the present work, and we give preference to spectroscopic distances. The positions of the sample stars in both diagrams with respect to evolutionary tracks for rotating stars by \citet{Ekstroemetal12} are shown in Fig.~\ref{fig:hrd_comp}. We note the very similar positions of the sample stars relative to the evolution tracks.
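The sHRD ordinate can be evaluated directly from the two spectroscopic observables. Following the definition of \citet{LaKu14}, $\mathscr{L} = T_\mathrm{eff}^4/g$, so that relative to the Sun $\log(\mathscr{L}/\mathscr{L}_{\odot}) = 4\log(T_\mathrm{eff}/T_{\mathrm{eff},\odot}) - (\log g - \log g_\odot)$; the solar reference values used below ($T_{\mathrm{eff},\odot}$\,=\,5772\,K, $\log g_\odot$\,=\,4.438) are assumptions of this sketch.

```python
import math

# Sketch: spectroscopic luminosity for the sHRD, script-L = T_eff^4 / g.
TEFF_SUN, LOGG_SUN = 5772.0, 4.438   # assumed solar reference values

def log_spec_luminosity(teff, logg):
    """log(script-L / script-L_sun) from T_eff [K] and log g [cgs]."""
    return 4.0 * math.log10(teff / TEFF_SUN) - (logg - LOGG_SUN)
```

A supergiant with $T_\mathrm{eff}$\,=\,18\,000\,K and $\log g$\,=\,2.5 lands near $\log(\mathscr{L}/\mathscr{L}_{\odot}) \approx 3.9$, close to where such objects appear in Fig.~\ref{fig:hrd_comp}.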
This is consistent with them likely being post-main-sequence objects with ZAMS masses between about 9 and 30\,$M_\sun$ on the first crossing of the HRD towards the red supergiant phase (star \#13 is a potential exception; it may alternatively be in the last stages of core H-burning, depending on its detailed properties). They are located on the cool side of the bi-stability jump for stellar winds \citep[e.g.][]{Lamersetal95} and are slowly rotating, with $\varv \sin i$ in the range of about 20 to 50\,km\,s$^{-1}$, as expected for such B-type supergiants \citep[see e.g.][]{Vinketal10}. Evolutionary ages vary between about 7\,Myr for the most massive to about 29\,Myr for the least massive sample objects, as inferred from isochrones indicated in the upper panel of Fig.~\ref{fig:hrd_comp}. We emphasise again that the masses and ages are derived assuming that the particular rotation rates in evolution models and isochrones are representative on average for the sample. Systematic shifts in mass and age result if the initial rotational velocities had other values, but we expect them to be covered by our uncertainties in most cases. We note that the sample stars show a variety of metallicities (see Table~\ref{tab:abundances}) because of their different positions in the Galactic disk. The most metal-poor star in the present work is HD~25914 at $Z$\,=\,0.010 in the Outer Arm, while several objects reach super-CAS metallicities in the inner Milky Way, up to $Z$\,=\,0.019, that is, the variations reach from about 30\% below to 40\% above the CAS value. Moreover, the chemical composition of the sample stars varies from (scaled) solar, as implemented by \citet{Ekstroemetal12} for the $Z$\,=\,0.014 models, to the bracketing $Z$\,=\,0.006 \citep{Eggenbergeretal21} and $Z$\,=\,0.020 \citep{Yusofetal22} models.
The net effects are a more efficient transport of angular momentum and CNO-processed material with decreasing metallicity and a higher mass-loss with increasing metallicity. However, we do not see the resulting differences as critical for the present work in terms of parameters deduced from the comparison such as ZAMS or evolutionary masses. The evolutionary tracks remain similar throughout the metallicity range \citep[see e.g.][their Fig.~5]{Yusofetal22}, such that the resulting systematics are expected to lie within our uncertainties. The number of sample stars is too low to investigate the effects responsible for the mixing of the surface layers with CNO-processed material from the core systematically. Two findings are in line with the general picture of rotational mixing: the two stars with CNO signatures closest to the pristine values (\#11, \#13) are closest to the terminal-age main sequence, towards lower masses, and the stars showing the highest processing (\#5 and \#8) are the most evolved (i.e. showing the coolest temperatures) and tend to be among the most massive sample stars. On the other hand, star \#3 -- the most massive and most metal-poor object of the sample -- shows only a milder degree of chemical mixing, which may be the consequence of an initially slower rotation than average. The issue has to be revisited based on a much larger sample of objects. \section{Test for extragalactic applications at intermediate spectral resolution}\label{section:intermediateR} High-resolution spectroscopy of B-type supergiants as presented here can be conducted in galaxies beyond the Magellanic Clouds only at the cost of long exposure times of the order of hours on large telescopes \citep[e.g.][]{Urbanejaetal11}. Fortunately, many of the stronger diagnostic lines are isolated, so that intermediate-resolution spectroscopy ($R$\,$\simeq$\,1000--5000) suffices to allow quantitative analyses, at the loss of only the weaker spectral lines. 
This also opens up the possibility of employing multi-object spectroscopy, in particular when investigating galaxies beyond the Local Group, providing multiplexing of the order of a few tens to hundreds of objects observed simultaneously. Various such techniques have already been implemented successfully: multi-slit spectroscopy \citep[e.g. with the FOcal Reducer/low dispersion Spectrograph 2, FORS2, on the ESO Very Large Telescope VLT, e.g.][]{Kudritzkietal16}, multi-fibre spectroscopy \citep[with the Large Sky Area Multi-Object Fibre Spectroscopic Telescope, LAMOST, e.g.][]{Liuetal22}, and integral-field spectroscopy \citep[as with the Multi Unit Spectroscopic Explorer, MUSE, on the VLT, e.g.][]{Gonzalez-Toraetal22}. Isolated spectral lines of H, He, C, N, O, Mg, and Si, or pure blends thereof, are strong enough to allow atmospheric parameters and individual elemental abundances to be constrained even at intermediate spectral resolution. The blue spectral region from the Balmer jump to about 5000\,{\AA} is particularly useful for analyses, preferentially towards the earlier B spectral types, as they show stronger and more numerous metal lines, see Fig.~\ref{fig:spect_lum_showcase}. The comparison of the final synthetic spectrum and the observed spectrum for the hottest of the sample stars in two extended blue spectral windows is shown in Fig.~\ref{fig:extragal_3900_4200}. The same comparison, but for the spectra artificially downgraded to $R$\,=\,1000 as reachable with FORS2 and with a simulated $S/N$\,=\,50 for the observation, is shown in the lower subpanels. Excellent agreement is achieved in both cases, except for some small details. Moreover, models modified by $\pm$1000\,K in $T_\mathrm{eff}$ and by $\pm$0.3\,dex in metal abundances are also shown in the intermediate-resolution case.
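The artificial downgrading described above can be sketched as a Gaussian convolution to the target resolving power $R = \lambda/\Delta\lambda$, followed by adding noise for the desired $S/N$; a uniform wavelength grid and a continuum level near unity are assumed in this sketch (the actual procedure used for the figures may differ in detail).

```python
import numpy as np

# Sketch: degrade a high-resolution spectrum to resolving power r_out,
# optionally adding Gaussian noise for a continuum signal-to-noise ratio snr.
def degrade(wave, flux, r_out, snr=None, seed=0):
    dlam = wave[1] - wave[0]                     # uniform grid assumed
    fwhm = wave.mean() / r_out                   # resolution element [wave units]
    sig = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0))) / dlam   # kernel sigma [pixels]
    x = np.arange(-int(4 * sig) - 1, int(4 * sig) + 2)
    kern = np.exp(-0.5 * (x / sig) ** 2)
    kern /= kern.sum()                           # unit-area kernel conserves flux
    out = np.convolve(flux, kern, mode='same')
    if snr is not None:
        rng = np.random.default_rng(seed)
        out = out + rng.normal(0.0, 1.0 / snr, out.size)  # noise for continuum ~ 1
    return out
```

Because the kernel is normalised, equivalent widths of lines away from the array edges are conserved, which is what makes abundance work on the degraded spectra meaningful.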
This shows that a simultaneous evaluation of all the spectral features, for both the atmospheric parameters ($\log g$ is constrained by the response of the Balmer lines, not shown here) and the elemental abundances, can be performed using $\chi^2$-minimisation techniques in the multi-parameter space, with uncertainties that are only slightly larger than in the high-resolution case: $\Delta T_\mathrm{eff}$ in the range of about 300--1000\,K, $\Delta \log g$ of about 0.10\,dex, and elemental abundances in the range of about 0.10 to 0.15\,dex. We note in particular that the ionisation equilibria \ion{Si}{ii/iii(/iv)}, and in the case that red wavelengths are also covered \ion{O}{i/ii}, remain available at intermediate resolution. The microturbulent velocity can best be constrained from the rather numerous \ion{Si}{ii/iii} and \ion{O}{i/ii} lines \citep[in contrast to the minimalistic approach of concentrating only on the \ion{Si}{iii} triplet 4552--4574\,{\AA}, e.g.][]{Hunteretal07}. We conclude that the present hybrid non-LTE spectrum synthesis technique based on reliable model atoms allows for comprehensive quantitative analyses of B-type supergiants on the basis of intermediate-resolution spectra. This opens up the prospect of B-type supergiants as versatile tools to address a number of highly relevant astrophysical topics in the context of extragalactic stellar astronomy. Even with available instrumentation on the current generation of 8--10\,m telescopes, a wide range of detailed studies, in particular concerning galactic evolution \citep[galactic abundance gradients, the galaxy mass-metallicity relationship, e.g.][]{Urbanejaetal05a,Kudritzkietal12,Kudritzkietal14,Castroetal12} and the cosmic distance scale \citep[via application of the FGLR, e.g.][]{Urbanejaetal17}, can be addressed by investigating supergiants in galaxies in the field and in nearby galaxy groups.
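The $\chi^2$ evaluation over a model grid can be sketched in a toy form; the Gaussian-line 'models' below are hypothetical stand-ins for real {\sc Atlas12/Detail/Surface} synthetic spectra, and serve only to show the selection mechanics.

```python
import numpy as np

# Toy sketch of grid-based chi^2 selection of the best-fitting model.
def chi2(obs, model, sigma):
    return float(np.sum(((obs - model) / sigma) ** 2))

def best_fit(obs, sigma, grid):
    """grid: dict mapping parameter tuples -> model flux arrays."""
    return min(grid, key=lambda p: chi2(obs, grid[p], sigma))

x = np.linspace(-5.0, 5.0, 201)
# Hypothetical one-parameter grid: deeper toy line for higher 'T_eff' label.
grid = {(teff,): 1.0 - depth * np.exp(-0.5 * x ** 2)
        for teff, depth in [(15000, 0.2), (17000, 0.3), (19000, 0.4)]}
obs = 1.0 - 0.3 * np.exp(-0.5 * x ** 2)   # 'observation' matching the middle model
```

In practice the grid spans several parameters at once ($T_\mathrm{eff}$, $\log g$, abundances), and the $\chi^2$ surface around the minimum also yields the quoted uncertainty estimates.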
With the advent of the Extremely Large Telescopes (ELTs), the step to investigate supergiants in galaxies in the nearby Virgo and Fornax galaxy clusters will become feasible, allowing environmental effects to be studied. However, as adaptive optics techniques will be required to reach the full potential of the ELTs in terms of spatial resolution, spectroscopic observations will have to concentrate on redder wavelength regions, at least initially. For example, the High Angular Resolution Monolithic Optical and Near-infrared Integral field spectrograph \citep[HARMONI,][]{Thatteetal21} on the ESO ELT will cover wavelengths beyond 4700\,{\AA} and the multi-object spectrograph MOSAIC \citep{Hammeretal21} beyond 4500\,{\AA}. The information content will be lower than at bluer wavelengths, but suitable spectral lines for analyses are present, see the figures in Appendix~\ref{section:appendixA}. Important for the scientific return will be to achieve wide wavelength coverage. \section{Summary and conclusions}\label{section:conclusions} A hybrid non-LTE spectrum synthesis approach for quantitative analyses of luminous B-type supergiants with masses up to about 30\,$M_{\odot}$ was presented, where most spectral lines are formed in a photosphere that is not significantly affected by the stellar wind. It was shown that practically the entire observed optical to near-IR high-resolution spectra can be reliably reproduced, including the dozen chemical elements with the highest abundances. The modelling was thoroughly tested for 14 sample objects spanning a $T_\mathrm{eff}$-range from about 12\,000 to 20\,000\,K (i.e. spectral types B8 to B1.5) and luminosity classes II, Ib, Iab, and Ia. 
The present work helps to connect the region of late O- and early B-type stars on the main sequence with luminosity classes V to III \citep{NiPr12,NiPr14} and the cooler BA-type supergiants \citep{Przybillaetal06,FiPr12}, which will allow stellar evolution to be tracked observationally throughout the hot regime of the HRD in a homogeneous manner. Due to the highly interactive and iterative nature of the approach, the time required to carry out the analysis procedure for a comprehensive solution of one sample object typically amounts to two weeks for experienced users. For a demonstration of the applicability of a method and a first application, this is an acceptable time investment. But, obviously, a combination of the models with faster, more automated state-of-the-art analysis techniques \citep[see e.g. Sect. 3.8 of][]{Simon-Diaz20} is required for future larger-scale applications. It has been shown that the atmospheric parameters of B-type supergiants can be determined with high precision and accuracy using the hybrid non-LTE approach. The effects of turbulent pressure were taken into account for the first time for B-type supergiants, and they lead to (small) systematic shifts in the atmospheric parameters. Effective temperatures can be constrained to 2--3\% uncertainty, surface gravities to better than 0.07\,dex uncertainty, and elemental abundances with uncertainties of 0.05 to 0.10\,dex (statistical 1$\sigma$-scatter) and about 0.1\,dex (systematic error). Classical LTE analyses, which can be partly successful for the analysis of main-sequence stars at similar $T_\mathrm{eff}$, cannot be expected to yield any meaningful results for supergiant analyses (Fig.~\ref{fig:tlusty_atlas12_HHE_20k250v10} gives an impression of the differences). Precise and accurate atmospheric parameters also allow an improved characterisation of the interstellar reddening and the reddening law along the sight lines towards the supergiants to be made.
The importance of B-type supergiants in this context lies in their large luminosities, so that sight lines to very distant parts of the Milky Way may become traceable in the era of large spectroscopic surveys \citep[e.g.][]{Xiangetal22}. A comparison with stellar evolution models then also allows the fundamental parameters to be determined. In particular, future Gaia data releases will help to further reduce the uncertainties for Galactic supergiants by providing stronger astrometric and photometric constraints to cross-check resulting spectroscopic solutions. Most of the sample stars show signatures of the surface layers having experienced (rotational) mixing with CNO-processed material from the core, and the positions of the stars in the HRD are consistent with hydrogen shell-burning being active but core He-burning probably having not yet ignited \citep[which happens for the investigated mass range earliest at $\log T_\mathrm{eff}$\,$\simeq$\,4.1, and cooler, according to the models of][]{Ekstroemetal12}. Unlike main-sequence early B-type stars in the solar neighbourhood \citep{NiPr12}, the B-supergiant sample does not show chemical homogeneity for the heavier elements. However, this is not unexpected, as the objects cover a wider range of Galactocentric distances, that is, they will be subject to Galactic abundance gradients \citep[e.g.][]{Mendez-Delgadoetal22}. Finally, it was shown that the full spectrum synthesis approach makes applications to intermediate-resolution spectra possible, with only slightly increased error margins. Extragalactic samples of B- and A-type supergiants below the $\sim$30\,$M_\odot$ limit can therefore be analysed homogeneously in the future. A major step for such applications will be reached once multi-object spectrographs on ELTs become available.
Quantitative spectroscopy of supergiants in the star-forming galaxies of the Virgo and Fornax galaxy clusters can then commence, allowing galaxy evolution in the different environments to be studied -- in the field, in groups, and in clusters -- in more detail than currently feasible on the basis of gaseous nebulae. \begin{acknowledgements} D.W. and N.P. gratefully acknowledge support from the Austrian Science Fund FWF project DK-ALM, grant W1259-N27. Based on data obtained from the ESO Science Archive Facility with DOI(s): \url{https://doi.org/10.18727/archive/24}. Based on observations collected at the Centro Astron\'omico Hispano Alem\'an at Calar Alto (CAHA), operated jointly by the Max-Planck Institut f\"ur Astronomie and the Instituto de Astrof\'isica de Andaluc\'ia (CSIC), proposals H2001-2.2-011 and H2005-2.2-016. Travel of N.P. to the Calar Alto Observatory was supported by the Deutsche Forschungsgemeinschaft (DFG) under grant PR 685/1-1. The latter observational data are available under \url{https://doi.org/10.5281/zenodo.6802567}. We are grateful to A.~Irrgang for several updates of {\sc Detail} and {\sc Surface}. We thank the referee for useful suggestions to improve on the clarity of the paper. This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement. This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. 
This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. This research has made use of the SVO Filter Profile Service (\url{http://svo2.cab.inta-csic.es/theory/fps/}) supported by the Spanish MINECO through grant AYA2017-84089. \end{acknowledgements} \typeout{} \bibliographystyle{aa} \bibliography{biblio.bib} \begin{appendix} \section{Example of a global model fit}\label{section:appendixA} The following figures show a comparison of the spectrum of HD~164353 as observed with FEROS and the best-fitting global synthetic spectrum. The model was computed with the codes {\sc Atlas12/Detail/Surface} on the basis of atmospheric parameters and elemental abundances for the star as summarised in Tables~\ref{tab:stellar_parameters} and~\ref{tab:abundances}, respectively. The diagnostic stellar lines are identified in Figs.~\ref{fig:HD164353_3900_4500} to \ref{fig:HD164353_8100_8700}. A few interstellar ('IS') lines -- the Ca~H+K, Na~D and \ion{K}{i} resonance lines -- and several diffuse interstellar bands (DIBs) are also identified, but they are missing in the model. Numerous sharp unmodelled features redwards of about 5870\,{\AA} are of telluric origin, due to H$_2$O or from the O$_2$ A-, B- and $\gamma$-bands. \end{appendix}
Title: J-comb: An image fusion algorithm to combine observations covering different spatial frequency ranges
Abstract: Ground-based, high-resolution bolometric (sub)millimeter continuum mapping observations of spatially extended target sources often suffer from significant missing flux, which hampers accurate quantitative analyses. Missing flux can be recovered by fusing high-resolution images with observations that preserve extended structures. However, the commonly adopted image fusion approaches neither maintain the simplicity of the beam response function nor characterize the resulting beam response functions in detail. This makes the comparison of observations at multiple wavelengths not straightforward. We present a new algorithm, J-comb, which combines high- and low-resolution images linearly. By applying a taper function to the low-pass filtered image and combining it with a high-pass filtered image using proper weights, the beam response functions of our combined images are guaranteed to have near-Gaussian shapes. This makes it easy to convolve the observations at multiple wavelengths to share the same beam response functions. Moreover, we introduce a strategy to tackle the specific problem that 850 um imaging from present-day ground-based bolometric instruments and that taken with the Planck satellite do not overlap in the Fourier domain. We benchmarked our method against two other widely used image combination algorithms, CASA-feather and MIRIAD-immerge, with mock observations of star-forming molecular clouds. We demonstrate that the performance of the J-comb algorithm is superior to those of the other two algorithms. We applied the J-comb algorithm to real observational data of the Orion A star-forming region. We successfully produced dust temperature and column density maps with ~10" angular resolution, unveiling far greater detail than previous results.
https://export.arxiv.org/pdf/2208.00588
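The linear combination described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the actual J-comb implementation: it assumes a Gaussian-edged low-pass taper whose complement serves as the high-pass weight, and the `cutoff` and `width` parameters are illustrative choices only.

```python
import numpy as np

def combine(low_res_img, high_res_img, cutoff=0.1, width=0.02):
    """Linearly combine two images in the Fourier domain:
    (taper * low-pass of the flux-preserving map) + (complement * high-pass
    of the high-resolution map). A smooth, Gaussian-edged taper keeps the
    effective beam response close to Gaussian. cutoff/width are illustrative,
    expressed in cycles per pixel."""
    ny, nx = low_res_img.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    r = np.hypot(fy, fx)                     # radial spatial frequency
    # smooth low-pass weight: flat inside the cutoff, Gaussian roll-off outside
    lp = np.exp(-0.5 * np.clip(r - cutoff, 0, None) ** 2 / width ** 2)
    hp = 1.0 - lp                            # complementary high-pass weight
    F = lp * np.fft.fft2(low_res_img) + hp * np.fft.fft2(high_res_img)
    return np.fft.ifft2(F).real
```

Because the two weights sum to one at every spatial frequency, combining an image with itself returns the image unchanged, which is a quick sanity check on the weighting scheme.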
Title: Stratified Distribution of Organic Molecules at the Planet-Formation Scale in the HH 212 Disk Atmosphere
Abstract: Formamide (NH2CHO) is considered an important prebiotic molecule because of its potential to form peptide bonds. It was recently detected in the atmosphere of the HH 212 protostellar disk on the Solar-System scale where planets will form. Here we have mapped it and its potential parent molecules HNCO and H2CO, along with other molecules CH3OH and CH3CHO, in the disk atmosphere, studying its formation mechanism. Interestingly, we find a stratified distribution of these molecules, with the outer emission radius increasing from ~ 24 au for NH2CHO and HNCO, to 36 au for CH3CHO, to 40 au for CH3OH, and then to 48 au for H2CO. More importantly, we find that the increasing order of the outer emission radius of NH2CHO, CH3OH, and H2CO is consistent with the decreasing order of their binding energies, supporting that they are thermally desorbed from the ice mantle on dust grains. We also find that HNCO, which has much lower binding energy than NH2CHO, has almost the same spatial distribution, kinematics, and temperature as NH2CHO, and is thus more likely a daughter species of desorbed NH2CHO. On the other hand, we find that H2CO has a more extended spatial distribution with different kinematics from NH2CHO, thus questioning whether it can be the gas-phase parent molecule of NH2CHO.
https://export.arxiv.org/pdf/2208.10693
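The ordering argument in the abstract (outer emission radius anticorrelated with binding energy, as expected for thermal desorption) can be expressed as a simple consistency check. The radii below are from the abstract; the binding-energy numbers are illustrative placeholders chosen only to reproduce the ordering stated in the text, not the tabulated values.

```python
# Outer emission radii (au) from the abstract; binding energies (K) are
# illustrative placeholders that reproduce only the stated ordering.
outer_radius_au = {"NH2CHO": 24.0, "CH3OH": 40.0, "H2CO": 48.0}
binding_energy_K = {"NH2CHO": 9000.0, "CH3OH": 6500.0, "H2CO": 4500.0}

by_rising_radius = sorted(outer_radius_au, key=outer_radius_au.get)
by_falling_be = sorted(binding_energy_K, key=binding_energy_K.get, reverse=True)

# thermal-desorption signature: higher binding energy -> smaller outer radius
consistent = by_rising_radius == by_falling_be
```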
\def\NHtCHO{NH$_2$CHO} \def\HtCO{H$_2$CO} \def\DtCO{D$_2$CO} \def\HttCO{H$_2$$^{13}$CO} \def\CHtDOH{CH$_2$DOH} \def\CHtOH{CH$_3$OH} \def\tCHtOH{$^{13}$CH$_3$OH} \def\CHtCHO{CH$_3$CHO} \def\CHtDCN{CH$_2$DCN} \def\uJyb{$\mu$Jy beam$^{-1}$} \def\scnum#1#2{#1$\times10^{#2}$} \section{Introduction} Formamide (\NHtCHO{}) is an interstellar complex organic molecule (iCOM, referring to C-bearing species with six atoms or more) \citep{Herbst2009,Ceccarelli2017} and a key precursor of more complex organic molecules that can lead to the origin of life, because of its potential to form peptide bonds \citep{Saladino2012,Kahane2013,Lopez2019}. It has been detected in the gas phase in hot corinos \citep{Kahane2013,Coutens2016,Imai2016,Lopez2017,Bianchi2019,Hsu2022}, which are the hot ($\gtrsim 100$ K) and compact ($\lesssim 100$ au) regions immediately around low-mass (sun-like) protostars \citep{Ceccarelli2007}. The formamide origin is still under debate. In principle, formamide could be synthesized on the grain surfaces or in the gas phase. Two routes have been proposed in the first case: the hydrogenation of HNCO \citep{Charnley2008} and the combination of the HCO and NH$_2$ radicals, when they become mobile upon the warming of the dust by the protostar. However, the first route has been challenged by both experiments \citep{Noble2015} and quantum chemical (QM) calculations \citep{Song2016}. Later, the hydrogenation of HNCO was found to be feasible when followed by H abstraction of \NHtCHO{} in a dual cycle consisting of H addition and H abstraction \citep{Haupa2019}. The second route (i.e., the combination of HCO and NH$_2$) has also been examined with QM calculations \citep{Rimola2018} and found to be possible, even though it can also form NH$_3$ $+$ CO in competition with formamide \citep{Enrique-Romero2022}. As for gas-phase formation, it has been proposed that formamide is formed by the reaction of H$_2$CO with NH$_2$ \citep{Kahane2013}. 
This hypothesis was later challenged by \citet{Song2016}. Nonetheless, QM computations \citep{Vazart2016,Skouteris2017} coupled with astronomical observations in shocked regions \citep{Codella2017} support this hypothesis. In the same vein, the observed deuterated isomers of formamide (including NH$_2$CDO, cis- and trans-NHDCHO) \citep{Coutens2016} fit well with the theoretical predictions of a gas-phase formation route \citep{Skouteris2017}. On the other hand, the observed high deuterium fractionation of $\sim$ 2\% for the three different forms of formamide (NH$_2$CDO, cis- and trans-NHDCHO) could also be consistent with formation in ice mantles on dust grains. The hot corino in the HH 212 protostellar system \citep{Codella2016} in Orion at a distance of $\sim$ 400 pc is particularly interesting because recent observations have spatially resolved it and found it to be an atmosphere of a Solar-System scale protostellar disk around a protostar \citep{Lee2017COM}. This disk atmosphere is rich in iCOMs \citep{Lee2017COM,Codella2018,Lee2019COM}, including formamide. More importantly, these iCOMs have relative abundances similar to those of other hot corinos \citep{Cazaux2003,Imai2016,Lopez2017,Bianchi2019,Manigand2020}, and even comets \citep{Biver2015}. Therefore, the study of formamide in protostellar disks is key to investigating the emergence of prebiotic chemistry in nascent planetary bodies. In this paper, we study the origin and formation pathways of formamide in this protostellar disk. Previously, the HH 212 disk was mapped at a wavelength $\lambda \sim$ 0.85 mm \citep{Lee2017COM} covering one \NHtCHO{} line. Here we map it at a longer wavelength $\lambda \sim$ 1.33 mm, with spectral windows set up to cover more \NHtCHO{} lines in order to derive the physical properties of \NHtCHO{}. This setup also covers the lines of HNCO and \HtCO{}, which have not been reported before, in order to investigate the formation pathways of \NHtCHO{}. 
At this longer wavelength, the continuum emission of the disk is optically thinner, so we can also map the molecular line emission in the disk atmosphere closer to the midplane and the central source. Moreover, the deuterated species and $^{13}$C isotopologue of \HtCO{} are also detected, allowing us to constrain the origin and correct the optical depth of \HtCO{}, respectively. In addition, \CHtOH{} and \CHtCHO{} are also detected, allowing us to further constrain the formation mechanism of \NHtCHO{}. More importantly, with the recently updated binding energies of these molecules, we can investigate the formation mechanism of these molecules and the chemical relationship among them. \section{Observations} The HH 212 protostellar disk was observed with the Atacama Large Millimeter/submillimeter Array (ALMA) in Band 6 centered at a frequency of $\sim$ 226 GHz (or $\lambda \sim$ 1.33 mm) in Cycle 5. The project ID was 2017.1.00712.S. Two observations were executed. One was executed on 2017 October 04 in C43-9 configuration with 46 antennas for $\sim$ 18 mins on source with a baseline length of 41.4 m to 15 km to achieve an angular resolution of $\sim$ \arcsa{0}{02}. The other was executed on 2017 December 31 in C43-6 configuration with 46 antennas for 9 mins on source with a baseline length of 15.1 m to 2.5 km to recover a size scale up to $\sim$ \arcsa{1}{8}, which is 4 times the disk size. The correlator was set up to have 4 spectral windows (centered at 232.005, 234.005, 217.765, and 219.705 GHz), each with a bandwidth of 1.875 GHz and 1920 channels, and thus a spectral resolution of 0.976 MHz per channel, corresponding to $\sim$ 1.3 \vkm{} per channel. The primary beam was $\sim$ \arcs{25}, much larger than the disk size. The data were calibrated with the CASA package version 5.1.1, with quasar J0423-0120 (a flux of $\sim$ 0.93 Jy) as a passband and flux calibrator, and quasar J0541-0211 (a flux of $\sim$ 0.096 Jy) as a gain calibrator. 
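The quoted channel width in velocity units follows from the standard radio conversion $\Delta v = c\,\Delta\nu/\nu_0$; a quick check of the numbers above:

```python
C_KMS = 299792.458  # speed of light in km/s

def channel_width_kms(dnu_hz, nu0_hz):
    """Velocity width of one spectral channel (radio convention):
    dv = c * dnu / nu0."""
    return C_KMS * dnu_hz / nu0_hz

# 0.976 MHz channels at the ~226 GHz band centre -> ~1.3 km/s per channel
dv = channel_width_kms(0.976e6, 226e9)
```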
Line-free channels were combined to generate a visibility for the continuum centered at 226 GHz. We used a robust factor of $-$1.0 for the visibility weighting to generate the continuum map with a synthesized beam of \arcsa{0}{021}$\times$\arcsa{0}{016} at a position angle of $\sim$ 79\degree{}. The noise level is $\sim$ 20 \uJyb{} or 1.4 K. The channel maps of the molecular lines were generated after continuum subtraction. Using a robust factor of 0.5 for the visibility weighting, the synthesized beam has a size of \arcsa{0}{055}$\times$\arcsa{0}{042} at a position angle of $\sim$ 49\degree{}. The noise levels are $\sim$ 0.9 mJy beam$^{-1}$ (or $\sim$ 10 K) in the channel maps. The velocities in the channel maps are LSR velocities. \section{Results} The detected lines of \NHtCHO{} (16 lines), HNCO (7 lines), \HtCO{} (2 lines) as well as its doubly deuterated species \DtCO{} (2 lines) and $^{13}$C isotopologue \HttCO{} (1 line), \CHtOH{} (12 lines), and \CHtCHO{} (7 lines) are listed in Table \ref{tab:lines}. They have upper level energy $E_u \lesssim 500$ K, but with $E_u < 120$ K for \CHtCHO{}, \HtCO{} as well as its deuterated species and isotopologue. To improve the detection sensitivity, we divided them into 2 ranges of upper level energies, $E_u < 120$ K and $E_u > 120$ K, and then stacked them to produce mean channel maps, total line intensity maps, and position-velocity (PV) diagrams. \subsection{Stratified Distribution of Molecules} Figure \ref{fig:contCOMs} shows the total line intensity maps (red contours) of these molecules on top of the continuum map of the disk at $\lambda \sim$ 1.33 mm, in order to pinpoint the location of these molecules in the disk and the chemical relationship among them. 
As shown in Figure \ref{fig:contCOMs}c, the disk is nearly edge-on, with an equatorial dark lane tracing the cooler midplane sandwiched by two brighter features (outlined by the 4th and 5th contour levels) on the top and bottom tracing the warmer surfaces, as seen before in continuum at a shorter wavelength of $\sim$ 0.85 mm \citep{Lee2017Disk}. As can be seen, the emission structure of a given molecule is similar in different $E_u$ ranges, suggesting that it is dominated by the spatial distribution of the molecule rather than by the upper level energy. After stacking the lines, we achieved a better sensitivity in \NHtCHO{} than that in the previous observations obtained at higher resolution \citep{Lee2017COM}, and detected \NHtCHO{} not only in the lower disk atmosphere, but also in the upper disk atmosphere. More importantly, we can better pinpoint its emission and find it to be in the inner disk where the disk is warmer. It is brighter in the lower disk atmosphere, with two emission peaks clearly seen in the map with $E_u > 120$ K (see Figure \ref{fig:contCOMs}b). HNCO is detected with a spatial distribution and radial extent consistent with those of \NHtCHO{}. Looking back at the previous results of other iCOMs detected at a higher frequency of $\sim$ 346 GHz \citep{Lee2019COM}, we find that t-HCOOH was also detected with a spatial distribution and radial extent consistent with those of \NHtCHO{} \cite[see Figure \ref{fig:contCOMs}f adapted from][]{Lee2019COM}. Notice that the radial distribution of molecular gas detected at higher frequency can also be compared with that here at lower frequency, because the optical depth of the underlying continuum of the dusty disk mainly affects the vertical distribution (i.e., height) of molecular gas in the atmosphere (see Section \ref{sec:midplane}). On the other hand, \HtCO{} is only detected with $E_u < 70$ K, and its emission extends further out in the radial direction beyond the centrifugal barrier (CB) (Figure \ref{fig:contCOMs}g). 
The emission is also detected at a larger distance from the disk midplane and extends away from the disk atmosphere, overlapping with the base of the SO disk wind \citep{Lee2021DW} and thus tracing the wind from the disk. Its deuterated species \DtCO{} is detected mainly in the disk atmosphere, also at a larger distance from the midplane and a larger radius from the central protostar than \NHtCHO{}. On the other hand, the emission of the $^{13}$C isotopologue \HttCO{} is very faint and mainly detected in the disk atmospheres. \CHtOH{} is detected in the atmosphere extending out to the CB, as found before \citep{Lee2017COM,Lee2019COM}. The emission also extends away from the disk midplane, suggesting that part of it also traces the wind from the disk. As for \CHtCHO{}, the emission is mainly detected in the disk atmosphere and extends radially toward the CB. In summary, \NHtCHO{}, HNCO, \DtCO{}, \HttCO{} and \CHtCHO{} trace mainly the disk atmosphere, while \HtCO{} and \CHtOH{} trace not only the disk atmosphere, but also the disk wind. We can also measure the vertical height of these molecules (using lines with $E_u < 120$ K) along the jet axis in the lower atmosphere where the emission is brighter, and find it to be $\sim$ 15, 19, 20, 24, and 26 au, respectively, for \NHtCHO, HNCO, \CHtCHO{}, \CHtOH{}, and \HtCO{}. We will discuss the vertical height later with the outer radius of these molecules measured from the PV diagrams. \subsection{Kinematics} The spatio-kinematic relationship among these molecules can be studied with PV diagrams cut across the upper and lower disk atmospheres, as shown in Figure \ref{fig:pv_atms}. Here we use the emission with $E_u < 120$ K, where all molecules are detected. In addition, this emission is expected to trace the lowest temperature and thus the outermost radius at which the molecules start to appear. 
Previously, the disk was found to be rotating roughly at Keplerian speed due to a central mass of $\sim$ 0.25 \solarmass{} (including the masses of the central protostar and the disk) \citep{Codella2014,Lee2017COM}. Therefore, the associated Keplerian rotation curves (blue curves) are plotted here for comparison. The emission of these molecules traces the disk atmosphere within the CB and is thus enclosed by the Keplerian rotation curves. In the upper disk atmosphere, their emission forms roughly linear PV structures (as marked by the magenta lines), indicating that they arise from rings rotating at certain velocities. For edge-on rotating rings, the radial velocity observed along the line of sight is proportional to the position offset from the center, forming the linear PV structures. Interestingly, the PV structures of HNCO and t-HCOOH are aligned with those of \NHtCHO{}, and the PV structures of \DtCO{} are roughly aligned with those of \CHtOH{}. Apart from these similarities, different molecules have different velocity gradients connecting to different locations of the Keplerian curves, indicating that they arise from rings at different disk radii. From the location of their PV structure on the Keplerian curve, we find that the disk radius of these molecules increases from $\sim$ 24 au for \NHtCHO{}/HNCO/t-HCOOH, to $\sim$ 36 au for \CHtCHO{}, to $\sim$ 40 au for \CHtOH{}/\DtCO{}, and then to $\sim$ 48 au for \HtCO{}. This trend is the same as the increasing order of the vertical height measured earlier for these molecules, indicating that the height increases with increasing radius, as expected for a flared disk in hydrostatic equilibrium. Plotting the velocity gradients in the upper disk atmosphere onto the lower disk atmosphere, we find that the emission detected in the upper disk atmosphere is actually only from the outer radius where the emission starts to appear, and the emission also extends radially inward to where \NHtCHO{} is detected. 
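The ring radii inferred from the PV structures can be cross-checked against the Keplerian rotation curve for the quoted 0.25 solar-mass central object; a minimal sketch (the radii are the values given above, the rotation speeds follow from $v = \sqrt{GM/r}$):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
AU = 1.496e11      # astronomical unit, m

def v_kepler_kms(r_au, m_star_msun=0.25):
    """Keplerian rotation speed (km/s) at radius r_au for the ~0.25 Msun
    central mass quoted in the text."""
    return math.sqrt(G * m_star_msun * M_SUN / (r_au * AU)) / 1e3

# outer emission radii (au) from the PV analysis
radii = {"NH2CHO/HNCO/t-HCOOH": 24, "CH3CHO": 36, "CH3OH/D2CO": 40, "H2CO": 48}
speeds = {mol: v_kepler_kms(r) for mol, r in radii.items()}
```

The innermost ring at 24 au rotates at roughly 3 km/s, and the speeds decrease outward as $r^{-1/2}$, which is the velocity ordering the linear PV structures connect to on the Keplerian curves.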
Since the nearside of the disk is tilted slightly downward to the south, the emission in the upper disk atmosphere further in is lost due to the absorption against the bright and optically thick continuum emission of the disk surface (see Figure 9b in Lee et al. 2019). Note that for \NHtCHO{}/HNCO/t-HCOOH, there seems to be a small velocity shift of $\sim$ 0.5 \vkm{} between the upper and lower disk atmosphere. This velocity shift could suggest an infall (or accretion) velocity of $\sim$ 0.25 \vkm{}, which is $\sim$ 8\% of the rotation velocity at $\sim$ 24 au. However, observations at higher spectral and spatial resolution are needed to verify this possibility. \subsection{Physical Properties in the Disk Atmosphere} In order to understand the nature and spatial origin of the detected methanol, we analyzed the observed methanol lines (Table \ref{tab:lines}) via a non-LTE Large Velocity Gradient (LVG) approach, using the code \textsc{grelvg}, initially developed by \citet{Ceccarelli2003}. We used the collisional coefficients of methanol with para-H$_2$, computed by \citet{Rabli2010} between 10 and 200 K for the J$\leq$15 levels and provided by the BASECOL database \citep{Dubernet2012,Dubernet2013}. We assumed an A-/E- CH$_3$OH ratio equal to 1. To compute the line escape probability as a function of the line optical depth we adopted the semi-infinite slab geometry \citep{Scoville1974} and a linewidth equal to 4 km~s$^{-1}$, following the observations. We ran several grids of models to sample the $\chi^2$ surface in the parameter space. Specifically, we varied the methanol column density N(A-CH$_3$OH) and N(E-CH$_3$OH) simultaneously from $2\times 10^{15}$ to $1\times 10^{19}$ cm$^{-2}$ (with a step of a factor of 2), the H$_2$ density $\nHm$ from $10^{6}$ to $10^{9}$ cm$^{-3}$ (with a step of a factor of 2) and the gas temperature T from 50 to 120 K (with a step of 5 K). 
We then fit the measured velocity-integrated line intensities ($W=\int T_B dv$ with $T_B$ being the brightness temperature) by comparing them with those predicted by the model, leaving N(A-CH$_3$OH) and N(E-CH$_3$OH), $\nHm$, and $T$ as free parameters. Given the limitation on the J level ($\leq$15), we used only seven of the twelve detected methanol lines with $E_u < 200$ K for the LVG fitting. We considered the line intensities at the emission peak (marked by a blue circle in Figure \ref{fig:contCOMs}j) in the lower disk atmosphere, as listed in Table \ref{tab:lines}. The results of the fit are shown in Figure \ref{fig:LVG}. The best fit gives the following values, where the errors are estimated considering the 1$\sigma$ confidence level and the uncertainties of $\sim$ 40\% in our measurements: N(CH$_3$OH)$=$N(A-CH$_3$OH)$+$N(E-CH$_3$OH)$\sim 1.6^{+4.4}_{-0.8} \times 10^{18}$ cm$^{-2}$; $\nHm \sim 10^{9}$ cm$^{-3}$, which should be a lower limit because it is in the LTE regime at this density; and $T \sim 75\pm20$ K. All the lines are predicted to be optically thick, with the lowest line opacity $\tau \sim 1$ for the line at 234.699 GHz and the highest $\tau\sim 19$ for the line at 218.440 GHz, and $\tau$=3--10 for the other lines. We also derived the excitation temperature and column density from a rotation diagram using the remaining five transition lines with $E_u > 200$ K (see Figure \ref{fig:LVG}c), assuming optically thin emission and LTE \citep{Goldsmith1999}. In particular, we fit the data with a linear equation, and then derived the temperature from the negative reciprocal of the slope and the column density from the y-intercept. We found that $T\sim 109\pm31$ K and N(CH$_3$OH)$=(1.4\pm0.7)\times 10^{18}$ cm$^{-2}$. Taking the mean values from the two methods, we have $T\sim 92^{+48}_{-37}$ K and N(CH$_3$OH)$=1.5^{+4.5}_{-0.8}\times 10^{18}$ cm$^{-2}$. 
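The rotation-diagram procedure just described (excitation temperature from the negative reciprocal of the slope, column density from the y-intercept) reduces to a linear fit of $\ln(N_u/g_u)$ against $E_u$. A minimal sketch with synthetic data, not the paper's actual measurements:

```python
import numpy as np

def rotation_diagram_fit(E_u_K, ln_Nu_gu):
    """LTE rotation-diagram fit: ln(N_u/g_u) = ln(N/Q) - E_u/T_ex,
    with E_u in Kelvin. Returns T_ex [K] (negative reciprocal of the
    slope) and the y-intercept ln(N/Q); the total column N follows once
    the partition function Q(T_ex) is evaluated."""
    slope, intercept = np.polyfit(E_u_K, ln_Nu_gu, 1)
    return -1.0 / slope, intercept

# synthetic check: data generated with T_ex = 100 K should be recovered
E_u = np.array([50.0, 100.0, 200.0, 400.0])
ln_y = 30.0 - E_u / 100.0
T_ex, intercept = rotation_diagram_fit(E_u, ln_y)
```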
Notice that the previous LTE estimate of the excitation temperature of \CHtOH{} and \CHtDOH{} together, from a rotation diagram, was rather uncertain, with a value of 165$\pm$85 K \citep{Lee2017COM}, due to a large scatter in the data points. More importantly, the excitation temperature was also overestimated because almost all the lines had $E_u < 200$ K and were thus likely optically thick. For less abundant molecules detected with a broad range of $E_u$, such as \NHtCHO{} and HNCO, the mean excitation temperature and column density of the molecular lines in the disk atmosphere can be roughly estimated from a rotation diagram assuming optically thin emission and LTE \citep{Goldsmith1999}. We used the brighter emission in the lower disk atmosphere. Table \ref{tab:lines} lists the integrated line intensities averaged over a rectangular region (with a size of 68 au $\times$ 20 au) that covers most of the emission in the lower atmosphere, measured with a cutoff of 2$\sigma$. Figure \ref{fig:popdia} shows the resulting rotation diagrams for \NHtCHO{} and HNCO. The blended lines of \NHtCHO{} are excluded from the diagram. The HNCO line at the lowest $E_u$ (marked with an open square) seems to be optically thick, with an intensity much lower than that of the next line along the $E_u$ axis, and is thus excluded from the fitting. For \NHtCHO{} and HNCO, we fit the data points to obtain the temperature and column density. It is interesting to note that \NHtCHO{} and HNCO have roughly the same excitation temperature of $\sim$ $226\pm130$ K, although with a large uncertainty. On the other hand, since \HtCO{}, \DtCO{}, and \CHtCHO{} are only detected with a narrow range of $E_u < 120$ K and their emission can be optically thick there, we cannot derive their excitation temperature from the rotation diagram. In addition, \HtCO{} and \DtCO{} are each detected with only two lines, and \HttCO{} is detected with only one line. 
Since \DtCO{} has roughly the same radial extent as \CHtOH{}, it is assumed to have an excitation temperature of 92 K, the same as that found for \CHtOH{}. Since \HtCO{} has a slightly larger radius than \CHtOH{}, it and its $^{13}$C isotopologue \HttCO{} are assumed to have an excitation temperature of 60 K. \CHtCHO{} has a smaller radial extent than \CHtOH{} and is thus assumed to have an excitation temperature of 100 K. The resulting excitation temperature and column density are listed in Table \ref{tab:colabun}. In addition, the abundances of these molecules are also estimated by dividing their column densities by the mean H$_2$ column density derived from a dusty disk model \citep{Lee2021Pdisk} in the same region, which is found to be $\sim$ \scnum{1.08}{25} \cms{}. This disk model was constructed before to reproduce the continuum emission of the disk at $\lambda \sim$ 850 \micron{} \citep{Lee2021Pdisk} and it can also roughly reproduce the continuum emission of the disk observed here at $\lambda \sim$ 1.33 mm \citep{Lin2021}. Since \HtCO{} and \DtCO{} are each detected with only two lines that are likely optically thick, their lines at higher $E_u$ are used to derive lower limits on their column densities. Indeed, the \HtCO{} column density can be better derived from the \HttCO{} line assuming a [$^{12}$C]/[$^{13}$C] ratio of $\sim$ 50, as estimated in the Orion Complex \citep{Kahane2018}. As can be seen from Table \ref{tab:colabun}, the \HtCO{} column density derived this way is $\sim$ 3 times that derived from the \HtCO{} lines. Thus, the deuteration of \HtCO{}, i.e., the abundance ratio [\DtCO{}]/[\HtCO{}], is $\gtrsim$ 0.053. As for \CHtCHO{}, we fixed its temperature to 100 K by fixing the negative reciprocal of the slope in the linear equation and then derived its column density from the y-intercept of the linear fit to the rotation diagram, as shown in Figure \ref{fig:popdia}c. 
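The isotopologue correction and the deuteration ratio used above are simple arithmetic; the input column below is a hypothetical number for illustration, not the value in the column-density table:

```python
def column_from_13c(n_13c, ratio_12c_13c=50.0):
    """Scale an optically thin H2(13C)O column density to H2CO using the
    [12C]/[13C] ~ 50 ratio quoted for the Orion Complex."""
    return ratio_12c_13c * n_13c

def deuteration(n_d2co, n_h2co):
    """Abundance ratio [D2CO]/[H2CO]."""
    return n_d2co / n_h2co

# hypothetical columns (cm^-2) for illustration only
n_h2co = column_from_13c(1.0e15)  # 50x the 13C isotopologue column
```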
\section{Discussion} \subsection{Lack of Molecular Emission in Disk Midplane} \label{sec:midplane} As discussed in \citet{Lee2017COM,Lee2019COM}, the lack of molecular emission in the disk midplane can be due to an exponential attenuation by the high optical depth of the dust continuum. Figure \ref{fig:conttau}a shows the optical depth of the dust continuum at 1.33 mm derived from the dusty disk model that reproduced the thermal emission of the disk \citep{Lee2021Pdisk}. As can be seen, the optical depth is $\gtrsim$ 3 toward the midplane within the CB, where no molecular emission is detected, supporting this possibility. The faint \NHtCHO{} and \CHtOH{} emission detected in the midplane likely comes from the upper and lower disk atmospheres due to beam convolution. However, the \HtCO{} emission in the midplane near the CB (see Figure \ref{fig:contCOMs}g) should be a real detection because the optical depth of the dust continuum drops below 3 at the edge. \subsection{Distribution of Molecules and Binding Energy} As discussed earlier, a stratification is seen in the distribution of molecules in the disk atmosphere, with the outer disk radius decreasing from \HtCO{}, to \CHtOH{}, to \CHtCHO{}, and then to \NHtCHO{}/HNCO/HCOOH, as shown in Figure \ref{fig:conttau}b together with the temperature structure of the dusty disk model \citep{Lee2021Pdisk}. Similar stratification of \HtCO{}, \CHtCHO{}, and \NHtCHO{} has been seen toward the bow shock region B1 in the young protostellar system L1157 \citep{Codella2017}. That shock region is divided into three shock subregions, with shock 1 in the bow wing, shock 3 in the bow tip, and shock 2 in between. The authors interpreted shock 1 as the youngest shock and shock 3 as the oldest. They found that the observed decrease in the abundance ratio [\CHtCHO{}]/[\NHtCHO{}] from shock 3 to shock 2 and to shock 1 can be modeled if both \NHtCHO{} and \CHtCHO{} are formed in the gas phase. 
Here in the HH 212 disk, since the temperature of the atmosphere, like that of the disk, is expected to increase inward toward the center (Figure \ref{fig:conttau}b), the stratification in the distribution of these molecules could be related to their binding energy (BE) (and thus sublimation temperature). Table \ref{tab:BE} lists the recently computed BE for these molecules. For a consistent comparison, we adopt the values obtained from similar methods on amorphous solid water ice \citep{Ferrero2020,Ferrero2022}. Since HNCO was not included in those studies, we adopt its value from \citet{Song2016}. Notice that different methods can result in different BE values, e.g., HCOOH was found to have a BE value of less than 5000 K on pure ice \citep{Kruckiewicz2021}, significantly lower than that adopted here. As can be seen, the increasing order of the observed outer radius of \NHtCHO{}/t-HCOOH, \CHtOH{}, and \HtCO{} is consistent with the decreasing order of their BE, indicating that these molecules are thermally desorbed from the ice mantle on dust grains. Notice that this does not necessarily mean that these molecules are formed in the ice mantle, because the density in the disk is so high that even if the molecules are formed in the gas phase they freeze out quickly and are, therefore, only detected in regions where the dust temperature is higher than the sublimation temperature. As for HNCO and \CHtCHO{}, their outer radii do not fit into the BE sequence of \HtCO{}, \CHtOH{}, and \NHtCHO{}, and they may instead form in the gas phase from other species. On the other hand, HNCO and HCOOH may come from the decomposition of the desorbed organic salts (NH$_4^+$OCN$^-$ and NH$_4^+$HCOO$^-$), which have BEs similar to that of \NHtCHO{} \citep{Kruckiewicz2021,Ligterink2018}. Further work is needed to check this possibility. Previously, at $\sim$ \arcsa{0}{15} (60 au) resolution, \citet{Codella2018} detected deuterated water around the disk. 
Although the deuterated water was found to have an outer radius of $\sim$ 60 au, its kinematics was found to be consistent with that of the centrifugal barrier at $\sim$ 44 au. More importantly, since water has a BE similar to that of \HtCO{} (see Table \ref{tab:BE}), it is likely that water, like \HtCO{}, is also desorbed from the ice mantle on the dust grains. Thus, the water snowline can be located around or slightly outside the centrifugal barrier. \subsection{Centrifugal Barrier and \HtCO{} and \CHtOH{}} The high deuteration of \HtCO{} (with [\DtCO{}]/[\HtCO{}] $\gtrsim$ 0.053) and methanol (with [\CHtDOH]/[\CHtOH] $\sim$ 0.12) \citep{Lee2019COM} supports the idea that both were originally formed in ice. These ratios of [\DtCO{}]/[\HtCO{}] and [\CHtDOH]/[\CHtOH] are consistent with those found in prestellar cores to Class I sources \citep[and references therein]{Mercimek2022}. It is possible that \HtCO{} is formed by hydrogenation of CO frozen in the ice mantle on dust grains and then \CHtOH{} is formed from it with the addition of two H atoms \citep{Charnley2004}. The derived kinetic temperature of \CHtOH{} agrees with the sublimation temperature, also supporting that the methanol is thermally desorbed into the gas phase. \HtCO{} and \CHtOH{} are detected with outer radii near the CB where an accretion shock is expected as the envelope material flows onto the disk \citep{Lee2017COM}, suggesting that they are desorbed into the gas phase by the heat produced by the shock interaction. It is possible that they were already formed in the ice mantle on dust grains in the collapsing envelope stage and then brought into the disk \citep{Herbst2009,Caselli2012}. \HtCO{} has a lower sublimation temperature than \CHtOH{}, and thus can be desorbed into the gas phase further out beyond the CB. Interestingly, both \HtCO{} and \CHtOH{} also extend vertically away from the disk surface, and thus can also trace the disk wind, like SO \citep{Tabone2017,Lee2018DW,Lee2021DW}. 
In addition, since \HtCO{} has an outer radius outside the centrifugal barrier, it may also trace the wind from the innermost envelope transitioning to the disk, carrying away angular momentum from there. \subsection{Formamide, HNCO, and \HtCO{}} HNCO not only has a spatial distribution and kinematics similar to those of \NHtCHO{}, but also a similar excitation temperature, though with a large uncertainty. In addition, the abundance ratio of HNCO to \NHtCHO{} agrees well with the nearly linear abundance correlation found before across several orders of magnitude in molecular abundance \citep{Lopez2019}. All of this suggests a chemical link between the two molecules. However, as discussed earlier based on the BE sequence, HNCO itself is likely formed in the gas phase rather than desorbed from the ice mantle, unless its BE is significantly underestimated. In particular, although HNCO has a much lower BE than \NHtCHO{}, it is detected only in the inner and warmer part of the disk where \NHtCHO{} is detected, and not in the outer part of the disk where the temperature is lower. Thus, our result implies that HNCO, instead of being a parent molecule, is likely a daughter molecule of \NHtCHO{} formed in the gas phase. One possible reaction is \NHtCHO{} $+$ H $\rightarrow$ HNCO \citep{Haupa2019}. It is also possible that HNCO is formed by destructive gas-phase ion-molecule reactions with amides (including amides larger than \NHtCHO{}) \citep{Garrod2008,Tideswell2010}. It has also been proposed that formamide can form from formaldehyde (\HtCO) in warm gas through the reaction \HtCO{} $+$ NH$_2$ $\rightarrow$ \NHtCHO{} $+$ H \citep{Kahane2013,Vazart2016,Codella2017,Skouteris2017}. However, we find that \HtCO{} has a more extended distribution and different kinematics from formamide, and it is thus unclear whether it can be the gas-phase parent molecule. Unfortunately, we have no information on the other reactant, NH$_2$. 
Very likely it is the product of sublimated NH$_3$ \citep{Codella2017}, whose binding energy (see Table \ref{tab:BE}) is larger than that of \HtCO{}, which may explain why formamide is not present where \HtCO{} is. In conclusion, based on the current observations, it is not possible to constrain the formation route of formamide in the disk atmosphere of HH 212. Nonetheless, our work has added valuable information about the formation route of formamide in disk atmospheres, complementing that obtained in different environments, e.g., the L1157 shock \citep{Codella2017}. \acknowledgements We thank the anonymous reviewers for their insightful comments. This paper makes use of the following ALMA data: ADS/JAO.ALMA\#2017.1.00712.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), NSC and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. C.-F.L. acknowledges grants from the Ministry of Science and Technology of Taiwan (MoST 107-2119-M-001-040-MY3, 110-2112-M-001-021-MY3) and the Academia Sinica (Investigator Award AS-IA-108-M01). CC acknowledges the funding from the European Union’s Horizon 2020 research and innovation programs under projects “Astro-Chemistry Origins” (ACO), Grant No 811312; the PRIN-INAF 2016 The Cradle of Life - GENESIS-SKA (General Conditions in Early Planetary Systems for the rise of life with SKA); the PRIN-MUR 2020 BEYOND-2P (Astrochemistry beyond the second period elements), Prot. 2020AFB3FX. \def\tlabel#1{{\bf #1}} \newcounter{mtable}[section] \newenvironment{mtable}[1][]{\refstepcounter{mtable}\par\medskip \noindent \textbf{Table~\themtable. 
#1} \rmfamily}{\medskip} \renewcommand{\arraystretch}{0.6} \begin{table} \scriptsize \centering \begin{mtable} \bf Line Properties from Splatalogue\\ \label{tab:lines} \end{mtable} \begin{tabular}{llcccrc} \hline Transition & Frequency & log($A_{ul}$) & $E_{u}$ &$g_u$ & $W^a$ & Remarks\\ QNs & (MHz) & (s$^{-1}$) & (K) & &(K \vkm{})&\\ \hline\hline \NHtCHO{} 10( 1, 9)- 9( 1, 8) & 218459.21 & -3.126 & 60.812 & 21 & 119 & CDMS \\ % \NHtCHO{} 11( 2,10)-10( 2, 9) & 232273.64 & -3.054 & 78.949 & 23 & 120 & CDMS \\ \NHtCHO{} 11( 8, 3)-10( 8, 2) & 233488.88 & -3.360 & 257.724 & 23 & 29$^m$ & CDMS \\ % \NHtCHO{} 11( 8, 4)-10( 8, 3) & 233488.88 & -3.360 & 257.724 & 23 & 29$^m$ & CDMS \\ % \NHtCHO{} 11( 7, 4)-10( 7, 3) & 233498.06 & -3.258 & 213.124 & 23 & 42 & CDMS \\ % \NHtCHO{} 11( 7, 5)-10( 7, 4) & 233498.06 & -3.258 & 213.124 & 23 & 42 & CDMS \\ % \NHtCHO{} 11( 6, 6)-10( 6, 5) & 233527.79 & -3.186 & 174.449 & 23 & 38 & CDMS \\ % \NHtCHO{} 11( 6, 5)-10( 6, 4) & 233527.79 & -3.186 & 174.449 & 23 & 38 & CDMS \\ % \NHtCHO{} 11( 5, 7)-10( 5, 6) & 233594.50 & -3.133 & 141.714 & 23 & 56$^m$ & CDMS \\ % \NHtCHO{} 11( 5, 6)-10( 5, 5) & 233594.50 & -3.133 & 141.714 & 23 & 56$^m$ & CDMS \\ % \NHtCHO{} 11( 4, 8)-10( 4, 7) & 233734.72 & -3.093 & 114.932 & 23 & 91 & CDMS \\ \NHtCHO{} 11( 4, 7)-10( 4, 6) & 233745.61 & -3.093 & 114.933 & 23 & 132 & CDMS \\ \NHtCHO{} 11( 3, 9)-10( 3, 8) & 233896.57 & -3.064 & 94.110 & 23 & 91 & CDMS \\ \NHtCHO{} 11( 3, 8)-10( 3, 7) & 234315.49 & -3.062 & 94.158 & 23 & 96 & CDMS \\ \\ HNCO 10( 1,10)-9( 1, 9) & 218981.00 & -3.847 & 101.078 & 21 & 212 & CDMS \\ HNCO 10( 3, 8)-9( 3, 7) & 219656.76 & -3.920 & 432.959 & 21 & 41$^m$ & CDMS \\ % HNCO 10( 3, 7)-9( 3, 6) & 219656.77 & -3.920 & 432.959 & 21 & 41$^m$ & CDMS \\ % HNCO 10( 2, 9)-9( 2, 8) & 219733.85 & -3.871 & 228.284 & 21 & 101 & CDMS \\ % HNCO 10( 2, 8)-9( 2, 7) & 219737.19 & -3.871 & 228.285 & 21 & 121 & CDMS \\ % HNCO 10( 0,10)-9( 0, 9) & 219798.27 & -3.832 & 58.019 & 21 & 192$^b$ & CDMS \\ % HNCO 
10( 1, 9)-9( 1, 8) & 220584.75 & -3.837 & 101.502 & 21 & 198 & CDMS \\ \\ \HtCO{} 3( 0, 3)- 2( 0, 2) & 218222.19 & -3.550 & 20.956 & 7 & 266$^b$ & CDMS \\ \HtCO{} 3( 2, 2)- 2( 2, 1) & 218475.63 & -3.803 & 68.093 & 7 & 230 & CDMS \\ \\ \DtCO{} 4(0,4) - 3(0,3) & 231410.23 & -3.45914 & 27.88284 & 18 & 67$^b$ & CDMS \\ \DtCO{} 4(2,3) - 3(2,2) & 233650.44 & -3.57046 & 49.62595 & 18 & 104 & CDMS \\ % \\ H$_2\,^{13}$CO 3( 1, 2)- 2( 1, 1) & 219908.52 & -3.59109 & 32.93810 & 21 &120 & CDMS \\ \\ \CHtOH{} 5( 1) - 4( 2) E1 vt=0 & 216945.52 & -4.915 & 55.871 & 11 & 347$^b$ & JPL \\ \CHtOH{} 6( 1) - - 7( 2) - vt=1 & 217299.20 & -4.367 & 373.924 & 13 & 155 & JPL \\ \CHtOH{} 20( 1) -20( 0) E1 vt=0 & 217886.50 & -4.471 & 508.375 & 41 & 89 & JPL \\ \CHtOH{} 4( 2) - 3( 1) E1 vt=0 & 218440.06 & -4.329 & 45.459 & 9 & 263$^b$ & JPL \\ \CHtOH{} 8( 0) - 7( 1) E1 vt=0 & 220078.56 & -4.599 & 96.613 & 17 & 347$^b$ & JPL \\ \CHtOH{} 10(-5) -11(-4) E2 vt=0 & 220401.31 & -4.951 & 251.643 & 21 & 256 & JPL \\ \CHtOH{} 10( 2) - - 9( 3) - vt=0& 231281.11 & -4.736 & 165.347 & 21 & 240$^b$ & JPL \\ \CHtOH{} 10( 2) + - 9( 3) + vt=0& 232418.52 & -4.729 & 165.401 & 21 & 295$^b$ & JPL \\ \CHtOH{} 18( 3) + -17( 4) + vt=0& 232783.44 & -4.664 & 446.531 & 37 & 214 & JPL \\ \CHtOH{} 18( 3) - -17( 4) - vt=0& 233795.66 & -4.658 & 446.580 & 37 & 230 & JPL \\ \CHtOH{} 4( 2) - - 5( 1) - vt=0& 234683.37 & -4.734 & 60.923 & 9 & 285$^b$ & JPL \\ \CHtOH{} 5(-4) - 6(-3) E2 vt=0 & 234698.51 & -5.197 & 122.720 & 11 & 195$^b$ & JPL \\ \\ \CHtCHO{} 12( 4, 8)-11( 4, 7) E, vt=0 & 231484.37 & -3.409 & 108.289 & 50 & 29 & JPL \\ \CHtCHO{} 12( 4, 9)-11( 4, 8) E, vt=0 & 231506.29 & -3.409 & 108.251 & 50 & 111 & JPL \\ \CHtCHO{} 12( 3,10)-11( 3, 9) E, vt=0 & 231748.71 & -3.388 & 92.510 & 50 & 65 & JPL \\ \CHtCHO{} 12( 3, 9)-11( 3, 8) E, vt=0 & 231847.57 & -3.387 & 92.610 & 50 & 46 & JPL \\ \CHtCHO{} 12( 3, 9)-11( 3, 8) A, vt=0 & 231968.38 & -3.383 & 92.624 & 50 & 116 & JPL \\ \CHtCHO{} 12( 2,10)-11( 2, 9) E, vt=0 & 234795.45 & 
-3.352 & 81.864 & 50 & 54 & JPL \\ \CHtCHO{} 12( 2,10)-11( 2, 9) A, vt=0 & 234825.87 & -3.351 & 81.842 & 50 & 50 & JPL \\ \hline \end{tabular} \mbox{}\\ $a$: Integrated line intensities (see text for the definition) measured from the lower disk atmosphere. Except for \CHtOH{} which used the values at the emission peak position, they are the mean values averaging over a rectangular region (with a size of \arcsa{0}{17}$\times$\arcsa{0}{05} covering most of the emission) centered at the lower atmosphere. In this column, the line intensities commented with ``m" are the mean values obtained by averaging over 2 or more lines with similar $E_u$ and log $A_{ul}$ for better measurements. $b$: likely optically thick and thus ignored in the fitting of the rotation diagram and calculation of column density. The line intensities here are assumed to have an uncertainty of 40\%. \end{table} \begin{table} \small \centering \begin{mtable} \bf Column Densities and Abundances in the Lower Disk Atmosphere\\ \label{tab:colabun} \end{mtable} \begin{tabular}{llrr} \hline Species & Excitation Temperature & Column Density & Abundance$^\dagger$\\ & (K) & (\cms) & \\ \hline\hline \CHtOH{}$^a$ &$92^{+48}_{-37}$&$1.5^{+4.5}_{-0.8} \times 10^{18}$ cm$^{-2}$& $1.4^{+4.2}_{-0.7} \times 10^{-7}$ \\ \NHtCHO{}$^b$ &$221\pm132$& \scnum{($5.2\pm2.6$)}{15} & \scnum{($4.8\pm2.4$)}{-10}\\ HNCO$^b$ &$231\pm100$ & \scnum{($1.8\pm0.5$)}{16} & \scnum{($1.7\pm0.9$)}{-9} \\ \HtCO{}$^c$ &$60\pm20^e$ & $\gtrsim$\scnum{($1.4\pm0.7$)}{16} & $\gtrsim$ \scnum{($1.3\pm0.7$)}{-9} \\ \HtCO{}$^d$ & & \scnum{($4.5\pm2.3$)}{16} & \scnum{($4.2\pm2.1$)}{-9} \\ \HttCO{} &$60\pm20^e$ & \scnum{($9.0\pm4.5$)}{14} & \scnum{($8.3\pm4.2$)}{-11} \\ \DtCO{} &$92\pm30^f$ & $\gtrsim$\scnum{($2.4\pm1.2$)}{15} & $\gtrsim$ \scnum{($2.2\pm1.1$)}{-10} \\ \CHtCHO{} &$100\pm50^g$ & \scnum{($8.7\pm4.4$)}{15} & \scnum{($8.0\pm4.0$)}{-10} \\ \hline \end{tabular} \mbox{}\\ $a$: Mean temperature and column density derived from non-LTE LVG 
calculation and rotation diagram.\\ $b$: Temperature and column density derived from rotation diagram.\\ $c$: Column density derived from \HtCO{} line.\\ $d$: Column density derived from \HttCO{} line, assuming [$^{12}$C]/[$^{13}$C] ratio of 50.\\ $e$: Temperature assumed to be 60 K.\\ $f$: Mean temperature assumed to be the same as \CHtOH{}.\\ $g$: Temperature assumed to be 100 K.\\ $\dagger$: Abundance derived by dividing the column densities of the molecules by the H$_2$ column density in the disk atmosphere, which is $\sim$ \scnum{1.08}{25} \cms{} (see text).\\ \end{table} \begin{table} \small \centering \begin{mtable} \bf Binding Energy (BE) and Sublimation Temperature (T$_\textrm{\scriptsize sub}$) in Amorphous Solid Water Ice\\ \label{tab:BE} \end{mtable} \begin{tabular}{lcccc} \hline Species & Binding Energy & T$_\textrm{\scriptsize sub}$ & Outer Radius & References\\ & (K) & (K) & (au) & \\ \hline\hline \CHtCHO{} &2809-6038(4423) & 48-102(75) & 36 & Ferrero+2022\\ \HtCO{} &3071-6194(4632) & 52-104(78) & 48 & Ferrero+2020 \\ HNCO &2400-8400(4800) & 41-140(81) & 24 & Song \& Kastner 2016 \\ \CHtOH{} &3770-8618(6194) & 64-144(104) & 40 & Ferrero+2020 \\ HCOOH &5382-10559(7970) & 91-176(133) & 24 & Ferrero+2020 \\ \NHtCHO{} &5793-10960(8376) & 97-183(140) & 24 & Ferrero+2020 \\ NH$_3$ &4314-7549(5931) & 73-126(100) & -- & Ferrero+2020 \\ H$_2$O &3605-6111(4858) & 61-103(82) & 60$^a$ & Ferrero+2020 \\ \hline \end{tabular} \mbox{}\\ \mbox{}\\ $a$: The outer radius is given by that of HDO mapped at 60 au resolution \citep{Codella2018}.\\ The numbers in the parenthesis are the mean values, except for HNCO, for which it is a value for maximum sublimation. The sublimation temperature (T$_\textrm{\scriptsize sub}$) is calculated for an age of 10$^6$ yr, appropriate for a young protoplanetary disk. Adopting a shorter time of 10$^5$ yr, appropriate for a Class 0/I protostellar system, would increase the sublimation temperature by a few degrees. 
Note that the computed BE values, notably those of acetaldehyde and ammonia, may be somewhat at odds with published experimental ones, which depend on the structure of the ices as well as on the distribution of the species population on the ices. For this reason, we chose to adopt the BEs computed by the same authors, Ferrero et al. (with the exception of HNCO, which those authors did not compute), to make the comparison more reliable. \end{table} \setcounter{mfigure}{0} \renewcommand{\themfigure}{S\arabic{mfigure}} \renewenvironment{mfigure}[1][]{\refstepcounter{mfigure}\par\medskip \noindent \textbf{Extended Data Figure~\themfigure. #1} \rmfamily}{\medskip}
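The sublimation temperatures quoted in Table \ref{tab:BE} can be reproduced, to within a few kelvin, with a standard desorption-timescale estimate: setting the thermal desorption rate $\nu\,e^{-\mathrm{BE}/T}$ equal to $1/t$ gives $T_{\mathrm{sub}} \approx \mathrm{BE}/\ln(\nu t)$. The short Python sketch below applies this with an assumed characteristic frequency $\nu = 10^{13}\,\mathrm{s^{-1}}$ (a typical value, not stated in the text) and the $10^{6}$ yr timescale quoted in the table note; it is an illustration of the scaling, not the authors' actual computation.

```python
import math

# Sketch (assumed, not from the paper): estimate the sublimation temperature
# for a binding energy BE (in K) by equating the thermal desorption rate
# nu * exp(-BE / T) to the inverse of the relevant timescale t, which gives
# T_sub ~ BE / ln(nu * t).  nu ~ 1e13 s^-1 is an assumed typical frequency;
# t = 1e6 yr matches the age quoted in the table note.

YEAR_S = 3.156e7  # seconds per year

def t_sub(be_kelvin, nu=1e13, t_yr=1e6):
    """Sublimation temperature (K) for a binding energy be_kelvin (K)."""
    return be_kelvin / math.log(nu * t_yr * YEAR_S)

# Mean binding energies (K) from Table "BE"
for species, be in [("H2CO", 4632), ("CH3OH", 6194), ("NH2CHO", 8376)]:
    print(f"{species}: T_sub ~ {t_sub(be):.0f} K")
```

With these inputs the estimates land within a few kelvin of the tabulated mean values (e.g. $\sim$76 K vs. 78 K for \HtCO{}), and adopting $t = 10^{5}$ yr instead raises each $T_{\mathrm{sub}}$ by a few degrees, as stated in the table note.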
Title: $E_{\mathrm{iso}}$-$E_{\mathrm{p}}$ correlation of gamma ray bursts: calibration and cosmological applications
Abstract: Gamma-ray bursts (GRBs) are among the most explosive phenomena in the Universe and can be used to study its expansion. In this paper, we compile a long GRB sample for the $E_{\mathrm{iso}}$-$E_{\mathrm{p}}$ correlation from Swift and Fermi observations. The sample contains 221 long GRBs with redshifts from 0.03 to 8.20. From the analysis of data in different redshift intervals, we find no statistically significant evidence for redshift evolution of this correlation. We then calibrate the correlation in six sub-samples and use the calibrated correlation to constrain cosmological parameters. Employing a piece-wise approach, we study the redshift evolution of the dark energy equation of state (EOS), and find that the EOS tends to oscillate at low redshift but is consistent with $-1$ at high redshift. This hints at a dynamical dark energy at the $2\sigma$ confidence level at low redshift.
https://export.arxiv.org/pdf/2208.09272
\label{firstpage} \pagerange{\pageref{firstpage}--\pageref{lastpage}} \begin{keywords} dark energy -- cosmological parameters -- gamma-ray burst: general \end{keywords} \section{Introduction}\label{sec:intro} The study of type Ia supernovae (SNe Ia) revealed the accelerating expansion of the universe \citep{1998AJ....116.1009R,1999ApJ...517..565P}, which shed light on a mysterious component: dark energy. Several independent observations have since confirmed the accelerated expansion of the universe, including the cosmic microwave background \citep[CMB;][]{2003ApJS..148..175S} and the baryonic acoustic oscillations \citep[BAO;][]{2005ApJ...633..560E}. The $\Lambda$CDM model successfully accounts for most cosmological observations \citep{2020A&A...641A...6P,2021MNRAS.504.2535I,2022MNRAS.513.5686C}. However, other dark energy models cannot be ruled out at the precision of current measurements. Currently, the highest redshift of an SN Ia is 2.26 \citep{2018ApJ...859..101S}, leaving a gap in redshift coverage between SNe Ia and the CMB. Fortunately, high-redshift observations (for example, GRBs and quasars) provide an opportunity to explore this otherwise blank stretch of cosmic history. \par GRBs are the most violent phenomena in the Universe, with isotropic equivalent energies up to $10^{54}$ erg \citep[for reviews, see][]{2009ARA&A..47..567G, 2015PhR...561....1K}. GRBs are usually classified into two types based on their duration ($T_{90}$): long GRBs ($T_{90} > 2\,\mathrm{s}$) and short GRBs ($T_{90} < 2\,\mathrm{s}$) \citep{1993ApJ...413L.101K}. The former are thought to result from the core collapse of massive stars $\left(\geq 25 M_{\odot}\right)$, while the progenitors of the latter are thought to be mergers of compact object binaries \citep{2009ARA&A..47..567G,2017ApJ...848L..13A}. The redshift range they cover is very wide, up to $z\sim$ 9.40, making them attractive cosmological probes \citep{2015NewAR..67....1W}. 
Hence, there have been many studies demonstrating that GRBs are useful for extending the Hubble diagram to high redshifts \citep{2001ApJ...562L..55F,2004ApJ...612L.101D,2004ApJ...616..331G,2005ApJ...633..611L,2007ApJ...660...16S,2012A&A...543A..91W}. To use GRBs as ``standard candles'', researchers have found several correlations between various characteristics of the prompt emission and the afterglow emission \citep{2002A&A...390...81A,2004ApJ...616..331G,2005ApJ...633..603X,2006MNRAS.369L..37L}. Attempts to use GRBs to constrain cosmological parameters have also obtained encouraging results \citep{2009MNRAS.400..775C,2014ApJ...783..126P,2019MNRAS.486L..46A,2019ApJS..245....1T,2021MNRAS.501.1520C,2021ApJ...914L..40D,2021MNRAS.507..730H,2021JCAP...09..042K,2021ApJ...920..135X,2022arXiv220408710C,2022MNRAS.510.2928C,2022MNRAS.512..439C,2022MNRAS.514.1828D}. Some reviews on luminosity correlations and cosmological applications of GRBs can be found in \cite{2015NewAR..67....1W,2017NewAR..77...23D,2018AdAst2018E...1D,2018PASP..130e1001D}. In this paper, we adopt the $E_{\mathrm{iso}}-E_{\mathrm{p}}$ correlation to explore the high-redshift universe using a long GRB sample from the Swift and Fermi catalogs. The $E_{\mathrm{iso}}-E_{\mathrm{p}}$ correlation, in which the isotropic energy $E_{\mathrm{iso}}$ is correlated with the rest-frame peak energy $E_{\mathrm{p}}$, was discovered by \cite{2002A&A...390...81A} with a small sample of GRBs. Subsequently, \cite{2016A&A...585A..68W} updated the sample to 42 long GRBs and calibrated the $E_{\mathrm{iso}}-E_{\mathrm{p}}$ correlation with SNe Ia. The combination of GRBs and SNe Ia gave $\Omega_{m}=0.271\pm0.019$ and $H_0=70.1\pm0.2$ km s$^{-1}$ Mpc$^{-1}$ for the flat $\Lambda$CDM model. 
Recently, by analysing the correlation parameters and six different cosmological models simultaneously, \cite{2020MNRAS.499..391K} found that the $E_{\mathrm{iso}}-E_{\mathrm{p}}$ correlation is independent of cosmological models, but that GRB data alone cannot yet constrain cosmological parameters to a great extent. To constrain cosmological model parameters more tightly, the $E_{\mathrm{iso}}-E_{\mathrm{p}}$ correlation can also be combined with the Combo-relation; the results are consistent with the flat $\Lambda$CDM model, dynamical dark energy models and non-spatially-flat models \citep{2021JCAP...09..042K}. Observational Hubble Data (OHD) measurements also help to constrain the cosmological parameters. \cite{2021MNRAS.503.4581L} calibrated the $E_{\mathrm{iso}}-E_{\mathrm{p}}$ correlation with OHD and generated mock catalogs with machine learning techniques. They tested the $\Lambda$CDM model and the Chevallier-Polarski-Linder parametrization, finding possible extensions of the $\Lambda$CDM model toward a weakly evolving dark energy. Combining GRBs with other probes, a joint analysis of $H(z)$+BAO+quasar+HII starburst galaxy+GRB data gives $\Omega_{m} = 0.313\pm0.013$ in a model-independent way \citep{2021MNRAS.501.1520C}. Their results support the consistency of the $\Lambda$CDM model, but cannot rule out mild dark energy dynamics. \cite{2022arXiv220700440L} used OHD and BAO data to calibrate the $E_{\mathrm{iso}}-E_{\mathrm{p}}$ correlation. Based on the assumption that the GRB data obey a special redshift distribution, \cite{2022ApJ...935....7L} constrained $\Omega_m$ to be $0.308^{+0.066}_{-0.230}$ and $0.307^{+0.057}_{-0.290}$ with an improved $E_{\mathrm{iso}}-E_{\mathrm{p}}$ correlation in the $\Lambda$CDM and $w$CDM models, respectively. \par In order to calibrate the correlations of GRBs, many methods have been tried. 
Approximating the Hubble function with a Bézier parametric curve is one model-independent calibration method. \cite{2019MNRAS.486L..46A} fitted the $E_{\mathrm{iso}}-E_{\mathrm{p}}$ correlation with 193 long GRBs and found that the $\Lambda$CDM model is statistically superior to the $w$CDM model. The slope parameter of the Combo-relation was calibrated from small sub-samples of GRBs lying at almost the same redshift, while the intercept parameter was determined from SNe Ia located near the GRBs \citep{2021ApJ...908..181M}. Another method is to use a Gaussian process with OHD to calibrate GRB correlations \citep{2022ApJ...924...97W}. Considering the size of the GRB sample used in this paper, we study the $E_{\mathrm {iso}}-E_{\mathrm{p}}$ correlation by dividing it into several sub-samples. \par Growing GRB observations have encouraged the use of the $E_{\mathrm {iso}}-E_{\mathrm{p}}$ correlation in cosmology. In this study, we use 221 GRBs to test the $E_{\mathrm {iso}}-E_{\mathrm{p}}$ correlation. The full sample is based on \cite{2016A&A...585A..68W}, with 29 GRBs from \cite{2019MNRAS.486L..46A} and 49 GRBs from the Fermi catalog added. The spectral parameters are also taken from the Fermi catalog. After converting the observed values to the cosmological rest frame, the bolometric fluence is calculated with the $k$-correction. For the $E_{\mathrm {iso }}-E_{\mathrm{p}}$ correlation, since the extrinsic scatter $\sigma_{\mathrm{ext}}$ should also depend on hidden variables, we assign $\sigma_{\mathrm{ext}}$ to $E_{\mathrm{iso}}$. This is consistent with the method proposed by \cite{2005physics..11182D}, and more details are discussed in Sec. \ref{Sec3}. The possible redshift evolution is studied by dividing the full sample into five redshift bins. The results show that the correlation does not evolve with redshift within the 2$\sigma$ confidence level. 
To avoid the circularity problem, six groups within small redshift ranges are selected from the full GRB sample. Because each redshift range is small, the $E_{\mathrm {iso }}-E_{\mathrm{p}}$ correlation in each sub-sample is almost model-independent, so the correlation can be calibrated. \par This paper is structured as follows. In Sec. \ref{Sec2}, we introduce the GRB sample and perform the $k$-correction. In Sec. \ref{Sec3}, we fit the coefficients of the $E_{\mathrm {iso }}-E_{\mathrm{p}}$ correlation and test whether the correlation evolves with redshift. To avoid the circularity problem, we calibrate the correlation in sub-samples. In Sec. \ref{Sec4}, we use the calibrated correlation to constrain cosmological parameters. In Sec. \ref{Sec5}, we study the dark energy EOS in a model-independent way. We summarize and discuss the results in Sec. \ref{Sec6}. \section{GRB sample}\label{Sec2} The Swift satellite has provided a large number of GRBs with redshifts, and its three instruments give scientists the ability to scrutinize GRBs. However, its Burst Alert Telescope (BAT) can only detect energies up to 150 keV \citep{Gehrels2004}, which is lower than the average peak energy of GRBs \citep{2006ApJS..166..298K}. Hence, for many GRBs observed by Swift, the fluence and $E_{\mathrm{p,obs}}$ cannot be directly determined. The Fermi satellite, in contrast, has two main instruments, the Large Area Telescope (LAT) and the Gamma-ray Burst Monitor (GBM), and covers the energy range from 10 keV to 300 GeV. Its most significant advantage is that Fermi is able to determine all the spectral parameters of the Band function. Consequently, we compile a sample of long GRBs that appear in both the Swift and Fermi catalogs. 
\par Based on the data set constructed by \cite{2016A&A...585A..68W}, we collect all GRBs with information on fluence, peak energy, and power-law index from the Fermi catalogue, including observations from August 2008 to June 2021 \citep{2014ApJS..211...12G,2014ApJS..211...13V,2016ApJS..223...28N,2020ApJ...893...46V}. The redshifts are obtained from the Swift database\footnote{\url{https://swift.gsfc.nasa.gov/archive/grb_table.html/}}. Note that some of the GRBs listed in the Fermi catalogue have no values for the spectral parameters or $E_{\mathrm{p,obs}}$. For these, we download the corresponding time-tagged event data from the Fermi public data archive\footnote{\url{https://heasarc.gsfc.nasa.gov/FTP/fermi/data/gbm/daily/}}. Data reduction and analysis follow the procedures discussed by \cite{2011ApJ...730..141Z,2016ApJ...816...72Z}. We select up to three sodium iodide (NaI) detectors and one bismuth germanium oxide (BGO) detector, based on the method proposed by \cite{2021ApJ...923L..30Z}, for all GRBs to perform the spectral fitting. Meanwhile, to ensure a sufficient detector response, the viewing angle from the GRB location should be less than 60 degrees for the NaI detectors, and the closest BGO detector is chosen. For each detector, the source spectrum and background spectrum in a specific time interval are generated by summing the total and background photons in each energy channel, respectively, and the response matrices are generated using the GBM Response Generator\footnote{\url{https://fermi.gsfc.nasa.gov/ssc/data/analysis/rmfit/gbmrsp-2.0.10.tar.bz2}}. We then use McSpecFit, discussed by \cite{2018NatAs...2...69Z}, to perform the spectral fitting; it packages the nested sampler MultiNest and uses PGSTAT as the fit statistic to constrain parameters. The Band function is employed to fit the spectra. 
\par The GRB prompt emission spectrum can be described by an empirical spectral function, a broken power law known as the Band function \citep{1993ApJ...413..281B} \begin{equation} \Phi(E)= \begin{cases}A E^{\alpha} \mathrm{e}^{-(2+\alpha) E / E_{\mathrm{p}, \mathrm{obs}}} & \mathrm { if \ } E \leq \frac{\alpha-\beta}{2+\alpha} E_{\mathrm{p}, \mathrm{obs}} \\ B E^{\beta} & \mathrm { otherwise, }\end{cases} \end{equation} where $E_{\mathrm{p}, \mathrm{obs}}$ is the observed peak energy, and $\alpha$ and $\beta$ are the low- and high-energy indices, respectively. With $E_{\mathrm{p}, \mathrm{obs}}$ and redshift $z$, we obtain the peak energy in the rest frame as $E_{\mathrm{p}}=E_{\mathrm{p}, \mathrm{obs}} \times(1+z)$. \par The bolometric fluence is calculated in the energy band of $1-10^{4} \mathrm{keV}$ via the $k$-correction \citep{2001AJ....121.2879B} \begin{equation} S_{\mathrm {bolo }}=S \times \frac{\int_{1 /(1+z)}^{10^{4} /(1+z)} E \Phi(E) \mathrm{d} E}{\int_{E_{\min }}^{E_{\max }} E \Phi(E) \mathrm{d} E}, \end{equation} where $S$ is the observed fluence and ($E_{\min }$, $E_{\max }$) are the detection thresholds. \par $E_{\mathrm{iso }}$ is the isotropic equivalent energy in the gamma-ray band, calculated as \begin{equation} E_{\mathrm{iso }}=4 \pi d_{\mathrm{L}}^{2} S_{\mathrm{bolo }}(1+z)^{-1}, \end{equation} where $d_{\mathrm{L}}$ is the luminosity distance. The factor $(1+z)^{-1}$ transforms the duration to the source rest frame. The luminosity distance depends on the cosmological model. Here we use the standard cosmological parameters $\Omega_{\mathrm{m}}=0.315$, $\Omega_{\Lambda}=0.685$ and $H_{0}$ = 67.4 $\mathrm{km~s}^{-1} \mathrm{Mpc}^{-1}$ \citep{2020A&A...641A...6P}, where $\Omega_{\mathrm{m}}$ is the non-relativistic matter density parameter, $\Omega_{\Lambda}$ is the cosmological constant density parameter and $H_{0}$ is the Hubble constant. 
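As a concrete illustration of the Band function and the $k$-correction above, the following Python sketch evaluates both numerically. The spectral parameters ($\alpha=-1.0$, $\beta=-2.3$, $E_{\mathrm{p,obs}}=250$ keV), the fluence value, and the detector band are illustrative assumptions, not values taken from the GRB sample.

```python
import numpy as np
from scipy.integrate import quad

# Sketch of the k-correction: convert the observed fluence S in the detector
# band (Emin, Emax) into a bolometric fluence over the rest-frame 1-1e4 keV
# band.  The Band-function parameters below are illustrative assumptions,
# not from any particular GRB in the sample.

def band(E, alpha=-1.0, beta=-2.3, Ep_obs=250.0):
    """Band-function photon spectrum Phi(E) (arbitrary normalization), E in keV."""
    Eb = (alpha - beta) / (2.0 + alpha) * Ep_obs   # break energy
    if E <= Eb:
        return E**alpha * np.exp(-(2.0 + alpha) * E / Ep_obs)
    # continuity at the break fixes the high-energy normalization
    return Eb**(alpha - beta) * np.exp(-(alpha - beta)) * E**beta

def k_correction(z, Emin, Emax, **pars):
    """Ratio of rest-frame bolometric to in-band energy fluence."""
    num, _ = quad(lambda E: E * band(E, **pars), 1.0 / (1 + z), 1e4 / (1 + z))
    den, _ = quad(lambda E: E * band(E, **pars), Emin, Emax)
    return num / den

# Hypothetical observed fluence of 1.2e-5 erg cm^-2 in a 10-1000 keV band
S_bolo = 1.2e-5 * k_correction(z=1.0, Emin=10.0, Emax=1000.0)  # erg cm^-2
```

For typical long-GRB spectral indices the correction factor is of order unity, so $S_{\mathrm{bolo}}$ stays within a factor of a few of the in-band fluence.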
Thus the luminosity distance $d_{\mathrm{L}}$ is expressed as \begin{equation} \begin{aligned} d_{\mathrm{L}}(z)=& \frac{c(1+z)}{H_{0}}\int_{0}^{z} \frac{\mathrm{d} z^{\prime}}{\sqrt{\Omega_{m}(1+z^{\prime})^{3}+\Omega_{\mathrm{\Lambda}}}}. \end{aligned} \end{equation} \par The full sample contains 221 GRBs and covers the redshift range from 0.0335 to 8.20; it is listed in Table \ref{GRBsample}. During the calculation, we only take into account the propagation of errors from the bolometric fluence $S_{\mathrm{bolo}}$; the uncertainties from other parameters are attributed to $\sigma_{\mathrm{ext}}$. \section{The \EE{} correlation}\label{Sec3} \subsection{Fitting the $E_{\mathrm{iso }}-E_{\mathrm{p}}$ correlation} The $E_{\mathrm{iso}}-E_{\mathrm{p}}$ correlation is expressed in logarithmic form as \begin{equation} \log \frac{E_{\mathrm{iso }}}{\mathrm{ erg }}=a+b \log \frac{E_{\mathrm{p}}}{\mathrm{keV}}, \end{equation} where the coefficient $a$ is the intercept and $b$ is the slope. \par For the fitting procedure, we use the Markov Chain Monte Carlo (MCMC) technique with the emcee\footnote{\url{https://emcee.readthedocs.io/en/stable/}} package to analyse our data \citep{2013PASP..125..306F}; the best-fit parameter values are given by the posterior probability density functions. For the fitting of the linear correlation \citep{2005physics..11182D}, the likelihood function is \begin{equation} \begin{aligned} \mathcal{L}\left( \Omega_{\mathrm{m}}, a, b, \sigma_{\mathrm{ext}}\right) \propto \prod_{i} & \frac{1}{\sqrt{\sigma_{\mathrm{ext}}^{2}+\sigma_{y_{i}}^{2}+b^{2} \sigma_{x_{i}}^{2}}} \\ & \times \exp \left[-\frac{\left(y_{i}-a-b x_{i}\right)^{2}}{2\left(\sigma_{\mathrm{ext}}^{2}+\sigma_{y_{i}}^{2}+b^{2} \sigma_{x_{i}}^{2}\right)}\right] , \end{aligned} \end{equation} where $x_{i}$ and $y_{i}$ are the observational data for the $i$th GRB. 
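The likelihood above can be maximized directly; a minimal self-contained sketch on synthetic data is shown below. The paper samples the posterior with emcee, whereas for brevity this sketch uses a simple maximum-likelihood fit, and all data values here are simulated, not from the GRB sample.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of fitting the log-linear E_iso-E_p correlation with the D'Agostini
# likelihood, which adds an extrinsic scatter sigma_ext in quadrature to the
# measurement errors.  The synthetic data below stand in for the GRB sample.

rng = np.random.default_rng(42)
n = 200
x = rng.uniform(1.5, 3.5, n)          # log(E_p / keV)
sx = np.full(n, 0.05)                 # uncertainties on x
sy = np.full(n, 0.10)                 # uncertainties on y
a_true, b_true, sig_true = 49.2, 1.5, 0.4
y = a_true + b_true * x + rng.normal(0, sig_true, n) + rng.normal(0, sy)

def neg_log_like(theta):
    """Negative log of the D'Agostini likelihood (log sigma_ext keeps it > 0)."""
    a, b, log_sig = theta
    var = np.exp(2 * log_sig) + sy**2 + b**2 * sx**2
    return 0.5 * np.sum(np.log(var) + (y - a - b * x)**2 / var)

res = minimize(neg_log_like, x0=[49.0, 1.0, np.log(0.3)])
a_fit, b_fit, sig_fit = res.x[0], res.x[1], np.exp(res.x[2])
```

The recovered $(a, b, \sigma_{\mathrm{ext}})$ land close to the injected values; in the paper the same likelihood is sampled with emcee to obtain full posteriors rather than point estimates.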
Following the description of \cite{2005physics..11182D}, the variable $y$ should depend not only on $x$ but also on some hidden variables ($\Omega_{\mathrm{m}}$ here). Thus, we write the $E_{\mathrm {iso }}-E_{\mathrm{p}}$ correlation with $y=\log {E_{\mathrm{iso}}}/{\mathrm{erg}}$ and $x=\log {E_{\mathrm{p}}}/{\mathrm{keV}}$. The best-fit values with 1$\sigma$ uncertainties are $a=49.24\pm 0.16$, $b=1.46\pm0.06$ and $\sigma_{\mathrm{ext}}=0.39\pm0.02$. Fig. \ref{F_EisoEp} illustrates the $E_{\mathrm{iso }}-E_{\mathrm{p}}$ correlation for the GRB sample. \subsection{Testing the evolution of the $E_{\mathrm{iso }}-E_{\mathrm{p}}$ correlation with redshift} Whether the $E_{\mathrm {iso }}-E_{\mathrm{p}}$ correlation evolves with redshift is important. Here we divide the full GRB sample into five redshift bins: [0-0.55], [0.55-1.18], [1.18-1.74], [1.74-2.55] and [2.55-8.20]. The numbers of GRBs in the sub-samples are 20, 54, 44, 48 and 55, respectively. The best-fit values and 1$\sigma$ uncertainties of the $E_{\mathrm {iso }}-E_{\mathrm{p}}$ correlation in each sub-sample are shown in Table \ref{T1}, and Fig. \ref{Fabevo} shows the evolution of the coefficients over the different redshift intervals. The values are in agreement with each other within the 2$\sigma$ uncertainties, and $\sigma_{\mathrm{ext}}$ shows no evolutionary trend across the bins. \par From Fig. \ref{Fabevo}, the best-fit values of $a$ rise and then fall with increasing redshift, while $b$ shows the opposite trend. Although there seems to be an evolutionary trend, the values are consistent with each other at the 2$\sigma$ level. Therefore, the $E_{\mathrm{iso }}-E_{\mathrm{p}}$ correlation is consistent over all redshift ranges and shows no significant evolution with redshift, in line with \cite{2011MNRAS.415.3423W} and \cite{2021A&A...651L...8D}. 
If the correlation did evolve with redshift, the method described in \cite{2022MNRAS.514.1828D} could be used to fit the evolutionary function. \par \subsection{Calibrating the \EE{} correlation}\label{calibrating} During the calculation of $E_{\mathrm{iso }}$, the cosmological parameters are fixed at the benchmark values \citep{2020A&A...641A...6P}. This may make the results depend on the choice of cosmological model \citep{2015NewAR..67....1W}. To avoid this circularity problem, we select sub-samples of GRBs lying in small redshift ranges. Within each sub-sample the luminosity distances $d_{\mathrm{L}}$ are approximately the same, which removes the dependence on cosmological models. Our selection criteria are:\par (1) The number of GRBs in each sub-sample should be large enough. A larger sample size increases the reliability of the results and helps avoid selection bias. \par (2) The extrinsic scatter of the fitting results should be small, because it indicates the quality of the fit. We therefore prefer sub-samples with relatively small $\sigma_{\mathrm{ext}}$, which also means that the $E_{\mathrm {iso }}-E_{\mathrm{p}}$ correlation in these groups is better standardized. \par (3) An even distribution makes for a better fit. The points of each sub-sample are distributed on the $E_{\mathrm {iso }}-E_{\mathrm{p}}$ plane, and we prefer points that are spread evenly over the plane rather than concentrated in a small numerical range.\par The six selected sub-samples are listed in Table \ref{T2}, and all have good fitting results. From Fig. \ref{Fdingbiao}, we can see that the data of the fourth sub-sample are distributed evenly on the $E_{\mathrm {iso }}-E_{\mathrm{p}}$ plane. In addition, this sub-sample is larger than the others except for the second, and its extrinsic scatter is small. 
Therefore, we choose the fitting results of the fourth sub-sample: $a=49.14\pm0.45$, $b=1.51\pm0.17$ and $\sigma_{\mathrm{ext}}=0.24\pm0.08$. \cite{2019ApJ...873...39W} used mock gravitational wave events associated with GRBs to obtain tight constraints on the parameters, $a=52.93\pm0.04$, $b=1.41\pm0.07$ and $\sigma_{\mathrm{ext}}=0.39\pm0.03$. \subsection{Comparison with the methodology adopted for the Dainotti fundamental plane relation} The Dainotti fundamental plane is the correlation among the peak prompt luminosity $L_{\mathrm{peak}}$, the X-ray luminosity of the plateau $L_X$, and the time at the end of the plateau emission $T^{*}_X(s)$ \citep{2016ApJ...825L..20D,2017A&A...600A..98D,2020ApJ...904...97D,2021PASJ...73..970D}, which is usually expressed as \begin{equation} \log L_{X}=C_{0}+a \log T_{X}^{*}+b \log L_{\mathrm{peak}}. \end{equation} This correlation is tight and can be used to constrain cosmological parameters \citep{2022PASJ..tmp...83D,2022MNRAS.514.1828D}. In studies of the Dainotti fundamental plane, the selection criteria for the long GRB sample are even more demanding \citep{2022MNRAS.tmp.2047C,2022ApJ...924...97W}, so the sample size is small. The $E_{\mathrm{iso}}-E_{\mathrm{p}}$ correlation focuses on the prompt emission, while the Dainotti correlation involves the characteristics of the afterglow emission. \cite{2022MNRAS.514.1828D} showed that GRBs can serve as cosmological distance indicators by analysing the 3D Dainotti correlation based on the optical and X-ray samples. The determination of $\Omega_m$ from the optical sample is as efficacious as that from the X-ray one, making the optical plateau usable for cosmological applications. \par Selection bias and redshift evolution in the GRB data may skew the analysis. In fitting the Dainotti correlation, those works use the Efron--Petrosian method to remove the evolution and recover the intrinsic relationships \citep{2021Galax...9...95D,2022MNRAS.514.1828D}. 
In this paper, we use the binning method to search for evolution of the $E_{\mathrm{iso}}-E_{\mathrm{p}}$ correlation with redshift. The consistency across the five redshift bins shows no significant evolution of the correlation. \par \section{Constraining cosmological models}\label{Sec4} \subsection{Cosmological models} To extract the information about the high-redshift universe carried by the GRB data, we consider the $\Lambda$CDM and $w$CDM models. For the $\Lambda$CDM model, the Hubble parameter is \begin{equation} H(z)=H_{0} \sqrt{\Omega_{m}(1+z)^{3}+\Omega_{k}(1+z)^{2}+\Omega_{\mathrm{\Lambda}}}. \end{equation} Given the constraint $\Omega_m + \Omega_k + \Omega_{\Lambda} = 1$, the free parameters to be constrained in the $\Lambda$CDM model are $\Omega_{m}$, $\Omega_{\Lambda}$ and $H_{0}$. \par In the $w$CDM model, the Hubble parameter is \begin{equation} H(z)=H_{0} \sqrt{\Omega_{m}(1+z)^{3}+\Omega_{k}(1+z)^{2}+\Omega_{\mathrm{DE}}(1+z)^{3\left(1+w\right)}}, \end{equation} where $\Omega_{\mathrm{DE}}$ is the dark energy density parameter and $w$ is the dark energy EOS parameter. In this parametrization, $w$ is a constant that is not fixed to $-1$. For the $w$CDM model, the free parameters are $\Omega_{m}$, $\Omega_{\mathrm{DE}}$, $w$ and $H_0$. \par \subsection{Constraining cosmological models} The distance modulus of a GRB is calculated from $\mu=25+5 \log \left(d_{\mathrm{L}}/ \mathrm{Mpc}\right)$, so the distance modulus is \begin{equation}\label{distance moduli} \mu_{\mathrm{GRB}}=25+\frac{5}{2}\left[a+b \log E_{\mathrm{p}}-\log \frac{4 \pi S_{\mathrm{bolo}}}{(1+z)}\right]. \end{equation} The propagated uncertainty of $\log E_{\mathrm{iso}}$ is \begin{equation} \sigma_{\log E_{\mathrm{iso }}}^{2}=\sigma_{a}^{2}+\left(\sigma_{b} \log \frac{E_{\mathrm{p}}}{\mathrm{keV}}\right)^{2}+\left(\frac{b}{\ln 10} \frac{\sigma_{E_{\mathrm{p}}}}{E_{\mathrm{p}}}\right)^{2}+\sigma_{\mathrm{ext }}^{2}. 
\end{equation} The propagated uncertainty of the distance modulus is then \begin{equation} \sigma_{\mu}=\left[\left(\frac{5}{2} \sigma_{\log E_{\mathrm{iso }}}\right)^{2}+\left(\frac{5}{2 \ln 10} \frac{\sigma_{S_{\mathrm{bolo }}}}{S_{\mathrm {bolo }}}\right)^{2}\right]^{1 / 2}. \end{equation} These GRBs probe the high-redshift universe, and their distance moduli can be used to constrain cosmological models. Here we use the $\chi^{2}$ method to constrain the cosmological models mentioned above, with \begin{equation} \chi_{\mathrm{GRB}}^{2}=\sum_{i=1}^{N} \frac{\left[\mu_{\mathrm{GRB}}\left(z_{i}\right)-\mu\left(z_{i}\right)\right]^{2}}{\sigma^{2}_\mu (z_{i})}, \end{equation} where $N$ is the size of the GRB sample and $\mu_{\mathrm{GRB}}$ is the distance modulus calculated with Eq. (\ref{distance moduli}). For the MCMC analysis, the priors on the parameters are: $\Omega_{m} \in$ [0,1], $H_0 \in$ [50,80], $\Omega_{\Lambda} \in$ [0,2] and $w \in$ [-5,0.33]. The GRB sample constrains cosmological parameters effectively. To obtain tighter limits, we also combine the GRB sample with the Pantheon SNe Ia sample \citep{2018ApJ...859..101S} to constrain cosmological models. \par For the non-flat $\Lambda$CDM model, the Hubble constant $H_0$ is first fixed to 67.4 km s$^{-1}$ Mpc$^{-1}$ \citep{2020A&A...641A...6P} and then to 73.2 km s$^{-1}$ Mpc$^{-1}$ \citep{2021ApJ...908L...6R}. The best-fit results with 1$\sigma$ uncertainties are $\Omega_{m}=0.35^{+0.09}_{-0.08}$ and $\Omega_{\Lambda}=0.66^{+0.30}_{-0.36}$ for $H_0=67.4$ km s$^{-1}$ Mpc$^{-1}$, and $\Omega_{m}=0.26\pm0.07$ and $\Omega_{\Lambda}=0.79^{+0.24}_{-0.33}$ for $H_0=73.2$ km s$^{-1}$ Mpc$^{-1}$. The fitting results of the GRB sample are shown in Fig. \ref{FGRBnonflatLCDM}. 
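The $\chi^{2}$ comparison described above can be sketched numerically as follows. This is a minimal flat-$\Lambda$CDM illustration on mock distance moduli; the function names and the mock data are our own, not the paper's code:

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458  # speed of light [km/s]

def lum_dist_mpc(z, Om=0.315, H0=67.4):
    """Luminosity distance [Mpc] in a flat LCDM model."""
    integ, _ = quad(lambda zp: 1.0 / np.sqrt(Om * (1 + zp)**3 + 1.0 - Om), 0.0, z)
    return (1.0 + z) * C_KM_S / H0 * integ

def mu_model(z, Om=0.315, H0=67.4):
    """Model distance modulus mu = 25 + 5 log10(d_L / Mpc)."""
    return 25.0 + 5.0 * np.log10(lum_dist_mpc(z, Om, H0))

def chi2_grb(mu_obs, sigma_mu, z, Om=0.315, H0=67.4):
    """chi^2 of observed distance moduli against the model prediction."""
    mu_th = np.array([mu_model(zi, Om, H0) for zi in z])
    return np.sum(((mu_obs - mu_th) / sigma_mu) ** 2)

# Mock GRB sample drawn from the fiducial model with Gaussian noise.
rng = np.random.default_rng(1)
z = rng.uniform(0.1, 8.2, 50)
sigma_mu = np.full(50, 0.6)
mu_obs = np.array([mu_model(zi) for zi in z]) + rng.normal(0, 0.6, 50)
chi2 = chi2_grb(mu_obs, sigma_mu, z)
```

Minimizing such a $\chi^{2}$ over the cosmological parameters (or sampling it with MCMC) yields the constraints quoted in the text.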
The GRB sample is combined with the Pantheon sample to obtain tighter limits, yielding $\Omega_{m}=0.34\pm0.04$, $\Omega_{\Lambda}$=$0.79\pm0.06$ and $H_0=70.17\pm0.29$ km s$^{-1}$ Mpc$^{-1}$. For the flat $\Lambda$CDM model, the constraints obtained by combining the GRB data with the SNe Ia data are better than those from the GRB data alone. The best-fit results are $\Omega_{m}=0.29\pm0.01$ and $H_0=69.91\pm0.21$ km s$^{-1}$ Mpc$^{-1}$ for the joint data. The results of the joint data are shown in Fig. \ref{FGRBSN_LCDM}. In addition, the value of $\Omega_{m}$ is consistent with the constraints from SNe Ia \citep{2018ApJ...859..101S} and the CMB \citep{2020A&A...641A...6P} within the 1$\sigma$ range. \cite{2022MNRAS.510.2928C} fitted the parameters of the $E_{\mathrm{iso}}-E_{\mathrm{p}}$ correlation and $\Omega_m$ simultaneously, obtaining $\Omega_m>0.247$ in the flat $\Lambda$CDM model, and $\Omega_m>0.287$ and $\Omega_k=0.694^{+0.626}_{-0.848}$ in the non-flat $\Lambda$CDM model. These lower limits on the matter density parameter are consistent with the currently accelerating cosmological expansion. The three-parameter fundamental plane relation of \cite{2022MNRAS.512..439C} gives $\Omega_m>0.411$ in the flat $\Lambda$CDM model and $\Omega_m>0.491$ in the non-flat $\Lambda$CDM model. After combining with OHD, BAO and GRBs, $\Omega_m=0.300^{+0.016}_{-0.018}$ and $\Omega_m=0.293\pm0.023$ were found in the flat and non-flat $\Lambda$CDM models, respectively. \par For the non-flat $w$CDM model, the joint GRB and SNe Ia sample constrains the cosmological parameters to $\Omega_{m}=0.32^{+0.04}_{-0.05}$, $\Omega_{\mathrm{DE}}=0.55^{+0.23}_{-0.16}$, $w=-1.39^{+0.37}_{-0.63}$ and $H_0=70.32^{+0.39}_{-0.36}$ km s$^{-1}$ Mpc$^{-1}$. For the flat $w$CDM model, the results are $\Omega_{m}=0.35^{+0.03}_{-0.04}$, $w=-1.20^{+0.13}_{-0.14}$ and $H_0=70.30\pm0.34$ km s$^{-1}$ Mpc$^{-1}$. The fitting results of the joint data are shown in Fig. \ref{FwCDM}. 
\cite{2022MNRAS.512..439C} provided $\Omega_m=0.282^{+0.023}_{-0.021}$, $w=-0.731^{+0.150}_{-0.096}$ and $H_0= 65.54^{+2.26}_{-2.58}$ km s$^{-1}$ Mpc$^{-1}$ in the flat $w$CDM model. The constraints from OHD and BAO tend towards a low value of $H_0$. \par \section{The evolution of the dark energy EOS}\label{Sec5} To study the evolution of the dark energy EOS, we consider a flat universe with an evolving dark energy EOS. According to the Planck observations \citep{2020A&A...641A...6P}, the assumption of flatness is reasonable. The EOS of dark energy is $w = p/\rho$, where $p$ is the pressure and $\rho$ is the energy density. The EOS $w$ is a key characterization of dark energy, and it is crucial to determine whether and how it evolves with time. To avoid imposing priors on the nature of dark energy, a non-parametric approach is used here \citep{2003PhRvL..90c1301H,2005PhRvD..71b3506H}. \par From the Friedmann equation, the expansion rate in a flat universe is expressed as \begin{equation}\label{Hubble expansion rate} \frac{H^{2}(z)}{H_{0}^{2}}=\Omega_{m}(1+z)^{3}+\Omega_{\mathrm{DE}}f(z) , \end{equation} where $f(z) = \exp \left(3 \int \frac{d z^{\prime}}{1+z^{\prime}}\left[1+w\left(z^{\prime}\right)\right]\right)$, $\Omega_{\mathrm{DE}} = 1-\Omega_{m}$ is the present dark energy density parameter and $w$ describes the dark energy EOS. The function $f(z)$ encodes the evolution of the dark energy EOS with redshift. If we split the function into several redshift bins and assume that $w(z)$ is constant in each bin, then $f(z)$ becomes a piece-wise function, described as \begin{equation} f\left(z_{n-1}<z \leq z_{n}\right)=(1+z)^{3\left(1+w_{n}\right)} \prod_{i=0}^{n-1}\left(1+z_{i}\right)^{3\left(w_{i}-w_{i+1}\right)}. 
\end{equation} Here $w_i$ is the EOS $w(z)$ in the $i$th redshift bin, $n$ is the index of the redshift bin, and the zeroth boundary is defined as $z_0 = 0$. We further assume that the EOS is fixed to $w = -1$ at $z > 8.2$, which does not affect the fitting results \citep{2014PhRvD..89b3004W}. \par In this parameterization, no assumptions are made about the nature of dark energy, since an independent parameter is introduced in each redshift bin. When choosing the number and range of the redshift bins, the constraining power of the whole sample should be taken into account, and the redshift interval of each bin is determined accordingly. First, the number of data points in each bin should be large enough to place a tight constraint on the EOS $w_i$; since the number of GRBs decreases with increasing redshift, we have to choose wider intervals at high redshift to avoid poor constraints. Second, the width of each redshift interval should be reasonable: too wide an interval may conflict with the approximation that $w(z)$ is constant within each bin. Finally, we aim for the numbers of data points in the bins to be as equal as possible. After testing many binning schemes, we finally choose 11 bins for this analysis, with upper boundaries $z_i$ = 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.3, 8.2. We have to adopt a large redshift interval for the last bin owing to the lack of data at high redshifts. \par The MCMC method mentioned above is used to fit $w_i$ in each redshift bin. Because the function $f(z)$ depends on the summation of $w_i$ over redshift, the EOS parameters $w_i$ are correlated. To remove the correlation, the covariance matrix of the $w_i$ is calculated as \begin{equation} \mathbf{C}=\langle \mathbf{w} \textbf{w}^{\rm T}\rangle-\langle\mathbf{w}\rangle\langle\mathbf{w}^{\rm T}\rangle, \end{equation} where $\mathbf{w}$ is a vector with components $w_i$. 
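The piece-wise $f(z)$ defined above can be implemented as a short function. This is a sketch under our own naming conventions; it uses the 11-bin scheme adopted here, with $z_0=0$ and $f(z)$ continuous across bin boundaries by construction:

```python
import numpy as np

# Upper bin boundaries z_1..z_11 adopted in this analysis (z_0 = 0).
Z_EDGES = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.3, 8.2])

def f_piecewise(z, w_bins, z_edges=Z_EDGES):
    """Piece-wise dark-energy evolution f(z) for a constant EOS in each bin:
    f(z) = (1+z)^{3(1+w_n)} * prod_i (1+z_i)^{3(w_i - w_{i+1})} for z in bin n."""
    k = min(int(np.searchsorted(z_edges, z)), len(w_bins) - 1)  # 0-based bin index
    f = (1.0 + z) ** (3.0 * (1.0 + w_bins[k]))
    for j in range(1, k + 1):  # product over the lower bin boundaries
        f *= (1.0 + z_edges[j - 1]) ** (3.0 * (w_bins[j - 1] - w_bins[j]))
    return f

# Sanity checks: w_i = -1 everywhere reduces to a cosmological constant (f = 1),
# and a varying w gives an f that is continuous across bin boundaries.
w_lcdm = -np.ones(11)
w_var = np.linspace(-1.2, -0.8, 11)
```

The boundary factors in the product are exactly what enforce continuity of $f(z)$ when $w$ jumps from one bin to the next.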
The covariance matrix $\mathbf{C}$ is not diagonal, but by multiplying by a transformation matrix we obtain a set of decorrelated parameters, \begin{equation} \centering \tilde{\mathbf{w}}=\mathbf{Tw}, \end{equation} in which $\tilde{\mathbf{w}}$ contains the uncorrelated dark energy parameters $\tilde{{w_i}}$. The transformation can be computed following \cite{2005PhRvD..71b3506H}. First, the Fisher matrix is \begin{equation} \mathbf{F} \equiv \mathbf{C}^{-1} \equiv \mathbf{O}^{\mathrm{T}} \mathbf{\Lambda} \mathbf{O} , \end{equation} where $\mathbf{\Lambda}$ is diagonal. Then the transformation matrix $\mathbf{T}$ is defined as \begin{equation} \mathbf{T}=\mathbf{O}^{\mathrm{T}} \mathbf{\Lambda}^{\frac{1}{2}} \mathbf{O} . \end{equation} The transformation $\mathbf{T}$ is normalized so that its rows, which represent the weights for the $w_i$, sum to unity. Another advantage of this transformation is that the weights are positive almost everywhere. \par The method described above is applied to a joint data set of the latest observations, including the GRB sample, the CMB from Planck, SNe Ia and the OHD. For the SNe Ia data, the Pantheon sample from \cite{2018ApJ...859..101S} is used. The distance priors are taken from \cite{2019JCAP...02..028C}: the CMB shift parameter $R=1.7502\pm 0.0046$, $l_A=301.471^{+0.089}_{-0.090}$ and $\Omega_bh^2=0.02236\pm0.00015$. The distance priors are defined as \begin{equation} R\left(z_{*}\right) \equiv \frac{\left(1+z_{*}\right) D_{\mathrm{A}}\left(z_{*}\right) \sqrt{\Omega_{m} H_{0}^{2}}}{c}, \end{equation} \begin{equation} l_{\mathrm{A}}=\left(1+z_{*}\right) \frac{\pi D_{\mathrm{A}}\left(z_{*}\right)}{r_{s}\left(z_{*}\right)}, \end{equation} in which $z_*$ is the redshift at the photon decoupling epoch, $D_{\mathrm{A}}$ is the angular diameter distance, and $r_s$ is the comoving sound horizon. For the OHD, the data from \cite{2018ApJ...856....3Y} are adopted. 
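The decorrelation steps above can be sketched numerically. Note that NumPy's `eigh` returns $\mathbf{F}=\mathbf{O}\mathbf{\Lambda}\mathbf{O}^{\mathrm{T}}$ with eigenvectors as columns, so the square root built below is the same matrix as $\mathbf{T}=\mathbf{O}^{\mathrm{T}}\mathbf{\Lambda}^{1/2}\mathbf{O}$ in the text up to the eigenvector convention; the example covariance is invented for illustration:

```python
import numpy as np

def decorrelation_matrix(C):
    """Row-normalised square root of the Fisher matrix F = C^{-1}
    (Huterer & Cooray 2005 prescription): w_tilde = T w has a
    diagonal covariance T C T^T."""
    F = np.linalg.inv(C)                        # Fisher matrix
    lam, O = np.linalg.eigh(F)                  # F = O diag(lam) O^T
    T = O @ np.diag(np.sqrt(lam)) @ O.T         # matrix square root of F
    return T / T.sum(axis=1, keepdims=True)     # rows sum to unity

# Example: two strongly correlated EOS parameters.
C = np.array([[0.04, 0.03],
              [0.03, 0.09]])
T = decorrelation_matrix(C)
C_new = T @ C @ T.T                             # covariance of the transformed w
```

Because $\mathbf{T}$ is (up to a diagonal row normalization) the square root of $\mathbf{F}$, the transformed covariance $\mathbf{T}\mathbf{C}\mathbf{T}^{\mathrm{T}}$ is exactly diagonal.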
\par During the fitting process, $\Omega_{m}$ and $H_0$ are taken as free parameters, so there are 13 cosmological parameters to be constrained. The final results are $\Omega_{\mathrm{m}}=0.26\pm0.01$ and $H_0=70.64^{+0.39}_{-0.38}$ km s$^{-1}$ Mpc$^{-1}$, and the uncorrelated dark energy EOS parameters $w_i$ in the different redshift bins are shown in Fig. \ref{Fwevo}. \cite{2022PASJ..tmp...83D} obtained $\Omega_m=0.321\pm0.003$ and $H_0=69.644\pm0.116$ km s$^{-1}$ Mpc$^{-1}$ for a BAO+SNe Ia+GRB sample. We also find that the joint sample improves the precision of the result. The effect of the number of data points in each bin is taken into account when determining the redshift intervals. \par Combined with the cosmological models mentioned above, these results are used to check whether the $\Lambda$CDM model remains the best candidate. The dark energy EOS equals $-1$ in the $\Lambda$CDM model but is a function of redshift in dynamical dark energy models. The results show a tendency to deviate from the $\Lambda$CDM model, but within 2$\sigma$ uncertainties the EOS values $w_i$ are still consistent with $-1$, except in the second bin. The dark energy EOS evolves with redshift and crosses the $-1$ boundary, similar to previous investigations \citep{2014PhRvD..89b3004W,2017NatAs...1..627Z}. \par It is worth noting that the dark energy EOS appears to oscillate among the first four bins. Moreover, it crosses the $-1$ boundary with increasing redshift, which is not permitted in the $\Lambda$CDM model. This may be a clue to dynamical dark energy models, although most observations support the $\Lambda$CDM model. The data seem to prefer an upward tendency at redshifts $0.2 < z < 0.5$, consistent with \cite{2014PhRvD..89b3004W}, and the best-fit values of the dark energy EOS parameters $w_i$ are all greater than $-1$ at $0.3 < z < 0.8$, in agreement with \cite{2009MNRAS.398L..78Q}. 
For the last bin, the error is very small even though the redshift interval extends from 1.3 to 8.2. This may be because this bin contains more OHD data than the others. To narrow the range of the last bin, more high-redshift observational data are needed. \par We also note that the errors would be smaller if we fixed the cosmological parameters $\Omega_m$ and $H_0$, but doing so would impose priors on the cosmological model and significantly affect the final fitting results. Considering the Hubble tension between the value of $H_0$ from Cepheids ($H_0=73.2\pm1.3$ km s$^{-1}$ Mpc$^{-1}$; \citealt{2021ApJ...908L...6R}) and that from the CMB ($H_0=67.4\pm0.5$ km s$^{-1}$ Mpc$^{-1}$; \citealt{2020A&A...641A...6P}), it is difficult to settle on a specific value of $H_0$, and we finally decide to leave it free. Our fitting results for $H_0$ are consistent with \cite{2021ApJ...908L...6R} within the 2$\sigma$ range, perhaps because the main data in our analysis come from local observations. \par \section{Conclusions and Discussion}\label{Sec6} In this paper, a sample of 221 long GRBs is compiled for the $E_{\mathrm{iso}}-E_{\mathrm{p}}$ correlation. Fitting this correlation to the sample, we obtain $a=49.24\pm0.16$, $b=1.46\pm0.06$ and $\sigma_{\mathrm{ext}}=0.39\pm0.02$. The possible redshift evolution of the $E_{\mathrm{iso}}-E_{\mathrm{p}}$ correlation is then studied in five redshift bins; the correlation shows no significant evolution with redshift within 2$\sigma$ uncertainties. The correlation is calibrated with GRBs in a narrow redshift range, which is model-independent. The calibrated results are $a=49.14\pm0.45$, $b=1.51 \pm0.17$ and $\sigma_{\mathrm{ext}}=0.24\pm0.08$. These parameters are consistent with those fitted from the whole GRB sample at the $1\sigma$ confidence level, which further confirms that the correlation does not evolve with redshift. 
\par With the calibrated correlation, the sample is used to constrain cosmological parameters. Here, we consider the $\Lambda$CDM and $w$CDM cosmological models. To obtain better constraints, the sample is combined with the SNe Ia data. The combination of GRB and SNe Ia data constrains the cosmological parameters better, and the fitting results support the $\Lambda$CDM model. \par To study the physical properties of dark energy, we use a non-parametric approach. Eleven redshift bins are used in this work thanks to the abundance of data. Our results give a hint of dynamical dark energy models: the evolution of the dark energy EOS $w_i$ shows a tendency to deviate from $-1$, oscillating at low redshift while remaining consistent with the $\Lambda$CDM model at high redshift at the 2$\sigma$ confidence level. Compared with previous works, the GRB data fill the gap between SNe Ia and the CMB. More than half of the GRBs lie at redshift $z > 1.5$, helping to constrain the EOS more tightly. The deviation from $-1$ in some bins is a weak hint of dynamical dark energy models. \par In the future, as more GRBs are detected, new correlations may be found and the current correlation can be improved. We look forward to the observations by the French--Chinese Space-based multi-band astronomical Variable Objects Monitor (SVOM) \citep{2016arXiv161006892W}, the Einstein Probe (EP) \citep{2015arXiv150607735Y} and the Transient High-Energy Sky and Early Universe Surveyor (THESEUS) \citep{2018AdSpR..62..191A}, which will help us explore the high-redshift universe with GRBs, including the cosmic expansion, reionization and metal enrichment history \citep{Wang2012}. \section*{Acknowledgements} This work was supported by the National Natural Science Foundation of China (grant No. U1831207), the China Manned Space Project (CMS-CSST-2021-A12) and the Jiangsu Funding Program for Excellent Postdoctoral Talent (20220ZB59). 
\section*{Data Availability} The data that support the findings of this study are available in Table \ref{GRBsample}. \bibliographystyle{mnras} \bibliography{MNRAS} % \newpage \onecolumn {\small \begin{longtable}{@{} c @{ } c @{ } c @{ } c @{ } c } \hline GRB & Redshift & $E_{\rm p}$(keV) & $E_{\rm iso}^{\mathrm{(a)}}$($10^{52}$ erg) & Refs. $^{\mathrm{(b)}}$\\ \hline \endhead 060218 & 0.034 & 4.90 $\pm$ 0.30 & 0.0054 $\pm$ 0.0003 & (1) \\ \hline 180728 & 0.117 & 87.04 $\pm$ 1.95 & 0.28 $\pm$ 0.001 & (5) \\ \hline 060614 & 0.125 & 55.00 $\pm$ 45.00 & 0.22 $\pm$ 0.09 & (1) \\ \hline 030329 & 0.17 & 100.00 $\pm$ 23.00 & 1.48 $\pm$ 0.26 & (1) \\ \hline 020903 & 0.25 & 3.37 $\pm$ 1.79 & 0.0024 $\pm$ 0.0006 & (1) \\ \hline 130427A & 0.34 & 1112.20 $\pm$ 6.70 & 95.10 $\pm$ 30.10 & (4) \\ \hline 011121 & 0.36 & 1060.00 $\pm$ 275.00 & 7.97 $\pm$ 2.19 & (1) \\ \hline 020819 & 0.41 & 70.00 $\pm$ 21.00 & 0.69 $\pm$ 0.18 & (1) \\ \hline 101213 & 0.414 & 440.00 $\pm$ 180.00 & 2.72 $\pm$ 0.53 & (3) \\ \hline 190114 & 0.424 & 1477.49 $\pm$ 17.31 & 36.87 $\pm$ 0.02 & (5) \\ \hline 990712 & 0.434 & 93.00 $\pm$ 15.00 & 0.69 $\pm$ 0.13 & (1) \\ \hline 010921 & 0.45 & 129.00 $\pm$ 26.00 & 0.97 $\pm$ 0.09 & (1) \\ \hline 130831A & 0.48 & 81.35 $\pm$ 5.92 & 0.80 $\pm$ 0.30 & (4) \\ \hline 091127 & 0.49 & 51.00 $\pm$ 5.00 & 1.65 $\pm$ 0.18 & (3) \\ \hline 081007 & 0.53 & 61.00 $\pm$ 15.00 & 0.18 $\pm$ 0.02 & (2) \\ \hline 090618 & 0.54 & 250.41 $\pm$ 4.47 & 28.59 $\pm$ 0.52 & (3) \\ \hline 100621 & 0.54 & 146.49 $\pm$ 23.90 & 4.60 $\pm$ 2.00 & (4) \\ \hline 060729 & 0.543 & 77.00 $\pm$ 38.00 & 0.42 $\pm$ 0.09 & (1) \\ \hline 090424 & 0.544 & 249.97 $\pm$ 3.32 & 4.07 $\pm$ 0.35 & (2) \\ \hline 101219 & 0.55 & 108.00 $\pm$ 12.00 & 0.63 $\pm$ 0.06 & (3) \\ \hline 170607 & 0.557 & 174.06 $\pm$ 9.03 & 1.10 $\pm$ 0.03 & (5) \\ \hline 130215 & 0.6 & 247.54 $\pm$ 100.61 & 4.70 $\pm$ 2.40 & (4) \\ \hline 050525 & 0.606 & 129.00 $\pm$ 6.50 & 2.29 $\pm$ 0.49 & (1) \\ \hline 110106 & 0.618 & 194.00 
$\pm$ 56.00 & 0.73 $\pm$ 0.07 & (3) \\ \hline 131231 & 0.642 & 292.42 $\pm$ 4.03 & 23.76 $\pm$ 0.33 & (5) \\ \hline 161129 & 0.645 & 240.84 $\pm$ 42.61 & 1.84 $\pm$ 0.25 & (5) \\ \hline 050416 & 0.653 & 22.00 $\pm$ 4.50 & 0.11 $\pm$ 0.018 & (1) \\ \hline 180720 & 0.654 & 1052.01 $\pm$ 15.43 & 56.57 $\pm$ 1.05 & (5) \\ \hline 111209 & 0.68 & 519.87 $\pm$ 88.88 & 87.70 $\pm$ 36.10 & (4) \\ \hline 080916 & 0.689 & 208.00 $\pm$ 11.00 & 0.98 $\pm$ 0.09 & (2) \\ \hline 020405 & 0.69 & 354.00 $\pm$ 10.00 & 10.64 $\pm$ 0.89 & (1) \\ \hline 970228 & 0.695 & 195.00 $\pm$ 64.00 & 1.65 $\pm$ 0.12 & (1) \\ \hline 991208 & 0.706 & 313.00 $\pm$ 31.00 & 22.97 $\pm$ 1.86 & (1) \\ \hline 041006 & 0.716 & 98.00 $\pm$ 20.00 & 3.11 $\pm$ 0.89 & (1) \\ \hline 140512 & 0.725 & 1191.99 $\pm$ 58.24 & 9.21 $\pm$ 4.64 & (5) \\ \hline 090328 & 0.736 & 1157.91 $\pm$ 55.55 & 14.18 $\pm$ 0.99 & (2) \\ \hline 160804 & 0.736 & 123.93 $\pm$ 4.18 & 2.43 $\pm$ 0.23 & (5) \\ \hline 150821 & 0.755 & 493.55 $\pm$ 17.11 & 16.92 $\pm$ 0.83 & (5) \\ \hline 030528 & 0.78 & 57.00 $\pm$ 9.00 & 2.22 $\pm$ 0.27 & (1) \\ \hline 051022 & 0.8 & 754.00 $\pm$ 258.00 & 56.04 $\pm$ 5.34 & (1) \\ \hline 100816 & 0.805 & 246.72 $\pm$ 8.48 & 7.30 $\pm$ 0.02 & (3) \\ \hline 150514 & 0.807 & 116.74 $\pm$ 5.91 & 1.22 $\pm$ 0.08 & (5) \\ \hline 151027 & 0.81 & 364.54 $\pm$ 24.47 & 5.16 $\pm$ 0.37 & (5) \\ \hline 110715 & 0.82 & 218.40 $\pm$ 20.93 & 5.10 $\pm$ 1.60 & (4) \\ \hline 970508 & 0.835 & 145.00 $\pm$ 43.00 & 0.61 $\pm$ 0.13 & (1) \\ \hline 990705 & 0.842 & 459.00$\pm$ 139.00 & 18.70 $\pm$ 2.67 & (1) \\ \hline 000210 & 0.846 & 753.00 $\pm$ 26.00 & 15.41 $\pm$ 1.69 & (1) \\ \hline 040924 & 0.859 & 102.00 $\pm$ 35.00 & 0.98 $\pm$ 0.09 & (1) \\ \hline 170903 & 0.886 & 179.29 $\pm$ 13.39 & 0.87 $\pm$ 0.91 & (5) \\ \hline 140506 & 0.889 & 371.53 $\pm$ 25.30 & 1.10 $\pm$ 0.35 & (5) \\ \hline 091003 & 0.897 & 810.00 $\pm$ 157.00 & 10.70 $\pm$ 1.78 & (3) \\ \hline 141225 & 0.915 & 341.55 $\pm$ 19.28 & 2.24 $\pm$ 0.31 & (5) 
\\ \hline 080319B & 0.937 & 1261.00 $\pm$ 65.00 & 117.87 $\pm$ 8.93 & (1) \\ \hline 071010 & 0.947 & 88.00 $\pm$ 21.00 & 2.32 $\pm$ 0.40 & (1) \\ \hline 970828 & 0.958 & 586.00 $\pm$ 117.00 & 30.38 $\pm$ 3.57 & (1) \\ \hline 980703 & 0.966 & 503.00 $\pm$ 64.00 & 7.42 $\pm$ 0.71 & (1) \\ \hline 091018 & 0.971 & 55.00 $\pm$ 20.00 & 0.63 $\pm$ 0.35 & (3) \\ \hline 021211 & 1.01 & 127.00 $\pm$ 52.00 & 1.16 $\pm$ 0.13 & (1) \\ \hline 991216 & 1.02 & 648.00 $\pm$ 134.00 & 69.79 $\pm$ 7.16 & (1) \\ \hline 140508 & 1.027 & 521.76 $\pm$ 12.12 & 24.87 $\pm$ 0.87 & (5) \\ \hline 080411 & 1.03 & 524.00 $\pm$ 70.00 & 16.19 $\pm$ 0.98 & (1) \\ \hline 000911 & 1.06 & 1856.00 $\pm$ 371.00 & 69.86 $\pm$ 14.33 & (1) \\ \hline 091208 & 1.063 & 246.00 $\pm$ 25.00 & 2.06 $\pm$ 0.18 & (3) \\ \hline 091024 & 1.092 & 396.22 $\pm$ 25.31 & 18.38 $\pm$ 1.99 & (3) \\ \hline 980613 & 1.096 & 194.00 $\pm$ 89.00 & 0.61 $\pm$ 0.09 & (1) \\ \hline 080413B & 1.1 & 163.00 $\pm$ 47.50 & 1.61 $\pm$ 0.27 & (2) \\ \hline 201216 & 1.1 & 735.40 $\pm$ 10.24 & 64.59 $\pm$ 0.02 & (5) \\ \hline 981226 & 1.11 & 87.00 $\pm$ 40.00 & 0.81 $\pm$ 0.18 & (1) \\ \hline 180620 & 1.118 & 371.90 $\pm$ 49.79 & 8.55 $\pm$ 0.38 & (5) \\ \hline 000418 & 1.12 & 284.00 $\pm$ 21.00 & 9.51 $\pm$ 1.79 & (1) \\ \hline 210610 & 1.13 & 665.18 $\pm$ 9.30 & 49.76 $\pm$ 0.01 & (5) \\ \hline 061126 & 1.159 & 1337.00 $\pm$ 410.00 & 31.42 $\pm$ 3.59 & (1) \\ \hline 130701A & 1.16 & 191.80 $\pm$ 8.62 & 1.70 $\pm$ 0.50 & (4) \\ \hline 190324 & 1.172 & 285.96 $\pm$ 7.67 & 9.31 $\pm$ 0.07 & (5) \\ \hline 140213 & 1.208 & 190.18 $\pm$ 4.10 & 12.56 $\pm$ 0.32 & (5) \\ \hline 140213A & 1.21 & 176.61 $\pm$ 4.42 & 10.10 $\pm$ 2.60 & (4) \\ \hline 140907 & 1.21 & 308.19 $\pm$ 10.31 & 2.71 $\pm$ 0.78 & (5) \\ \hline 130907A & 1.24 & 881.77 $\pm$ 24.62 & 314.00 $\pm$ 79.70 & (4) \\ \hline 020813 & 1.25 & 590.00 $\pm$ 151.00 & 68.35 $\pm$ 17.09 & (1) \\ \hline 200829 & 1.25 & 716.24 $\pm$ 4.63 & 124.40 $\pm$ 0.04 & (5) \\ \hline 061007 & 1.262 & 
890.00 $\pm$ 124.00 & 89.96 $\pm$ 8.99 & (1) \\ \hline 131030A & 1.29 & 405.86 $\pm$ 22.93 & 4.80 $\pm$ 1.50 & (4) \\ \hline 130420A & 1.3 & 128.63 $\pm$ 6.89 & 7.90 $\pm$ 2.20 & (4) \\ \hline 990506 & 1.3 & 677.00 $\pm$ 156.00 & 98.13 $\pm$ 9.90 & (1) \\ \hline 061121 & 1.314 & 1289.00 $\pm$ 153.00 & 23.50 $\pm$ 2.70 & (1) \\ \hline 141220 & 1.32 & 415.34 $\pm$ 10.07 & 2.72 $\pm$ 0.56 & (5) \\ \hline 140801 & 1.32 & 276.98 $\pm$ 2.64 & 6.06 $\pm$ 0.19 & (5) \\ \hline 071117 & 1.331 & 112.00 $\pm$ 56.00 & 5.86 $\pm$ 2.70 & (1) \\ \hline 070521 & 1.35 & 522.00 $\pm$ 55.00 & 10.81 $\pm$ 1.80 & (3) \\ \hline 100414 & 1.368 & 1295.00 $\pm$ 120.00 & 54.99 $\pm$ 5.41 & (3) \\ \hline 120711 & 1.405 & 2340.00 $\pm$ 230.00 & 180.41 $\pm$ 18.04 & (3) \\ \hline 180205 & 1.409 & 84.80 $\pm$ 17.02 & 0.89 $\pm$ 0.17 & (5) \\ \hline 100814 & 1.44 & 312.32 $\pm$ 48.80 & 7.70 $\pm$ 3.10 & (4) \\ \hline 180314 & 1.445 & 251.73 $\pm$ 4.49 & 10.23 $\pm$ 0.68 & (5) \\ \hline 141221 & 1.452 & 225.87 $\pm$ 28.73 & 2.65 $\pm$ 0.44 & (5) \\ \hline 110213 & 1.46 & 223.86 $\pm$ 70.11 & 8.80 $\pm$ 4.10 & (4) \\ \hline 150301 & 1.517 & 460.62 $\pm$ 28.66 & 3.43 $\pm$ 0.59 & (5) \\ \hline 161117 & 1.549 & 205.62 $\pm$ 3.05 & 23.63 $\pm$ 0.93 & (5) \\ \hline 110503 & 1.61 & 572.25 $\pm$ 50.95 & 18.90 $\pm$ 5.50 & (4) \\ \hline 131105 & 1.686 & 721.80 $\pm$ 18.31 & 20.58 $\pm$ 1.71 & (5) \\ \hline 080928 & 1.692 & 95.00 $\pm$ 23.00 & 3.99 $\pm$ 0.91 & (3) \\ \hline 100906 & 1.73 & 387.23 $\pm$ 244.07 & 27.70 $\pm$ 11.80 & (4) \\ \hline 120119 & 1.73 & 417.38 $\pm$ 54.56 & 36.00 $\pm$ 11.70 & (4) \\ \hline 150314 & 1.758 & 957.48 $\pm$ 7.90 & 89.16 $\pm$ 2.15 & (5) \\ \hline 110422 & 1.77 & 421.04 $\pm$ 13.85 & 75.80 $\pm$ 16.70 & (4) \\ \hline 131011 & 1.874 & 625.49 $\pm$ 40.88 & 14.74 $\pm$ 1.59 & (3) \\ \hline 140623 & 1.92 & 953.53 $\pm$ 138.25 & 3.74 $\pm$ 0.45 & (5) \\ \hline 060814 & 1.923 & 751.00 $\pm$ 246.00 & 56.71 $\pm$ 5.27 & (1) \\ \hline 210619 & 1.937 & 799.33 $\pm$ 5.07 & 423.63 
$\pm$ 0.12 & (5) \\ \hline 170113 & 1.968 & 333.92 $\pm$ 58.79 & 2.45 $\pm$ 0.68 & (5) \\ \hline 170705 & 2.01 & 294.61 $\pm$ 7.64 & 18.31 $\pm$ 0.77 & (5) \\ \hline 161017 & 2.013 & 718.76 $\pm$ 40.77 & 7.49 $\pm$ 1.55 & (5) \\ \hline 140620 & 2.04 & 211.21 $\pm$ 10.72 & 9.72 $\pm$ 0.56 & (5) \\ \hline 081203 & 2.05 & 1541.00 $\pm$ 756.00 & 31.85 $\pm$ 11.83 & (3) \\ \hline 150403 & 2.06 & 1311.94 $\pm$ 21.06 & 99.28 $\pm$ 2.42 & (5) \\ \hline 080207 & 2.086 & 333.00 $\pm$ 222.00 & 16.39 $\pm$ 1.82 & (3) \\ \hline 061222 & 2.088 & 874.00 $\pm$ 150.00 & 30.04 $\pm$ 6.37 & (1) \\ \hline 130610 & 2.09 & 911.83 $\pm$ 132.65 & 9.00 $\pm$ 3.00 & (4) \\ \hline 120624 & 2.197 & 1791.00 $\pm$ 134.00 & 282.00 $\pm$ 1.20 & (3) \\ \hline 121128 & 2.2 & 243.20 $\pm$ 12.80 & 10.40 $\pm$ 3.50 & (4) \\ \hline 080804 & 2.204 & 810.00 $\pm$ 45.00 & 12.03 $\pm$ 0.55 & (3) \\ \hline 081221 & 2.26 & 284.00 $\pm$ 14.00 & 31.92 $\pm$ 1.82 & (3) \\ \hline 130505 & 2.27 & 2063.37 $\pm$ 101.37 & 57.70 $\pm$ 17.90 & (4) \\ \hline 141028 & 2.33 & 976.02 $\pm$ 17.98 & 76.16 $\pm$ 1.97 & (5) \\ \hline 131108 & 2.4 & 1247.43 $\pm$ 16.30 & 63.94 $\pm$ 2.57 & (5) \\ \hline 171222 & 2.409 & 59.80 $\pm$ 4.14 & 3.41 $\pm$ 1.83 & (5) \\ \hline 190719 & 2.469 & 295.42 $\pm$ 23.25 & 12.17 $\pm$ 0.10 & (5) \\ \hline 120716 & 2.486 & 397.00 $\pm$ 40.00 & 30.15 $\pm$ 0.27 & (3) \\ \hline 120811 & 2.671 & 198.00 $\pm$ 19.00 & 6.41 $\pm$ 0.64 & (3) \\ \hline 140206 & 2.73 & 452.10 $\pm$ 5.83 & 29.69 $\pm$ 3.05 & (5) \\ \hline 161014 & 2.823 & 646.18 $\pm$ 14.42 & 9.62 $\pm$ 1.05 & (5) \\ \hline 181020 & 2.938 & 1544.13 $\pm$ 28.97 & 80.25 $\pm$ 0.07 & (5) \\ \hline 060607 & 3.075 & 478.00 $\pm$ 118.00 & 11.93 $\pm$ 2.75 & (1) \\ \hline 140423 & 3.26 & 494.90 $\pm$ 15.89 & 69.38 $\pm$ 2.61 & (5) \\ \hline 140808 & 3.29 & 503.85 $\pm$ 6.46 & 8.99 $\pm$ 0.63 & (5) \\ \hline 110818 & 3.36 & 1117.47 $\pm$ 241.11 & 25.60 $\pm$ 8.50 & (4) \\ \hline 060306 & 3.5 & 315.00 $\pm$ 135.00 & 7.63 $\pm$ 1.01 & (3) \\ 
\hline 151111 & 3.5 & 533.91 $\pm$ 50.33 & 5.43 $\pm$ 1.84 & (5) \\ \hline 170405 & 3.51 & 1204.23 $\pm$ 9.29 & 255.20 $\pm$ 5.02 & (5) \\ \hline 100704 & 3.6 & 809.60 $\pm$ 135.70 & 19.06 $\pm$ 1.91 & (4) \\ \hline 130408 & 3.76 & 1003.94 $\pm$ 137.98 & 28.90 $\pm$ 9.60 & (4) \\ \hline 060210 & 3.91 & 574.00 $\pm$ 187.00 & 32.23 $\pm$ 1.84 & (3) \\ \hline 120712 & 4.174 & 641.00 $\pm$ 130.00 & 21.19 $\pm$ 1.84 & (3) \\ \hline 130606 & 5.91 & 2031.54 $\pm$ 483.70 & 28.60 $\pm$ 11.60 & (4) \\ \hline 050318 & 1.44 & 115.00 $\pm$ 25.00 & 2.34 $\pm$ 0.17 & (1) \\ \hline 010222 & 1.48 & 766.00 $\pm$ 30.00 & 85.57 $\pm$ 8.79 & (1) \\ \hline 120724 & 1.48 & 68.45 $\pm$ 18.60 & 0.88 $\pm$ 0.12 & (4) \\ \hline 060418 & 1.489 & 572.00 $\pm$ 143.00 & 13.63 $\pm$ 2.96 & (1) \\ \hline 030328 & 1.52 & 328.00 $\pm$ 55.00 & 39.42 $\pm$ 3.69 & (1) \\ \hline 070125 & 1.547 & 934.00 $\pm$ 148.00 & 84.62 $\pm$ 8.27 & (1) \\ \hline 090102 & 1.547 & 1149.00 $\pm$ 166.00 & 22.14 $\pm$ 4.01 & (2) \\ \hline 040912 & 1.563 & 44.00 $\pm$ 33.00 & 1.36 $\pm$ 0.39 & (1) \\ \hline 990123 & 1.6 & 1724.00 $\pm$ 466.00 & 242.38 $\pm$ 39.27 & (1) \\ \hline 071003 & 1.604 & 2077.00 $\pm$ 286.00 & 36.18 $\pm$ 4.01 & (2) \\ \hline 090418 & 1.608 & 1567.00 $\pm$ 384.00 & 16.06 $\pm$ 4.03 & (2) \\ \hline 990510 & 1.619 & 423.00 $\pm$ 42.00 & 17.99 $\pm$ 2.77 & (1) \\ \hline 080605 & 1.64 & 650.00 $\pm$ 55.00 & 24.08 $\pm$ 1.98 & (2) \\ \hline 131105A & 1.686 & 547.68 $\pm$ 83.53 & 35.39 $\pm$ 1.19 & (4) \\ \hline 091020 & 1.71 & 507.23 $\pm$ 68.20 & 8.40 $\pm$ 1.08 & (3) \\ \hline 120326 & 1.798 & 129.97 $\pm$ 10.27 & 3.68 $\pm$ 0.17 & (4) \\ \hline 080514B & 1.8 & 627.00 $\pm$ 65.00 & 17.01 $\pm$ 4.03 & (2) \\ \hline 090902B & 1.822 & 2187.00 $\pm$ 31.00 & 277.68 $\pm$ 8.66 & (4) \\ \hline 020127 & 1.9 & 290.00 $\pm$ 100.00 & 3.51 $\pm$ 0.09 & (1) \\ \hline 080319C & 1.95 & 906.00 $\pm$ 272.00 & 14.53 $\pm$ 2.91 & (1) \\ \hline 081008 & 1.968 & 261.00 $\pm$ 52.00 & 9.45 $\pm$ 0.89 & (2) \\ \hline 030226 
& 1.98 & 289.00 $\pm$ 66.00 & 12.94 $\pm$ 0.99 & (1) \\ \hline 130612 & 2.006 & 186.07 $\pm$ 31.56 & 0.81 $\pm$ 0.10 & (4) \\ \hline 000926 & 2.07 & 310.00 $\pm$ 20.00 & 27.98 $\pm$ 6.46 & (1) \\ \hline 090926 & 2.106 & 974.00 $\pm$ 50.00 & 167.34 $\pm$ 8.54 & (3) \\ \hline 011211 & 2.14 & 186.00 $\pm$ 24.00 & 5.71 $\pm$ 0.68 & (1) \\ \hline 071020 & 2.145 & 1013.00 $\pm$ 160.00 & 9.97 $\pm$ 4.58 & (1) \\ \hline 050922C & 2.198 & 415.00 $\pm$ 111.00 & 5.62 $\pm$ 1.91 & (1) \\ \hline 110205 & 2.22 & 740.60 $\pm$ 322.00 & 40.39 $\pm$ 8.27 & (4) \\ \hline 060124 & 2.296 & 784.00 $\pm$ 285.00 & 43.85 $\pm$ 6.45 & (1) \\ \hline 021004 & 2.3 & 266.00 $\pm$ 117.00 & 3.49 $\pm$ 0.52 & (1) \\ \hline 051109A & 2.346 & 539.00 $\pm$ 200.00 & 6.83 $\pm$ 0.67 & (1) \\ \hline 060908 & 2.43 & 514.00 $\pm$ 102.00 & 10.38 $\pm$ 0.99 & (1) \\ \hline 080413 & 2.433 & 584.00 $\pm$ 180.00 & 7.98 $\pm$ 1.99 & (2) \\ \hline 090812 & 2.452 & 2000.00 $\pm$ 700.00 & 44.43 $\pm$ 7.65 & (4) \\ \hline 100728B & 2.453 & 359.11 $\pm$ 48.34 & 4.19 $\pm$ 0.14 & (4) \\ \hline 130518 & 2.49 & 1382.04 $\pm$ 31.41 & 182.93 $\pm$ 1.19 & (4) \\ \hline 081121 & 2.512 & 871.00 $\pm$ 123.00 & 25.73 $\pm$ 4.97 & (2) \\ \hline 081118 & 2.58 & 147.00 $\pm$ 14.00 & 4.25 $\pm$ 0.89 & (2) \\ \hline 080721 & 2.591 & 1741.00 $\pm$ 227.00 & 124.66 $\pm$ 21.73 & (2) \\ \hline 050820 & 2.612 & 1325.00 $\pm$ 277.00 & 102.89 $\pm$ 8.04 & (1) \\ \hline 030429 & 2.65 & 128.00 $\pm$ 26.00 & 2.31 $\pm$ 0.33 & (1) \\ \hline 120811C & 2.671 & 157.49 $\pm$ 20.92 & 12.35 $\pm$ 1.17 & (4) \\ \hline 080603B & 2.69 & 376.00 $\pm$ 100.00 & 10.81 $\pm$ 0.98 & (2) \\ \hline 140206A & 2.73 & 447.60 $\pm$ 22.38 & 29.27 $\pm$ 0.52 & (4) \\ \hline 091029 & 2.752 & 230.00 $\pm$ 66.00 & 8.25 $\pm$ 0.77 & (3) \\ \hline 081222 & 2.77 & 505.00 $\pm$ 34.00 & 29.64 $\pm$ 3.02 & (2) \\ \hline 050603 & 2.821 & 1333.00 $\pm$ 107.00 & 64.03 $\pm$ 3.66 & (1) \\ \hline 110731 & 2.83 & 1164.32 $\pm$ 49.79 & 46.16 $\pm$ 0.18 & (4) \\ \hline 111107 & 
2.893 & 420.44 $\pm$ 124.58 & 3.43 $\pm$ 0.57 & (4) \\ \hline 050401 & 2.9 & 467.00 $\pm$ 110.00 & 36.39 $\pm$ 7.66 & (1) \\ \hline 090715B & 3.0 & 536.00 $\pm$ 172.00 & 22.08 $\pm$ 3.44 & (4) \\ \hline 080607 & 3.036 & 1691.00 $\pm$ 226.00 & 185.12 $\pm$ 9.92 & (2) \\ \hline 081028 & 3.038 & 234.00 $\pm$ 93.00 & 16.75 $\pm$ 1.96 & (2) \\ \hline 120922 & 3.1 & 156.62 $\pm$ 0.04 & 33.99 $\pm$ 3.85 & (4) \\ \hline 020124 & 3.2 & 448.00 $\pm$ 148.00 & 27.02 $\pm$ 2.25 & (1) \\ \hline 060526 & 3.21 & 105.00 $\pm$ 21.00 & 2.72 $\pm$ 1.36 & (1) \\ \hline 080810 & 3.35 & 1470.00 $\pm$ 180.00 & 44.15 $\pm$ 4.85 & (2) \\ \hline 030323 & 3.37 & 270.00 $\pm$ 113.00 & 2.94 $\pm$ 0.98 & (1) \\ \hline 971214 & 3.42 & 685.00 $\pm$ 133.00 & 22.06 $\pm$ 2.76 & (1) \\ \hline 060707 & 3.425 & 279.00 $\pm$ 28.00 & 5.78 $\pm$ 1.01 & (1) \\ \hline 060115 & 3.53 & 285.00 $\pm$ 34.00 & 6.59 $\pm$ 1.06 & (1) \\ \hline 090323 & 3.57 & 1901.00 $\pm$ 343.00 & 402.48 $\pm$ 49.17 & (3) \\ \hline 130514 & 3.6 & 496.80 $\pm$ 151.80 & 51.19 $\pm$ 6.81 & (4) \\ \hline 120802 & 3.796 & 274.33 $\pm$ 93.04 & 12.74 $\pm$ 2.07 & (4) \\ \hline 100413 & 3.9 & 1783.60 $\pm$ 374.85 & 72.95 $\pm$ 23.80 & (4) \\ \hline 120909 & 3.93 & 1651.55 $\pm$ 123.25 & 84.16 $\pm$ 7.19 & (4) \\ \hline 131117A & 4.042 & 221.85 $\pm$ 37.31 & 1.63 $\pm$ 0.33 & (4) \\ \hline 060206 & 4.048 & 394.00 $\pm$ 46.00 & 4.59 $\pm$ 0.98 & (1) \\ \hline 090516 & 4.109 & 971.00 $\pm$ 390.00 & 65.78 $\pm$ 12.75 & (4) \\ \hline 080916C & 4.35 & 2646.00 $\pm$ 566.00 & 371.24 $\pm$ 78.06 & (2) \\ \hline 000131 & 4.5 & 987.00 $\pm$ 416.00 & 181.48 $\pm$ 30.89 & (1) \\ \hline 111008 & 5.0 & 894.00 $\pm$ 240.00 & 48.05 $\pm$ 4.99 & (4) \\ \hline 060927 & 5.6 & 475.00 $\pm$ 47.00 & 14.49 $\pm$ 2.15 & (1) \\ \hline 050904 & 6.29 & 3178.00 $\pm$ 1094.00 & 127.35 $\pm$ 12.74 & (1) \\ \hline 080913 & 6.695 & 710.00 $\pm$ 350.00 & 8.36 $\pm$ 2.44 & (2) \\ \hline 090423 & 8.2 & 491.00 $\pm$ 200.00 & 11.15 $\pm$ 2.97 & (2) \\ \hline \hline 
\caption{\label{GRBsample} 221 GRBs with redshifts, peak energy in the cosmological rest frame and isotropic-equivalent energy. The 1\,$\sigma$ uncertainties are also given.\\ (a) $E_{\rm iso}$ is computed with cosmological parameters: $H_{0}=67.4\,\mathrm{km\,s}^{-1}\,\mathrm{Mpc}^{-1}$, $\Omega_{\mathrm{M}}=0.315$, $\Omega_{\Lambda}=0.685$. \\ (b) References for GRBs: (1) \citet{2008MNRAS.391..577A}; (2) \citet{2009A&A...508..173A}; (3) \citet{2019MNRAS.486L..46A}; (4) \citet{2016A&A...585A..68W}; (5) \citet{2020ApJ...893...46V,2014ApJS..211...12G,2014ApJS..211...13V,2016ApJS..223...28N}} \centering \end{longtable} } \clearpage \begin{table*} \caption{The $E_{\mathrm{iso}}-E_{\mathrm{p}}$ correlation fitting results in five redshift bins. We give the best-fit values with 1$\sigma$ uncertainties. The first column is the redshift range of each bin. The last column is the number of GRBs in each bin.}\label{T1} \centering \begin{tabular}{|c |c c c |c|} % \hline\hline Redshift range & $a$ & $b$ & $\sigma_{\mathrm{ext}}$ & Number of GRBs \\ \hline\hline [0,0.55] & 48.83 $\pm$ 0.34 & 1.56 $\pm$ 0.16 & 0.41 $\pm$ 0.09 & 20 \\ \hline [0.55,1.18] & 49.11 $\pm$ 0.35 & 1.47 $\pm$ 0.14 & 0.38 $\pm$ 0.04 & 54 \\ \hline [1.18,1.74] & 50.01 $\pm$ 0.46 & 1.21 $\pm$ 0.17 & 0.42 $\pm$ 0.05 & 44 \\ \hline [1.74,2.55] & 49.91 $\pm$ 0.53 & 1.25 $\pm$ 0.19 & 0.43 $\pm$ 0.05 & 48 \\ \hline [2.55,8.20] & 49.74 $\pm$ 0.38 & 1.30 $\pm$ 0.13 & 0.34 $\pm$ 0.04 & 55\\ \hline \hline \end{tabular} \end{table*} \begin{table*} \caption{The best-fit results of six sub-samples. 
The number of GRBs in each sub-sample is given in the last column.} \label{T2} \centering \begin{tabular}{|c c|c c c |c|} % \hline\hline $z_{min}$ & $z_{max}$ & $a$ & $b$ & $\sigma_{\mathrm{ext}}$ & Number of GRBs \\ \hline\hline 0.736 & 0.807 & 49.57 $\pm$ 0.87 & 1.37 $\pm$ 0.36 & 0.32 $\pm$ 0.12 & 6 \\ \hline 0.897 & 1.092 & 49.06 $\pm$ 0.75 & 1.51 $\pm$ 0.28 & 0.36 $\pm$ 0.08 & 14 \\ \hline 1.100 & 1.210 & 48.81 $\pm$ 1.02 & 1.64 $\pm$ 0.41 & 0.35 $\pm$ 0.09 & 11 \\ \hline 1.350 & 1.489 & 49.14 $\pm$ 0.45 & 1.51 $\pm$ 0.17 & 0.24 $\pm$ 0.08 & 12 \\ \hline 2.469 & 2.671 & 49.35 $\pm$ 0.48 & 1.50 $\pm$ 0.17 & 0.19 $\pm$ 0.07 & 9\\ \hline 2.612 & 2.770 & 49.67 $\pm$ 0.70 & 1.40 $\pm$ 0.28 & 0.20 $\pm$ 0.09 & 8\\ \hline \hline \end{tabular} \end{table*} \bsp % \label{lastpage}
Title: First measurement of interplanetary scintillation with the ASKAP radio telescope: implications for space weather
Abstract: We report on a measurement of interplanetary scintillation (IPS) using the Australian Square Kilometre Array Pathfinder (ASKAP) radio telescope. Although this proof-of-concept observation utilised just 3 seconds of data on a single source, this is nonetheless a significant result, since the exceptional wide field of view of ASKAP, and this validation of its ability to observe within 10 degrees of the Sun, mean that ASKAP has the potential to observe an interplanetary coronal mass ejection (CME) after it has expanded beyond the field of view of white light coronagraphs, but long before it has reached the Earth. We describe our proof of concept observation and extrapolate from the measured noise parameters to determine what information could be gleaned from a longer observation using the full field of view. We demonstrate that, by adopting a `Target Of Opportunity' (TOO) approach, where the telescope is triggered by the detection of a CME in white-light coronagraphs, the majority of interplanetary CMEs could be observed by ASKAP while in an elongation range $<$30 degrees. It is therefore highly complementary to the colocated Murchison Widefield Array, a lower-frequency instrument which is better suited to observing at elongations $>$20 degrees.
https://export.arxiv.org/pdf/2208.04981
\verso{Rajan Chhetri \textit{et al.}} \begin{frontmatter} \title{First measurement of interplanetary scintillation with the ASKAP radio telescope: implications for space weather} \author[1,2]{Rajan \snm{Chhetri}\corref{cor1}} \cortext[cor1]{Corresponding author: Tel.: +61-8-9266-3577; fax: +61-8-9266-9246;} \ead{rzn.chhetri@gmail.com} \author[1]{John \snm{Morgan}} \ead{john.morgan@curtin.edu.au} \author[3]{Vanessa \snm{Moss}} \ead{Vanessa.Moss@csiro.au} \author[1,3]{Ron \snm{Ekers}} \ead{Ron.Ekers@csiro.au} \author[1]{Danica \snm{Scott}} \ead{danica.scott@postgrad.curtin.edu.au} \author[3]{Keith \snm{Bannister}} \ead{Keith.bannister@csiro.au} \author[4]{Cherie K. \snm{Day}} \ead{cday@swin.edu.au} \author[5]{Adam T. \snm{Deller}} \ead{adeller@astro.swin.edu.au} \author[5]{Ryan M. \snm{Shannon}} \ead{rshannon@swin.edu.au} \address[1]{International Centre for Radio Astronomy Research, Curtin University, GPO Box U1987, Perth, WA 6845, Australia} \address[2]{CSIRO Space and Astronomy, P.O. Box 1130, Bentley, WA 6102, Australia} \address[3]{CSIRO Space and Astronomy, P.O. Box 76, Epping, NSW 1710, Australia} \address[4]{Department of Physics, McGill University, Montreal, Quebec H3A 2T8, Canada} \address[5]{Centre for Astrophysics and Supercomputing, Swinburne University of Technology, John St, Hawthorn, VIC 3122, Australia} \received{6 Apr 2022} \finalform{6 Apr 2022} \accepted{xx XXX 20XX} \availableonline{XX Xxx 20XX} \communicated{S. Sarkar} \begin{keyword} \KWD Interplanetary scintillation\sep Wide field of view\sep ASKAP\sep Space weather \end{keyword}% \end{frontmatter} \section{Introduction} Interplanetary Scintillation (IPS) was discovered by \citet{Clarke:phdthesis}, who postulated that the phenomenon may be associated with the Solar Corona. The initial use of IPS by \citet{1964Natur.203.1214H} was as a technique for identifying compact ($\lesssim$0.3\arcsec) astrophysical radio sources. 
Later it was confirmed that enhancements in the observed scintillation index were associated with solar flares \citep{1967Natur.213..377S}, and over the following decades, IPS observations played a vital role in uncovering the nature of the solar wind, including measurement of the solar wind velocity beyond the plane of the ecliptic \citep{1967Natur.213..343D}, observation of the acceleration of the solar wind close to the Sun \citep{1971A&A....10..310E}, and changes in the solar wind over the solar cycle \citep{1980Natur.286..239C}. More relevant to this work, \citet{1968PASA....1..142D} used a ``grid'' of IPS sources to reconstruct the path of an enhancement in the solar wind, using both scintillation indices and power spectra. Later, \citet{1982Natur.296..633G} (see also \citealp{Vlasov1979}) proposed that a network of IPS sources be monitored daily, so that the changes in scintillation index could be used to track interplanetary disturbances as they move outwards through the heliosphere. These findings motivated the construction of dedicated IPS arrays \citep[e.g.][]{2011RaSc...46.0F02T}, whose data can be used in near-real time to reconstruct the inner heliosphere \citep{1998JGR...10312049J,2013PJAB...89...67T}. In addition to purpose-built instruments, most radio telescopes can be used to make IPS measurements, the main requirement being sub-second time resolution. We have shown that the Murchison Widefield Array \citep[MWA;][]{2013PASA...30....7T} is an outstanding instrument for IPS studies \citep{2018MNRAS.473.2965M} due to its extremely wide field of view ($\sim$900 sq. deg. at 162\,MHz) and its ability to make high-fidelity images at high time resolution. In just a few minutes, we can make simultaneous IPS observations across hundreds of sources and several frequency bands (the MWA has 30.72\,MHz of instantaneous bandwidth which can be deployed flexibly across the observing frequency range of 75--300\,MHz). 
Also located at the Murchison Radio Observatory (MRO) in Western Australia, the Australian Square Kilometre Array Pathfinder radio telescope (ASKAP) is an array of 36 $\times$ 12\,m dishes operated as part of the Australia Telescope National Facility. Each of the 36 dishes is equipped with a 188-element phased array feed (PAF) receiver, which widens the 36-beam field of view to approximately 5 $\times$ 5 degrees and enables ASKAP to be an excellent wide-field, high-speed survey instrument. The frequency range of ASKAP is 700--1800\,MHz, with a current instantaneous bandwidth of 288\,MHz, a standard channel resolution of 18.5\,kHz and a standard integration time of 10\,s. The ASKAP telescope and encompassing system elements are fully described in \cite{2021PASA...38....9H}. Since ASKAP shares the key characteristics of wide field of view and high fidelity imaging capability with the MWA, we were keen to assess the potential of ASKAP for making IPS observations using the same widefield imaging approach that we have pioneered with the MWA. ASKAP observes at higher frequencies than the MWA, and so is more suited to making observations closer to the Sun (5\degr--30\degr or so), making it complementary to the MWA, which is better suited to observing beyond 20\degr\ elongation. Since ASKAP's standard 10\,s correlator integration time precludes IPS observations, we used the alternative pathway offered by the Commensal Real-time ASKAP Fast Transients (CRAFT) System \citep{2010PASA...27..272M}. This system has the advantage that it offers extremely high time resolution, but the disadvantage that (for now at least) only a very short time interval can be captured. 
Notwithstanding this limitation, we were able to use this system to unambiguously detect IPS, demonstrating both that the presence of the Sun (an extremely bright $\sim 10^6$\,K source at ASKAP frequencies) does not present any obvious problems, and that the instrument is sufficiently stable on the relevant timescales for reliable IPS measurements to be made. Below, we describe this detection in detail and explore the potential of ASKAP as a space weather monitoring facility. The paper is presented as follows: in Section 2 we present the details of our ASKAP observation and results. In Section 3, we present our analyses of the results. Finally, in Section 4 we describe a pathway to regular IPS observations with ASKAP and its implications. \section{Observations and Results} \subsection{Observations} \label{Sec:Observations} The CRAFT system was developed to detect fast ($<$5\,s) transient radio sources \citep{2010PASA...27..272M}, and is primarily used to detect and localise Fast Radio Bursts \citep{Macquart2020Natur.581..391M, Heintz2020}. Upon receipt of a trigger, the system downloads voltage data from a specified beam of each individual antenna to correlate offline \citep{Bannister2019}. Since the subsequent processing takes place offline using the Distributed FX (DiFX) software correlator \citep{DIFX}, arbitrarily short integration times are possible. However, the limited memory available for buffering voltage data, combined with the high data rate, means that the voltage download duration is limited to a maximum of 3.1\,s when the data are stored at the standard 4-bit precision. With respect to conducting IPS observations using ASKAP, there are a few considerations in terms of how close to the Sun ASKAP can point, how quickly it can get on source, and how science operations are generally conducted with the telescope. 
When pointing at the Sun in rainy conditions, the wet surface of the parabolic ASKAP antenna becomes reflective rather than diffuse at optical wavelengths, and focused solar radiation can damage the PAF. Due to the consequent risk associated with solar observations, ASKAP science observations are limited to field centres beyond a solar elongation of 10 degrees. However, since the telescope has a wide field of view, such a pointing will cover a 5 degree range of elongations centred on 10 degrees. Each ASKAP dish, due to its relatively small size, can slew rapidly to position, reaching any pointing within a few minutes. In order to select a candidate source for our test observation, we examined strongly scintillating sources from our MWA IPS catalogue (Morgan et al. in prep) which were expected to be bright at ASKAP frequencies as indicated by the 1.4\,GHz flux density listed in the NRAO VLA Sky Survey \citep[NVSS;][]{1998AJ....115.1693C}. NVSS 070029+190541, our chosen candidate source, has a flux density in NVSS of $\sim$400\,mJy and at MWA frequencies has an IPS scintillation index consistent with an unresolved source. NVSS 070029+190541 was observed on 25 June 2021 at a solar elongation of 11.2 degrees. For this proof-of-concept observation, only the data from the PAF beam with our target source in it was preserved. The methods of correlation, calibration, and imaging were very similar to those described in \cite{Bannister2019} and \cite{2020MNRAS.497.3335D}. Voltages were downloaded for both linear polarisations for a single PAF beam across 24 antennas. The voltages were then correlated with an integration time of 100\,ms, yielding a total of 31 timesteps over the 3.1\,s duration. Antenna-based, frequency-dependent phase and flux calibration solutions were derived from a similar set of voltages downloaded during an observation of PKS 0407$-$638 made on the same day as our target source. 
\subsection{Results} We made individual Stokes-I images for each of the 31 correlated timesteps, with an angular resolution of 25.0\,arcsec (major axis of the synthesized beam). We also detected another source (NVSS J070048+190346) at a separation of 4.8\arcmin\ ($\sim$11.5 resolution elements) from the target source with an average S/N of 4.6. A schematic of the sky coverage with respect to the position of the Sun for our observation is shown in Figure~\ref{Fig:ASKAPfov_Sun}. The non-target source is not a known scintillator at 162\,MHz, and it is not sufficiently bright for our current MWA IPS survey data to provide a strong constraint on any scintillation. These two sources were the only two clearly visible in our images. Visual inspection of the images confirms that they are not affected by Radio Frequency Interference (RFI) or solar radio bursts. The point spread function of the bright target source is stable, and there are no other artefacts. We conclude from this that there are no obvious instrumental errors, which is as expected, since the CRAFT system is well-established and such errors would compromise its scientific productivity. We measured the flux densities and peak brightness for the two objects by fitting an elliptical Gaussian (major axis: 25$\arcsec$.02, minor axis: 16$\arcsec$.64, pa: -17.7$\degr$) to their positions in each 98\,ms image. The time series of brightness thus produced is plotted in Figure \ref{Fig:time_series}. The uncertainties in brightness are the image RMS (estimated by taking RMS of the pixel values in a large part of each image, away from any radio source). The plot also shows time series for four offset pixels (100 pixels away from source positions) in four directions for each of the two objects, which represent brightness values purely due to noise in the image. 
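The scintillation index derived from such a brightness time series is simply the RMS fractional variation of the fitted source flux. A minimal sketch of that measurement, using a synthetic stand-in for the real 31-step series (the variable names and the assumed 10\% modulation are ours, not from the CRAFT pipeline):

```python
import numpy as np

def scintillation_index(flux):
    """Scintillation index m: RMS of the flux variations divided by the mean flux."""
    flux = np.asarray(flux, dtype=float)
    return flux.std() / flux.mean()

# Synthetic stand-in for the 31-step brightness time series of the target
# source (~400 mJy), with an assumed 10% modulation; the real series comes
# from the elliptical-Gaussian fits to each snapshot image.
rng = np.random.default_rng(0)
flux_mjy = 400.0 * (1.0 + 0.1 * rng.standard_normal(31))

m = scintillation_index(flux_mjy)
```

With only 31 samples, the recovered index scatters noticeably around the injected modulation, which is why the power-spectrum analysis below is a useful cross-check.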
Since there is no noticeable correlation in the time series of the two sources or between the sources and the respective offset pixels, we can be confident that the variation in brightness is not due to calibration issues. The power spectra of the time series plotted in Figure~\ref{Fig:time_series} are plotted in Figure~\ref{Fig:power_spec}. While the errors are necessarily large given the very short time series, there is no suggestion of excess on-source variability for the non-IPS source, so instrumental effects, such as unstable gain are unlikely to be responsible for the excess variance observed for the IPS source. Moreover, the power spectrum for the IPS source matches that expected for scintillation in the weak regime, with the `Fresnel Knee' located just below 1\,Hz. The location of this knee scales with the Fresnel scale, which scales with wavelength, $\lambda$, as $\sqrt{\lambda}$ \citep{Narayan1992}, and indeed this power spectrum appears shifted higher in frequency by a factor of $\sim$2 compared to those observed at MWA frequencies \citep[see][Figure~1]{2018MNRAS.473.2965M}. We note that the IPS signature in this case is fully resolved with 200\,ms time resolution, so we have averaged to this resolution to generate the power spectrum (the power spectra without this averaging step are all consistent with white noise above 2.5\,Hz). It is conceivable that higher resolution may be required to capture the IPS power spectrum for very high solar wind speeds (since this will shift the IPS signal to higher frequencies). \citet{McConnell2020} report the sensitivity of ASKAP as a function of observing frequency (Figure 1). We use this to estimate an expected image RMS (due to system noise) of 5.13 mJy/beam for images made at 200-ms cadence. 
A line representing white noise at this level is also plotted on Figure~\ref{Fig:power_spec} and is consistent with the noise level that we observe (both on the non-IPS source, off source, and at high frequencies on the IPS source where the IPS signature is fully resolved). From the time series of our target source, we estimate its scintillation index to be 0.098. \cite{Rickett1973} provides the empirical relationship $m_{pt} = 0.06\lambda^{1.0}p^{-1.6}$, which gives the expected scintillation index ($m_{pt}$) of a point source as a function of wavelength ($\lambda$, in metres) and the point of closest approach of the line of sight to the Sun ($p$, in au). At our central observing frequency of 863.5\,MHz, the maximum scintillation index expected for a point-like object is 0.28. This indicates that the source scintillation is slightly resolved at these frequencies (in contrast to the unresolved scintillation index observed at MWA frequencies). This is perhaps expected since Very Long Baseline Interferometry (VLBI) observations of peaked spectrum sources show a double morphology, and the spatial scales probed by IPS are smaller at higher frequencies. As well as hinting at the potential astrophysical utility of IPS observations with ASKAP, this finding also underscores that the IPS characteristics of sources can change at different frequencies, and so care should be taken when translating IPS catalogues from one frequency to another. Nonetheless, our central finding is that we are able to report the first detection of interplanetary scintillation with the ASKAP telescope, demonstrating the potential to use ASKAP to probe the solar wind to within $\sim$42 solar radii. \section{Analysis} As noted above, our test image was constructed from only one PAF beam out of 36, and the total length of the observation was 3.1 seconds. 
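The point-source expectation quoted in the Results section follows directly from the Rickett (1973) scaling. A quick numerical check, assuming $\lambda$ in metres and taking $p = \sin(\epsilon)$ for a source at elongation $\epsilon$ (this convention reproduces the figures in the text):

```python
import math

C_M_PER_S = 2.99792458e8  # speed of light

def rickett_m_pt(freq_hz, elongation_deg):
    """Expected point-source scintillation index m_pt = 0.06 * lambda^1.0 * p^-1.6
    (Rickett 1973), with lambda in metres and p, the closest solar approach of
    the line of sight, in au; for elongation eps < 90 deg, p = sin(eps)."""
    lam_m = C_M_PER_S / freq_hz
    p_au = math.sin(math.radians(elongation_deg))
    return 0.06 * lam_m * p_au ** -1.6

m_pt = rickett_m_pt(863.5e6, 11.2)     # ~0.28, the point-source maximum quoted in the text
ratio = 0.098 / m_pt                   # ~0.35: the observed scintillation is partially resolved
m_30deg = rickett_m_pt(863.5e6, 30.0)  # ~0.06, the 30-degree value quoted later in the paper
```

The same relation, evaluated at 30\,degrees elongation, gives the ${\sim}0.06$ scintillation index that bounds the proposed ASKAP observing range in Section 3.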
In the future, using a new system in development (see Section~\ref{sec:operations}), we expect to be able to use all 36 PAFs to obtain a wide field of view ($\sim$6$\times$6 sq. deg.) to make IPS observations of much longer duration. If we centre the FoV at the solar elongation of 11.5 degrees, as we did with our test observation, we expect a typical solar elongation coverage between 8.5 and 14.5 degrees per pointing. \subsection{Potential for ASKAP to perform Widefield IPS measurements} In \cite{2019PASA...36....2M}, we showed that the sensitivity of the `variability image' (i.e., the image made by taking the standard deviation of each pixel across the images in the time series) improves as $t^{1/4}$. For a sensitivity in a single 200\,ms image of 5.13\,mJy, we expect a single one-minute observation to have a 5$\sigma$ detection limit of 4.12\,mJy. Since flat-spectrum sources are the dominant compact objects at gigahertz frequencies and above, we used the 5\,GHz counts of flat-spectrum objects \citep{Condon1984a} to estimate that approximately 131 IPS sources would be detected in a single 1-minute observation with ASKAP, equating to 5.2 sources per square degree. This is an extremely high density of sources, far exceeding any IPS observations taken up to now. In MWA observations as yet unpublished (Morgan et al. in prep.), we have detected CMEs in our wide-field IPS data. They typically present as an enhancement of the ``g-level'' \citep[the ratio of the observed scintillation index to baseline,][]{1982Natur.296..633G} over an arc roughly equidistant from the Sun, a few degrees in thickness. With 131 sources detected across the field of view in a single pointing, it should be possible to localise the solar elongation of the CME to degree-level accuracy. 
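The headline numbers in this paragraph can be sketched as follows. This is a back-of-envelope check only: the 4.12\,mJy limit quoted in the text comes from the full variability-imaging analysis of Morgan et al. (2019), and the source-density figure assumes the current $\sim$5$\times$5 degree single pointing:

```python
sigma_single_mjy = 5.13   # expected RMS of one 200 ms image (system noise)
t_single_s, t_obs_s = 0.2, 60.0

# Per Morgan et al. (2019), the variability-image sensitivity improves as t^(1/4),
# so a one-minute observation gains (60 / 0.2)^(1/4) ~ 4.2x over a single snapshot.
gain = (t_obs_s / t_single_s) ** 0.25
sigma_var_mjy = sigma_single_mjy / gain

# 131 expected 5-sigma IPS detections over a ~5 x 5 degree single-pointing field of view:
density_per_sq_deg = 131 / (5 * 5)  # ~5.2 sources per square degree
```

The $t^{1/4}$ scaling is much slower than the familiar $t^{1/2}$ radiometer law, which is why even a full minute only buys a factor of a few in variability sensitivity.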
Tens of sources would be expected to lie within the CME, and power spectra could be generated for each, from which solar wind parameters such as the velocity \citep{1990MNRAS.244..691M}, and the power law index of the turbulence \citep{1983A&A...123..207S} can be determined, at least for the stronger detections. For example, \citet{2015SoPh..290.2539M} have estimated solar wind speed in the inner heliosphere by model fitting to power spectra of sources observed with high S/N. We estimate 2.5 sources per square degree with S/N$\geq$10 in the ASKAP field of view that can be used for such studies. The high density of sources probed simultaneously also enables an alternative approach to velocity determination: with observations of the same field spaced a few hours apart, motion of the CME (typically $\sim1^\circ$ per hour) across the sky should be directly discernible. \subsection{Determining the baseline scintillation level of ASKAP IPS sources} In order to make measurements of the g-level of an IPS source, it is necessary to have a measurement of the source's baseline scintillation level. This will be different for each source based on its sub-arcsecond structure. Eventually we hope to perform an IPS survey of the entire ecliptic using ASKAP, which would provide the required information for each source. However, until we have such a survey there are a number of strategies that could be employed to extract Space Weather information from ASKAP IPS observations. First, we can rely on other IPS observatories (including the MWA) to provide us with lists of known IPS sources and their properties. However, this negates the most unique feature of ASKAP observations: the very high number of sources ASKAP can detect (thanks to its high sensitivity and our widefield approach); unfortunately this means that most of these sources will not be known IPS sources. 
The flat spectrum of an extragalactic source is a reliable indicator of its very compact nature \citep[e.g.][]{Petrov2019, moldon_2015A&A...574A..73M, Jackson_2016A&A...595A..86J, Jackson2022A&A...658A...2J}, especially at high radio frequencies where the extended steep spectrum components become weak. Our study of flat-spectrum sources at 200\,MHz (Chhetri et al. under review with MNRAS) using the GLEAM catalogue finds that 6.3\% of all sources are flat-spectrum. We can use this number to estimate that $>$20\% of total sources at ASKAP frequencies will be flat-spectrum sources, which is in line with findings of other studies \citep[e.g.][]{Condon2009}. This population of flat-spectrum, and hence compact, sources, further supplemented by other approaches such as MWA IPS measurements (Morgan et al. in prep.) and existing VLBI catalogues \citep[e.g.][]{Petrov2019}, will provide a dense network of IPS sources at ASKAP frequencies. Finally, when observing a CME event, ASKAP would make multiple observations spaced $\sim$1 hour apart, and for each pair of observations adjacent in time, the ratio in scintillation index can be determined for each source. This approach is similar to the ``running difference images'' which are so often employed in analysis of white light coronagraphs \citep[e.g.][]{2004A&A...425.1097R}. The additional advantage in this case is that, in the ratio of g-levels, the baseline level of scintillation cancels out, and so construction of these `g-ratio' maps does not require a reference catalogue. \subsection{Observing Space Weather with ASKAP: a `target of opportunity' approach} \label{sec:too} IPS can be used to observe the ambient background solar wind, as well as a range of solar wind phenomena including interplanetary CMEs and interactions between the fast and slow solar wind \citep[e.g.][]{1997PCE....22..387B}. 
There are also a number of potential strategies for detecting and characterising significant space weather events with ASKAP, including blind searches. Here, we focus on one potential approach for using a modest amount of ASKAP observing time to provide information that may be a useful input to Space Weather forecasts: using ASKAP to follow up the detection of a CME in Large Angle and Spectrometric Coronagraph images \citep[LASCO;][]{1995SoPh..162..357B} using the Computer Aided CME Tracking algorithm \citep[CACTus;][]{2004A&A...425.1097R,2009ApJ...691.1222R}. We assume this approach for our case study here, since 1) ASKAP is a multi-user instrument, and so using other instruments dedicated to solar observations to ensure maximum utility of any ASKAP observations is appropriate; 2) CACTus and its accompanying catalogue are widely used, and timely alerts of significant CMEs are broadcast to the Space Weather community; 3) we have used exactly this technique to detect CMEs in MWA IPS data (Morgan et al. in prep.). Alternative approaches are discussed briefly in Section~\ref{sec:discussion}. Essentially, CACTus provides, among other parameters, the launch time of the CME and its radial velocity away from the Sun in the plane of the sky in km/s. The latter can readily be converted back to angular speed (in degrees per hour). For our purposes, since ASKAP has such a wide field of view, we can assume that this angular speed will be constant as the CME moves away from the Sun (i.e., we neglect complications such as the 3D motion of the CME and changes in speed, e.g. due to the ambient wind), though in principle we could adopt a more nuanced model. Our proof-of-concept observation has demonstrated that we can make IPS observations at 11.2\,degrees elongation, and there is no reason to think that we cannot make IPS observations from 7.5\degr\ from the Sun, where IPS enters the weak regime, all the way out to 30\,degrees, by which point the scintillation index drops to 0.06. 
We wish to determine what fraction of CACTus-detected CMEs would, in principle, be observable with ASKAP. This depends on the typical latency between CME launch time and the CACTus alert being disseminated in addition to whether or not the Sun (and the CME) is above the horizon at this time, or shortly after. In order to assess the latency of the LASCO/CACTus alert system, we monitored the status of the CME alert webpage continuously for a 6-month period (2021-Sep--2022-Mar). During this time, a new alert was issued 299 times. On each of these updates, we recorded the time that the alert was issued and the estimate of the CME launch time. The difference between these two times is the latency, the distribution of which is shown in Figure~\ref{fig:latency}. From the velocity and launch time provided by CACTus, we can determine, for each CME, the time at which it will reach each elongation in the range 0--30\,degrees. This is shown in Figure~\ref{fig:cme_prop}, where, for each CME, the times at which it passes each elongation from 0--30\,degrees are marked according to whether the Sun is up at the ASKAP site. Almost all CMEs would be observable with ASKAP (many on consecutive days), the only exception being the fastest-moving (velocity, $v>$1000\,km/s) CMEs with higher latency, which are either beyond 30\,degrees before the alert is even issued, or the alert is issued during the night (for ASKAP), with the CME moving beyond 30\,degrees by dawn. Using the distribution of latencies in Figure~\ref{fig:latency}, it is possible to be a little more quantitative about what fraction of CMEs would be observable (it is also useful to know that CACTus alerts are currently issued every 3 hours starting at 01:30 UTC). For all but the 10:30, 13:30, and 16:30 (all UTC) alerts, the Sun is up, or will be shortly. Thus, for 5/8 of the daily alerts, CMEs will be observable unless they are extremely fast ($v>$1500\,km/s) and/or high latency (above the 90th percentile). 
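The conversion from the CACTus plane-of-sky speed to an elongation-versus-time track can be sketched under the constant-speed assumption described above (plane-of-sky motion seen by an observer at 1\,au, so the Sun--CME distance is roughly $\tan(\epsilon) \times 1$\,au; the function name is ours, for illustration only):

```python
import math

AU_KM = 1.495978707e8  # astronomical unit in km

def hours_to_elongation(v_kms, elongation_deg):
    """Hours after launch for a CME to reach a given elongation, assuming
    plane-of-sky motion at constant speed and an observer at 1 au, so that
    the Sun-CME distance is ~ tan(elongation) * 1 au."""
    r_km = math.tan(math.radians(elongation_deg)) * AU_KM
    return r_km / v_kms / 3600.0

# A fast (1000 km/s) CME crosses 30 deg elongation about a day after launch,
# so a high-latency alert issued overnight can indeed miss it entirely:
t_fast = hours_to_elongation(1000.0, 30.0)    # ~24 h
# A more typical ~450 km/s CME allows roughly two days of potential follow-up:
t_typical = hours_to_elongation(450.0, 30.0)  # ~53 h
```

The same geometry gives the $\sim1^\circ$ per hour angular motion quoted earlier for typical CME speeds at small elongations.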
For the 10:30 UTC alert, CMEs will be observable unless they are fast ($v>$1000\,km/s) and/or above the median latency. For the 13:30 UTC alert, CMEs will be observable unless they are fast ($v>$1000\,km/s) and/or moderately high latency (above the 75th percentile). For the 16:30 UTC alert, CMEs will be observable unless they are fast ($v>$1000\,km/s) and/or very high latency (above the 90th percentile). We should note that the fastest events are the most damaging at Earth, and these can take less than one day to travel the 1\,au to Earth. A large proportion of the small number of CMEs that are not observable with ASKAP could easily be observed by the MWA, which is colocated with ASKAP, and can make IPS observations in the range 20--50~degrees. This is discussed further in Section~\ref{sec:discussion}. However, we reiterate that the very fastest CMEs may not be observable by ASKAP or the MWA, unless very fortuitously timed, and this provides strong motivation for multiple IPS observatories located at different longitudes. \section{Discussion} \label{sec:discussion} \subsection{Towards Operational IPS observations with ASKAP} \label{sec:operations} The mode used for the observation presented here serves to demonstrate the capability of ASKAP for IPS observations but is not suited to operational use, primarily because it allows only a very short observing time. This is because the mode was developed for capturing Fast Radio Bursts, which are very short duration (compared to an IPS observation). On the other hand, IPS observations do not require the same time or spectral resolution; the 100\,ms time resolution used here is more than adequate (in fact we used 200\,ms to generate the power spectrum in Figure~\ref{Fig:power_spec}). 
This raises the possibility of developing a new system which would allow us to record ASKAP visibility data with sufficient frequency resolution for imaging \citep{Perley1981, 2021PASA...38....9H}, sufficient time resolution $\sim100$\,ms for IPS, and a sufficiently long observing time (up to a few minutes). Fortunately, such a system is currently being developed as an upgrade to the ASKAP FRB detection system that will process high time resolution visibility data. This system, known as CRACO (the CRAFT Coherent system), will be able to record visibility data products at the required 100\,ms time and 1\,MHz frequency resolution to disk. The data can be recorded continuously, up to the limits of disk space, which should be several hours. Once recorded, these data can be calibrated, imaged, and processed offline, in roughly the same manner as detailed above. ASKAP is currently in the process of transitioning from commissioning to full survey operations, the latter of which are expected to start towards the end of 2022. Science operations for ASKAP are focused on maximising the automation and autonomy of the telescope while minimising the reliance on human decisions at any point in the system and ensuring high scientific data quality (Moss et al. in prep). This approach extends also to the scientific scheduling of the telescope, which is primarily managed by SAURON (Scheduling Autonomously Under Reactive Observational Needs) with manual input. SAURON is designed to make decisions autonomously and dynamically based on the pool of pending observations, the current state of the system, the status of the surrounding environment, and any associated weightings or priorities that feed into the decision-making process. In the context of scheduling IPS, this means that long-term we expect to be able to automatically ingest triggers and incorporate them into scientific scheduling with minimal human oversight and minimal observational disruption. 
\subsection{The potential of ASKAP IPS measurements to contribute to Space Weather research and forecasting} As demonstrated here, ASKAP has the potential to track interplanetary CMEs once they have left the field of view of most coronagraphs, but long before the CME is approaching the Earth. Being ground-based, ASKAP is limited to daytime observing of IPS. However, as we have shown in Section~\ref{sec:too}, the vast majority of CMEs will still be observable. The small number of CMEs that will be missed (due to extreme speed or slow alerts) can be observed by the MWA, a similarly wide field of view instrument colocated with ASKAP. Since the MWA operates at a lower observing frequency of 70--300\,MHz, it is better suited to observations further from the Sun. Furthermore, radio observations are much less impacted by weather events than observations at other wavelengths (such as optical observations), and even quite severe ionospheric conditions have a limited effect on IPS observations, especially at ASKAP frequencies. We anticipate that ASKAP would be used occasionally to track significant space weather events, leaving routine monitoring of the Sun and solar wind to dedicated instruments. We expect that further development of the TOO capability of ASKAP via SAURON should enable IPS observations to be carried out in a prompt and automated way. Any number of observations or models could be used to trigger ASKAP IPS observations. In section~\ref{sec:too}, we discuss one possibility in detail. For ground-based triggers, geographical considerations might influence the choice. For example, the K-Cor white-light coronagraph \citep{2017SpWea..15.1288T} is 6 hours ahead of the Murchison Radio Observatory in longitude, so a rapidly-moving CME discovered very close to the Sun could be followed up later as it moves further away from the Sun. 
Similarly, the ISEE telescope, a dedicated and very well-established multi-station IPS telescope in Japan \citep{2011RaSc...46.0F02T}, is just 1 hour ahead of ASKAP and has already been used to provide a reference for MWA IPS studies \citep{2018MNRAS.473.2965M}. ISEE provides rapid information on IPS sources (with multi-station velocity measurements for a subset of sources outside the Northern Hemisphere winter), but with a relatively sparse network of sources. ASKAP could rapidly densify the IPS measurements at the sky location where an unusually strong ISEE detection was made, allowing the nature, location and morphology of the enhanced scattering to be determined. Similarly, ASKAP can inform further ground-based observations as part of a Worldwide Interplanetary Scintillation Stations (WIPSS) Network \citep{2021cosp...43E2370B}. For example, the location and motion of a CME as determined by ASKAP can facilitate the planning of IPS observations by International LOFAR \citep{2013A&A...556A...2V}, which has its own unique IPS observing capabilities \citep{2021cosp...43E1026F}. Beyond IPS observations alone, ASKAP and other instruments can assist in the scheduling and interpretation of radio observations (by ASKAP or by other instruments) designed to determine the geoeffectiveness of CMEs (via Faraday rotation (FR), and hence the magnetic field orientation). IPS identifies exactly where on the sky the CME is dense; FR measurements made at those locations will have the highest signal-to-noise ratio. We note that while LOFAR and the MWA cannot make IPS observations (at least in the weak scintillation regime) as close to the Sun as ASKAP, they can make FR measurements closer, where the FR is likely to be higher and therefore more detectable \citep{2012RaSc...47.0K08O}. Finally, ASKAP IPS observations can feed directly into heliospheric modelling efforts that provide Space Weather Forecasts \citep[e.g.][]{1998JGR...10312049J, Jackson2020FrASS...7...76J}. 
ASKAP measurements, being made relatively close to the Sun, and with a predicted density of sources much higher than any previous IPS observations, may be particularly useful for providing inner boundary conditions to simulations of the inner heliosphere \citep{2015SpWea..13..104J}. \section{Acknowledgments} The authors wish to acknowledge the support of A. Hotan, the broader ASKAP Operations team at CSIRO, and the CRAFT team in making this test observation possible. The Australian Square Kilometre Array Pathfinder is part of the Australia Telescope National Facility which is managed by CSIRO. Operation of ASKAP is funded by the Australian Government with support from the National Collaborative Research Infrastructure Strategy. ASKAP uses the resources of the Pawsey Supercomputing Centre. Establishment of ASKAP, the Murchison Radio-astronomy Observatory and the Pawsey Supercomputing Centre are initiatives of the Australian Government, with support from the Government of Western Australia and the Science and Industry Endowment Fund. We acknowledge the Wajarri Yamatji as the traditional owners of the Murchison Radio-astronomy Observatory site. Ryan Shannon acknowledges support through Australian Research Council Future Fellowship FT190100155. \bibliographystyle{model5-names} \biboptions{authoryear} \bibliography{refs}
Title: Bouncing Dark Matter
Abstract: We present a novel mechanism for thermal dark matter production, characterized by a "bounce": the dark matter equilibrium distribution transitions from the canonical exponentially falling abundance to an exponentially rising one, resulting in an enhancement of the freezeout abundance by several orders of magnitude. We discuss multiple qualitatively different realizations of bouncing dark matter. The bounce allows the present day dark matter annihilation cross section to be significantly larger than the canonical thermal target, improving the prospects for indirect detection signals.
https://export.arxiv.org/pdf/2208.08453
\preprint{DESY-22-125} \title{Bouncing Dark Matter} \author{Lucas Puetter} \affiliation{II. Institute of Theoretical Physics, Universität Hamburg, 22761 Hamburg, Germany} \author{Joshua T.~Ruderman} \affiliation{Center for Cosmology and Particle Physics, Department of Physics, New York University, New York, NY 10003, USA} \author{Ennio Salvioni} \affiliation{Dipartimento di Fisica e Astronomia, Universit\`a di Padova and\\ INFN, Sezione di Padova, Via Marzolo 8, 35131 Padua, Italy} \author{Bibhushan Shakya} \affiliation{Deutsches Elektronen-Synchrotron DESY, Notkestr.~85, 22607 Hamburg, Germany} \section{Introduction} Discovering the underlying nature of dark matter (DM) is one of the main goals of contemporary research in particle physics. Efforts in this direction primarily focus on two key questions: how DM achieved its observed relic abundance, and how its microscopic interactions can be detected with experiments today. DM in thermal equilibrium with the Standard Model (SM) bath in the early Universe follows an abundance distribution that falls exponentially as the Universe cools, until the rates of interactions that keep it in equilibrium become slower than the cosmic expansion rate (see e.g.~\cite{Kolb:1990vq}). This thermal freezeout paradigm represents a strongly motivated and widely studied framework for DM\@. The simplest realization, known as the WIMP miracle, makes a sharp prediction for the DM annihilation cross section expected today: $\langle \sigma v\rangle_{\text{canonical}} \approx 2\times 10^{-26}$ cm$^3$ s$^{-1}$ for DM masses around the weak scale, which provides a compelling target for a variety of current and planned experimental searches for DM\@. 
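As a rough numerical cross-check of this target (a sketch using the familiar textbook rule of thumb $\Omega h^2 \sim 3\times 10^{-27}\,\mathrm{cm^3\,s^{-1}}/\langle\sigma v\rangle$ for $s$-wave freezeout, suppressing the mild dependence on mass, $g_*$, and the freezeout temperature):

```python
# Order-of-magnitude relic abundance from s-wave thermal freezeout.
# The coefficient 3e-27 cm^3/s is the standard rule-of-thumb value; the
# precise number depends on the DM mass, g_*, and x_f, which we neglect.
def omega_h2(sigma_v_cm3s):
    return 3e-27 / sigma_v_cm3s

# The canonical cross section quoted in the text lands near the observed
# Omega h^2 ~ 0.12, as expected for a thermal WIMP.
print(f"{omega_h2(2e-26):.2f}")  # prints 0.15
```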
Several variations to this canonical thermal freezeout picture are possible, and have been explored in numerous papers in the literature: DM freezeout can be driven by different processes \cite{Griest:1990kh,Carlson:1992fn,Hochberg:2014dra,DAgnolo:2015ujb,Kuflik:2015isi,Kopp:2016yji,DAgnolo:2017dbv,Berlin:2017ife,DAgnolo:2018wcn,Kim:2019udq,DAgnolo:2020mpt,Kramer:2020sbb,Frumkin:2021zng,Frumkin:2022ror}, can involve interactions with particles whose abundances differ from their equilibrium abundances \cite{Bandyopadhyay:2011qm,Farina:2016llk,Dror:2016rxc,Cline:2017tka}, or feature DM at a temperature different from the temperature of the thermal bath \cite{Feng:2008mu,Fitzpatrick:2020vba}. However, all of these scenarios are still characterized by an exponentially decreasing DM abundance until freezeout. Furthermore, in many thermal DM scenarios, the present day annihilation cross section is generally equal to or smaller than $\langle \sigma v\rangle_{\text{canonical}}$, as the existence of any stronger interaction would suppress the DM freezeout abundance below its observed value (there are exceptions to this; for instance, Sommerfeld enhancement effects \cite{Arkani-Hamed:2008hhe}). The aim of this letter is to highlight the existence of a novel mechanism for producing thermal dark matter that deviates from this general pattern. Specifically, we explore scenarios where the DM abundance transitions away from the standard exponentially suppressed distribution to a {\it rising} equilibrium curve in the final stages of freezeout, resulting in an enhancement of the final DM abundance by several orders of magnitude.\,\footnote{This feature has been observed in~\cite{Katz:2020ywn} for metastable dark sector particles (see also~\cite{Griest:1989bq}), then in~\cite{Ho:2022erb,Ho:2022tbw} for dark matter, but without detailed discussion of the mechanism.} We term this transition a \textit{bounce}, and DM exhibiting such behavior \textit{bouncing dark matter}. 
A late increase in the DM abundance is known to be possible in various scenarios, e.g.\,\cite{Feng:2003xh,Garny:2017rxs,Forestell:2018dnu}, but out of equilibrium; bouncing dark matter is, to our knowledge, the first realization of this behavior in a thermal context. We first provide a technical description of the general conditions necessary for bouncing dark matter (Section~\ref{sec:idea}), followed by a detailed discussion of the physics behind the bounce within a simplified three particle framework (Section~\ref{sec:bounce}). The most salient phenomenological feature of bouncing DM is that the present day DM annihilation cross sections can be significantly larger than $\langle \sigma v\rangle_{\text{canonical}}$: while such large cross sections would lead to a too-small relic abundance of DM in standard freezeout scenarios, here the subsequent bouncing phase raises the DM abundance to the correct value. Such enhanced present day annihilation cross sections greatly improve the prospects of discovering DM signals with various indirect detection experiments (Section~\ref{subsec:indirectdetection}). We also present other illustrative examples of bouncing DM (Section~\ref{sec:others}). \section{Chemistry of the Bounce} \label{sec:idea} The evolution of number densities of various species can be tracked via the corresponding chemical potentials $\mu_i$, defined as $n_i \!\approx\!n_i^{\rm eq}e^{\mu_i/T}$, where $n_i^{\rm eq}$ is the number density of a species in kinetic equilibrium with the photon bath and with vanishing chemical potential, and here and below we assume $T\!\ll\!m_i$. In this limit, $n^{\rm eq}_i = g_i \left(\frac{m_i T}{2 \pi}\right)^{3/2}\!e^{-m_i/T}$, where $g_i$ is the number of degrees of freedom of the particle. 
If an interaction $A_1+ ...+A_p \leftrightarrow B_1+ ...+B_q$ is rapid compared to the expansion rate of the Universe, i.e.~the Hubble parameter, the chemical potentials of the species involved are related as $\mu_{A_1}+ ...+\mu_{A_p}=\mu_{B_1}+ ...+\mu_{B_q}$. Whether or not a state undergoes a bounce will depend on the behavior of its chemical potential, as we now describe. Suppose that DM shares the same chemical potential as some lighter species, $A$, whose abundance $n_A$ does not rise. In this case, if $\mu_\chi=\mu_A$, then $n_\chi= (n^{\rm eq}_\chi / n^{\rm eq}_A)\,n_A$, which implies that $n_\chi$ falls exponentially, since $n^{\rm eq}_\chi / n^{\rm eq}_A$ is a falling exponential. Therefore, a necessary condition for dark matter to undergo a bounce is that its chemical potential departs from the chemical potentials of all lighter states in the thermal bath. If this is satisfied, further requiring that the DM comoving number density, or yield, $Y_\chi=n_\chi/s$ (where $s= 2\pi^2 g_{*} T^3/45$ is the total entropy density, and $g_*$ is the effective number of degrees of freedom in the bath) \emph{rises} as the temperature drops imposes a stringent condition on $\mu_\chi$. Since the yield scales as $Y_\chi\sim e^{-m_\chi/T} e^{\mu_\chi/T}$, the second exponential must grow faster than the first one drops. More precisely, the condition for $Y_\chi$ to rise as the temperature drops, $dY_\chi/dx>0$, where $x \equiv m_\chi/T$, is that $\mu_\chi$ must satisfy \be \label{eq:condition} \mu_\chi (x)+x\frac{d\mu_\chi (x)}{dx}>m_\chi\left(1-\frac{3}{2x}\right)\;, \ee in the limit $x\gg 1$ and for constant $g_{*}$. Even if $\mu_\chi< m_\chi$, this condition can be satisfied with a sufficiently large $d\mu_\chi/dx$. We define a state to undergo a bounce if it is in equilibrium, and if its chemical potential satisfies Eq.\,\eqref{eq:condition} at some moment in the early Universe. 
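Eq.\,\eqref{eq:condition} can be checked numerically. The sketch below (our illustration, with arbitrary trial chemical potentials and units where $m_\chi=1$) verifies that the sign of $dY_\chi/dx$ for $Y_\chi\propto x^{3/2}\,e^{(\mu_\chi-m_\chi)x/m_\chi}$ matches the inequality:

```python
import math

m = 1.0  # DM mass in arbitrary units (so T = m/x)

def yield_chi(x, mu):
    """Y ~ x^{3/2} exp[(mu(x) - m) x / m], up to constants (x >> 1, constant g_*)."""
    return x**1.5 * math.exp((mu(x) - m) * x / m)

def condition_lhs(x, mu, h=1e-6):
    """mu + x dmu/dx, with a central-difference derivative."""
    return mu(x) + x * (mu(x + h) - mu(x - h)) / (2 * h)

def rising(x, mu, h=1e-6):
    """Numerical sign of dY/dx at x."""
    return yield_chi(x + h, mu) > yield_chi(x - h, mu)

# mu(x) = m(1 - c/x) gives mu + x mu' = m > m(1 - 3/(2x)): Y must rise.
mu_rise = lambda x: m * (1.0 - 2.0 / x)
# A constant mu = 0.5 m fails the condition at large x: Y must fall.
mu_fall = lambda x: 0.5 * m

for x in (10.0, 30.0, 100.0):
    assert rising(x, mu_rise) == (condition_lhs(x, mu_rise) > m * (1 - 1.5 / x))
    assert rising(x, mu_fall) == (condition_lhs(x, mu_fall) > m * (1 - 1.5 / x))
```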
As discussed above, this requires that the DM chemical potential deviates from those of all other lighter species in the bath. The aim of this paper is to explore scenarios that satisfy Eq.\,\eqref{eq:condition}. \section{Bouncing Dark Matter in a Three Particle Framework} \label{sec:bounce} We now illustrate the physics behind the bounce within a simple framework. Consider a dark sector that contains three scalar particles -- the DM candidate $\chi$ and two additional states $\phi_1$ and $\phi_2$ -- with the following interactions: \be - \mathcal{L} \supset\, \lambda_{\chi 1} \chi^2\phi_1^2+\lambda_{\chi 2} \chi^2\phi_2^2+\lambda_{12} \phi_1^2\phi_2^2+\lambda \phi_2^2\chi\phi_1\,. \label{eq:interactions} \ee The first three terms are couplings between two dark sector species, while the final term represents an interaction involving all three states, which will facilitate the bounce. We present an explicit model that naturally realizes these interactions in Section~\ref{sec:model}. We assume that the dimensionless couplings ($\lambda_i$'s) are comparable in size, whereas the masses of the dark sector particles satisfy \be\label{eq:spectrum} m_\chi>m_{\phi_2}>m_{\phi_1},\qquad 2\,m_{\phi_2}>m_\chi+m_{\phi_1}\,. \ee Furthermore, we assume that all three particles are stable on the timescale over which DM freezeout occurs. We also take $g_i=1$ for all three species for simplicity. This setup contains all the ingredients needed to discuss the general aspects of the bounce mechanism. \subsection{Physics of the Bounce} A schematic of the decoupling of the processes leading to freezeout of dark sector particles is shown in Fig.\,\ref{fig:schematic}. As illustrated, the cosmological history can be divided into three distinct phases. In the first (equilibrium) phase, at sufficiently early times, we assume that portal interactions between the dark and SM sectors keep all dark sector particles in thermal equilibrium with the SM bath to temperatures below their masses. 
Hence all dark sector particles follow the standard equilibrium distributions, and we have \be \mu_\chi=\mu_{\phi_1}=\mu_{\phi_2}=0~~~~\text{(dark $\leftrightarrow$ SM active).} \ee We assume that chemical equilibrium between the dark and SM sectors is maintained down to $x=x_c$. Depending on the details of the portal interactions, kinetic equilibrium between the two sectors can last until much later; the implications of this are discussed below. In the second (chemical) phase, after the dark and SM sectors chemically decouple, the dark species develop nonzero chemical potentials, and the total comoving number density of dark sector particles is conserved.\,\footnote{This requires $4\leftrightarrow 2$ number changing interactions within the dark sector to have decoupled by this point; we have checked that this occurs for all cases we consider in this paper. Note that there are no $3\leftrightarrow2$ processes in the dark sector, since all the interactions in Eq.\,\eqref{eq:interactions} involve an even number of states.} The evolution of the number densities is now governed by $2\leftrightarrow2$ interactions that can efficiently interconvert the three dark species, $AA\leftrightarrow BB$, where $A,B=\chi, \phi_1,\phi_2$. The chemical potentials thus follow the relations \be \mu_\chi=\mu_{\phi_1}=\mu_{\phi_2}\neq0~~~~\text{(dark $\leftrightarrow$ dark active).} \label{eq:chem2} \ee As $n_i \approx n_i^{\rm eq}e^{\mu_i/T}$, the equilibrium abundances of the heavier states continue to get exponentially suppressed compared to those of the lighter states, with equilibrium distributions uniformly shifted by the common nonzero chemical potential. Finally, at $x=x_b$ the system enters the third stage of evolution, the bouncing phase, driven by the process $\chi\phi_1\leftrightarrow \phi_2\phi_2$. 
This occurs when the $AA\leftrightarrow BB$ processes discussed above, as well as the process $\chi\phi_2\leftrightarrow \phi_1\phi_2$, which force the DM to share the same chemical potential as the other dark states, go out of equilibrium. This imposes a modified relation between the chemical potentials \be \mu_\chi+\mu_{\phi_1}=2\mu_{\phi_2}~~~~\text{(only $\chi\phi_1\leftrightarrow \phi_2\phi_2$ active).} \label{eq:chem3} \ee When $\chi\phi_1\leftrightarrow \phi_2\phi_2$ also goes out of equilibrium, the DM abundance finally freezes out to a constant value. In Fig.\,\ref{fig:bounce1}, we show the evolution of the yields $Y_i$ for the three dark sector states, obtained by numerically solving the Boltzmann equations for the system (see the Appendix for details) for illustrative benchmark parameters. The solid curves assume kinetic equilibrium throughout, i.e.~all bath particles share a common temperature $T$. The transition between the second and third phases, marked by the ``bounce" from an exponentially falling to an exponentially rising curve for DM, is clearly visible. We show the corresponding evolution of the chemical potentials in Fig.\,\ref{fig:chem}, which reflects the discussions above; in particular, note that the bounce corresponds to the instance when the DM chemical potential deviates away from those of the other lighter dark states. In Fig.\,\ref{fig:bounce1} we also show (dotted curves) the effect of kinetic decoupling between the dark and SM sectors at $x=x_c$, which results in the dark sector cooling faster than the SM bath, with temperature $T_{d}=m_{\chi} x_c/x^2$. This shifts the evolution of the system to lower $x$ but otherwise maintains the main qualitative features of the bounce, and the final DM freezeout abundance is only modified by an $\mathcal{O}(1)$ number. 
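For completeness, this scaling follows from standard kinetic decoupling arguments (assuming adiabatic expansion and constant $g_*$): after decoupling, the momenta of the nonrelativistic dark particles redshift as $p\propto a^{-1}$, so the dark sector temperature falls as $T_d\propto p^2/m_\chi\propto a^{-2}$, while the bath cools as $T\propto a^{-1}$. Matching the two at $x=x_c$ gives \be T_d=\frac{m_\chi}{x_c}\left(\frac{x_c}{x}\right)^{2}=\frac{m_\chi\,x_c}{x^2}\,. \ee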
Depending on the exact nature of the portal interactions, kinetic decoupling generally occurs at some $x>x_c$, hence the two sets of curves represent the two extremes of late (solid) and early (dotted) kinetic decoupling, and explicit models are expected to fall in between (see also Fig.\,\ref{fig:crosssections}). The physics behind the bounce in this setup is very intuitive: since we have $2\,m_{\phi_2}>m_\chi+m_{\phi_1}$, the $ \phi_2\phi_2\to \chi\phi_1$ process is kinematically allowed, whereas the inverse one requires thermal support (note that this is the reverse of standard freezeout dynamics, where processes that deplete DM are kinematically open). Therefore, as the temperature drops, the lighter combination $\chi\phi_1$ is preferentially populated over $\phi_2\phi_2$. Since $\phi_2$ is far more abundant than $\chi$, this results in an exponential \textit{increase} in the comoving number density of $\chi$ as $\phi_2$ particles rapidly get converted into $\chi$ and $\phi_1$. We now present an analytic discussion of the bounce. The conservation of comoving number density in the dark sector after chemical decoupling can be expressed as \be Y_\chi+Y_{\phi_1}+Y_{\phi_2}\equiv Y_S\,, \label{eq:c1} \ee where $Y_S$ denotes the sum of the yields of dark sector particles at the time of chemical decoupling. Next, the relation between chemical potentials after the bounce, Eq.\,\eqref{eq:chem3}, can be rewritten as \be Y_{\phi_2}^2=R\, Y_{\phi_1} Y_\chi\,,\qquad R (T)\equiv (n_{\phi_2}^{\rm eq})^2/(n_{\phi_1}^{\rm eq} n_{\chi}^{\rm eq})\,. \label{eq:c2} \ee Furthermore, when $\chi\phi_1\leftrightarrow \phi_2\phi_2$ is the only rapid interaction, $Y_{\phi_1} - Y_{\chi}$ is also conserved, hence \be Y_{\phi_1}-Y_\chi=Y_{\phi_1}^b-Y_\chi^b\equiv Y_D\,, \label{eq:c3} \ee where the superscript $b$ denotes the yields calculated at the bounce. 
We thus have three equations,~\eqref{eq:c1},~\eqref{eq:c2}, and~\eqref{eq:c3}, which can be solved analytically for the three unknowns $Y_\chi, Y_{\phi_1},$ and $Y_{\phi_2}$: \bea Y_\chi&=&\frac{4Y_S-(4-R)Y_D-\sqrt{4 R Y_S^2 - (4-R) R Y_D^2}}{2(4-R)}\,,\nonumber\label{eq:dm}\\ Y_{\phi_1}&=&\frac{4Y_S+(4-R)Y_D-\sqrt{4 R Y_S^2 - (4-R) R Y_D^2}}{2(4-R)}\,, \nonumber \label{eq:Labundance}\\ Y_{\phi_2}&=&\frac{\sqrt{4 R Y_S^2 -(4-R) R Y_D^2} - R Y_S}{4-R}\,. \label{eq:Mabundance} \eea Note that these no longer satisfy $\mu_\chi=\mu_{\phi_1}=\mu_{\phi_2}$. These expressions describe the abundances of the dark sector species in the bouncing phase and are completely determined by the three quantities $Y_S, Y_D$, and $R$. Moreover, since $Y_S$ and $Y_D$ are constant, the temperature dependence of the solutions is entirely encoded by \be R = \left(\frac{m_{\phi_2}^2}{m_{\phi_1} m_\chi}\right)^{3/2} \exp{\Big(\hspace{-1mm} -\frac{2m_{\phi_2}-m_{\phi_1}-m_\chi}{T}\Big)\,.} \ee From this, we see that $2\,m_{\phi_2}<m_\chi+m_{\phi_1}$ and $2\,m_{\phi_2}>m_\chi+m_{\phi_1}$ lead to drastically different behaviors. In the former case, $R$ rises exponentially as $T$ drops, and $Y_{\chi}$ drops exponentially as a consequence, as is characteristic of standard freezeout processes. However, in the latter case we see the opposite behavior: $R$ falls exponentially with decreasing temperature, hence $Y_\chi$ \textit{increases} after the bounce. In particular, $R\to 0$ as $T \to0$, and in this limit the DM yield approaches a constant value, \be Y_\chi\to\frac{1}{2}(Y_S-Y_D)=\frac{1}{2}Y_{\phi_2}^b+Y_{\chi}^b\,. \label{eq:yhlimit} \ee This limit is straightforward to understand: all of the $\phi_2$ particles present at the bounce are converted to $\chi\phi_1$ at later times, thus contributing the first term, which gets added to the $Y_{\chi}^b$ already present in the bath at the bounce. 
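The closed-form solutions above can be verified numerically. The sketch below (with arbitrary illustrative values of $Y_S$, $Y_D$, and $R$, not model benchmarks) checks that they satisfy the three constraints and reproduce the $R\to 0$ asymptote $Y_\chi\to(Y_S-Y_D)/2$:

```python
import math

def yields(Y_S, Y_D, R):
    """Closed-form (Y_chi, Y_phi1, Y_phi2) in the bouncing phase (valid for R != 4)."""
    root = math.sqrt(4 * R * Y_S**2 - (4 - R) * R * Y_D**2)
    Y_chi  = (4 * Y_S - (4 - R) * Y_D - root) / (2 * (4 - R))
    Y_phi1 = (4 * Y_S + (4 - R) * Y_D - root) / (2 * (4 - R))
    Y_phi2 = (root - R * Y_S) / (4 - R)
    return Y_chi, Y_phi1, Y_phi2

# Hypothetical numbers, chosen only to exercise the algebra.
Y_S, Y_D = 1.0e-10, 2.0e-11
for R in (1.0e-3, 1.0e-1, 1.0):
    yc, y1, y2 = yields(Y_S, Y_D, R)
    assert abs(yc + y1 + y2 - Y_S) < 1e-22    # total comoving number conserved
    assert abs(y1 - yc - Y_D) < 1e-22         # Y_phi1 - Y_chi conserved
    assert abs(y2**2 - R * y1 * yc) < 1e-30   # mu_chi + mu_phi1 = 2 mu_phi2

# As R -> 0, Y_chi approaches (Y_S - Y_D)/2, the asymptotic value derived above.
yc, _, _ = yields(Y_S, Y_D, 1e-8)
assert abs(yc - (Y_S - Y_D) / 2) / ((Y_S - Y_D) / 2) < 1e-3
```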
In this limit, the enhancement in the dark matter relic abundance relative to the canonical freezeout abundance is \be \frac{Y_\chi}{Y_\chi^b}\approx \frac{1}{2} \frac{Y_{\phi_2}^b}{Y_\chi^b} \approx \frac{1}{2}\left(\frac{m_{\phi_2}}{m_\chi}\right)^{3/2} \exp{\Big(\frac{m_\chi-m_{\phi_2}}{T_b}\Big)}\,. \ee This ratio is maximized by maximizing the $m_\chi-m_{\phi_2}$ splitting, which occurs in the limit $m_{\phi_1}\to 0$ and $m_\chi\to 2m_{\phi_2}$. Noting that obtaining the correct relic density for weak scale masses in this limit requires $T_b\sim m_{\phi_2}/25$, we estimate that $Y_\chi/Y_\chi^b$ can be as large as $\sim10^{10}$. In practice, however, the asymptotic limit in Eq.\,(\ref{eq:yhlimit}) is not reached for two reasons. First, $\chi\phi_1\leftrightarrow \phi_2\phi_2$ freezes out in some finite time. Second, when $Y_\chi\sim Y_{\phi_2}$, the processes $\chi\phi_2\to \phi_1\phi_2$ and/or $\chi\chi \to \phi_2 \phi_2$ become comparable in strength, causing a departure from the above conditions. As a result, $\chi$ and $\phi_2$ tend to freeze out with comparable abundances. Both $\chi$ and $\phi_2$ may survive until today, forming two-component DM, or $\phi_2$ may decay before Big Bang nucleosynthesis (BBN), leaving only $\chi$ in the late Universe. The very large freezeout abundance of $\phi_1$, however, implies that it must decay before BBN\@. If it gives rise to an early matter dominated era before decaying, the subsequent injection of entropy into the thermal bath will dilute the DM abundance. Decays of dark states are discussed within a concrete model in Section~\ref{sec:model}. \subsection{Indirect Detection} \label{subsec:indirectdetection} In the simple setup we are considering, the present day DM annihilation cross section $\chi\chi\to \phi_i\phi_i$ has approximately the same size as in the early Universe, since it proceeds through the $s$-wave. 
If $\phi_i$ decays to SM states, this can give rise to observable signals at current and future experiments. The indirect detection signatures from such cascade processes in dark sectors have been discussed in various papers, see e.g.~\cite{Elor:2015tva,Elor:2015bho,Bell:2016fqf,Barnes:2020vsc,Barnes:2021bsn,Gori:2018lem}. The most interesting phenomenological aspect of bouncing DM is that the associated cross sections can be significantly larger than $\langle \sigma v\rangle_{\text{canonical}}$. In standard freezeout scenarios driven by DM self-annihilation, increasing $\langle\sigma v\rangle_{\chi\chi\to \phi_i\phi_i}\!>\!\langle \sigma v\rangle_{\text{canonical}}$ would lead to DM tracking its exponentially falling equilibrium curve for longer, freezing out with a relic abundance too small to match observations. For bouncing DM, however, this suppression can be overcome by the exponential enhancement after the bounce, hence larger $\langle\sigma v\rangle_{\chi\chi\to \phi_i\phi_i}$ remains compatible with the observed relic abundance. This is illustrated in Fig.\,\ref{fig:crosssections}, where we plot the predicted present day $\chi\chi\to\phi_1\phi_1$ annihilation cross section for some representative sets of parameters consistent with the observed DM relic density (for these calculations, we assume $g_*=100$ for simplicity; this affects the cross section results by $\lesssim 15\%$). The exact nature of the visible signals depends on model-specific details, in particular the dominant decay modes of $\phi_{1}$ and $\phi_2$ (if the latter is unstable). Here we assume that $\phi_2$ decays away before BBN, so $\chi$ makes up all of DM, and that the $\chi\chi\to \phi_2 \phi_2$ cross section is sufficiently suppressed that we can neglect its contribution. 
For concreteness, we also assume the decay mode $\phi_1\to WW$ and choose $m_{\phi_1}=m_\chi/2$, which enables us to adapt results from~\cite{Barnes:2021bsn} to plot bounds from Fermi observations of dwarf galaxies \cite{Fermi-LAT:2015att} and the projected sensitivity from 500 hours of observation of the Galactic Center with the Cherenkov Telescope Array (CTA)~\cite{CTA:2020qlo}.\,\footnote{We use the results from Fig.\,7 of \cite{Barnes:2021bsn} for $\chi\chi\to H'H'$, where $H'$ is a dark Higgs that decays dominantly to $WW$.} We show a baseline case assuming kinetic equilibrium throughout (solid black curve), as well as the modified cross sections to maintain the correct relic density assuming kinetic decoupling at $x_c$ (black dashed), or with the specific choice of a Higgs portal between $\phi_1$ and the SM Higgs doublet $\mathcal{H}$, $\lambda_v \phi_1^2 |\mathcal{H}|^2$ (black dotted). For the Higgs portal case, the size of $\lambda_v$ controls both chemical and kinetic decoupling (for details on kinetic decoupling calculations, see e.g.~\cite{Kuflik:2015isi}). We thus see that the details of kinetic decoupling can modify the cross section by an $\mathcal{O}(1)$ number. We also show the effects of a smaller mass splitting (solid green), which leads to an enhancement of $n_{\phi_2}$ before the bounce, hence requiring a slightly larger overall cross section to trigger the bounce later and achieve the correct DM relic density. The plot illustrates that the annihilation cross sections for bouncing DM can be larger than the thermal target by more than an order of magnitude (in other parts of parameter space, these cross sections can be much larger or smaller). In particular, CTA is unable to reach the thermal target for the chosen decay modes, but can probe the cross sections predicted by bouncing DM for all shown cases over almost the entire mass range, highlighting the improved indirect detection prospects in this framework. 
\subsection{Model and Constraints} \label{sec:model} We now present a concrete realization of the three-species framework. Consider scalar multiplets transforming as $\chi \sim \mathbf{3}_0$, $\phi_2 \sim \mathbf{2}_{+1}$, $\phi_1 \sim \mathbf{1}_0$ under a dark global $SU(2)\times U(1) \times Z_2$ symmetry, with all fields odd under the $Z_2$. This allows the following dark sector interactions at the renormalizable level, \bea -\mathcal{L} \supset&\, \lambda_{\chi 1}\mathrm{Tr}(\chi^2) \phi_1^2 + \lambda_{\chi 2}\mathrm{Tr}(\chi^2) |\phi_2|^2 + \lambda_{12} \phi_1^2 |\phi_2|^2\nonumber\\ &+\lambda \phi_2^\dagger \chi \phi_2 \phi_1\,, \qquad \chi \equiv \sigma^a \chi^a\,, \label{eq:interactions_model} \eea where $\sigma^a$ are the Pauli matrices. All other number-changing quartics are automatically forbidden: in particular, $\chi^3\phi_1$ vanishes since $\epsilon^{abc}\chi^a \chi^b \chi^c = 0$, whereas $\chi \phi_1^3$, which would efficiently suppress the bounce, is also not allowed.\,\footnote{The above symmetry structure arises naturally in a three-flavor (dark) QCD model with $m_d = m_s$, as considered in~\cite{Katz:2020ywn}.} In addition, we assume $\phi_1$ couples to the SM as $\mathcal{L} =- \lambda_v \phi_1^2\, \mathcal{H}^\dagger \mathcal{H} + (\bar{g} \phi_1/\Lambda) W_{\mu\nu}^a W^{a\,\mu\nu}$.\,\footnote{An interesting alternative~\cite{Katz:2020ywn} would be to gauge the dark $U(1)$ and introduce a kinetic mixing of its vector boson with SM hypercharge, thus realizing a vector portal.} The first coupling keeps the dark and visible sectors in equilibrium at early times, while the second coupling (which is just one of many possible choices) explicitly breaks the $Z_2$ and ensures that $\phi_1$ decays to SM particles. As we find below, $\bar{g}$ must be tiny, parametrically smaller than all other couplings in the model. 
Since all the interactions we consider preserve the dark $SU(2)\times U(1)$, both $\phi_2$ and $\chi$ are stable and can contribute to DM; $\chi$ stability is guaranteed by Eq.\,\eqref{eq:spectrum}, which implies $m_\chi < 2\,m_{\phi_2}$. The presence of two DM components is an interesting feature of this minimal model. The $\phi_1$ decay width receives two contributions. At tree level, the decays to transverse $WW, ZZ$ give $\Gamma \simeq 3\bar{g}^2 m_{\phi_1}^3/(4\pi \Lambda^2)$. At one loop, a tadpole term $\sim \bar{g}\Lambda^3\phi_1 / (4\pi)^2$ is generated in the scalar potential, leading to a vacuum expectation value (VEV) $\langle \phi_1 \rangle \sim \bar{g}\Lambda^3/[(4\pi)^2 m_{\phi_1}^2]$. As a consequence, $\phi_1$ also decays via the Higgs portal to $hh$ and longitudinal $WW, ZZ$ with $\Gamma \simeq \lambda_v^2 \bar{g}^2 \Lambda^6/ [2\pi (4\pi)^4 m_{\phi_1}^5]$. We require the $\phi_1$ lifetime to be $10^{-6} \lesssim \tau_{\phi_1}/\mathrm{s} \lesssim 1$, i.e.~long enough to enable the bounce, but short enough to not affect BBN\@. We also require that the trilinear couplings generated by the $\phi_1$ VEV not impact the bounce. These give rise to effective quartics, hence $2\to 2$ processes, that are much smaller than those already present in the Lagrangian provided $\Lambda\,\bar{g}^{1/3}\! \ll\!5\,\mathrm{TeV}\,(m/\mathrm{TeV}) / \lambda_i^{1/6}$, where $m$ is the approximate mass scale of the particles and $\lambda_i$ the generic size of the quartics in Eq.\,\eqref{eq:interactions_model}. The trilinears also give rise to $3\to 2$ processes such as $\phi_1 \phi_1 \phi_1 \to \phi_2 \phi_2$; imposing that these decouple before $x_c$ gives $\Lambda\,\bar{g}^{1/3} \lesssim 2 \,\,\mathrm{TeV}\, (m/\mathrm{TeV})^{7/6} (x_c/10)^{2/3} (10^{-4}/Y_{\phi_1})^{1/3} /\lambda_i^{2/3}$ (using benchmark values from Fig.\,\ref{fig:bounce1}). 
Finally, requiring that $\phi_1 \phi_1 \to WW$ also decouples before $x_c$ leads to $\Lambda\,\bar{g} \lesssim 0.03\,\,\mathrm{TeV}\, (m/\mathrm{TeV})^{5/4} (x_c/10)^{1/4} (10^{-4}/Y_{\phi_1})^{1/4} /\lambda_i^{1/2}$; notice that for very small $\bar{g}$ this condition is weaker than the previous ones. All the above constraints are satisfied together with those on the $\phi_1$ lifetime in a broad swath of parameter space, spanning $\bar{g} < 10^{-9}$ and $\Lambda > 10$ TeV\@. Scattering of $\chi$ with nuclei occurs via one loop processes, with cross-section $\sigma_{\chi N}^{\rm SI} \approx 10^{-48}\,\mathrm{cm}^2 \lambda_{\chi 1}^2 \lambda_v^2\, (\mathrm{TeV}/m_\chi)^2$ (analogous expressions hold for $\phi_2$). For $\lambda_{\chi 1}, \lambda_{1 2} \sim \mathcal{O}(1)$ and $\lambda_v \ll 1$, as is typical in our parameter space, these cross sections are below the neutrino floor. Although not required, $\phi_2$ decay can be induced through a $\phi_2$-SM-SM interaction, parametrized by an effective coupling $g_2$ that explicitly breaks the dark global symmetry, leading to $\Gamma_{\phi_2}\! \sim\!g_2^2 m_{\phi_2}/(4\pi)$. $\phi_2$ decays before BBN provided $g_2 \gtrsim 10^{-13}$. This also makes $\chi$ unstable, and we need to ensure that it is sufficiently long-lived to satisfy experimental bounds. If $m_\chi \gtrsim m_{\phi_1} + m_{\phi_2}$, the DM undergoes $4$-body decays with amplitude suppressed by only one insertion of $g_2$, leading to an excessively short lifetime. However, if $m_\chi< m_{\phi_1} + m_{\phi_2}$, DM decays to $5$-body final states (or $3$-body final states via one loop processes) with $\Gamma_{\chi} \sim \lambda^2 g_2^2\, \mathrm{max}(g_1^2, g_2^2) m_\chi / (4\pi)^7$, where $g_1$ is the effective $\phi_1$-SM-SM coupling that controls $\phi_1$ decay (in the minimal model, $g_1 \sim \bar{g}\, m_{\phi_1}/\Lambda$ if tree level decays dominate). 
The resulting $\chi$ lifetime satisfies current bounds, yet is potentially interesting for future indirect detection probes~\cite{CTA:2020qlo,Arguelles:2019ouk} of decaying DM: for instance, with $g_1\sim g_2\sim10^{-12}$, $\lambda \sim1$ and $m_\chi\sim\mathrm{TeV}$, we find $\tau_\chi \sim 10^{28}\;\mathrm{s}$. \section{Other Bounce Scenarios} \label{sec:others} In Section \ref{sec:bounce}, we discussed the bounce in the framework of a three particle system with the mass relations in Eq.\,(\ref{eq:spectrum}). However, bouncing DM can be more broadly realized in other qualitatively different scenarios; it only requires a transition to a new equilibrium curve (such as Eq.\,\eqref{eq:chem3}) that allows the DM chemical potential to depart from those of other species in the bath, and increase sufficiently rapidly to counteract the standard $e^{-m/T}$ suppression. Here, we discuss some other scenarios that realize these conditions. We consider scalar DM for simplicity; however, the bounce can be realized for DM of any spin. For definiteness, kinetic equilibrium is assumed at all temperatures in the examples in this section. \subsection{Coannihilation with a Decaying Partner} \label{sec:bounce2} In Section~\ref{sec:bounce}, the bounce was realized through kinematics; here, we consider a qualitatively different setup that utilizes the decay of a particle to trigger the bounce. Consider the same setup as in Section~\ref{sec:bounce}, but with the following modifications: \vspace{0.5mm} 1. The reversed condition $2\,m_{\phi_2}\lesssim m_\chi+m_{\phi_1}$;\,\footnote{In this case, note that a sufficiently light $\phi_1$ can lead to a rapid decay channel $\chi\to\phi_1\phi_2\phi_2$ even if $\phi_2$ is stable.} 2. $\phi_1$ decays around the time when $\chi$ freezes out, i.e. $\Gamma_{\phi_1}\sim H (x\sim x_f)$. \vspace{0.5mm} \noindent Due to the modified relation between the masses, $\phi_2\phi_2\to \chi\phi_1$ is now kinematically closed, and cannot enforce the bounce. 
Instead, the key ingredient that enables the bounce is the decay of $\phi_1$. To understand this, note that the relation between chemical potentials $\mu_\chi+\mu_{\phi_1}=2\mu_{\phi_2}$ still holds due to $\chi\phi_1\leftrightarrow \phi_2\phi_2$ being rapid in the final stage of freezeout. The decays of $\phi_1$ cause $\mu_{\phi_1}$ to drop; to maintain the above relation, this must be accompanied by a decrease in $\mu_{\phi_2}$ and an increase in $\mu_\chi$, i.e.~the forward process $\phi_2\phi_2\to \chi\phi_1$ is preferred despite being kinematically disfavored. Since the relations between the yields in Eqs.\,\eqref{eq:c1} and~\eqref{eq:c3} no longer hold due to $\phi_1$ decaying, analytic solutions are difficult to derive. However, the existence of the bounce can be verified numerically, as shown in Fig.\,\ref{fig:bounce3}. The corresponding chemical potentials are shown in Fig.\,\ref{fig:chemdec}. \subsection{Freezeout Driven by a $3\leftrightarrow 2$ Process } \label{sec:bounce3} In contrast to the frameworks considered so far (Section~\ref{sec:bounce} and Section~\ref{sec:bounce2}), we now turn to an example where DM pair-interacts, and the bounce is driven by a $3\leftrightarrow 2$ process. Consider a dark sector with two states, $\chi$ (DM) and $\phi$, with the mass relations \be m_\chi>m_\phi\,, \qquad 2 m_\chi<3 m_\phi\,. \label{eq:m2system} \ee Let us assume that $\chi^2\phi^3$ is the only important interaction (in particular, we assume that $\chi^2\phi^2$ is suppressed and negligible). This gives rise to several $3\leftrightarrow2$ number changing interactions, while $2\leftrightarrow2$ processes are absent. When all $3\leftrightarrow2$ interactions such as $\phi\phi\chi\leftrightarrow\phi\chi$ and $\phi\phi\phi\leftrightarrow\chi\chi$ are rapid, the system tracks the standard equilibrium distribution $\mu_\chi=\mu_\phi=0$. 
As these interactions go out of equilibrium, the system undergoes a bounce at the point where $\phi\phi\phi\leftrightarrow\chi\chi$ remains as the only rapid interaction; this corresponds to a transition to a new equilibrium curve governed by \be 3\mu_\phi=2\mu_\chi~~~(\phi\phi\phi\leftrightarrow\chi\chi ~\text{active}). \label{eq:c21} \ee This triggers an exponential increase of the DM abundance as $\phi\phi\phi\to \chi\chi$ is kinematically open, while the inverse process requires thermal support. This result can be derived analytically by noting that if only $\phi\phi\phi\leftrightarrow\chi\chi$ is active, the following quantity is conserved \be 2Y_\phi+3Y_\chi=2Y_\phi^b+3Y_\chi^b\,. \label{eq:c22} \ee Thus, we have two equations,\,\eqref{eq:c21} and \eqref{eq:c22}, with two unknowns, $Y_\chi$ and $Y_\phi$. Since Eq.\,\eqref{eq:c21} can be rewritten as $(Y_\phi/Y_\phi^{\rm eq})^3=(Y_\chi/Y_\chi^{\rm eq})^2$, we need to solve a cubic equation, and a closed analytic form of the solution, while possible, is not very illuminating. Instead, we note that if the bounce occurs at $T\ll m_\phi, m_\chi$, then $n_\chi\ll n_\phi$, hence in the early stages of the bouncing phase $Y_\phi$ does not decrease appreciably, remaining approximately constant. This implies $\mu_\phi\approx m_\phi-c T$, where the constant $c>0$ is determined by $Y_\phi^b$. Using $Y_\chi\approx e^{-(m_\chi-\mu_\chi)/T}$ together with Eq.\,\eqref{eq:c21}, the $\chi$ yield is \be Y_\chi\approx e^{-3 c/2} \,e^{- (2 m_\chi-3m_\phi)/(2T)}\,. \label{eq:3to2approx} \ee This makes it clear that the bounce corresponds precisely to the mass condition $2 m_\chi<3 m_\phi$ in Eq.\,\eqref{eq:m2system}, which leads to an exponential increase of the DM yield. In Fig.\,\ref{fig:bounce3to2}, we show the evolution of the yields for a benchmark case illustrating the bounce in this framework (the DM abundance in the bouncing phase can be approximated by Eq.\,\eqref{eq:3to2approx} with $c\approx 16$). 
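The joint solution of the new equilibrium condition and the conserved combination can also be obtained numerically. A minimal Python sketch (illustrative masses, initial yields and a constant $g_*=10$, not the paper's benchmark) that solves $(Y_\phi/Y_\phi^{\rm eq})^3=(Y_\chi/Y_\chi^{\rm eq})^2$ together with fixed $2Y_\phi+3Y_\chi$ by bisection:

```python
import math

GSTAR = 10.0  # assumed relativistic degrees of freedom in the entropy density

def s_ent(T):
    """Total entropy density s = 2 pi^2 g_* T^3 / 45."""
    return 2 * math.pi**2 * GSTAR * T**3 / 45

def yeq(m, T, g=1.0):
    """Nonrelativistic equilibrium yield Y_eq = n_eq / s."""
    return g * (m * T / (2 * math.pi))**1.5 * math.exp(-m / T) / s_ent(T)

def bounce_yield(m_phi, m_chi, Yb_phi, Yb_chi, T):
    """Solve 3*mu_phi = 2*mu_chi, i.e. (Y_phi/Y_phi_eq)^3 = (Y_chi/Y_chi_eq)^2,
    with 2*Y_phi + 3*Y_chi held at its value at the start of the bounce."""
    total = 2 * Yb_phi + 3 * Yb_chi
    ye_p, ye_c = yeq(m_phi, T), yeq(m_chi, T)
    def excess(y_chi):  # > 0 while phi phi phi -> chi chi is still favored
        y_phi = (total - 3 * y_chi) / 2
        return (y_phi / ye_p)**3 - (y_chi / ye_c)**2
    lo, hi = 0.0, total / 3  # excess is strictly decreasing, so the root is unique
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if excess(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)
```

For $2m_\chi<3m_\phi$ (e.g. $m_\phi=1$, $m_\chi=1.4$ in arbitrary mass units) the solved $Y_\chi$ grows as $T$ drops, reproducing the exponential rise discussed above.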
The above scenario can arise, for instance, if $\chi$ and $\phi$ are complex scalars whose interactions are mediated only by a heavy scalar $S$ sharing the same quantum numbers as $\chi$. Then the operator $\chi^\ast S |\phi|^2$ is unsuppressed, whereas $\chi^\ast S \phi$ can naturally have a small coupling $\epsilon$ as it violates the $U(1)$ symmetry associated with $\phi\,$-$\,$number. In the effective theory obtained by integrating out $S$, the operator $|\chi|^2|\phi|^2$ is proportional to $\epsilon^2$ and can naturally be much smaller than $|\chi|^2 |\phi|^2 \phi$, which only receives a single $\epsilon\,$-$\,$suppression. \section{Discussion} In this letter, we have introduced the concept of bouncing dark matter, a novel mechanism for producing thermal relic dark matter. Its defining feature is that DM inherits a large chemical potential, from processes that are in equilibrium, leading to an exponential rise of the DM abundance before it freezes out. This behavior is in stark contrast to most thermal mechanisms, where the DM abundance prior to freezeout is generally characterized by a falling exponential. The main phenomenological consequence of the bounce is the possibility of enhanced present day DM annihilation cross sections over the canonical thermal target, which can improve the prospects of DM indirect detection with current and near future experiments. In this work, we have focused on presenting the key physics concepts underlying the bounce within simplified frameworks. It will be interesting to study whether the bounce naturally occurs in existing beyond the Standard Model (BSM) constructions. In general, the bounce requires one or more companion particles with mass comparable to that of DM, which are stable over the timescale of DM freezeout. These conditions are readily realized in extended BSM sectors with nearly degenerate, metastable particles. 
Explorations of the mass regimes over which the bounce can occur, the portal interactions that couple the dark and visible sectors, the nature of indirect detection signals, and the enhanced reach in parameter space afforded by improved indirect detection prospects can be framed within specific BSM constructions. Such studies can shed light on additional aspects of the bouncing dark matter framework, and therefore represent promising directions for future work. \section*{Acknowledgments} We thank Simon Knapen and Stefan Vogl for helpful discussions and comments. JTR and BS are supported by the Deutsche Forschungsgemeinschaft under Germany’s Excellence Strategy - EXC 2121 Quantum Universe - 390833306. JTR is also supported by NSF grant PHY-1915409 and NSF CAREER grant PHY-1554858. ES acknowledges partial support from the EU’s Horizon 2020 programme under the MSCA grant agreement 860881-HIDDeN\@. JTR, ES, and BS warmly thank the CERN Theory Group, where part of this research was completed, for hospitality. JTR also acknowledges hospitality from the Aspen Center for Physics, which is supported by National Science Foundation grant PHY-1607611. BS also thanks the Berkeley Center for Theoretical Physics for hospitality. \section*{Appendix} Here we present the details of the Boltzmann equations that were numerically solved to obtain the abundances of the dark sector particles. For the three particle framework, we track the yields of the dark sector particles in terms of the dimensionless variable $x=m_{\chi}/T$, where $T$ is the temperature of the SM bath. The initial conditions are set at the time of chemical decoupling of the dark sector, $x=x_c$. In cases where $\phi_1$ decays are not relevant on the bounce timescale (Section \ref{sec:bounce}), $Y_{\phi_1}$ is assumed to be constant after the chemical decoupling. 
The yields of $\phi_2$ and $\chi$ are obtained by solving the following Boltzmann equations: \begin{align} \frac{\mathrm dY_{\phi_2}}{\mathrm dx} = -\frac{s(x)}{\tilde{H}(x)x}&\left[2\,\sigma_{\chi\chi\phi_2\phi_2}\left(\frac{(Y_{\chi}^{\textup{eq}})^2}{(Y_{\phi_2}^{\textup{eq}})^2}Y_{\phi_2}^2-Y_{\chi}^2\right)\right.\nonumber\\&\left.+\,2\,\sigma_{\phi_2\phi_2\phi_1\phi_1}\left(Y_{\phi_2}^2-\frac{(Y_{\phi_2}^{\textup{eq}})^2}{(Y_{\phi_1}^{\textup{eq}})^2}Y_{\phi_1}^2\right)\right.\nonumber\\&\left.+\,2\,\sigma_{\phi_2\phi_2\chi\phi_1}\left(Y_{\phi_2}^2-\frac{(Y_{\phi_2}^{\textup{eq}})^2}{Y_{\chi}^{\textup{eq}}Y_{\phi_1}^{\textup{eq}}}Y_{\chi}Y_{\phi_1}\right)\right]\,, \end{align} \begin{align} \frac{\mathrm dY_{\chi}}{\mathrm dx} = -\frac{s(x)}{\tilde{H}(x)x}&\left[2\,\sigma_{\chi\chi\phi_2\phi_2}\left(Y_{\chi}^2-\frac{(Y_{\chi}^{\textup{eq}})^2}{(Y_{\phi_2}^{\textup{eq}})^2}Y_{\phi_2}^2\right)\right.\nonumber\\&\left.+\,2\,\sigma_{\chi\chi\phi_1\phi_1}\left(Y_{\chi}^2-\frac{(Y_{\chi}^{\textup{eq}})^2}{(Y_{\phi_1}^{\textup{eq}})^2}Y_{\phi_1}^2\right)\right.\nonumber\\&\left.+\,\sigma_{\phi_2\phi_2\chi\phi_1}\left(\frac{(Y_{\phi_2}^{\textup{eq}})^2}{Y_{\chi}^{\textup{eq}}Y_{\phi_1}^{\textup{eq}}}Y_{\chi}Y_{\phi_1}-Y_{\phi_2}^2\right)\right.\nonumber\\&\left.+\,\sigma_{\phi_2\phi_2\chi\phi_1}\left(Y_{\phi_2}Y_{\chi}-\frac{Y_{\chi}^{\textup{eq}}}{Y_{\phi_1}^{\textup{eq}}}Y_{\phi_2}Y_{\phi_1}\right)\right]\,. \end{align} Note that we have replaced the thermally averaged cross sections $\langle \sigma v\rangle_{ABCD}$ with their $T=0$ values $\sigma_{ABCD}$ to lighten the notation, since in this work all processes proceed through the $s$-wave; in general, the proper thermally averaged values should be used. 
The zero temperature cross sections are related to the quartic couplings in Eq.\,\eqref{eq:interactions} as \begin{equation} \sigma_{AACD} = \tfrac{\lambda_i^2(1 + \delta_{CD})}{8\pi m_A^2}\left( 1 - 2 \tfrac{m_C^2 + m_D^2}{4\,m_A^2} + \tfrac{(m_C^2 - m_D^2)^2}{16\,m_A^4} \right)^{1/2}\,, \end{equation} where $\lambda_i$ corresponds to the relevant coupling from Eq.\,\ref{eq:interactions}, and $\delta_{CD}$ is the Kronecker delta function. For instance, the benchmark cross sections in Fig.\,\ref{fig:bounce1} correspond to $\lambda_{\chi1} = 0.09, \lambda_{\chi 2} = 0.1, \lambda_{12} = 0.04$, and $\lambda = 0.6$. In addition, we have defined $\tilde{H} \equiv H/ [1 + (1/3)\text{d} \log g_* /\text{d} \log T]$, where $H=\pi \sqrt{g_*}\, T^2/(3 \sqrt{10}\, M_{\rm Pl})$ is the Hubble parameter, with $M_{\rm Pl}$ the reduced Planck mass, and $s = 2\pi^2 g_\ast T^3 / 45$ is the total entropy density. Recall that $n^{\rm eq}_i = g_i \left(\frac{m_i T}{2 \pi}\right)^{3/2}\!e^{-m_i/T}$ in the nonrelativistic limit $T \ll m_i$. In cases where the dark and SM sectors kinetically decouple at some temperature $T_k$, we solve the above equations with the modified equilibrium distributions corresponding to the modified dark sector temperature, $n^{\rm eq}_i(T)\to n^{\rm eq}_i(T_d)$ with $T_d = T^2/T_k$, which corresponds to instantaneous kinetic decoupling of the dark sector at $T_k$. 
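The structure of the collision terms above translates directly into code. A minimal Python sketch (our naming; constant $g_*=10$ so that $\tilde H\simeq H$, and illustrative $T=0$ cross sections in $\mathrm{GeV}^{-2}$, not the Fig. benchmark); a useful consistency check is that every bracket vanishes in full equilibrium, $Y_i=Y_i^{\rm eq}$:

```python
import math

GSTAR = 10.0  # assumed constant g_*, so the d log g_*/d log T correction vanishes
MPL = 2.435e18  # reduced Planck mass in GeV
# Illustrative T=0 cross sections (GeV^-2); keys name the process, e.g. 'cc22' = chi chi <-> phi2 phi2
SIGMA = {'cc22': 4e-10, 'cc11': 4e-10, '2211': 4e-10, '22c1': 4e-10}

def s_ent(T):
    return 2 * math.pi**2 * GSTAR * T**3 / 45

def yeq(m, T, g=1.0):
    """Nonrelativistic equilibrium yield n_eq/s."""
    return g * (m * T / (2 * math.pi))**1.5 * math.exp(-m / T) / s_ent(T)

def hubble(T):
    return math.pi * math.sqrt(GSTAR) * T**2 / (3 * math.sqrt(10) * MPL)

def rhs(x, Y, m, sig):
    """(dY_phi2/dx, dY_chi/dx) for Y = (Y_phi2, Y_chi, Y_phi1), with Y_phi1 held constant.
    m = (m_chi, m_phi1, m_phi2); x = m_chi / T."""
    m_chi, m_phi1, m_phi2 = m
    T = m_chi / x
    Yp2, Yc, Yp1 = Y
    e_c, e_1, e_2 = yeq(m_chi, T), yeq(m_phi1, T), yeq(m_phi2, T)
    pref = s_ent(T) / (hubble(T) * x)
    dYp2 = -pref * (2*sig['cc22'] * ((e_c/e_2)**2 * Yp2**2 - Yc**2)
                    + 2*sig['2211'] * (Yp2**2 - (e_2/e_1)**2 * Yp1**2)
                    + 2*sig['22c1'] * (Yp2**2 - e_2**2/(e_c*e_1) * Yc*Yp1))
    dYc = -pref * (2*sig['cc22'] * (Yc**2 - (e_c/e_2)**2 * Yp2**2)
                   + 2*sig['cc11'] * (Yc**2 - (e_c/e_1)**2 * Yp1**2)
                   + sig['22c1'] * (e_2**2/(e_c*e_1) * Yc*Yp1 - Yp2**2)
                   + sig['22c1'] * (Yp2*Yc - (e_c/e_1) * Yp2*Yp1))
    return dYp2, dYc
```

Evaluating `rhs` at $Y_i=Y_i^{\rm eq}$ returns derivatives that vanish up to floating-point cancellation, while perturbing one yield away from equilibrium produces an $\mathcal{O}(1)$ restoring term.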
When $\phi_1$ decays are relevant for the bounce (Section~\ref{sec:bounce2}), the evolution of the $\phi_1$ abundance is obtained by solving \begin{align} \frac{\mathrm dY_{\phi_1}}{\mathrm dx} = -\frac{Y_{\phi_1}\Gamma_{\phi_1}}{\tilde{H}(x)x}&-\frac{s(x)}{\tilde{H}(x)x}\left[2\,\sigma_{\chi\chi\phi_1\phi_1}\left(\frac{(Y_{\chi}^{\textup{eq}})^2}{(Y_{\phi_1}^{\textup{eq}})^2}Y_{\phi_1}^2-Y_{\chi}^2\right)\right.\nonumber\\&\quad\,\left.+\,2\,\sigma_{\phi_2\phi_2\phi_1\phi_1}\left(\frac{(Y_{\phi_2}^{\textup{eq}})^2}{(Y_{\phi_1}^{\textup{eq}})^2}Y_{\phi_1}^2-Y_{\phi_2}^2\right)\right.\nonumber\\&\quad\,\left.+\,\sigma_{\chi\phi_1\phi_2\phi_2}\left(Y_{\chi}Y_{\phi_1}-\frac{Y_{\chi}^{\textup{eq}}Y_{\phi_1}^{\textup{eq}}}{(Y_{\phi_2}^{\textup{eq}})^2}Y_{\phi_2}^2\right)\right.\nonumber\\&\quad\,\left.+\,\sigma_{\chi\phi_1\phi_2\phi_2}\left(\frac{Y_{\chi}^{\textup{eq}}}{Y_{\phi_1}^{\textup{eq}}}Y_{\phi_2}Y_{\phi_1}-Y_{\phi_2}Y_{\chi}\right)\right]~. \end{align} The Boltzmann equations for $\phi_2$ and $\chi$ are identical to those shown above, except for the terms corresponding to $\phi_2\phi_2\leftrightarrow\chi\phi_1$, where the $Y^{\rm eq}$ factors need to be appropriately shifted to the other term in the parentheses to reflect the change in mass hierarchy between $2 m_{\phi_2}$ and $m_\chi+m_{\phi_1}$. 
For the two particle framework (Section \ref{sec:bounce3}), the relevant Boltzmann equations are \begin{align} \frac{\mathrm dY_{\phi}}{\mathrm dx} &= -\frac{s(x)^2}{\tilde{H}(x)x}\,\sigma_{\phi\phi\phi\chi\chi}\left[3\left(Y_{\phi}^3-\frac{(Y_{\phi}^{\textup{eq}})^3}{(Y_{\chi}^{\textup{eq}})^2}Y_{\chi}^2\right)\right.\nonumber\\&\quad\,\left.+\left(Y_{\phi}^2Y_{\chi}-Y_{\phi}^{\textup{eq}}Y_{\phi}Y_{\chi}\right)+\left(\frac{(Y_{\chi}^{\textup{eq}})^2}{Y_{\phi}^{\textup{eq}}}Y_{\phi}^2-Y_{\phi}Y_{\chi}^2\right)\right]\,, \end{align} \begin{align} \frac{\mathrm dY_{\chi}}{\mathrm dx} &= -\frac{s(x)^2}{\tilde{H}(x)x}\,\sigma_{\phi\phi\phi\chi\chi}\left[2\left(\frac{(Y_{\phi}^{\textup{eq}})^3}{(Y_{\chi}^{\textup{eq}})^2}Y_{\chi}^2-Y_{\phi}^3\right)\right.\nonumber\\&\hspace*{2.9cm} \left.\,+\,2\left(Y_{\phi}Y_{\chi}^2-\frac{(Y_{\chi}^{\textup{eq}})^2}{Y_{\phi}^{\textup{eq}}}Y_{\phi}^2\right)\right]\,. \end{align} Here $\sigma_{ABCDE}$ corresponds to the $T\to 0$ limit of $\langle \sigma v^2\rangle_{ABCDE}$, where the thermal average can be evaluated, for example, using the methods in Appendix E of Ref.~\cite{Cline:2017tka}. \bibliography{DMbounce}{}
Title: Inflationary Cosmology in the Modified $f(R, T)$ Gravity
Abstract: In this work, we study inflationary cosmology in the modified gravity theory $f(R, T) = R + 2 \lambda T$ ($\lambda$ is the modified gravity parameter) with three distinct classes of inflaton potentials: (i) $\phi^p e^{-\alpha\phi}$, (ii) $(1-\phi^p)e^{-\alpha\phi}$ and (iii) $\frac{\alpha\phi^2}{1+\alpha\phi^2}$, where $\alpha$, $p$ are the potential parameters. We derive the Einstein equations, the potential slow-roll parameters, the scalar spectral index $n_s$, the tensor to scalar ratio $r$, and the tensor spectral index $n_T$ in the modified gravity theory. We obtain the range of $\lambda$ using the spectral index constraints in the parameter space of the potentials. Comparing our results with the PLANCK 2018 and WMAP data, we find that the modified gravity parameter $\lambda$ lies between $-0.37<\lambda<1.483$.
https://export.arxiv.org/pdf/2208.11042
\date{} \title{Inflationary Cosmology in the Modified $f(R, T)$ Gravity } \begin{center} \author{Ashmita}\footnote{p20190008@goa.bits-pilani.ac.in} ~Payel Sarkar\footnote{p20170444@goa.bits-pilani.ac.in}~Prasanta Kumar Das\footnote{Corresponding author: pdas@goa.bits-pilani.ac.in} \\ \end{center} \begin{center} Birla Institute of Technology and Science-Pilani, K. K. Birla Goa Campus, NH-17B, Zuarinagar, Goa-403726, India \end{center} \vspace*{0.25in} {\bf Keywords:} Modified gravity, Inflation, Slow-roll parameters, Spectral index parameters. \section{Introduction} A number of recent observational findings indicate that we live in an accelerating universe \cite{Starobinsky, Guth, Linde}; the evidence comes from the redshifts of Type Ia supernovae \cite{Reiss}, Cosmic Microwave Background (CMB) \cite{Kolb, Spergel} anisotropy measurements from Planck \cite{PLANCK} and the Wilkinson Microwave Anisotropy Probe (WMAP) \cite{WMAP}, Baryon Acoustic Oscillations (BAO) \cite{Anderson}, and large-scale structure \cite{spergel1}. This cosmic acceleration can be explained using two different approaches: one introduces a cosmological constant, i.e.\ the $\Lambda$CDM model \cite{Sahni, Ratra, caroll,Turner}, whose dark-energy component has negative pressure \cite{Arturo}; the other modifies the usual Einstein gravity.\\ Although the cosmological constant is the simpler way to describe the acceleration, it faces problems such as the fine-tuning problem \cite{Sahni, Weinberg} and the coincidence problem \cite{Sahni, Zlatev}. \noindent Moreover, although the classical theory of general relativity is unquestionably the best-established model of gravity, it does not fully fit the cosmological data, which has been regarded as one of the primary drivers behind research into alternative theories of gravity.\\ In this approach, the Einstein-Hilbert action is modified by adding some polynomial function of the Ricci scalar $R$ (i.e. 
$f(R)$ gravity)\cite{Nojiri,samanta, buchdahl,capozziello,clifton}, or some function of the Ricci scalar $R$ and/or the trace of the energy-momentum tensor $T$ ($f(R,T)$ gravity) \cite{Harko,Myrzakulov}, or some Gauss-Bonnet function ($f(G)$ gravity) \cite{Nojiri2}, etc. Among these, $f(R,T)$ gravity has gained popularity recently, since it can be used to address a variety of astrophysical and cosmological problems such as inflation \cite{Bhattacharjee}, dark energy \cite{Bhatti}, dark matter \cite{Zaregonbadi}, wormholes \cite{Moraes2}, gravitational waves, etc.\ \cite{Gamonal, sahoo, Sahoo2, Goncalves}.\\ The most straightforward way to study inflation is to consider a scalar field called the inflaton, which evolves under the influence of a particular potential; together with the slow-roll approximation (in which the kinetic term is neglected with respect to the potential term), this is used to examine the inflationary scenario \cite{Baumann,Kinney}. Inflaton potentials of this type in modified gravity have been studied extensively in the literature \cite{Alberto, Martin2, Chowdhury, Biswajit,Martin3}, and the associated cosmological parameters, density perturbations and power spectra have been tested against CMB anisotropy measurements \cite{Martin,PLANCK}. The first study of inflation in this modified gravity was carried out in \cite{Bhattacharjee} using a quadratic potential. The same type of analysis has been performed using non-minimal power-law potentials, and natural and hill-top potentials, in modified gravity \cite{Gamonal}. A Starobinsky-type potential can also yield results compatible with observational data in $f(R,T)$ gravity.\\ In this manuscript, we study some aspects of inflationary cosmology in the modified $f(R,T)$ gravity theory using a class of three distinct inflaton potentials. Working under the slow-roll approximation, we obtain limits on the modified gravity parameter $\lambda$ and determine the values of the CMB spectral index parameters, which we compare with the data given by PLANCK 2018 and WMAP. 
\\ The paper is organised as follows: In section 2, we obtain the Einstein field equations in $f(R,T)$ gravity and derive the slow-roll parameters in this modified gravity theory. We apply these results to several inflationary models using the updated formulas for the slow-roll parameters. In section 3, the inflationary scenario is discussed for three different potentials, and the cosmological parameters, namely the scalar spectral index $n_s$, the tensor to scalar ratio $r$ and the tensor spectral index $n_T$, are derived. These parameters are constrained in the parameter space $(\lambda,\alpha,p)$ of the potentials within the context of modified gravity. In section 4, we analyse our results and compare them with the PLANCK 2018\cite{PLANCK} and the WMAP\cite{WMAP} data. \section{Field equations in modified gravity} The Einstein-Hilbert action of the modified $f(R,T)$ theory of gravity, in the presence of matter, can be written as, \begin{equation} \mathcal{S} = \frac{1}{16 \pi G} \int d^4x \sqrt{-g}~f(R,T)+ \int d^4x\sqrt{-g}~\mathcal{L}_m \label{action} \end{equation} where $R$ is the Ricci scalar, i.e.\ the trace of the Ricci curvature tensor $R_{\mu \nu}$, $T$ is the trace of the energy-momentum tensor, $f(R,T)$ is an arbitrary function of $R$ and $T$, $g$ is the determinant of the metric tensor $g_{\mu\nu}$ and $G$ is the Newtonian constant of gravitation.\footnote{We have used natural units, $c = \hbar = 1$, and considered $8\pi G=1$.} 
$\mathcal{L}_m$ is the matter Lagrangian and is related to the energy-momentum tensor $T_{\mu\nu}$ as, \begin{equation} T_{\mu\nu}=-\frac{2}{\sqrt{-g}}~\frac{\delta}{\delta g^{\mu\nu}}(\sqrt{-g}\mathcal{L}_m) \end{equation} By taking the metric variation of the action (\ref{action}), we find the modified Einstein equation \begin{equation} f_R(R,T)~R_{\mu\nu}-\frac{1}{2}f(R,T)~g_{\mu\nu}+(g_{\mu\nu}\Box-\nabla_{\mu}\nabla_{\nu})~f_R(R,T)=T_{\mu\nu}-f_T(R,T)~T_{\mu\nu}-f_T(R,T)~\theta_{\mu\nu} \label{field} \end{equation} where $f_R(R,T)=\frac{\partial f(R,T)}{\partial R}$, $f_T(R,T)=\frac{\partial f(R,T)}{\partial T}$ and $\theta_{\mu\nu}=g^{\alpha\beta}\frac{\partial T_{\alpha\beta}}{\partial g^{\mu\nu}}$. The energy-momentum tensor of a perfect fluid is $T_{\mu\nu}=(\rho+p)u_{\mu}u_{\nu}-pg_{\mu\nu}$, where $u^{\mu}$ is the four-velocity of the fluid, satisfying $u_{\mu}u^{\mu}=1$ in the comoving frame. We choose the matter Lagrangian $\mathcal{L}_m=-p$, which yields $\theta_{\mu\nu}=-2T_{\mu\nu}-pg_{\mu\nu}$. Here $\rho$ and $p$ are the energy density and pressure, respectively. There are many functional forms of $f(R,T)$ available in the literature \cite{Harko, Singh, Jamil}. Here we have chosen $f(R,T)=R+2f(T)=R+2\lambda T$, where $\lambda$ is a constant. With this form of $f(R,T)$, the field equation (\ref{field}) gives, \begin{equation} \label{Einstein} R_{\mu\nu} - \frac{1}{2} g_{\mu\nu} R = T_{\mu\nu}^{eff} \end{equation} where $T_{\mu\nu}^{eff}=T_{\mu\nu}+2\lambda T_{\mu\nu}+2\lambda pg_{\mu\nu}+\lambda Tg_{\mu\nu}$. 
Assuming that the Universe is filled with a single, homogeneous inflaton field, the effective energy-momentum tensor of the inflaton field takes a diagonal form, and we can define the effective energy density and pressure as, \begin{equation} \label{effectiveTmunu} T_{00}^{eff}=\rho_{\phi}^{eff}=\frac{1}{2}\dot{\phi}^2(1+2\lambda)+V(\phi)(1+4\lambda), ~~ T_{ij}^{eff}=p_{\phi}^{eff} \delta_{ij}=\left(\frac{1}{2}\dot{\phi}^2(1+2\lambda)-V(\phi)(1+4\lambda)\right) \delta_{ij} \end{equation} Hence, the equation of state parameter $\omega^{eff}_{\phi}$ is, \begin{equation} \omega_{\phi}^{eff}=\frac{p_{\phi}^{eff}}{\rho_{\phi}^{eff}}=\frac{\dot{\phi}^2(1+2\lambda)-2V(\phi)(1+4\lambda)}{\dot{\phi}^2(1+2\lambda)+2V(\phi)(1+4\lambda)} \end{equation} \noindent The trace of the effective energy-momentum tensor follows from Eq.~(\ref{effectiveTmunu}) as, \begin{equation} T^{eff}=\rho_{\phi}^{eff}-3p_{\phi}^{eff}=-\dot{\phi}^2(1+2\lambda)+4V(\phi)(1+4\lambda) \end{equation} \noindent The line element of the Friedmann-Lemaitre-Robertson-Walker (FLRW) metric in spherical coordinates has the following form, \begin{equation} ds^2=dt^2-a^2(t)\left[\frac{dr^2}{1-K r^2} + r^2 d\theta^2 + r^2 \sin^2 \theta d\phi^2 \right] \label{metric} \end{equation} The effective FRW equations in modified $f(R,T)$ gravity can be derived as, \begin{equation} 3H^2=\rho_{\phi}^{eff}, ~~ 2\dot{H} + 3H^2= - p_{\phi}^{eff} \label{hubble} \end{equation} In the above equations $a(t)$ is the scale factor, $H=\frac{\dot a}{a}$ is the Hubble parameter, and the dot represents a derivative with respect to time $t$. 
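The effective density, pressure and equation-of-state parameter above are simple algebraic maps of $(\dot\phi, V, \lambda)$; a minimal sketch (function names are ours):

```python
def rho_eff(phidot, V, lam):
    """Effective energy density in f(R,T) = R + 2*lambda*T gravity."""
    return 0.5 * phidot**2 * (1 + 2 * lam) + V * (1 + 4 * lam)

def p_eff(phidot, V, lam):
    """Effective pressure."""
    return 0.5 * phidot**2 * (1 + 2 * lam) - V * (1 + 4 * lam)

def omega_eff(phidot, V, lam):
    """Effective equation-of-state parameter omega = p_eff / rho_eff."""
    return p_eff(phidot, V, lam) / rho_eff(phidot, V, lam)
```

The limits behave as expected: a potential-dominated field gives $\omega^{eff}_\phi\to-1$ for any $\lambda$, a kinetic-dominated field gives $+1$, and $\lambda=0$ recovers the standard scalar-field result.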
The continuity equation for $\rho_{\phi}^{eff}$ and $p_{\phi}^{eff}$ can be derived as, \begin{equation} \dot{\rho_{\phi}}^{eff} + 3H(\rho_{\phi}^{eff} + p_{\phi}^{eff})=0 \end{equation} which gives \begin{equation} \label{EOM} \ddot{\phi}~(1+2\lambda) + 3H\dot{\phi}~(1+2\lambda)+V_{,\phi}~(1+4\lambda)=0 \end{equation} \subsection{Slow-roll parameters and CMB constraints} We assume that the universe is filled with a scalar field minimally coupled to the modified gravity. We employ the slow-roll approximation for different inflaton potentials to study the spectral index parameters measured from the CMB. We can define the first slow-roll parameter as \cite{Gamonal}, \begin{equation} \label{epsilon} \bar{\epsilon}=-\frac{\dot{H}}{H^2}=\frac{3}{2}\left[\frac{\dot{\phi}^2(1+2\lambda)}{\frac{1}{2}\dot{\phi}^2(1+2\lambda)+V(\phi)(1+4\lambda)}\right] \end{equation} Under the slow-roll approximation, we have \begin{equation} \label{condition} \dot{\phi}^2(1+2\lambda)\ll V(1+4\lambda),~~ 3H\dot{\phi}(1+2\lambda)=-V_{,\phi}(1+4\lambda) \end{equation}\\ Applying these in Eq.~(\ref{epsilon}), we find the slow-roll parameter $\bar{\epsilon}$ for $f(R,T)$ gravity as, \begin{equation} \label{epsilonv} \bar\epsilon=\frac{3\dot{\phi}^2(1+2\lambda)}{2V(1+4\lambda)}=\frac{1}{2(1+2\lambda)}\left(\frac{V_{,\phi}}{V}\right)^2=\bar{\epsilon}_v \end{equation} Similarly, by differentiating Eq.~(\ref{condition}), we can define the second slow-roll parameter as, \begin{equation} \label{etav} \bar{\eta}=-\frac{\ddot{\phi}}{H\dot{\phi}}=\frac{1}{1+2\lambda}\left(\frac{V_{,\phi\phi}}{V}\right)= \bar{\eta}_v \end{equation} where $\bar\epsilon$ and $\bar\eta$ represent the Hubble slow-roll parameters, whereas $\bar{\epsilon}_v$ and $\bar{\eta}_v$ represent the potential slow-roll parameters. 
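The potential slow-roll parameters can be evaluated numerically for an arbitrary $V(\phi)$; a minimal sketch using central finite differences, cross-checked against the closed-form expressions for the potential $V=V_0\phi^p e^{-\alpha\phi}$ considered below (the finite-difference helper and the choice of step $h$ are ours; the benchmark point $\lambda=0.06919$, $\alpha=0.05$, $\phi=10.98$ is taken from the analysis below):

```python
import math

def slow_roll(V, phi, lam, h=1e-4):
    """Potential slow-roll parameters in f(R,T) = R + 2*lambda*T gravity:
    eps_v = (V'/V)^2 / (2(1+2*lam)),  eta_v = (V''/V) / (1+2*lam),
    with derivatives taken by central finite differences of step h."""
    Vp = (V(phi + h) - V(phi - h)) / (2 * h)
    Vpp = (V(phi + h) - 2 * V(phi) + V(phi - h)) / h**2
    eps = Vp**2 / (2 * (1 + 2 * lam) * V(phi)**2)
    eta = Vpp / ((1 + 2 * lam) * V(phi))
    return eps, eta

# Cross-check against the closed forms for V = V0 * phi^p * exp(-alpha*phi);
# the overall constant V0 drops out of both eps_v and eta_v.
p, alpha, lam, phi = 2, 0.05, 0.06919, 10.98
V = lambda x: x**p * math.exp(-alpha * x)
eps, eta = slow_roll(V, phi, lam)
eps_cf = (p - alpha * phi)**2 / (2 * (1 + 2 * lam) * phi**2)
eta_cf = (p**2 + alpha**2 * phi**2 - p * (1 + 2 * alpha * phi)) / ((1 + 2 * lam) * phi**2)
```

The same helper also reproduces $n_s-1=-6\bar\epsilon_v+2\bar\eta_v$ once the closed-form spectral index is assembled.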
The amount of inflation required to produce an isotropic and homogeneous universe is described by the e-fold number $N$, which can be derived from Eq.~(\ref{condition}) and Eq.~(\ref{hubble}) as, \begin{equation} \label{efold} N=\int\frac{H}{\dot{\phi}}d\phi=-(1+2\lambda)\int_{\phi_{in}}^{\phi_{final}}\frac{V}{V_{,\phi}}d\phi \end{equation} Here, $\phi_{final}$ is calculated by setting $\bar{\epsilon}_v=1$, which corresponds to the end of inflation, and $\phi_{in}$ is the initial value of the inflaton field at the beginning of inflation. The CMB parameters, i.e.\ the scalar spectral index $n_s$, the tensor to scalar ratio $r$ and the tensor spectral index $n_T$, can be written in terms of the slow-roll parameters as, \begin{equation} \label{spectralindex} n_s-1=-6 \bar{\epsilon}_v+2\bar{\eta}_v, ~~ r=16\bar{\epsilon}_v, ~~ n_T=-2\bar{\epsilon}_v \end{equation} \section{Analysis of different inflationary models in modified gravity} In this section, we calculate the CMB spectral index parameters for three different inflaton potentials and obtain constraints on the modified gravity parameter $\lambda$ in the potential parameter space \cite{sarkar}. \subsection{Case 1: Inflaton potential $V=V_0\phi^p e^{-\alpha\phi}$, $\lambda \neq 0$} To start with, we consider the scalar (inflaton) potential of inflationary expansion as, \begin{equation*} V(\phi) = V_0 \phi^p e^{-\alpha\phi} \end{equation*} where $V_0$ is a constant, and $p$ and $\alpha$ are the potential parameters. Under the slow-roll approximation, the potential slow-roll parameters can be obtained (using Eq. (\ref{epsilonv}) and Eq. (\ref{etav})) as, \begin{equation} \bar{\epsilon}_v = \frac{1}{2}\frac{(p- \alpha \phi)^2}{(1+2 \lambda) \phi^2}, ~~ \bar{\eta}_v = \frac{p^2 + \alpha^2 \phi^2 - p(1+2 \alpha \phi)}{(1+2 \lambda) \phi^2} \end{equation} From Eq. 
(\ref{spectralindex}), the CMB spectral index parameters $n_s$, $r$ and $n_T$ can be evaluated as follows, \begin{equation} n_s = \frac{-p^2 +(1-\alpha^2 +2 \lambda)\phi^2 + 2p(-1+\alpha \phi)}{(1+2 \lambda) \phi^2},~~ r = \frac{8 (p- \alpha \phi)^2}{(1+2 \lambda) \phi^2}, ~~ n_T = -\frac{ (p- \alpha \phi)^2}{(1+2 \lambda) \phi^2} \end{equation} \noindent We now explore the parameter space $(p, \alpha)$ of the inflaton potential in the modified gravity approach, which can produce the desired number of e-folds, and estimate the spectral index parameters of the CMB. \begin{table}[htb] \centering \addtolength{\tabcolsep}{-0.5pt} \small \begin{tabular}{|c c c c c c c c c|} \hline Potential, & $V=V_0\phi^{p}e^{-\alpha\phi}$, & $p = 2$ & & & & & & \\[-0.01ex] \hline Range of $\lambda$ & $\lambda$ & $\alpha$ & $\phi $ & $\phi_f$ & N & $n_s$ & r & $n_T$ \\ $ -0.21580 < \lambda < 0.00783 $ & -0.11410 & 0.01 & 16.51 & 1.59691 & 55 & 0.964983 & 0.128031 & -0.01600 \\ \hline $ -0.08205 < \lambda < 0.24490 $ & 0.06919 & 0.05 & 10.98 & 1.24309 & 56 & 0.964927 & 0.04743 & -0.00593 \\ $ -0.06555 < \lambda < 0.27860 $ & 0.09024 & 0.1 & 12 & 1.26060 & 53 & 0.964939 & 0.092241 & -0.01153 \\ \hline\hline Potential, & $V=V_0\phi^{p}e^{-\alpha\phi}$, & $p = 4$ & & & & & & \\[-0.01ex] \hline Range of $\lambda$ & $\lambda$ & $\alpha$ & $\phi $ & $\phi_f$ & N & $n_s$ & r & $n_T$ \\ $ 0.15320 < \lambda < 0.21590 $ & 0.18000 & 0.01 & 19.02 & 2.41074 & 63 & 0.95424 & 0.23601 & -0.02950 \\ \hline $ -0.33410 < \lambda < -0.27390 $ & -0.31670 & 0.05 & 30 & 4.41369 & 55 & 0.956810 & 0.15154 & -0.01894 \\ $ -0.08268 < \lambda < 0.06080 $ & 0.02000 & 0.1 & 18.01 & 2.59366 & 60 & 0.961950 & 0.11468 & -0.01433 \\ \hline \end{tabular} \caption{\label{table:1} For $V=V_0 \phi^{p} e^{-\alpha\phi}$, the e-fold number $N$ and the spectral index parameters $n_s$, $r$ and $n_T$ are calculated for a fixed value of $\phi$ and a value of $\lambda$ taken from the given range.} \end{table} In Table (\ref{table:1}), we have shown the 
values of $n_s$, $r$ and $n_T$, respectively, for a particular value of the scalar field $\phi$ and the modified gravity parameter $\lambda$. The range of $\lambda$ is obtained from the $\pm 3 \sigma$ constraints of $n_s=0.9649 \pm 0.0042$ for a fixed $\phi$. For a particular $\lambda$ value (chosen from the range) and potential parameters ($\alpha, p$), the e-fold and spectral index parameters are calculated and shown in Table (\ref{table:1}). Note that $\phi_f$ is evaluated by setting $\bar{\epsilon}_v=1$ (exit of inflation) for different values of the potential parameters $p=2$, $4$ and $\alpha=0.01$, $0.05$, $0.1$. The range of $\lambda$ mentioned in the table for each choice of potential parameters (e.g. $-0.21580 < \lambda < 0.00783$ corresponding to $p=2$, $\alpha = 0.01$) can produce an e-fold number $N$ lying between $40 - 70$. Note that for a given $p=2$ (say), as $\alpha$ changes from $0.01$ to $0.1$, $r$ changes from $0.128$ to $0.092$ and $n_T$ changes from $-0.0160$ to $-0.0115$. The spectral index parameters $r$ and $n_T$ estimated above in the potential parameter space can be compared with the existing experimental (PLANCK+BAO) upper bound. From Table~(\ref{table:1}) we can conclude that for $p=2$ and $\alpha=0.05$, $0.1$ we obtain $n_s$ values lying within the $\pm3\sigma$ limit of the PLANCK 2018 data along with $r<0.106$ (PLANCK+BAO). For the remaining choices of $p$ and $\alpha$, although $n_s$ matches the PLANCK 2018 data, $r$ is beyond the limit given by PLANCK+BAO. \subsection{Case 2: Inflaton potential $V=V_0(1-\phi^p)e^{-\alpha\phi}$, $\lambda \neq 0$} We next consider the potential of the form \begin{equation*} V=V_0 (1 - \phi^p) e^{-\alpha\phi} \end{equation*} where $p$ and $\alpha$ are the potential parameters. 
Similarly, the potential slow-roll parameters can be calculated as \begin{equation} \bar{\epsilon}_v = \frac{\bigl\{ p \phi^p -\alpha \phi ~(-1+\phi^p) \bigr\}^2}{2 ~(1+2\lambda) \phi^2 (-1+\phi^p)^2},~~ \bar{\eta}_v = \frac{(-1+p) ~p ~\phi^p -2 p ~\alpha ~\phi^{1+p} + \alpha^2 \phi^2 (-1+\phi^p) }{(1+2\lambda)~\phi^2~(-1+\phi^p)} \end{equation} and the CMB spectral index parameters $n_s$, $r$ and $n_T$ as, \begin{equation} n_s = 1 - \frac{3 \bigl\{ p \phi^p -\alpha \phi~(-1 + \phi^p) \bigr\}^2}{(1+2 \lambda)~ \phi^2 ~(-1 + \phi^p)^2} + \frac{2 \bigl\{ (-1+p) p~\phi^p -2 p~ \alpha \phi^{1+p} + \alpha^2 \phi^2 (-1+\phi^p) \bigr\}}{(1+2 \lambda)~ \phi^2 ~(-1 + \phi^p)} \end{equation} \begin{equation} r = \frac{8 \bigl\{ p \phi^p -\alpha \phi~(-1 + \phi^p) \bigr\}^2}{(1+2 \lambda)~ \phi^2 ~(-1 + \phi^p)^2}, ~~ n_T = -\frac{ \bigl\{ p \phi^p -\alpha \phi~(-1 + \phi^p) \bigr\}^2}{(1+2 \lambda)~ \phi^2 ~(-1 + \phi^p)^2} \end{equation} Pure power-law potentials of the $\phi^p$ or $1-\phi^p$ type do not give good results for the cosmological parameters: the tensor to scalar ratio is fairly high ($\sim 0.4$) in comparison with the PLANCK 2018 and WMAP data, which motivates the choice of combined potentials of power-law and exponential type. Also, for the $1-\phi^p$ potential with $p=2$, the e-fold number blows up; this is another main reason to choose such combined potentials.\\ The results for this case are displayed in Table (\ref{table:2}). 
\begin{table}[htb] \centering \addtolength{\tabcolsep}{-1pt} \small \begin{tabular}{|c c c c c c c c c|} \hline Potential, & $V=V_0(1 - \phi^{p})e^{-\alpha\phi}$, & $p = 2$ & & & & & & \\[-0.01ex] \hline Range of $\lambda$ & $\lambda$ & $\alpha$ & $\phi $ & $\phi_f$ & N & $n_s$ & r & $n_T$ \\ $ -0.37000 < \lambda < -0.26770 $ & -0.32440 & 0.01 & 24 & 2.72518 & 54 & 0.964750 & 0.122985 & -0.01537 \\ \hline $ -0.23480 < \lambda < -0.03140 $ & -0.14180 & 0.05 & 14.99 & 2.08376 & 53 & 0.964964 & 0.078829 & -0.009854 \\ $ 0.04388 < \lambda < 0.64820 $ & 0.23710 & 0.1 & 10 & 1.69176 & 53 & 0.96498 & 0.05648 & -0.00706 \\ \hline\hline Potential, & $V=V_0(1 - \phi^{p})e^{-\alpha\phi}$, & $p = 4$ & & & & & & \\[-0.01ex] \hline Range of $\lambda$ & $\lambda$ & $\alpha$ & $\phi $ & $\phi_f$ & N & $n_s$ & r & $n_T$ \\ $ 0.42660 < \lambda < 1.48300 $ & 0.50000 & 0.01 & 16 & 2.09830 & 64 & 0.95550 & 0.23041 & -0.02880 \\ \hline $ 0.07318 < \lambda < 0.71010 $ & 0.20000 & 0.05 & 18 & 2.39400 & 65 & 0.96118 & 0.16900 & -0.02118 \\ $ -0.18670 < \lambda < 0.16430 $ & -0.08000 & 0.1 & 20 & 2.90578 & 63 & 0.96420 & 0.09520 & -0.01190 \\ \hline \end{tabular} \caption{\label{table:2} For $V=V_0 (1 - \phi^{p}) e^{-\alpha\phi}$, the e-fold number $N$ and the spectral index parameters $n_s$, $r$ and $n_T$, calculated for a fixed value of $\phi$ and $\lambda$, are presented.} \end{table} We see that for $p=2$ with $\alpha=0.05$, $0.1$, $n_s$ lies within $\pm 3 \sigma$ of the PLANCK2018 data and $r$ lies within the range given by PLANCK+BAO, whereas $\alpha=0.01$ does not satisfy the $r$ bound. On the other hand, for $p=4$ only $\alpha=0.1$ matches the experimental data. We also find that for $p=2$ ($p=4$), as $\alpha$ changes from $0.01$ to $0.1$, $r$ decreases from $0.12298$ ($0.23041$) to $0.05648$ ($0.09520$) and $n_T$ changes from $-0.01537$ ($-0.02880$) to $-0.00706$ ($-0.01190$).
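The e-fold numbers in Table 2 can also be checked. Assuming the standard slow-roll e-fold integral rescaled by the same $(1+2\lambda)$ factor that appears in $\bar\epsilon_v$, i.e. $N=(1+2\lambda)\int_{\phi_f}^{\phi_i} (V/V')\,d\phi$ with $M_p=1$ (our reading of the model), the first row of Table 2 is recovered:

```python
import numpy as np

def efolds(V, phi_i, phi_f, lam, n=200_000, h=1e-5):
    """N = (1 + 2*lam) * integral_{phi_f}^{phi_i} V/V' dphi (Mp = 1),
    V' by central difference, trapezoid quadrature."""
    phi = np.linspace(phi_f, phi_i, n)
    Vp = (V(phi + h) - V(phi - h)) / (2 * h)
    y = np.abs(V(phi) / Vp)
    return (1 + 2 * lam) * np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(phi))

# Table 2, first row: p = 2, alpha = 0.01, lambda = -0.3244
V = lambda f: (1 - f**2) * np.exp(-0.01 * f)
N = efolds(V, phi_i=24.0, phi_f=2.72518, lam=-0.3244)
print(N)   # ~ 54
```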
\subsection{Case 3: Inflaton potential $V=V_0\frac{\alpha\phi^2}{1+\alpha\phi^2}$, $\lambda \neq 0$} Finally, we consider the potential \begin{equation*} V=V_0\frac{\alpha\phi^2}{1+\alpha\phi^2} \end{equation*} where $\alpha$ is the potential parameter. The potential slow-roll parameters are found to be \begin{equation} \bar{\epsilon}_v = \frac{2}{(1+2 \lambda)~(\phi + \alpha \phi^3)^2}, ~~ \bar{\eta}_v = \frac{2 - 6\alpha \phi^2}{(1+2 \lambda)~(\phi + \alpha \phi^3)^2} \end{equation} and the spectral index parameters are obtained as, \begin{equation} n_s = 1 - \frac{12 \alpha}{(1+2 \lambda)~(1+\alpha \phi^2)^2} - \frac{8}{(1+2 \lambda)~(\phi + \alpha \phi^3)^2} \end{equation} \begin{equation} r = \frac{32}{(1+2 \lambda)~(\phi + \alpha \phi^3)^2}, ~~ n_T = -\frac{4}{(1+2 \lambda)~(\phi + \alpha \phi^3)^2} \end{equation} Here, we have explored the desired number of e-folds ($N$) and the CMB parameters for different values of $\alpha$ of the inflaton potential in the modified gravity model. In Table~(\ref{table:3}), we show the e-fold number $N$ and the spectral index parameters $n_s$, $r$, $n_T$ for a fixed $\phi$ and $\lambda$. \begin{table}[htb] \centering \addtolength{\tabcolsep}{-1pt} \small \begin{tabular}{|c c c c c c c c c|} \hline Potential, & $V=V_0\frac{\alpha\phi^2}{1+\alpha\phi^2}$ & & & & & & & \\[-0.01ex] \hline Range of $\lambda$ & $\lambda$ & $\alpha$ & $\phi $ & $\phi_f$ & N & $n_s$ & r & $n_T$ \\ $ 0.10680 < \lambda < 0.79830 $ & 0.32930 & 1 & 3.7 & 0.721898 & 44 & 0.96484 & 0.00653 & -0.00082 \\ \hline $ -0.10680 < \lambda < 0.34250 $ & 0.03791 & 2 & 3.5 & 0.694246 & 43 & 0.96480 & 0.003734 & -0.00047 \\ \hline \end{tabular} \caption{\label{table:3} For $V=V_0 \frac{\alpha\phi^2}{1+\alpha\phi^2}$, the e-fold number $N$ and the spectral index parameters are calculated for a fixed value of $\phi$ and $\lambda$.
} \end{table} For $\alpha = 1(2)$ and $\lambda=0.3293(0.0379)$, we find $r \sim 0.00653(0.00373)$ and $n_T \sim -0.00082(-0.00047)$, together with an e-fold number $N=44(43)$ that lies well within the range $40-60$. Hence, for this particular form of the potential, all the cosmological parameters lie within the experimental data range. \subsection{Case 4: $V=V_0 \phi^p e^{-\alpha\phi}$, $V=V_0(1-\phi^p)e^{-\alpha\phi}$, $V=V_0\frac{\alpha\phi^2}{1+\alpha\phi^2}$ with $\lambda = 0$} In this section, we analyze the three inflationary potentials above for $\lambda = 0$, which implies $f(R,T) = R + 2 \lambda T = R$, i.e. normal Einstein gravity. In Table \ref{table:4}, we have tabulated the values of $N$, $n_s$, $r$, $n_T$ for $p=2$, $4$ and several values of $\alpha$ for the potentials $\phi^p e^{-\alpha\phi}$, $(1-\phi^p)e^{-\alpha\phi}$ and $\frac{\alpha\phi^2}{1+\alpha\phi^2}$, respectively. From the table, we see that although the e-fold number lies in the range $40-70$ and the $n_s$ values match the experimental data, the $r$ value is a little higher than the PLANCK+BAO bound for some values of $\alpha$ and $p$.
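The $\lambda=0$ entries of Table \ref{table:4} follow from the standard slow-roll formulas ($\epsilon_v=(V'/V)^2/2$, $\eta_v=V''/V$, $M_p=1$) and can be checked by hand; a short sketch (variable names are ours) for the $V_0\phi^2e^{-0.01\phi}$, $\phi=14.48$ row:

```python
# Einstein gravity (lambda = 0) check of one Table 4 row:
# V = V0 * phi^2 * exp(-alpha*phi) at alpha = 0.01, phi = 14.48
alpha, phi = 0.01, 14.48
dlnV = 2.0 / phi - alpha                                 # V'/V
d2V_over_V = 2.0 / phi**2 - 4 * alpha / phi + alpha**2   # V''/V
eps, eta = dlnV**2 / 2.0, d2V_over_V
n_s, r, n_T = 1 - 6 * eps + 2 * eta, 16 * eps, -2 * eps
print(n_s, r, n_T)   # ~ 0.964507, 0.131321, -0.016415
```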
\begin{table}[htb] \centering \addtolength{\tabcolsep}{-0.5pt} \small \begin{tabular}{|c c c c c c c|} \hline Potential, & $V=V_0\phi^{p}e^{-\alpha\phi}$, & $p = 2$, & $\lambda = 0$ & & & \\[-0.01ex] $\alpha$ & $\phi $ & $\phi_f$ & N & $n_s$ & r & $n_T$ \\ \hline 0 & 12.73 & 1.41400 & 40 & 0.95063 & 0.19747 & -0.01653\\ 0.01 & 14.48 & 1.40428 & 55 & 0.964507 & 0.131321 & -0.0164151 \\ 0.05 & 13.05 & 1.36592 & 54 & 0.96585 & 0.0852956 & -0.0106619 \\ 0.1 & 11.486 & 1.32082 & 41 & 0.964186 & 0.0439562 & -0.005494 \\ \hline Potential, & $V=V_0 \phi^{p} e^{-\alpha\phi}$, & $p = 4$, & $\lambda = 0$ & & & \\[-0.01ex] $\alpha$ & $\phi $ & $\phi_f$ & N & $n_s$ & r & $n_T$ \\ \hline 0 & 18.13 & 2.82800 & 40 & 0.92698 & 0.38942 & -0.0487\\ 0.01 & 25.44 & 2.80857 & 62 & 0.9543 & 0.233916 & -0.029239 \\ 0.05 & 19.755 & 2.73184 & 58 & 0.95625 & 0.186002 & -0.023250 \\ 0.1 & 17.32 & 2.64164 & 53 & 0.956185 & 0.137177 & -0.017147 \\ \hline Potential, & $V=V_0(1-\phi^{p})e^{-\alpha\phi}$, & $p = 2$, & $\lambda = 0$ & & & \\[-0.01ex] $\alpha$ & $\phi $ & $\phi_f$ & N & $n_s$ & r & $n_T$ \\ \hline 0.01 & 14.55 & 1.92403 & 54 & 0.964424 & 0.131296 & -0.016412\\ 0.05 & 13 & 1.89392 & 52 & 0.964932 & 0.08780 & -0.010975\\ 0.1 & 11.65 & 1.8588 & 53 & 0.964547 & 0.042571 & -0.005321\\ \hline Potential, & $V=V_0(1-\phi^{p})e^{-\alpha\phi}$, & $p = 4$, & $\lambda = 0$ & & & \\[-0.01ex] $\alpha$ & $\phi $ & $\phi_f$ & N & $n_s$ & r & $n_T$ \\ \hline 0 & 18.107 & 2.87070 & 40 & 0.92680 & 0.39041 & -0.0488\\ 0.01 & 25 & 2.85169 & 80 & 0.9647 & 0.18 & -0.0225 \\ 0.05 & 21.5 & 2.77846 & 70 & 0.9642 & 0.14807 & -0.0185\\ 0.1 & 19 & 2.69285 & 67 & 0.965622 & 0.097731 & -0.0122\\ \hline Potential, & $V=V_0\frac{\alpha\phi^2}{1+\alpha\phi^2}$, & & $\lambda = 0$ & & & \\[-0.01ex] $\alpha$ & $\phi $ & $\phi_f$ & N & $n_s$ & r & $n_T$ \\ \hline 1 & 4.2 & 0.834039 & 43 & 0.964157 & 0.00522 & -0.00065\\ 2 & 3.55 & 0.707107 & 42 & 0.964126 & 0.003698 & -0.00046 \\ \hline \end{tabular} 
\caption{\label{table:4} The e-fold number $N$ and the spectral index parameters $n_s$, $r$ and $n_T$ are presented here corresponding to $\lambda=0$ for all three potentials.} \end{table} Note that in the Einstein gravity theory, the predicted values of $r$ and $n_T$ are slightly different from those of the modified gravity theory for the same choice of potential parameters. From Tables~\ref{table:2} and \ref{table:4}, we notice that for $p=4$, $\alpha=0.01$, although the $r$ value is a little smaller in the $\lambda=0$ case, $N$ lies beyond $40-70$, whereas in the $\lambda\neq 0$ case $N$ resides within $40-70$. It is also evident that, out of the three potentials, the potential $V=V_0\frac{\alpha\phi^2}{1+\alpha\phi^2}$ gives the best results for the e-fold number and the CMB parameters $n_s,~r$ and $n_T$, for both $\lambda = 0$ and $\lambda \neq 0$. \newpage \section{Analysis and Conclusion:} In this paper, we have gone through the basics of slow-roll inflation in the context of a modified gravity approach. We have analyzed inflationary cosmology for a particular form of $f(R,T) = R + 2 \lambda T$. This choice has been extensively studied in the literature and is typically offered as an alternative strategy to deal with various cosmological issues, such as dark energy and dark matter. A discernible change to the results can also be produced by changing the functional form of $f(R,T)$; its analysis is outside the purview of this paper. In this manuscript, we have focused on three different potentials$-$ $\phi^p e^{-\alpha\phi}$, $(1-\phi^p)e^{-\alpha\phi}$, $\frac{\alpha\phi^2}{1+\alpha\phi^2}$ $-$to study the inflationary scenario in two contexts, one by taking the modified gravity parameter $\lambda$ into account and the other by switching off $\lambda$, i.e. in normal Einstein gravity. We have calculated the slow-roll parameters, the e-fold number and the spectral index parameters in the case of a scalar field ($\phi$) minimally coupled to modified gravity.
In order to do so, we have calculated $n_s$, $r$, $n_T$, $N$ for a particular value of the field $\phi$ and have taken a range of $\lambda$ values which match the spectral index data given by PLANCK2018 and WMAP. \\ In Fig. (\ref{Plot1}), we have shown the variation of $n_s$ and $r$ for all three potentials of inflationary expansion. The blue and red shaded regions correspond to the WMAP data up to $95\%$ and $68\%$ C.L., whereas the grey, green and purple shaded regions correspond to PLANCK, PLANCK+BK15 and PLANCK+BK15+BAO respectively. The $N=40$ and $60$ curves for the potential $V=V_0\phi^pe^{-\alpha\phi}$ with i) $p=2$, $\alpha=0.01$, $0.05$, $0.1$, ii) $p=4$, $\alpha=0.01$, $0.05$, $0.1$ are represented by black, green, blue, cyan, yellow and peach lines, and those for the potential $V=V_0(1-\phi^p)e^{-\alpha\phi}$ with i) $p=2$, $\alpha=0.01$, $0.05$, $0.1$, ii) $p=4$, $\alpha=0.01$, $0.05$, $0.1$ are shown in white, grey, orange, brown, magenta and light grey respectively. Purple and red lines are for the potential $V=V_0\frac{\alpha\phi^2}{1+\alpha\phi^2}$ with $\alpha=1$, $2$. From Fig. (\ref{Plot1}), we can say that all the potentials are within the WMAP bounds (at least for $N=60$), whereas $V=V_0\frac{\alpha\phi^2}{1+\alpha\phi^2}$ is within the limit of PLANCK+BK15+BAO. From Table \ref{table:4}, we have noticed that for a fixed $p$, the $\phi$ value decreases with increasing $\alpha$, along with a decreasing $N$. On the other hand, from Tables \ref{table:1} and \ref{table:2}, we can see that for $\lambda \neq 0$ we obtain good results for $p=2$, $\alpha=0.05$, $0.1$. As an example, for $V=V_0\phi^pe^{-\alpha\phi}$ with $p=2$, $\alpha=0.01$, we obtained $r=0.131321$, $0.128031$ for $\lambda=0$, $-0.114110$ respectively. So, the $r$ value is a little smaller for $\lambda\neq 0$, while $n_s$ still lies within the $3\sigma$ limit of the central value.
Similarly, for the other potentials, we can compare the cosmological parameter values from Table \ref{table:4} with Tables \ref{table:1}, \ref{table:2} and \ref{table:3}.\\ We infer that $V=V_0\frac{\alpha\phi^2}{1+\alpha\phi^2}$ gives the best fit of all the cosmological parameters to the observational data from PLANCK2018 and WMAP. The constraints on the potential parameters are found to satisfy the desired number of e-folds of inflationary expansion, i.e. $40<N<60$, for a range of the modified gravity parameter $\lambda$ for each potential. Finally, we can say that by taking modified gravity into account, the predictions for the tensor-to-scalar ratio and the tensor spectral index can be improved for the given set of potentials discussed in this paper. In normal Einstein gravity, we find $r$ and $n_T$ slightly higher than what we found in modified gravity. \section{Acknowledgment} AR would like to thank BITS Pilani K K Birla Goa campus for the fellowship support. PS would like to thank the Department of Science and Technology, Government of India for an INSPIRE fellowship. PKD would like to thank Kinjal Banerjee for useful discussions.
Title: Ignition of carbon burning from nuclear fission in compact stars
Abstract: Type-Ia supernovae (SN Ia) are powerful stellar explosions that provide important distance indicators in cosmology. Recently, we proposed a new SN Ia mechanism that involves a nuclear fission chain reaction in an isolated white dwarf (WD) [PRL 126, 1311010]. The first solids that form as a WD starts to freeze are actinide rich and potentially support a fission chain reaction. In this letter we explore thermonuclear ignition from fission heating. We perform thermal diffusion simulations and find at high densities, above about 7x10^8 g/cm^3, that the fission heating can ignite carbon burning. This could produce a SN Ia or another kind of astrophysical transient.
https://export.arxiv.org/pdf/2208.00053
\title{Ignition of carbon burning from nuclear fission in compact stars} \author[0000-0001-7271-9098]{C. J. Horowitz} \affiliation{Center for Exploration of Energy and Matter and Department of Physics, Indiana University, Bloomington, IN 47405, USA\email{horowit@indiana.edu}} \keywords{Type Ia supernovae (1728); White dwarf stars (1799)} \section{Introduction}\label{Sec.Intro} Type\textrm{--}Ia supernovae (SN Ia) are powerful stellar explosions that provide important distance indicators in cosmology \cite{Abbott_2019,SN_cosmology,Sullivan2010}. They can be observed at great distances and appear to have a standardizable luminosity that can be inferred from other observations \cite{1996ApJ...473...88R,Phillips_1999,Goldhaber_2001,Phillips2017,Hayden_2019}. This allows a precise determination of the expansion rate of the universe known as the Hubble constant. Nevertheless, there is still some uncertainty as to the SN Ia explosion mechanism and their progenitor systems. Traditionally, SN Ia are thought to involve the thermonuclear explosion of a C/O white dwarf (WD) in a binary system. Here the companion is either a conventional star (single-degenerate mechanism) or another WD (double-degenerate) \cite{2012NewAR..56..122W,hillebrandt2013understanding,RUIZLAPUENTE201415}. Recently, we proposed a new SN Ia mechanism that involves a nuclear fission chain reaction igniting thermonuclear carbon burning in an {\it isolated} WD \cite{PhysRevLett.126.131101,fission2}. Alternative mechanisms to ignite an isolated WD include dark matter interactions \cite{PhysRevLett.115.141301,PhysRevD.105.083507} or pycnonuclear fusion of impurities \cite{10.1093/mnras/stv084}. Our model involves three stages. In the first stage, phase separation upon crystallization produces an actinide rich solid that could support a nuclear fission chain reaction. In a WD, the melting points of the chemical elements scale with atomic number as $Z^{5/3}$.
Actinides have the highest $Z$ and may therefore condense first. The composition of the first solids is discussed in \cite{PhysRevLett.126.131101}. The concentration of actinides by chemical separation in a WD is similar to the formation of uranium rich veins on Earth. Not only has uranium been purified by natural processes on Earth, but natural chain reactions have also occurred. The Oklo natural nuclear reactors operated 2 Gyr ago in very rich uranium deposits in Africa \cite{GAUTHIERLAFAYE19964831,PhysRevLett.93.182302,10.2307/24950391}. In the second stage of our model, a chain reaction occurs in a WD. Nuclear reaction network simulations of this stage were presented in \cite{Fission_network}, where it was found that the reaction proceeds very rapidly. Fertile isotopes such as $^{238}$U or $^{232}$Th can burn via a two step process where a neutron is captured to produce an odd $A$ isotope that fissions after absorbing a second neutron. As a result, a large fraction of the initial U and Th fissions, producing significant heating. In the third stage, fission heating ignites carbon burning and initiates a SN Ia or other astrophysical transient. In this letter we present the first simulations of this stage. We find that the fission heating can initiate carbon burning if the density is high enough. For context, our mechanism is similar to a hydrogen bomb. The Classical Super is an H-bomb design that uses heat from an atomic bomb to ignite hydrogen isotopes \cite{Ford}. The Classical Super likely fails because too much energy is lost to radiation. In contrast, modern weapons may use radiation to first compress the system to higher densities where there is less energy loss. Thermonuclear ignition may be easier at high densities. Therefore, we explore ignition for different WD densities. \cite{1992ApJ...396..649T} discuss ignition in terms of heating at least a trigger mass $M_{trig}$ of material.
$M_{trig}$ is estimated from the mass in a sphere of radius equal to the carbon burning flame width $\delta$. $\delta$ decreases with density roughly as $\rho^{-5/3}$, so that $M_{trig}$ decreases rapidly as $\approx\rho^{-4}$. In our model, the mass of an actinide rich crystal likely exceeds $M_{trig}$ at high densities. In Sec. \ref{Sec.form} we review the results of \cite{PhysRevLett.126.131101} for the size of the initial actinide rich crystal. Next, we extend the fission reaction network simulations of \cite{Fission_network} to higher densities. We then describe our thermal diffusion simulations. Results for carbon and oxygen ignition are presented in Sec. \ref{Sec.Results}. We end by discussing possible implications and conclude in Sec. \ref{Sec.Conclusions}. \section{Formalism}\label{Sec.form} {\it Actinide rich crystallization:} As a WD cools it eventually crystallizes. However, just before the main C and O components start to freeze, higher $Z$ impurities may condense since they have much higher melting temperatures. This process is described in \cite{PhysRevLett.126.131101}, where the crystal is assumed to grow by diffusion until a chain reaction is started by a neutron from spontaneous fission. The crystal mass $M_{pit}$ forms the fission core of our simulation and we refer to it as the pit in analogy with nuclear weapons. $M_{pit}$ is estimated by setting the time to grow by diffusion equal to the time between neutrons. Extending the analysis of \cite{PhysRevLett.126.131101} to other densities gives the results in Table \ref{Table2}. The pit mass is seen to increase slowly with density, $M_{pit}\propto \rho^{3/10}$.
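The $\rho^{3/10}$ scaling can be checked against the tabulated pit masses; normalizing (our choice, for illustration) to the 10 mg entry at $10^8$ g/cm$^3$ reproduces the other two Table \ref{Table2} entries to within roughly 10\%:

```python
# Pit mass scaling M_pit ∝ rho^{3/10}, normalized to the 10 mg
# entry at rho = 1e8 g/cm^3 (normalization is our assumption).
rho0, m0 = 1e8, 10.0   # g/cm^3, mg
for rho, m_table in [(8e8, 20.0), (3e9, 30.0)]:
    m = m0 * (rho / rho0) ** 0.3
    print(f"rho = {rho:.0e} g/cm^3: {m:.1f} mg (table: {m_table:.0f} mg)")
```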
\begin{table}[tbh] \caption{\label{Table2} Actinide rich crystal (pit) mass $M_{pit}$ and radius $r_{pit}$ for different densities $\rho$.} \begin{tabular*}{0.29\textwidth}{c c c } \hline $\rho$ (g/cm$^3$) & $M_{pit}$ (mg) & $r_{pit}$ (cm) \\ \hline $10^8$& 10 & $3\times 10^{-4}$ \\ $8\times 10^8$ & 20 & $2\times 10^{-4}$\\ $3\times 10^{9}$ & 30 & $1.3\times 10^{-4}$\\ \hline \hline \end{tabular*} \end{table} {\it Fission chain reaction:} This crystal, if critical, will undergo a fission chain reaction. Nuclear reaction network simulations were presented in \cite{Fission_network}, see Fig. \ref{Fig1}, where the fission heating rate per baryon $\dot S_{fis}$ was calculated, see Fig. \ref{Fig2}. The initial composition included some Pb in addition to U and Th. Pb is essentially inert during the reaction but increases the heat capacity and therefore acts to dilute the fission heating and reduce the maximum temperature. However, the composition of the initial solid is uncertain. To explore ignition most simply, we now consider a composition identical to \cite{PhysRevLett.126.131101} but without Pb. The composition shown in Fig. 1 and the fission heating in Fig. 2 are otherwise identical to Case B of \cite{Fission_network}. If Pb is present, it pushes the threshold for ignition to higher densities as discussed below. {\it Fusion ignition simulations:} We now perform thermal diffusion simulations of ignition. Assuming constant pressure $P$, the conservation of energy can be written \cite{1992ApJ...396..649T}, \begin{equation} \frac{\partial E}{\partial t}+ P\frac{\partial}{\partial t}(\frac{1}{\rho_b})=\frac{1}{\rho_b}{\bf\nabla}\cdot \sigma{\bf \nabla} T + \dot S_{tot}\, , \label{Eq.E} \end{equation} where $E$ is the internal energy per baryon, $T$ the temperature, $\rho_b$ the baryon density, $\sigma$ the thermal conductivity and $\dot S_{tot}$ the total nuclear reaction heating rate per baryon.
We expect constant pressure to be a good approximation because the flame moves subsonically and sound waves can restore the background pressure. Our goal is to demonstrate the physics in as clear a way as possible. The simple equation of state we use is a largely degenerate very relativistic electron gas with internal energy per baryon, \begin{equation} E\approx Y_e\bigl(\frac{3}{4}\epsilon_F+\frac{\pi^2}{2}\frac{T^2}{\epsilon_F}\bigr)\, , \end{equation} Fermi energy $\epsilon_F=(3\pi^2Y_e\rho_b)^{1/3}$, and electron fraction $Y_e$. We use units $\hbar={\rm c}={\rm k}_b=1$. The sum of the two terms on the left hand side of Eq. \ref{Eq.E} can be combined using the heat capacity at constant pressure $C_p=5\pi^2Y_eT/(4\epsilon_F)$, so that Eq. \ref{Eq.E} becomes, \begin{equation} \frac{\partial T}{\partial t}=\frac{1}{C_p\rho_b}{\bf\nabla}\cdot \sigma{\bf \nabla} T + \frac{\dot S_{tot}}{C_p}\, . \label{Eq.T} \end{equation} We directly simulate Eq. \ref{Eq.T}, assuming spherical symmetry, with a simple first order implicit scheme \cite{koonin}. The thermal conductivity $\sigma$ is from electron conduction where the mean free path is limited by electron ion scattering. This scattering depends on the average charge $\langle Z\rangle$ of the ions which decreases as ions fission, see Fig. \ref{Fig2}. To evaluate $\sigma$ we use the simple formulas of \cite{1980SvA....24..303Y}. The highly charged ions reduce $\sigma$ and somewhat slow thermal diffusion. The initial conditions involve an actinide rich crystal for $0\le r\le r_{pit}$ and a 50/50\% (by mass) C/O liquid for $r_{pit} < r\le r_{grid}$ (except where O/Ne/Mg composition is noted). Typically $r_{grid}= 2\times 10^{-3}$ to $4\times 10^{-3}$ cm. The initial temperature $T_i$ is uniform across the grid and equal to the crystallization temperature of the actinide mixture $\approx3$ keV. The boundary conditions are $\partial T(r,t)/\partial r|_{r=0}=0$ and $T(r_{grid},t)=T_i$. 
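A minimal sketch of such a first-order implicit step, assuming for simplicity a constant diffusivity $D=\sigma/(C_p\rho_b)$ and arbitrary demo units (this is an illustration of the scheme, not the production setup, and the grid/boundary handling is our own simplification):

```python
import numpy as np

def step_implicit(T, dt, dr, D, S, T_b):
    """One backward-Euler step of dT/dt = D*(1/r^2) d/dr(r^2 dT/dr) + S
    on cell centers r_i = (i + 1/2)*dr; zero flux at r = 0 (the r_{-1/2}
    face area vanishes), T = T_b at the outer edge via a ghost cell."""
    n = len(T)
    r = (np.arange(n) + 0.5) * dr
    rp, rm = r + 0.5 * dr, r - 0.5 * dr          # face radii
    wp = D * dt * rp**2 / (r**2 * dr**2)
    wm = D * dt * rm**2 / (r**2 * dr**2)         # wm[0] = 0: no flux at r=0
    lower, diag, upper = -wm, 1 + wp + wm, -wp
    rhs = T + dt * S
    rhs[-1] += wp[-1] * T_b                      # Dirichlet ghost value
    upper[-1] = 0.0
    # Thomas algorithm for the tridiagonal system
    for i in range(1, n):
        m = lower[i] / diag[i - 1]
        diag[i] -= m * upper[i - 1]
        rhs[i] -= m * rhs[i - 1]
    T_new = np.empty(n)
    T_new[-1] = rhs[-1] / diag[-1]
    for i in range(n - 2, -1, -1):
        T_new[i] = (rhs[i] - upper[i] * T_new[i + 1]) / diag[i]
    return T_new

# Demo: a hot central "pit" diffusing into a cold background (no source)
T = np.full(200, 3.0); T[:10] = 100.0            # arbitrary demo values
for _ in range(500):
    T = step_implicit(T, dt=1e-3, dr=1.0, D=1.0, S=np.zeros(200), T_b=3.0)
```

The implicit update is unconditionally stable and obeys a discrete maximum principle, which is why such small time steps relative to the grid spacing remain well behaved.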
Simulations typically use a time step of $10^{-15}$ s and a uniform grid spacing of $2\times10^{-6}$ cm. Simulations with a smaller time step and/or grid spacing often yield very similar results. The nuclear heating $\dot S_{tot}=\dot S_{fis}+\dot S_{fus}$ comes from both fission $\dot S_{fis}$ and fusion $\dot S_{fus}$. For $r\le r_{pit}$, $\dot S_{fis}$ is taken from fission reaction simulations such as shown in Figs. \ref{Fig1},\ref{Fig2}. Since the fission chain reaction does not depend strongly on temperature, the nuclear network simulations are run first and then the results are simply used in the thermal diffusion simulations. We note the total fission energy $S_{fis}$ for the simulation in Fig. \ref{Fig2} is $\int dt \dot S_{fis}(t)=0.679$ MeV/baryon. To estimate $\dot S_{fus}$ we calculate the rate of $C+C$ fusion using the REACLIB database \cite{Cyburt_2010} and include strong screening. Using the rate from \cite{PhysRevC.74.035803} instead yields a slightly higher threshold density for carbon ignition. $C+C$ fusion produces a number of reaction products and these undergo secondary reactions. Careful reaction network simulations in \cite{Calder_2007} determined the total energy released during carbon burning to be $\Delta E=3.5\times 10^{17}$ ergs/g or 0.362 MeV/baryon at $8\times 10^8$ g/cm$^3$ (P200 network in Table 2 of \cite{Calder_2007}). For simplicity we calculate $\dot S_{fus}$ by assuming each $C+C$ fusion releases $Q_{eff}=(24/0.5)\Delta E=17.4$ MeV. Here there are 24 nucleons per $C+C$ fusion and only 0.5 of the fuel is carbon. This approximation can be checked by full reaction network simulations. If $Q_{eff}$ is somewhat smaller, the threshold density for ignition may increase somewhat. \section{Results}\label{Sec.Results} Figure \ref{Fig3} shows temperature $T$ versus radius $r$ for four thermal diffusion simulations. The simulation at a low density of $2\times 10^{8}$ g/cm$^3$ shown in Fig. \ref{Fig3} (a) fails to ignite.
Here $r_{pit}=3\times10^{-4}$ cm. During the fission chain reaction $T$ rises so rapidly that there is only minimal thermal diffusion. However, over longer times this heat simply diffuses away without initiating carbon burning. At a density of $4\times 10^8$ g/cm$^3$, ignition is possible if $M_{pit}$ is large. This is shown in Fig. \ref{Fig3} (b) where a carbon flame is started that burns to the right (off the edge of the figure). However $r_{pit}=6\times10^{-4}$ cm and $M_{pit}=0.36$ g. This is larger than the 10-20 mg suggested by Table \ref{Table2}. We conclude that ignition may be possible at this density, but only if $M_{pit}$ is large. If $r_{pit}$ is much less than $6\times10^{-4}$ cm the simulation fails to ignite. Figure \ref{Fig3} (c) shows carbon ignition for a simulation with $r_{pit}=1.7\times 10^{-4}$ cm at $\rho=8\times10^8$ g/cm$^3$. At this density $M_{pit}=16$ mg is consistent with Table \ref{Table2}, so ignition may be likely. If $r_{pit}$ is somewhat larger than $1.7\times 10^{-4}$ cm, ignition can take place at somewhat lower densities, approximately $\ge 6\times 10^8$ g/cm$^3$. The nuclear fission chain reaction in Figs. \ref{Fig1},\ref{Fig2} emits a total fission heating of 0.679 MeV per nucleon. The fission energy released could be less if a smaller fraction of the actinides fission. Alternatively, non-fissioning impurities such as Pb could be present that dilute the fission energy over more nucleons. To explore this we multiply the fission heating rate in Fig. \ref{Fig2} by different time independent constants and find the total fission energy necessary for ignition at a given density, see Table \ref{Table3}. The fission energy required decreases from 0.66 to 0.19 MeV/nucleon as the density increases from $6\times 10^8$ to $4\times 10^9$ g/cm$^3$. For simplicity all of the simulations on which Table \ref{Table3} is based used $r_{pit}=3\times10^{-4}$ cm.
\begin{table}[tbh] \caption{\label{Table3} Minimum total fission heating $S_{fis}$ necessary for carbon ignition at a given density $\rho$.} \begin{tabular*}{0.272\textwidth}{c c } \hline $\rho$ (g/cm$^3$)& $S_{fis}$ (MeV/nucleon)\\ \hline $6\times10^8$& 0.66\\ $8\times10^8$&0.53\\ $1\times10^9$&0.46\\ $2\times10^9$&0.29\\ $3\times10^9$&0.22\\ $4\times10^9$&0.19\\ \hline \hline \end{tabular*} \end{table} Oxygen ignition is difficult but appears possible at high densities. Figure \ref{Fig3} (d), at a density of $5\times 10^{9}$ g/cm$^3$, shows oxygen ignition. Again we use the REACLIB rates \cite{Cyburt_2010}. The initial composition is 60/30/10\% O/Ne/Mg by mass, $r_{pit}=2\times 10^{-4}$ cm, and $T_i=7$ keV. We somewhat arbitrarily use an effective energy release of $Q_{eff}=16.4$ MeV per O+O fusion. This is estimated from the Si rich final composition in Fig. 4b of \cite{1992ApJ...396..649T}. Note that the system reaches a higher temperature, $T\approx 1.5$ MeV, after the fission chain reaction. This is because, at very high densities, the system is more degenerate and the heat capacity is lower. In all of the simulations shown in Fig. \ref{Fig3}, the fission energy release is 0.679 MeV/nucleon. It is possible that a very massive O/Ne star, near the Chandrasekhar mass, could experience a thermonuclear runaway via our mechanism. In contrast, an O/Ne WD might undergo electron capture induced collapse when it accretes matter from a companion. \section{Discussion and Conclusions}\label{Sec.Conclusions} {\it Electron capture and fission:} The threshold density for electron capture, $e+^{235}$U$\rightarrow ^{235}$Pa$+\nu_e$, is $9.2\times 10^7$ g/cm$^3$ (assuming $Y_e\approx 0.5$). This may be followed by $e+^{235}$Pa$\rightarrow^{235}$Th$+\nu_e$ with a threshold of $2.0\times 10^8$ g/cm$^3$. Thus the original $^{235}$U may be in the form of $^{235}$Th in the dense stellar interior.
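As a rough consistency check on these thresholds, the electron Fermi energy $\epsilon_F=\hbar c\,(3\pi^2 n_e)^{1/3}$ at the quoted densities (assuming $Y_e\approx0.5$, constants to standard precision) comes out at the MeV scale expected for these capture $Q$ values:

```python
import numpy as np

hbar_c = 197.327e-13      # MeV cm  (197.327 MeV fm)
m_u = 1.6605e-24          # g, atomic mass unit

def eps_F(rho, Ye=0.5):
    """Relativistic electron Fermi energy (MeV) at mass density rho (g/cm^3)."""
    n_e = Ye * rho / m_u  # electrons per cm^3
    return hbar_c * (3 * np.pi**2 * n_e) ** (1 / 3)

print(eps_F(9.2e7))   # ~ 1.85 MeV at the 235U capture threshold density
print(eps_F(2.0e8))   # ~ 2.39 MeV at the 235Pa capture threshold density
```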
$^{235}$Th, with an even number of protons and an odd number of neutrons (like $^{235}$U), may be fissile and have a significant cross section for neutron induced fission. In the laboratory, $^{235}$Th beta decays, so its fission cross section may not have been measured. We note that the single neutron separation energy of $^{236}$Th is 5.9 MeV. This is the energy available for $n+^{235}$Th fission and is significantly larger than the 4.8 MeV single neutron separation energy of $^{239}$U. If $^{235}$Th does have a significant fission cross section, although somewhat smaller than that of $^{235}$U, reaction network simulations such as in Fig. \ref{Fig1} still find comparable fission heating, provided the initial $^{235}$Th enrichment compared to $^{238}$U+$^{235}$Th is somewhat higher than the 14\% assumed in Fig. \ref{Fig1}. {\it Alpha decay lifetimes:} Uranium $\alpha$ decays with a 700 Myr half-life for $^{235}$U and 4.5 Gyr for $^{238}$U. However, the $Q$ value (energy released) for $\alpha$ decay of $^{235}$Th is smaller than that for $^{235}$U, so the alpha decay systematics of \cite{VIOLA1966741} suggest $^{235}$Th will have a much longer half-life. {\it In a dense star $^{235}$Th should be effectively stable}. Over long time periods $^{238}$U will still decay. As a result, the enrichment of $^{235}$Th compared to $^{238}$U will actually increase with time. This could make a fission chain reaction more likely. {\it Chandrasekhar limit:} The fission mechanism does not explicitly involve the Chandrasekhar mass limit. Nevertheless, the high density required for ignition limits the mechanism to nearly Chandrasekhar mass WDs, and this might naturally produce transients of similar luminosities. {\it Ignition:} We have an explicit simulation of ignition. We predict ignition at a single nearly central point in a very massive WD.
Nucleation of an actinide rich crystal is expected first in the highest density region, and this should happen near, but perhaps not exactly at, the star's center. Ignition takes place in a cold star. Unlike in a conventional single degenerate model, there is no period of carbon simmering before ignition. Ignition produces a deflagration. This might turn into a detonation later. Hydrodynamic simulations of the SN or other astrophysical transient that might follow this cold ignition should be performed. It is possible that these simulations, with ignition densities above $4\times 10^8$ g/cm$^3$, will reproduce reasonable typical SN Ia compositions \cite{2004NewAR..48..605T}. {\it Ultra-massive WD:} One way to form ultra-massive WDs with C/O cores is through mergers \cite{WDmerger}. In our model the SN would not occur during or shortly after the merger. Instead it would occur some time later when the massive star formed in the merger had cooled \cite{10.1093/mnras/stac348} so that actinide crystallization could start. This is at about twice the temperature of C/O crystallization \cite{PhysRevLett.126.131101}. This mechanism might be somewhat of a hybrid between the single degenerate and double degenerate models. Like the double degenerate model it would involve the merger of two WDs. Like the single degenerate model it would involve a deflagration ignition in a nearly Chandrasekhar mass WD. An observable signature of our mechanism could be the absence of a detectable gravitational wave signal in a space based detector such as DECIGO \cite{https://doi.org/10.48550/arxiv.2006.13545} (because the merger happened in the past), along with the lack of an observable ex-companion star \cite{nocompanion}. In conclusion, we have performed thermal diffusion simulations of thermonuclear ignition following a natural nuclear fission chain reaction. We find that carbon ignition is possible at high densities. This could initiate a SN Ia or other astrophysical transient.
{\it Acknowledgements:} We thank Ed Brown, Ezra Booker, Alan Calder, Matt Caplan, Alex Deibel, Erika Holmbeck, Wendell Misch, Matthew Mumpower, Witek Nazarewicz, Rolfe Petschek, Catherine Pilachowski, Tomasz Plewa, and Rebecca Surman for helpful discussions. This work was performed in part at the Aspen Center for Physics, which is supported by National Science Foundation grant PHY-1607611. This research was supported in part by the US Department of Energy Office of Science Office of Nuclear Physics grants DE-FG02-87ER40365 and DE-SC0018083 (NUCLEI SCIDAC).
Title: Updated orbital monitoring and dynamical masses for nearby M-dwarf binaries
Abstract: Young M-type binaries are particularly useful for precise isochronal dating by taking advantage of their extended pre-main sequence evolution. Orbital monitoring of these low-mass objects becomes essential in constraining their fundamental properties, as dynamical masses can be extracted from their Keplerian motion. Here, we present the combined efforts of the AstraLux Large Multiplicity Survey, together with a filler sub-programme from the SpHere INfrared Exoplanet (SHINE) project and previously unpublished data from the FastCam lucky imaging camera at the Nordic Optical Telescope (NOT) and the NaCo instrument at the Very Large Telescope (VLT). Building on previous work, we use archival and new astrometric data to constrain orbital parameters for 20 M-type binaries. We identify that eight of the binaries have strong Bayesian probabilities and belong to known young moving groups (YMGs). We provide a first attempt at constraining orbital parameters for 14 of the binaries in our sample, with the remaining six having previously fitted orbits for which we provide additional astrometric data and updated Gaia parallaxes. The substantial orbital information built up here for four of the binaries allows for direct comparison between individual dynamical masses and theoretical masses from stellar evolutionary model isochrones, with an additional three binary systems with tentative individual dynamical mass estimates likely to be improved in the near future. We attained an overall agreement between the dynamical masses and the theoretical masses from the isochrones based on the assumed YMG age of the respective binary pair. The two systems with the best orbital constraints for which we obtained individual dynamical masses, J0728 and J2317, display higher dynamical masses than predicted by evolutionary models.
https://export.arxiv.org/pdf/2208.09503
\titlerunning{Binary M-dwarf orbital constraints} \authorrunning{Calissendorff et al.} \title{Updated orbital monitoring and dynamical masses for nearby M-dwarf binaries} \author{Per Calissendorff$^{1,2}$ \and Markus Janson$^{2}$ \and Laetitia Rodet$^{3}$ \and Rainer K\"{o}hler$^{4}$ \and Micka\"{e}l Bonnefoy$^{5}$ \and Wolfgang Brandner$^{6}$ \and Samantha Brown-Sevilla$^{6}$ \and Ga\"{e}l Chauvin$^{5,7}$ \and Philippe Delorme$^{5}$ \and Silvano Desidera$^{8}$ \and Stephen Durkan$^{2, 9}$ \and Clemence Fontanive$^{10}$ \and Raffaele Gratton$^{8}$ \and Janis Hagelberg$^{11}$ \and Thomas Henning$^{6}$ \and Stefan Hippler$^{6}$ \and Anne-Marie Lagrange$^{12, 5}$ \and Maud Langlois$^{13,14}$ \and Cecilia Lazzoni$^{8}$ \and Anne-Lise Maire$^{6,15}$ \and Sergio Messina$^{16}$ \and Michael Meyer$^{1}$ \and Ole M\"{o}ller-Nilsson$^{11}$ \and Markus Rabus$^{17}$ \and Joshua Schlieder$^{18}$ \and Arthur Vigan$^{14}$ \and Zahed Wahhaj$^{19}$ \and Francois Wildi$^{11}$ \and Alice Zurlo$^{14, 20,21}$ } \institute{ Department of Astronomy, University of Michigan, Ann Arbor, MI 48109, USA\\ e-mail:{\bf percal@umich.edu} \and Department of Astronomy, Stockholm University, 10691, Stockholm, Sweden \and Cornell Center for Astrophysics and Planetary Science, Department of Astronomy, Cornell University, Ithaca, NY, 14853, USA \and The CHARA Array of Georgia State University, Mount Wilson Observatory, Mount Wilson, CA 91023, USA \and University of Grenoble Alpes, CNRS, IPAG, 38000, Grenoble, France \and Max Planck Institute for Astronomy, Königstuhl 17, 69117, Heidelberg, Germany \and Unidad Mixta Internacional Franco-Chilena de Astronomía, CNRS/INSU UMI 3386 and Departamento de Astronomía, Universidad de Chile, Casilla 36-D, Santiago, Chile \and INAF - Osservatorio Astronomico di Padova, Vicolo dell'Osservatorio 5, 35122, Padova \and Astrophysics Research Center, Queen's University Belfast, Belfast, Northern Ireland, UK \and Center for Space and Habitability, University of
Bern, 3012, Bern, Switzerland \and Département d'astronomie de l'Université de Genève, Chemin Pegasi 51, 1290 Versoix, Switzerland \and LESIA, Observatoire de Paris, Université PSL, CNRS, Sorbonne Université, Université de Paris, 5 place Jules Janssen, 92195 Meudon, France \and CRAL, UMR 5574, CNRS, Université de Lyon, ENS, 9 avenue Charles André, 69561 Saint Genis Laval Cedex, France \and Aix Marseille Univ, CNRS, CNES, LAM, Marseille, France \and STAR Institute, Université de Liège, Allée du Six Août 19c, B-4000 Liège, Belgium \and INAF - Osservatorio Astrofisico di Catania, Via S. Sofia 78, 95123, Catania, Italy \and Departamento de Matem\'atica y F\'isica Aplicadas, Facultad de Ingenier\'ia, Universidad Cat\'olica de la Sant\'isima Concepci\'on, Alonso de Rivera 2850, Concepci\'on, Chile \and NASA Goddard Space Flight Center, 8800 Greenbelt Road, Greenbelt, MD 20771, USA \and European Southern Observatory, Alonso de Cordova 3107, Vitacura, Casilla, 19001, Santiago, Chile \and N\'ucleo de Astronom\'ia, Facultad de Ingenier\'ia y Ciencias, Universidad Diego Portales, Av. Ejercito 441, Santiago, Chile \and Escuela de Ingenier\'ia Industrial, Facultad de Ingenier\'ia y Ciencias, Universidad Diego Portales, Av. Ejercito 441, Santiago, Chile } \date{Received 26 November 2021 / Accepted 13 August 2022} \abstract{ Young M-type binaries are particularly useful for precise isochronal dating by taking advantage of their extended pre-main sequence evolution. Orbital monitoring of these low-mass objects becomes essential in constraining their fundamental properties, as dynamical masses can be extracted from their Keplerian motion.
Here, we present the combined efforts of the AstraLux Large Multiplicity Survey, together with a filler sub-programme from the SpHere INfrared Exoplanet (SHINE) project and previously unpublished data from the FastCam lucky imaging camera at the Nordic Optical Telescope (NOT) and the NaCo instrument at the Very Large Telescope (VLT). Building on previous work, we use archival and new astrometric data to constrain orbital parameters for 20 M-type binaries. We identify that eight of the binaries have strong Bayesian probabilities and belong to known young moving groups (YMGs). We provide a first attempt at constraining orbital parameters for 14 of the binaries in our sample, with the remaining six having previously fitted orbits for which we provide additional astrometric data and updated Gaia parallaxes. The substantial orbital information built up here for four of the binaries allows for direct comparison between individual dynamical masses and theoretical masses from stellar evolutionary model isochrones, with an additional three binary systems with tentative individual dynamical mass estimates likely to be improved in the near future. We attained an overall agreement between the dynamical masses and the theoretical masses from the isochrones based on the assumed YMG age of the respective binary pair. The two systems with the best orbital constraints for which we obtained individual dynamical masses, J0728 and J2317, display higher dynamical masses than predicted by evolutionary models. } \keywords{stars: low mass --- stars: fundamental parameters --- binaries: visual} \section{Introduction} \label{sec:intro} The study of the multiplicity of stars is a useful diagnostic for obtaining insight into their formation and dynamical evolution, as it allows for important properties such as binary fraction, semi-major axis distribution, and mass ratios to be constrained \citep[e.g.][]{burgasser_not_2007}.
Since low-mass M-dwarf stars form a natural link between the substellar brown dwarfs and the solar-type stars, and the multiplicity frequency tends to decline with lower masses and later spectral types \citep{duchene_stellar_2013, moe_mind_2017, winters_solar_2019}, it becomes even more crucial to discover and characterise low-mass M-dwarf multiples. Hence, a rigorous understanding of the multiplicity characteristics and their evolution within this transitional mass region made up of M dwarfs is vital for constraining formation scenarios of low-mass stars and brown dwarfs. Astrometric monitoring of such binary systems allows dynamical masses to be derived, which are essential for the empirical calibration of fundamental properties such as the mass-luminosity relation and evolutionary models \citep{dupuy_individual_2017, mann_how_2019, rizzuto_dynamical_2020}. This becomes even more important at the lowest stellar masses, for which current theoretical models have been shown to systematically underestimate M-dwarf masses of $M \leq 0.5\,M_\odot$ by $5-50\,\%$ \citep[e.g.][]{hillenbrand_assessment_2004, montet_dynamical_2015, calissendorff_discrepancy_2017, biller_dynamical_2022}. Since M dwarfs evolve slowly and remain in their pre-main sequence phase for $\sim\,100$ Myr \citep{baraffe_evolutionary_1998}, M-dwarf binaries are valuable benchmark targets for astrophysical calibrations comparing dynamical masses from observational data to isochronal models \citep[e.g.][]{janson_binaries_2017}. As such, groups and associations of stars that can be expected to have originated from the same cluster or region, commonly referred to as young moving groups (YMGs), have seen an increase in interest of late \citep[e.g.][]{torres_young_2008, malo_bayesian_2013}. Thus, M dwarfs residing in YMGs can be isochronally dated, and binaries with estimated dynamical masses can also be used to robustly test the coevality of the YMGs.
Motivated by these arguments for binary characterisation and low-mass multiplicity studies, the AstraLux Large M-dwarf Multiplicity Survey systematically studied over 1,000 X-ray active M dwarfs with the lucky imaging technique, identifying $\approx 30\,\%$ as multiple systems, many of which are known YMG members \citep{bergfors_lucky_2010, janson_astralux_2012, janson_binaries_2017}. Although most of these binaries have separations that correspond to orbital periods of several decades to hundreds of years, some have periods short enough that they can already be mapped out after a few years of monitoring. The SpHere INfrared Exoplanet (SHINE) project, utilising the Spectro-Polarimetric High-contrast Exoplanet REsearch \citep[SPHERE;][]{beuzit_sphere_2019} instrument at the Very Large Telescope (VLT), is surveying $\sim 500$ stars with the purpose of directly detecting substellar companions to the stars in order to better understand their formation and early evolution \citep{chauvin_discovery_2017, SHINE_results}. As an auxiliary result of the survey, several low-mass binaries that coincide with the AstraLux survey sample \citep{SHINE_sample} have been observed with high-contrast imaging \citep{SHINE_observations}, providing high-precision measurements that are excellent for astrometry and useful for constraining orbital motion. Here we present the latest results from the combined effort of the AstraLux M-dwarf multiplicity monitoring programme and the SHINE M-dwarf filler programme. From the AstraLux Large Multiplicity Survey we have identified the 20 systems best suited for characterisation of their fundamental properties, each with sufficient orbital coverage for first constraints to be made with orbital fitting routines. Out of these 20 systems, eight have strong indicators placing them in YMGs, thereby constraining their ages.
The paper is organised as follows. In Section~\ref{sec:obs_data} we describe how the target sample was assembled from the different surveys we combined, the observations taken, and how the data were reduced. In Section~\ref{sec:orbs} we explain the orbital fitting procedures. We present the main results and discussion in Section~\ref{sec:discussion}, where dynamical masses are compared to theoretical isochronal models. Finally, we provide a summary and conclusions in Section~\ref{sec:summary}. The collected astrometric data are given in Appendix~\ref{appendixA}, together with the resulting orbital fits for the binaries in Appendix~\ref{appendixB}. \section{Observations and data reduction} \label{sec:obs_data} \subsection{Sample selection and observations} The target list was created from a sample of known binaries and higher-order hierarchical systems from the AstraLux M-dwarf multiplicity survey \citep{janson_astralux_2012}, consisting of over 200 M-dwarf multiple systems with separations within $0.08''-6.0''$, and the extended AstraLux sample \citep{janson_noopsorta_2014} of $\approx 60$ multiples with spectral types M5 and later. We selected 20 systems that we identified as either having sufficient orbital coverage for dynamical masses to be robustly constrained, or having undergone enough monitoring that $\geq 25\,\%$ of the orbit can be mapped out to provide some useful information. We present here new observations from our survey and some previously unpublished observations of the targets. We compared the space velocities and positions of the targets with those of known YMGs and associations using the BANYAN $\Sigma$ online tool \citep{gagne_banyan_2018}, the LACEwING code \citep{riedel_lacewing_2017} and the GALEX convergence tool \citep{rodriguez_galex_2013}.
Unless otherwise specified, parallaxes and proper motions were obtained from the Gaia archive \citep{gaia_collaboration_gaia_2016}, both Gaia Data Release 2 \citep[DR2;][]{gaia_collaboration_gaia_2018} and Gaia Early Data Release 3 \citep[EDR3;][]{gaia_collaboration_gaia_2021}. Spectral types presented in Table~\ref{tab:targets} were derived by \citet{janson_astralux_2012}, using the $(i^{\prime}-z^{\prime})$ photometry obtained from the AstraLux observations and following the methods of \citet{daemgen_discovery_2007}. Some of the systems have additional information from the resolved near-IR medium-resolution spectra from SINFONI, which \citet{calissendorff_characterising_2020} used to derive near-IR spectral types from the $JHK$ bands and surface-gravity-sensitive emission lines. As the systems we present in this survey are binaries, we refer to the distance to each system as $d$ (in pc), and to the separation between the binary components as $s$, given either as a projected separation in milliarcseconds (mas) or as a physical separation in AU. The target binaries all have designated Two Micron All-Sky Survey (2MASS) identifiers, and we abbreviate them by their first four to six digits as Jhhmm(ss). The target systems are presented in Table~\ref{tab:targets}. The YMGs of interest and their adopted ages are shown in Table~\ref{tab:YMGs}, and the target parallaxes and space velocities, which were used to derive membership probabilities for each source, are shown in Table~\ref{tab:YMGprob}. Not all YMGs and associations are included in each of the YMG membership probability tools, and we did not introduce additional YMGs to the existing code. \begin{table*}[t] \centering \caption{Target binary systems} \begin{tabular}{lcccccc} \hline \hline 2MASS ID & Alt.
Name & SpT $(\pm0.5)$ & SpT $(\pm0.6)$ & $J$ & $H$ & $K$ \\ & & $(i^{\prime} - z^{\prime})$ & Near IR & mag & mag & mag \\ \hline J00085391+2050252 & GJ 3010 & M4.5+M6.0 & & $8.87 \pm 0.03$ & $8.26\pm0.03$ & $8.01\pm0.02$\\ J01112542+1526214 & GJ 3076 & M5+M6 & M3.1+M9.6 & $9.08\pm0.03$ & $8.51\pm0.04$ & $8.21\pm0.03$\\ J02255447+1746467 & LP 410-22 & M4+M5 & & $10.22\pm0.02$ & $9.60\pm0.02$ & $9.33\pm0.02$\\ J02451431-4344102 & LP 993-116 & M4.0+M4.5 & & $8.06\pm0.02$ & $7.53\pm0.04$ & $7.20\pm0.02$ \\ J04373746-0229282 & GJ 3305 & M0+M3 & & $7.30\pm0.02$ & $6.64\pm0.05$ & $6.41\pm0.02$\\ J04595855-0333123 & UCAC4 433-008289 & M4.0+M5.5 & M1.7+M4.5 & $9.76\pm0.02$ & $9.20\pm0.03$ & $8.91\pm0.02$ \\ J05320450-0305291 & V* V1311 Ori & M2.0+M3.5 & & $7.88\pm0.02$ & $7.24\pm0.04$ & $7.01\pm0.02$ \\ J06112997-7213388 & AL 442 & M4.0+M5.0 & M2.9+M5.2 & $9.55\pm0.02$&$8.96\pm0.03$&$8.70\pm0.03$ \\ J06134539-2352077 & HD 43162B & M3.5+M5.0 & & $8.37\pm0.03$&$7.79\pm0.04$&$7.53\pm0.02$ \\ J07285137-3014490 & GJ 2060 & M1.5+M3.5 & M1+M3 & $6.62\pm0.02$&$5.97\pm0.04$&$5.72\pm0.02$\\ J09075823+2154111 & UCAC4 560-047663 & M2.0+M3.5 & & $9.36\pm0.02$&$8.72\pm0.04$&$8.55\pm0.02$ \\ J09164398-2447428 & LP 845-40 & M0.5+M2.5 & & $8.70\pm0.03$ & $8.05\pm0.03$ & $7.83\pm0.02$ \\ J10140807-7636327 & [K2001c] 27 & M4.0+M5.5 & M2.9+M5.2 & $9.75\pm0.02$&$9.16\pm0.03$&$8.87\pm0.02$ \\ J10364483+1521394$^{\dagger}$ & UCAC4 527-051290 & M5.0+M5.0 & M5.8+M4.3 & $9.97\pm0.03$ & $8.97\pm0.03$ & $8.73\pm0.03$ \\ J20163382-0711456 & TYC 5174-242-1 & M0.0+M2.0 & & $8.59\pm0.03$&$7.96\pm0.05$&$7.71\pm0.02$\\ J21372900-0555082 & UCAC4 421-138878 & M3.0+M3.5 & & $8.78\pm0.02$&$8.22\pm0.03$&$7.91\pm0.02$\\ J23172807+1936469 & GJ 4326 & M3.0+M4.5 & & $8.02\pm0.02$&$7.41\pm0.02$&$7.17\pm0.02$ \\ J23261182+1700082 & UCAC4 536-150368 & M4.5+M6.0 & & $9.36\pm0.02$&$8.80\pm0.03$&$8.53\pm0.02$ \\ J23261707+2752034 & UCAC4 590-138502 & M3.0+M3.5 & & $8.46\pm0.02$&$7.87\pm0.02$&$7.64\pm0.02$ \\ 
J23495365+2427493 & UCAC4 573-135909 & M3.5+M4.5 & M4.1+M5.2 & $9.91\pm0.02$&$9.31\pm0.02$&$9.06\pm0.02$ \\ \hline \label{tab:targets} \end{tabular} {\small \\ $^{\dagger}$ J10364483+1521394 is a resolved triple system. Here we only consider the outer binary pair referred to as the BC components in the literature. The photometry for the BC components is based on SINFONI observations from \cite{calissendorff_characterising_2020}. } \end{table*} \begin{table}[t] \centering \caption{Young moving groups} \begin{tabular}{lccc} \hline \hline Group name & Acronym & Age [Myr] & Reference \\ \hline $\beta$pic & BPMG & $24 \pm 3$ & B15\\ AB Doradus & ABDMG & $149^{+51}_{-19}$ & B15\\ Argus & ARG & $45\pm5$ & Z18 \\ Carina & CAR & $45^{+11}_{-7}$ & B15\\ Carina-Near & CARN & $\sim 200$ & Z06\\ Columba & COL & $42^{+6}_{-4}$ & B15\\ Hyades & HYA & $750\pm100$ & BH15 \\ Octans & OCT & $35\pm5$ & M15 \\ Tucana-Horologium & THA & $45\pm4$ & B15\\ TW Hydrae & TWA & $10 \pm 3$ & B15 \\ Ursa-Majoris & UMA & $\sim 400$ & J15 \\ \hline \label{tab:YMGs} \end{tabular} {\small B15 = \citet{bell_self-consistent_2015}; BH15 = \citet{brandt_age_2015}; J15 = \citet{jones_ages_2015}; M15 = \citet{murphy_new_2015}; Z06 = \citet{zuckerman_carina-near_2006}; Z18 = \citet{zuckerman_nearby_2019} } \end{table} \begin{table*}[t] \centering \caption{Young moving group membership probabilities} \begin{tabular}{lccccccccc} \hline \hline Name & Parallax & pmRA & pmDEC & \multicolumn{2}{c}{BANYAN $\Sigma$} & \multicolumn{2}{c}{LACEwING} & \multicolumn{2}{c}{Convergence} \\ & [mas] & [mas/yr] & [mas/yr] & YMG & Prob. & YMG & Prob.
& YMG & Prob.\\ \hline J0008$^{\rm a}$ & $55.26 \pm 0.76$ & $-48.64 \pm 1.63$ & $-260.19 \pm 1.54$ & Field & & Field & & Field & \\ J0111$^{\rm b}$ & $58.00 \pm 7.30$ & $192 \pm 8$ & $-130 \pm 8$ & BPMG & 99.7 & BPMG & 84 & BPMG & 79.0 \\ J0225$^{\rm b}$ & $31 \pm 1.9$ & $185 \pm 8$ & $-39 \pm 8$ & ARG & 44 & TWA & 18 & CARN & 92.6\\ J0245$^{\rm c}$ & $87.37 \pm 1.33$ & $24.0 \pm 12.1$ & $-366.1 \pm 9.4$ & Field & & Field & & Field & \\ J0437 & $36.01 \pm 0.48$ & $54.78 \pm 0.50$ & $-47.31 \pm 0.39$ & BPMG & 98.2 & ARG & 60 & COL & 94.6\\ J0459 & $22.26 \pm 0.63$ & $69.71 \pm 0.55$ & $38.13 \pm 0.43$ & Field & & HYA & 97 & CARN & 38.3 \\ J0532 & $27.22 \pm 0.58$ & $10.10 \pm 0.53$ & $-40.12 \pm 0.39$ & BPMG & 99.2 & BPMG & 36 & ABDMG & 99.1 \\ J0611$^{\rm a}$ & $17.57 \pm 0.41$ & $22.58 \pm 0.90$ & $62.89 \pm 0.73$ & CAR & 97.7 & COL & 49 & CARN & 66.8 \\ J0613 & $59.38 \pm 0.41$ & $-36.87 \pm 0.32$ & $124.76 \pm 0.43$ & ARG & 85.5 & ARG & 100 & CARN & 0.1\\ J0728 & $64.14 \pm 0.47$ & $-112.74 \pm 0.43$ & $-160.97 \pm 0.48$ & ABDMG & 99.6 & ABDMG & 100 & ABDMG & 8.5 \\ J0907 & $27.39 \pm 0.70$ & $-56.98 \pm 0.70$ & $-187.22 \pm 0.50$ & Field & & ABDMG & 25 & THA & 46\\ J0916 & $23.02 \pm 0.24$ & $-194.86 \pm 0.22$ & $77.94 \pm 0.21$ & Field & & Field & & CARN & 38.5\\ J1014$^{\rm d}$ & $14.5 \pm 0.4$ & $-47.2 \pm 1.7$ & $30.6 \pm 3.6$ & CAR & 92.5 & CAR & 91 & CARN & 99.9 \\ J1036 & $49.98 \pm 0.09$ & $110.21 \pm 0.10$ & $-78.94 \pm 0.08$ & Field & & Field & & Field & \\ J2016 & $29.20 \pm 1.20$ & $42.34 \pm 2.00$ & $39.95 \pm 1.65$ & Field & & Field & & CARN & 2.8 \\ J2137$^{\rm e}$ & $62.1 \pm 18.6$ & 19 & 155 & Field & & Field & & Field & \\ J2317 & $60.77 \pm 0.76$ & $352.05 \pm 0.73$ & $-131.06 \pm 0.59$ & Field & & Field & & THA & 18.2 \\ J232611 & $46.18 \pm 0.45$ & $125.91 \pm 0.37$ & $-43.93 \pm 0.37$ & Field & & Field & & THA & 74.8 \\ J232617 & $38.76 \pm 0.40$ & $-42.38 \pm 0.43$ & $-42.88 \pm 0.32$ & Field & & Field & & Field & \\ J2349 & $21.11 \pm 
0.6$ & $121.78 \pm 0.55$ & $-48.08 \pm 0.46$ & Field & & OCT & 16 & TWA & 99.8\\ \hline \label{tab:YMGprob} \end{tabular} {\small\\ Parallax and proper motions were obtained from the Gaia EDR3 catalogue with the exceptions:\\ $^{\rm a} = $ Gaia DR2; $^{\rm b} = $\citet{dittmann_trigonometric_2014}; $^{\rm c} = $\citet{riedel_solar_2014}; $^{\rm d} = $\citet{malo_bayesian_2013}; $^{\rm e} = $\citet{lepine_all-sky_2011}\\ } \end{table*} \subsubsection{AstraLux} The AstraLux Large Multiplicity Survey has been ongoing for over a decade, collecting data on numerous visual binaries by applying the lucky imaging technique. The survey primarily employs two principal instruments: AstraLux Norte on the 2.2m telescope at Calar Alto, Spain \citep{hormuth_astralux_2008}, and AstraLux Sur at the 3.5m New Technology Telescope (NTT) at La Silla, Chile \citep{hippler_astralux_2009}. The full-frame field of view of the respective AstraLux instrument is $\approx\,24'' \times 24''$ for Norte and $\approx\,15.7''\times15.7''$ for Sur, although typical observations utilise subarray readouts in order to minimise readout times. AstraLux observations are mainly carried out in the SDSS $z^{\prime}$- and $i^{\prime}$-bands, with a preference towards the $z^{\prime}$-band due to its smaller susceptibility to atmospheric refraction compared to the $i^{\prime}$-band \citep{bergfors_lucky_2010}. Our observations typically consisted of 10 000 - 20 000 short exposures of just $15-30\,$ms each, adding up to a total of $300\,$s integration time. Both AstraLux Norte and AstraLux Sur data were reduced with the real-time pipeline at the time of the observations, producing a final image from each observation in which a subset of $1-20\%$ of the best frames taken were kept. Generally, images where $10\%$ of the frames were kept provided a decent trade-off between sensitivity and resolution.
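The frame selection and co-addition at the heart of lucky imaging can be sketched as follows. This is an illustrative stand-in, not the AstraLux real-time pipeline: it ranks frames by their brightest pixel (a common proxy for instantaneous Strehl ratio) and re-centres the kept frames by integer-pixel shifts.

```python
import numpy as np

def lucky_image(frames, keep_fraction=0.10):
    """Select the sharpest frames from a lucky-imaging cube and shift-and-add.

    frames: array of shape (n_frames, ny, nx).
    """
    frames = np.asarray(frames, dtype=float)
    # Rank frames by peak intensity: less blurred frames concentrate
    # more light in their brightest speckle.
    quality = frames.reshape(len(frames), -1).max(axis=1)
    n_keep = max(1, int(keep_fraction * len(frames)))
    best = np.argsort(quality)[::-1][:n_keep]

    ny, nx = frames.shape[1:]
    stack = np.zeros((ny, nx))
    for idx in best:
        # Re-centre each selected frame on its brightest pixel
        # before co-adding (integer-pixel shift-and-add).
        py, px = np.unravel_index(np.argmax(frames[idx]), (ny, nx))
        shifted = np.roll(np.roll(frames[idx], ny // 2 - py, axis=0),
                          nx // 2 - px, axis=1)
        stack += shifted
    return stack / n_keep
```

With `keep_fraction=0.10` this mirrors the 10% selection cut the survey found to be a good trade-off between sensitivity and resolution.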
Occasionally, for closely separated binaries of similar magnitudes, the pipeline would centre the frames on the secondary instead of the primary star, leading to a false stellar ghost appearing at the same separation but shifted by $180^{\circ}$ from the real secondary \citep{bergfors_lucky_2010}. We performed calibrations for the astrometric measurements with AstraLux by comparing observations of the Orion Trapezium Cluster and M15 to reference observations of the same fields from \citet{mccaughrean_high_1994} and \citet{van_der_marel_italhubble_2002}. The calibrations were performed by measuring the positions of bright stars within the same field that were recognisable and easily identified. We employed between 5 and 14 reference stars for the calibrations, depending on the quality of the point spread function (PSF) and brightness. We assigned the brightest star in the field of view as the main reference, for which we calculated the relative separation and positional angle to all other reference stars. We then compared the separations and positional angles of our AstraLux measurements to those of the reference observations, taking the average ratio of the separations as the plate scale and the standard deviation as its uncertainty. Correction for True North was performed in a similar manner, where the average difference in positional angle was used and the standard deviation from the average was assigned as the uncertainty. The final AstraLux astrometric calibrations calculated here and those obtained from earlier literature are listed in Table~\ref{tab:calib}. For two epochs, February and April of 2015, we did not have proper reference fields with which to calibrate the AstraLux Norte astrometry, and we assumed a mean pixel scale and correction for True North from the other AstraLux Norte epochs with proper calibration.
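The plate-scale and True North derivation described above reduces to averaging separation ratios and position-angle differences over the reference stars. A minimal sketch, with hypothetical function and argument names (the actual calibration also involves identifying the reference stars and assessing PSF quality):

```python
import numpy as np

def calibrate_astrometry(sep_px, pa_deg, ref_sep_mas, ref_pa_deg):
    """Derive plate scale and True North correction from reference stars.

    Each array holds one entry per reference star: its separation and
    position angle relative to the chosen main reference star, as measured
    on the detector (pixels, degrees) and as known from the calibrated
    reference field (mas, degrees).
    """
    # Plate scale: mean ratio of true to measured separations (mas/pixel);
    # the scatter between reference stars gives the uncertainty.
    ratios = np.asarray(ref_sep_mas, float) / np.asarray(sep_px, float)
    plate_scale, plate_scale_err = ratios.mean(), ratios.std()

    # True North: mean position-angle offset, with differences wrapped
    # into (-180, 180] before averaging.
    dpa = (np.asarray(ref_pa_deg, float)
           - np.asarray(pa_deg, float) + 180.0) % 360.0 - 180.0
    true_north, true_north_err = dpa.mean(), dpa.std()
    return plate_scale, plate_scale_err, true_north, true_north_err
```
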
This only affected two observations of the J1036BC binary and we assumed that the instrument had not changed significantly at this time compared to our other observed epochs. An alteration in plate scale from our smallest to largest calibration values would only change the resulting projected separation for the binary by $\sim 1\,$mas. \begin{table}[t] \centering \caption{Astrometric calibration for AstraLux observations} \begin{tabular}{lccc} \hline \hline Date & Plate Scale & True North & Reference \\ & [mas/pxl] & [deg] & \\ \hline 2008.03 & $23.58 \pm 0.15 $ & $-0.319 \pm 0.18$ & J12 \\ 2008.88 & $23.68 \pm 0.01$ & $0.238 \pm 0.05$ & J14b \\ 2009.13 & $23.55 \pm 0.17$ & $0.224 \pm 0.20$ & This work \\ 2015.17 & $15.23 \pm 0.13$ & $2.87 \pm 0.26$ & This work \\ 2015.90 & $15.20 \pm 0.12$ & $-2.09 \pm 0.39$ & This work \\ 2015.99 & $15.20 \pm 0.11$ & $-2.41 \pm 0.30$ & This work\\ 2016.38 & $15.27 \pm 0.19$ & $2.64 \pm 0.22$ & This work \\ 2018.39 & $15.26 \pm 0.19$ & $3.43 \pm 0.41$ & This work \\ 2018.63 & $15.13 \pm 0.49$ & $-3.40 \pm 0.39$ & This work\\ \hline \label{tab:calib} \end{tabular}\\ {\small J12 = \citet{janson_astralux_2012}; J14b = \citet{janson_noopsortborbital_2014} } \end{table} \subsubsection{NaCo} We downloaded the NaCo data from the ESO archive together with their associated calibration files and performed basic reductions using custom scripts with Python. These basic reductions included corrections for bias, dark, flatfield division and combination of multiple frames. We applied the astrometric corrections from \citet{chauvin_deep_2010} using plate scales $27.01 \pm 0.05$ mas/pxl for observations in the $L'$-band and $13.25 \pm 0.05$ mas/pxl for shorter wavelengths. 
We do not correct the observing frames here for True North, but instead add a factor of $\pm 0.20$ deg to the uncertainty of the positional angle for each astrometric data point given by NaCo data, which is in line with the True North corrections obtained by \citet{chauvin_deep_2010}. \subsubsection{NOT FastCam} The lucky imaging instrument FastCam, at the Roque de los Muchachos Observatory on La Palma in the Canaries, Spain, is designed to obtain high-resolution images at optical wavelengths from medium-sized ground-based telescopes at the observatory \citep{oscoz_fastcam_2008}. The instrument features a $512 \times 512$ pixel L3CCD from Andor Technology, and a special software package that reduces images in parallel with the data acquisition at the telescope, so that a small fraction of images with minimal atmospheric turbulence can be evaluated in real time. For our observations, the FastCam instrument was mounted at the 2.56-m Nordic Optical Telescope (NOT), and the observations were carried out in August and November of 2016 using the $I$-band at $820\,$nm. The astrometric calibrations were made in a similar way to our AstraLux astrometric calibrations, comparing reference fields of the M15 stellar cluster with images taken by the Hubble Space Telescope. We obtained a plate scale of $30.6 \pm 0.1$ mas/pixel for the 2016.63 epoch in August, and a plate scale of $30.5 \pm 0.1$ mas/pixel for the November epoch of 2016.87. The corresponding corrections for True North were $+3.64 \pm 0.01^\circ$ and $-1.54 \pm 0.1^\circ$, respectively. \subsubsection{SPHERE} The SPHERE data were collected as part of a sub-programme of the SHINE survey \citep{chauvin_shine_2017}. The filler programme from which our observations were taken was devoted to astrometric monitoring of tight visual binaries, many of which had been discovered in the AstraLux survey. The observations were taken with the instrument operating in field-tracking mode without any coronagraph.
Observations were carried out in the IRDIFS-EXT mode, which enabled simultaneous observations with the integral field spectrograph \citep[IFS;][]{claudi_sphere_2008, mesa_performance_2015} and the dual-band imaging sub-instrument IRDIS \citep{dohlen_infra-red_2008,vigan_photometric_2010}. The IFS instrument operated at wavelengths between $0.96 - 1.64\,\mu$m in the Y to H bands, while IRDIS observations were predominantly performed with the K1 ($\lambda_c = 2.110 \pm 0.102 \,\mu$m) and K2 ($\lambda_c = 2.251 \pm 0.109\,\mu$m) bands, as well as the H2 ($\lambda_c = 1.593 \pm 0.052\,\mu$m) and H3 ($\lambda_c = 1.667 \pm 0.054\,\mu$m) bands. All SPHERE data, in both IRDIS and IFS modes, were downloaded and reduced using the SPHERE data centre \citep{pavlov_sphere_2008,delorme_sphere_2017}. The reductions carried out by the automated pipeline included basic corrections for bad pixels, dark current and flat field, as well as corrections for the instrument distortion \citep{maire_first_2016} and rotation. Calibrations of the plate scale and the True North angle were performed in accordance with \citet{maire_sphere_2016}, with typical values of $\approx 12.267$ mas/pixel for IRDIS and $\approx 7.46$ mas/pixel for IFS observations, and a True North of $\approx -1.75^\circ$. The specific plate scale and True North corrections handled by the pipeline are stated for each individual observing data point. From the astrometric measurements described in Section~\ref{sec:astrometry} we also obtained accurate flux ratios for the components of each system. We summarise the resulting contrast magnitudes in the SPHERE dual-band images in Table~\ref{tab:contrast}. Not all observed epochs had separate dual-band images available, and these are thus missing from the table. We did not use the IFS data to perform any spectral analysis of the targets here, some of which have superseding spectral information from the SINFONI observations instead \citep{calissendorff_characterising_2020}.
Instead, we collapsed the data cubes and performed astrometry on a single frame from the IFS data. \begin{table}[b] \renewcommand{\arraystretch}{1.3} \centering \caption{Contrast in magnitudes for dual-band SPHERE observations.} \begin{tabular}{lccccc} \hline \hline Target & Obs. Date & Band & $\Delta$ mag \\ & yyyy-mm-dd & & \\ \hline J0611 & 2017-02-06 & $K1$ & $0.28 \pm 0.03$ \\ J0611 & 2017-02-06 & $K2$ & $0.29 \pm 0.01$ \\ J0611 & 2019-03-05 & $K2$ & $0.28 \pm 0.01$ \\ J0728 & 2016-03-27 & $H2$ & $1.07 \pm 0.02$ \\ J0728 & 2016-03-27 & $H3$ & $1.08 \pm 0.01$ \\ J0916 & 2018-01-27 & $K1$ & $0.44 \pm 0.07$ \\ J0916 & 2018-01-27 & $K2$ & $0.44 \pm 0.04$ \\ J0916 & 2018-02-25 & $K1$ & $0.45 \pm 0.07$ \\ J0916 & 2018-02-25 & $K2$ & $0.42 \pm 0.05$ \\ J0916 & 2019-03-06 & $K1$ & $0.27 \pm 0.05$ \\ J0916 & 2019-03-06 & $K2$ & $0.30 \pm 0.04$ \\ J1014 & 2018-05-06 & $H2$ & $0.04 \pm 0.01$ \\ J1014 & 2018-05-06 & $H3$ & $0.06 \pm 0.01$ \\ J1014 & 2019-03-09 & $K1$ & $0.05 \pm 0.01$ \\ J1014 & 2019-03-09 & $K2$ & $0.06 \pm 0.01$ \\ J1036 & 2018-04-17 & $K2$ & $0.02 \pm 0.01$ \\ J2016 & 2015-09-24 & $K1$ & $0.36 \pm 0.01$ \\ J2016 & 2015-09-24 & $K2$ & $0.32 \pm 0.01$ \\ J2317 & 2015-09-25 & $K1$ & $1.22 \pm 0.01$ \\ J2317 & 2015-09-25 & $K2$ & $1.17 \pm 0.01$ \\ \hline \label{tab:contrast} \end{tabular} \end{table} \subsection{Astrometry}\label{sec:astrometry} Astrometric positions were calculated with the same procedure as described in \citet{calissendorff_discrepancy_2017,calissendorff_spectral_2019,calissendorff_characterising_2020}. Concisely, a grid of $x$ and $y$ positions was constructed, and two brightness-scaled copies of a reference PSF were placed on the grid and sequentially shifted in position to match the observed data.
A residual was then calculated by subtracting the constructed model from the observed data, and the procedure was iterated, scaling the brightnesses and shifting the positions of the model, until a minimum residual was found. The basic workflow of the astrometry extraction procedure is illustrated in Figure~\ref{fig:astrometry}, where we used the J1036BC binary and our AstraLux data from April 2015 as an example. The astrometric extraction was performed in the same manner for all observations and instruments considered here. Generally, we would try to obtain a good PSF reference from the same, or close to the same, epoch as the observation from which we were extracting astrometry. In previous AstraLux observing campaigns, designated PSF references in the form of single stars were procured \citep{janson_astralux_2012, janson_noopsorta_2014}. However, that was not the case for later epochs. Instead, we identified which binaries or higher-order hierarchical systems had large relative separations or isolated components, which we then used as PSF references. For AstraLux, FastCam and SPHERE we had access to the primary of the triple 2MASS J10364483+1521394 system for several epochs, which was close to an ideal PSF reference for our purposes, given its M-dwarf spectral type, similar to the rest of our target sample, and its availability for most of our observed epochs. The primary component of the system has a projected separation of $\approx 1''$ to the outer binary pair and can be viewed as a single-star proxy in this context. Another benefit of using the primary of J1036 was that whatever aberrations afflicted the observations, altering the PSFs of the binary, would also be seen in the reference PSF, so that they could be accounted for. Nevertheless, to increase the statistical certainty of the astrometric measurements, we typically used between 3 and 10 different reference PSFs, depending on target quality and the instrument applied.
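The two-PSF residual minimisation at the core of this astrometric extraction can be sketched as follows. This is a simplified illustration with hypothetical helper names, restricted to integer-pixel shifts over a small search box (the actual procedure iterates to sub-pixel precision); the flux scalings are solved by linear least squares at each trial position.

```python
import numpy as np
from itertools import product

def shift_psf(psf, dy, dx):
    """Shift a PSF image by whole pixels (wrap-around at the edges)."""
    return np.roll(np.roll(psf, dy, axis=0), dx, axis=1)

def fit_binary_psf(image, psf, guess1, guess2, search=2):
    """Fit two scaled, shifted copies of a centred reference PSF to a
    binary image by brute-force grid search around two position guesses.

    Returns the best-fit (y, x) positions and flux scalings (a, b).
    """
    ny, nx = image.shape
    cy, cx = ny // 2, nx // 2  # reference PSF assumed centred here
    best = None
    offsets = range(-search, search + 1)
    for dy1, dx1, dy2, dx2 in product(offsets, repeat=4):
        p1 = shift_psf(psf, guess1[0] - cy + dy1, guess1[1] - cx + dx1)
        p2 = shift_psf(psf, guess2[0] - cy + dy2, guess2[1] - cx + dx2)
        # Solve for the two flux scalings by linear least squares.
        A = np.column_stack([p1.ravel(), p2.ravel()])
        coeffs, *_ = np.linalg.lstsq(A, image.ravel(), rcond=None)
        resid = np.sum((image.ravel() - A @ coeffs) ** 2)
        if best is None or resid < best[0]:
            best = (resid,
                    (guess1[0] + dy1, guess1[1] + dx1),
                    (guess2[0] + dy2, guess2[1] + dx2),
                    coeffs[0], coeffs[1])
    return best[1], best[2], best[3], best[4]
```

The position pair minimising the residual sum of squares yields the separation and position angle; the flux ratio `b / a` gives the contrast.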
We then calculated the mean separation and positional angles, using the standard deviation as the error, which we added in quadrature with the instrumental errors (plate scale and true-north errors) to obtain the final uncertainty. We did not try to mix PSF references from observations taken with different instruments or settings. Due to the larger field of view and smaller plate scales of the VLT instruments NaCo and SPHERE, we could easily isolate single components and use them as PSF references. For the AstraLux observations we had the advantage of a plethora of multiple systems to choose from in the AstraLux Multiplicity Survey. The FastCam observations, however, did not have quite the same luxury, as the coarser plate scale made it more cumbersome to isolate single components. As such, we mainly used the primaries from the J0111 observations taken in August and the J1036 observations taken in November as PSF references. We also included three additional references for our FastCam astrometry: J0103, J0916 and J1641, which are known tightly bound binaries but appeared as unresolved single sources in the FastCam observations. Since SINFONI was not calibrated for astrometry, we added an extra uncertainty term. We checked the consistency between the SINFONI astrometry presented in \citet{calissendorff_characterising_2020} and our SPHERE astrometry for the binaries which were observed at similar epochs, and found no large discrepancy between the two. We therefore included the SINFONI data points in the fitting procedure when they were believed to aid in constraining orbital parameters. \subsection{Radial velocities} The orbital motion of the two components in the binaries makes them subject to Doppler shifts, which can be measured and used to constrain the orbits further.
We searched the literature and uncovered unresolved radial velocity (RV) observations for 16 binaries in our sample, which we included in our MCMC fitting to aid the orbital fitting procedure. However, the RV data in the literature mostly comprise unresolved measurements in which the lines from the two components in the binary are blended together. As the strengths of these lines depend on the spectral template and fitting method used to derive the RV measurement, which differ between authors and instruments, most RV data were deemed unusable for our orbital fitting. Hence, only the RV data for seven systems were used in the final orbital fits, listed in Table~\ref{tab:RV} and shown in their respective fits in Figure~\ref{fig:RV}; targets with too few RV observations or an insufficient baseline were omitted. For the seven instances where RV data were available and useful, we included two additional parameters in the MCMC code, which evaluated the probability density functions (PDFs) of the offset velocity $v_0$ and RV amplitude $K$. In the adopted formalism, which assumes a Keplerian orbit, the radial velocity can be described as \begin{equation} v_{\rm rad} = K \frac{\cos(\theta+\omega) + e\cos(\omega)}{\sqrt{1 - e^2}} +v_0, \end{equation} whose amplitude for pure SB1 binaries is deduced from the mass fraction of the secondary component $m_{\rm B}/m_{\rm tot}$ as \begin{equation} K = \frac{2\pi}{P} \frac{m_{\rm B}}{m_{\rm tot}}a \sin i. \end{equation} In principle, the RV data allow the fractional mass, and thereby individual masses for the binary components, to be derived. However, this holds only for pure SB1 binaries, which is a questionable assumption given the relatively high flux ratios (and mass ratios) in our target sample. Therefore, we applied the same method as in \citet{rodet_dynamical_2018}, proposed by \citet{montet_dynamical_2015}, and took the sum of the two flux-weighted individual RVs as the measured RV.
The orbital fit could then fit the RV amplitude as \begin{eqnarray} K &=& (1-F)K_{\rm A}-F\,K_{\rm B} \\ &=& \frac{2\pi}{P} a\sin i\left( (1-F)\frac{m_{\rm B}}{m_{\rm tot}} - F\frac{m_{\rm A}}{m_{\rm tot}}\right) \end{eqnarray} with $F = L^{\rm V}_{\rm B}/(L_{\rm A}^{\rm V} + L_{\rm B}^{\rm V})$ being the fractional flux, $L_{\rm A}^{\rm V}$ and $L_{\rm B}^{\rm V}$ the luminosities in the visible spectrum for each component, and $K_{\rm A}$ and $K_{\rm B}$ the respective RV amplitudes. The majority of the RV data were obtained with the Fiber-fed Extended Range Optical Spectrograph (FEROS) instrument at the ESO-2.20m telescope \citep{FEROS}, with a wavelength coverage of $\lambda ~3500 - 9200\,$Å and resolving power of $R = 48,000$. We did not perform the RV observations or any reanalysis of the data here, using only the values stated in the literature cited in Table~\ref{tab:RV}. We estimated the flux ratios from the magnitude difference in the $i^{\prime}$-band from the AstraLux observations, as the $i^{\prime}$ band spans $\lambda \sim 6700 - 8400\,$Å, the wavelength range most similar to that of the FEROS RV data. For the J2016 system we did not possess any flux ratio in the $i^{\prime}$-band and instead applied the flux ratio in the $I$-band from the FastCam/NOT observations, which has a reference wavelength of $\lambda_{\rm ref} \sim 8200\,$Å. The flux ratios used in our calculations are shown in Table~\ref{tab:RVflux}. The reported uncertainties of the flux ratios are likely underestimated, owing to the different wavelength coverage of the photometric bands and that of FEROS. This approach of weighting RV signals by the flux ratio proved to have some limitations: some estimated flux ratios resulted in fractional masses implying a higher dynamical mass for the secondary component B than for the primary component A.
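The flux-weighted formalism above can be sketched as follows. This is our own minimal illustration of the equations in the text, with units and variable names chosen by us (angles in radians, $a\sin i$ in AU, $P$ in years, so $K$ comes out in AU/yr, where 1 AU/yr corresponds to about 4.74 km/s):

```python
import numpy as np

def flux_weighted_K(P_yr, a_sini_au, mB_frac, F):
    """Blended RV semi-amplitude
    K = (2*pi/P) * a*sin(i) * ((1-F) m_B/m_tot - F m_A/m_tot)."""
    mA_frac = 1.0 - mB_frac
    return (2.0 * np.pi / P_yr) * a_sini_au * ((1.0 - F) * mB_frac - F * mA_frac)

def v_rad(theta, omega, e, K, v0):
    """Keplerian radial velocity at true anomaly theta, per the formalism above."""
    return K * (np.cos(theta + omega) + e * np.cos(omega)) / np.sqrt(1.0 - e**2) + v0
```

Note that setting $F = 0$ recovers the pure SB1 amplitude, while $F$ equal to the secondary's mass fraction makes the blended signal vanish, which is one way to see why the method degrades for near-equal flux and mass ratios.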
We kept the results we obtained from our calculations, but highlight the caveat that the method is not fully reliable for our target sample and mainly serves as a first-order approach. Disentangling the two lines for more precise estimates requires more refined methods, for example tracing back individual RVs \citep[e.g.][]{czekala_disentangling_2017}. \begin{table}[t] \centering \caption{Radial velocity data} \begin{tabular}{lccc} \hline \hline Target & MJD & Instrument & RV [km$/$s] \\ \hline J0437 & 53707.288 & FEROS & $20.40 \pm 0.09$ \\ & 55203.146 & FEROS & $24.25 \pm 0.09$ \\ & 56912.331 & FEROS & $23.04 \pm 0.09$ \\ & 56979.236 & FEROS & $22.89 \pm 0.08$ \\ & 57059.095 & FEROS & $22.89 \pm 0.08$ \\ & 57290.319 & FEROS & $22.21 \pm 0.08$ \\ & 57291.257 & FEROS & $22.46 \pm 0.10$ \\[.3em] J0459 & 55942.000 & MIKE & $43.33 \pm 0.21$ \\ & 56912.344 & FEROS & $43.17 \pm 0.21$ \\ & 56980.125 & FEROS & $43.03 \pm 0.17$ \\ & 57060.127 & FEROS & $43.00 \pm 0.19$ \\ & 57291.272 & FEROS & $42.80 \pm 0.19$ \\[.3em] J0532 & 55526.282 & FEROS & $24.26 \pm 0.13$ \\ & 55615.041 & FEROS & $24.24 \pm 0.12$ \\ & 56164.407 & FEROS & $24.80 \pm 0.14$ \\ & 56645.000 & DuPont& $25.58 \pm 0.65$ \\ & 56980.258 & FEROS & $24.82 \pm 0.14$ \\ & 57059.134 & FEROS & $25.23 \pm 0.13$ \\[.3em] J0613 & 55522.312 & FEROS & $21.28 \pm 0.21 $ \\ & 56168.403 & FEROS & $22.07 \pm 0.25 $ \\ & 56402.000 & CRIRES &$22.90 \pm 0.20 $ \\ & 56700.142 & FEROS & $22.90 \pm 0.19 $ \\ & 56980.335 & FEROS & $22.91 \pm 0.23 $ \\ & 57058.209 & FEROS & $23.11 \pm 0.23 $ \\[.3em] J0728 & 53421.159 & FEROS & $29.93 \pm 0.10$ \\ & 53423.153 & FEROS & $30.09 \pm 0.10$ \\ & 54168.043 & FEROS & $28.31 \pm 0.10$ \\ & 55526.355 & FEROS & $27.74 \pm 0.11$ \\ & 56173.407 & FEROS & $28.08 \pm 0.10$ \\ & 56980.349 & FEROS & $28.91 \pm 0.12$ \\ & 57058.295 & FEROS & $28.74 \pm 0.12$ \\ & 57166.001 & FEROS & $28.90 \pm 0.13$ \\ & 57853.031 & FEROS & $28.15 \pm 0.08$ \\ & 57855.144 & FEROS & $28.29 \pm 0.10$ \\[.3em] 
J0916 & 56746.000 & DuPont& $22.85 \pm 0.70$ \\ & 56984.343 & FEROS & $21.21 \pm 0.12$ \\ & 57059.297 & FEROS & $20.43 \pm 0.15$ \\ & 57060.209 & FEROS & $20.54 \pm 0.15$ \\ & 57166.015 & FEROS & $19.66 \pm 0.16$ \\[.3em] J2317 & 54995.000 & DuPont &$-1.04 \pm 0.84$ \\ & 56432.000 & ESPaDOnS&$4.40 \pm 0.20$ \\ & 56912.207 & FEROS & $-0.06 \pm 0.14$ \\ & 56979.091 & FEROS & $-0.79 \pm 0.13$ \\ \hline \label{tab:RV} \end{tabular} \\ {\small FEROS ($3500-9200\,$Å) data from \citet{durkan_radial_2018}; DuPont ($3700-7000\,$Å), ESPaDOnS ($3700-10500\,$Å) and MIKE ($4900-10000\,$ Å) from \citet{schneider_acronym_2019}; CRIRES ($15306-15688\,$Å) from \citet{malo_banyan_2014}. } \end{table} \begin{table}[t] \centering \caption{Flux ratios for radial velocities} \begin{tabular}{lc} \hline \hline FEROS & $3500-9200\,$Å\\ AstraLux $i^{\prime}$ & $6689-8389\,$Å \\ \hline \hline Target & $i^{\prime}$-band flux ratio \\ \hline J0437 & $0.03 \pm 0.01$ \\ % J0459 & $0.19 \pm 0.01$ \\ % J0532 & $0.24 \pm 0.02$ \\ % J0613 & $0.20 \pm 0.01$ \\ % J0728 & $0.21 \pm 0.01$ \\ % J0916 & $0.30 \pm 0.04$ \\ % J2317 & $0.17 \pm 0.02$ \\ % \hline \label{tab:RVflux} \end{tabular} \\ \end{table} \section{Orbital fitting} \label{sec:orbs} For the orbital fitting procedure we fit the relative orbit of the fainter component in the binary, typically denoted as B here, with respect to the brighter A component. We assumed Keplerian orbits projected on the plane of the sky, so that in the chosen formalism the astrometric position of the companion B could be written as: \begin{eqnarray} x &=& \Delta {\rm Dec} = r (\cos(\omega+\theta)\cos\Omega-\sin(\omega+\theta)\cos i \sin \Omega)\\ y &=& \Delta {\rm RA} = r (\cos(\omega+\theta)\sin \Omega + \sin(\omega+\theta)\cos i \cos \Omega) \end{eqnarray} with $\omega$ being the argument of periastron, $\theta$ the true anomaly, $\Omega$ the longitude of the ascending node and $i$ the inclination. 
Here, $r = a(1 - e^2)/(1 + e\cos\theta)$ is the radius, with $a$ being the semi-major axis and $e$ the eccentricity. The orbital fits were then performed using the observed astrometric measurements to derive the most likely values of the seven orbital parameters: $a$, $e$, $\omega$, $\Omega$, $i$, the period $P$ and the time of periastron $t_p$. In order to derive and constrain orbital parameters for our target binaries we applied two complementary approaches and codes. First, we applied a grid search over the seven orbital parameters, the same as described in \citet{calissendorff_discrepancy_2017} and \citet{kohler_orbits_2008,kohler_orbits_2012,kohler_orbits_2013,kohler_orbits_2016}. The procedure determined Thiele-Innes elements for points in a grid by solving a linear fit to the astrometric data utilising singular value decomposition. The grid search was then repeated until a minimum was found, and refined with a smaller grid step size. The best-fitting parameters were then determined by comparing the reduced $\chi^2$ of the orbit obtained from the fitted parameters against the relative astrometry of the binary components, as $$ \chi^2_\nu = \frac{\chi^2}{2 N_{\rm obs} - 7} $$ with $$ \chi^2 = \sum_i \left( \left(\frac{s_{\rm obs,\,i} - s_{\rm mod,\,i}}{\sigma_{s,{\rm i}}}\right)^2 + \left(\frac{{\rm PA}_{\rm obs,\,i} - {\rm PA}_{\rm mod,\,i}}{\sigma_{\rm PA,\,i}}\right)^2 \right), $$ where 7 is the number of orbital parameters fitted (9 parameters for systems where RV measurements were applied), $s$ and PA are the separation and positional angle respectively, and $\sigma$ their uncertainties. The algorithm is based on a Levenberg-Marquardt $\chi^2$ minimisation \citep{press_numerical_1992}, and relies heavily on the starting values, which may bias certain orbital parameters given insufficient orbital coverage or poor initial conditions. 
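The projected model position entering the fit can be computed directly from the equations above; the following is a minimal sketch under our own conventions (a Newton solver for Kepler's equation, angles in radians), not the actual grid-search code:

```python
import numpy as np

def true_anomaly(t, P, tp, e, tol=1e-12):
    """Solve Kepler's equation M = E - e*sin(E) by Newton iteration,
    then convert the eccentric anomaly E to the true anomaly theta."""
    M = 2.0 * np.pi * (((t - tp) / P) % 1.0)
    E = M
    for _ in range(100):
        dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return 2.0 * np.arctan2(np.sqrt(1.0 + e) * np.sin(E / 2.0),
                            np.sqrt(1.0 - e) * np.cos(E / 2.0))

def sky_position(t, a, e, P, tp, omega, Omega, inc):
    """Relative position (dDec, dRA) of the secondary on the plane of the
    sky, following the x, y equations given in the text."""
    th = true_anomaly(t, P, tp, e)
    r = a * (1.0 - e**2) / (1.0 + e * np.cos(th))
    x = r * (np.cos(omega + th) * np.cos(Omega)
             - np.sin(omega + th) * np.cos(inc) * np.sin(Omega))
    y = r * (np.cos(omega + th) * np.sin(Omega)
             + np.sin(omega + th) * np.cos(inc) * np.cos(Omega))
    return x, y
```

A grid search or MCMC then only needs to evaluate this model at the observed epochs and accumulate the $\chi^2$ terms defined above.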
We stress that a low $\chi^2_\nu$ value is not by itself a good indicator of a good orbital fit; it shows only that the measured astrometric data points are well fitted by the calculated orbit. Conversely, a high $\chi^2_\nu$ value does not inevitably mean a bad orbital fit, but could indicate that the uncertainties of the measured astrometric data points are underestimated, and therefore also the uncertainties of the derived orbital parameters and resulting dynamical mass estimate. To address this underestimation, we scaled the astrometric errors of orbits in the grid by $\sqrt{\chi^2_\nu}$ and refitted the orbits, ensuring that $\chi^2_\nu$ was equal to 1. As the second approach for constraining the orbital parameters, we employed a Markov chain Monte Carlo (MCMC) Bayesian analysis technique \citep{ford_quantifying_2005,ford_improving_2006}, the same as used in \citet{rodet_dynamical_2018}. From the MCMC code we obtained the PDFs of the parameters. A sample of 500,000 orbits was randomly drawn following the convergence criterion of the applied Gelman-Rubin statistic in the fitting. The sample was assumed to be representative of the PDFs of the orbital elements given the initial priors, which were chosen to be uniform in $p = (\ln a, \ln P, e, \cos i, \Omega+\omega, \omega-\Omega, t_p)$. Any orbital solutions with the couples $(\omega, \Omega)$ and $(\omega+\pi, \Omega+\pi)$ yield the same astrometry, and thus the algorithm fits $\Omega+\omega$ and $\omega-\Omega$ to avoid this degeneracy. Nevertheless, owing to how the $(\Omega,\,\omega)$ pair is defined, some degeneracy remains, and the MCMC occasionally found two families of solutions for some orbits, which the routine interprets as a $\pm 180\,^{\circ}$ ambiguity. The actual uncertainty is more centred upon the probability peak of each family of solutions.
Therefore, for systems which lack RV data and are subject to this degeneracy, we cut $90^{\circ}$ around the most probable peak and computed the error over that single interval, allowing for a well-defined uncertainty when the distribution is clearly peaked. The introduction of RV data breaks the degeneracy of the ($\Omega, \omega$) couple, so that unique values for these variables could be derived for the systems with sufficient RV data. The observations with associated astrometric measurements gathered in this work are listed at the end of the paper in Appendix~\ref{appendixA}, where $s$ is the separation between the binary components in mas and PA the positional angle in degrees. In the table we also list the deviations between the grid-method orbital fit and the observations as $|\Delta s|/\sigma_s$ and $|\Delta {\rm PA}|/\sigma_{\rm PA}$, calculated as $\sqrt{({\rm obs} - {\rm fit})^2}/ \sigma_{\rm obs}$. The $\chi^2$ is then the sum of the squares of these deviations. The resulting orbits from the grid-search orbital fitting procedure are shown in Figures~\ref{fig:orb_J0008} - \ref{fig:orb_J2349}, together with their associated best-fit parameters and the $68\,\%$ confidence interval around the probability peak of the MCMC. The astrometric measurements are included in the figures as black dots, with grey ellipses representing their associated uncertainty at the 1-$\sigma$ level before scaling, and blue lines connecting them to their expected positions from the fit. Most epochs are labelled with their date of observation, but some plots omit explicit dates to avoid cluttering the figure. The semi-major axis $a$ and total system mass $M_{\rm s}$ are listed in AU and solar masses, as well as in units of mas and mas$^3$/yr$^2$, in order to give the values without the distance measurements and their uncertainties incorporated. 
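The conversion between the angular and physical quantities follows Kepler's third law; the helpers below are our own illustrative sketch, including the first-order error propagation that makes the parallax, entering cubed, typically the dominant term in the mass error budget:

```python
import math

def total_mass_msun(a_mas, P_yr, parallax_mas):
    """Kepler's third law: M_tot [M_sun] = (a / parallax)^3 / P^2,
    with the angular semi-major axis and parallax in mas, period in years."""
    a_au = a_mas / parallax_mas  # angular size over parallax gives AU
    return a_au**3 / P_yr**2

def mass_rel_error(a, sig_a, P, sig_P, plx, sig_plx):
    """First-order relative mass uncertainty; semi-major axis and
    parallax both enter with a factor of three, the period with two."""
    return math.sqrt((3.0 * sig_a / a)**2
                     + (2.0 * sig_P / P)**2
                     + (3.0 * sig_plx / plx)**2)
```

A quick sanity check is the Sun-Earth system: a 1 AU orbit (100 mas at 10 pc) with a 1 yr period returns one solar mass.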
The $(\omega, \Omega)$ couples whose confidence intervals are defined with a $\pm 180^{\circ}$ degeneracy, for systems lacking RV measurements, are marked with an asterisk $(^*)$. The results from the MCMC, with associated probability peaks and orbits, are given in Appendix~\ref{appendixB2} for the previously unpublished constraints on the J0613, J0916 and J232617 systems. We noted that the minimised $\chi_\nu^2$ value alone was not sufficient to objectively establish the robustness of a given orbital fit. Instead, we adopted a custom-made grading system, loosely based on the orbital-fit grading criteria of \citet{worley_fourth_1983} and \citet{hartkopf_2001_2001}, in order to quantitatively assess how well constrained each orbital fit was. A direct comparison to the grading criteria of \citet{worley_fourth_1983} could not be made, as, for example, they do not account for the additional RV data we possessed for some of our systems, nor did we weight the observations or literature epochs when estimating our grades. For each orbit we calculated a value from a linear combination of the largest gap in positional angle and in phase\footnote{The phase coverage is calculated from the time of periastron and the period of the orbit, and is used to better represent orbits with high inclination, for which the positional angle does not change much.} coverage over the observed epochs of the orbit, divided by the number of orbital revolutions and observed epochs. The values for all systems were then scaled between 1 and 5, from lowest (best, J0008) to highest (worst, J0611), effectively creating five equally sized bins. The details of the calculations are outlined in Appendix~\ref{appendixC}, with the final grades presented in Table~\ref{tab:grades}. 
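The final binning of the combined coverage values onto grades 1 to 5 might look like the sketch below. The value computation itself is detailed in Appendix~\ref{appendixC}; this helper is our own illustration of the linear scaling into five equally sized bins only:

```python
def assign_grades(values):
    """Scale positive quality values linearly onto grades 1-5,
    lowest (best) to highest (worst), in five equally sized bins.
    Assumes at least two distinct values."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / 5.0
    # the maximum value falls on the upper bin edge, hence the clamp to 5
    return [min(5, 1 + int((v - lo) / width)) for v in values]
```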
Because the grid-search method included more astrometric data points, and to be consistent with the method used for the main results, we applied this grading to the orbits obtained from the grid-search approach only, adopting the following grading scale: reliable (1), the orbit is well constrained; good (2), only minor changes to the orbital elements are expected; tentative (3), no major changes are expected for the orbit; preliminary (4), substantial revisions to the orbit are likely; undetermined (5), the orbital elements are not reliable, nor necessarily even approximately correct. One of the important factors allowing for well-constrained dynamical masses in our survey is the updated parallaxes from the Gaia mission, as the distance to the system is typically the main contribution to the uncertainty. With that in mind, it is worth noting that the current Gaia data releases are not yet optimised for handling binaries that are not photometrically resolved, and further improvements are expected in future releases. The impact of the Gaia parallax measurements is explained more thoroughly for the individual systems in Appendix~\ref{appendixD}. \section{Results and discussion}\label{sec:discussion} We were able to derive individual dynamical masses for the binary components of seven systems in our target sample. Furthermore, we were able to procure luminosities from the resolved observations and age estimates from the adopted YMG memberships of the respective systems, which, together with the dynamical masses, we compared to pre-main-sequence (PMS) evolutionary models, probing their accuracy. Although many evolutionary models exist in the literature, here we adopted the evolutionary models of \citet[hereafter BHAC15]{baraffe_new_2015}, as they are well suited to lower-mass stars and younger ages, reflecting our target sample. 
The dynamical mass estimates for the binaries with individual masses are plotted against stellar isochrones from the BHAC15 models in the mass-luminosity diagram of Figure~\ref{fig:isochrones}. The individual mass data points, with the distances to the respective systems, corresponding absolute magnitudes, approximate associated ages, and theoretical masses estimated from the BHAC15 models, are listed in Table~\ref{tab:mass}. The age ranges listed in Table~\ref{tab:mass} are the ages of the isochrones used to calculate the theoretical mass, not the age range of the respective YMG given in Table~\ref{tab:targets}. The absolute magnitudes were calculated from the unresolved 2MASS $K$-band magnitudes of the systems, together with the $K$-band flux ratios from our SPHERE observations or from previous SINFONI observations \citep{calissendorff_characterising_2020}. We therefore adopt the $K^{\prime}$ naming convention for the absolute magnitude in the fourth column of Table~\ref{tab:mass} in order to highlight this difference. \begin{table*}[b] \renewcommand{\arraystretch}{1.3} % \centering \caption{Dynamical masses for individual binary components} \begin{tabular}{lccccc} \hline \hline Target & Dynamical Mass & Distance & Abs. Mag. 
& Age & Theoretical Mass \\ & $[M_\odot]$ & [pc] & $[K^{\prime}]$ & [Myrs] & $[M_\odot]$ \\ \hline J0437A & $0.52 \pm 0.02$ & $27.77 \pm 0.37$ & $6.79 \pm 0.04$ & 20 - 30 & 0.60 - 0.69 \\ J0437B & $0.40^{+0.03}_{-0.02}$ & $27.77 \pm 0.37$ & $7.72 \pm 0.03$ & 20 - 30 & 0.32 - 0.41 \\ J0459A & $0.19^{+0.03}_{-0.11}$ & $44.93 \pm 1.26$ & $9.22 \pm 0.06$ & $>500$ & 0.47 - 0.49\\ J0459B & $0.07^{+0.09}_{-0.01}$ & $44.93 \pm 1.26$ & $10.43 \pm 0.03$ & $>500$ & 0.28 - 0.29\\ J0532A & $0.59^{+0.09}_{-0.08}$ & $36.74 \pm 0.78$ & $7.50 \pm 0.05$ & 20 - 30 & 0.56 - 0.66 \\ J0532B & $0.42^{+0.08}_{-0.07}$ & $36.74 \pm 0.78$ & $8.12 \pm 0.05$ & 20 - 30 & 0.37 - 0.48\\ J0613A & $0.36^{+0.29}_{-0.09}$ & $16.84 \pm 0.12$ & $7.85 \pm 0.03$ & 30 - 50 & 0.17 - 0.23 \\ J0613B & $0.21^{+0.12}_{-0.04}$ & $16.84 \pm 0.12$ & $9.02 \pm 0.03$ & 30 - 50 & 0.07 - 0.10 \\ J0728A & $0.52 \pm 0.08$ & $15.59 \pm 0.11$ & $6.09 \pm 0.01$ & 120 - 200 & 0.60 - 0.61\\ J0728B & $0.55 \pm 0.09$ & $15.59 \pm 0.11$ & $7.06 \pm 0.03$ & 120 - 200 & 0.43 - 0.46\\ J1036B & $0.17 \pm 0.01$ & $20.01 \pm 0.03$ & $9.46 \pm 0.01$ & 300 - 500 & 0.19 \\ J1036C & $0.17 \pm 0.01$ & $20.01 \pm 0.03$ & $9.50 \pm 0.01$ & 300 - 500 & 0.18 - 0.19 \\ J2317A & $0.34 \pm 0.02$ & $16.46 \pm 0.21$ & $7.46 \pm 0.03$ & $>500$ & 0.41 \\ J2317B & $0.27 \pm 0.02$ & $16.46 \pm 0.21$ & $8.74 \pm 0.02$ & $>500$ & 0.22 - 0.23\\ \hline \label{tab:mass} \end{tabular}\\ {\small The age ranges listed were used to calculate theoretical masses in the last column predicted by the models for the given absolute magnitude in 2MASS $K$-band. The magnitudes listed in the fourth column are derived from the unresolved 2MASS $K$-band magnitudes of the system and the flux ratios in the SPHERE or SINFONI $K$-bands, and the name convention of $K^{\prime}$ is applied to mark this difference. For J0437 we used the flux ratio from the Keck/NIRC2 $K-$band observations from \citet{montet_dynamical_2015}. 
} \end{table*} Overall we found good consistency between the dynamical mass estimates and the theoretical masses from the models in Figure~\ref{fig:isochrones}. Most systems have dynamical masses that correspond well with the ages of their respective YMGs according to the isochrone tracks. We found two outliers from the isochrone predictions in the mass-magnitude diagram, J0459 and J0532. The space velocities of J0459 do not suggest that it belongs to any known YMG or association, although the binary components are placed amongst the younger isochrones, at around $\approx 40$ Myrs, in the mass-magnitude diagram of Figure~\ref{fig:isochrones}. Nevertheless, the orbit of the system is not constrained to such a degree that stringent dynamical masses could be procured, and it is likely that our estimate is low. A higher mass would bring the system more in line with an older age. For J0532 we found a tentative orbital fit, displaying some degeneracy in the period-separation space, which caused large uncertainties on the dynamical mass. Even with the large formal error bars, we found a higher dynamical mass than expected from the model isochrones for the given young age. This discrepancy is likely attributable to the uncertain method of using the flux ratio of the RV signals to derive the mass ratio, but could also potentially be explained by the individual components being unresolved binaries themselves, causing the observed sources to appear underluminous for their mass. Nevertheless, such unseen companions would have to be in close-in orbits of less than one AU to avoid detection in our SPHERE observations. The RV data for the J0916 system aided in constraining the orbit and were consistent with a Keplerian orbit. However, the flux-weighted approximation for inferring the mass fraction did not seem to apply to this particular system, for which we obtained a mass fraction $\gg0.5$ for the fainter companion. 
As such, we omitted the system from the mass-magnitude diagram altogether. The flux-weighted approach also seemed dubious in other instances, including J0437 and, to some extent, J0728, where the mass fraction suggested a slightly higher companion mass than primary mass, though well within the $68\,\%$ confidence interval and consistent with the results of \citet{rodet_dynamical_2018} as well as with the isochrones, confirming the inferred age range of 120-200 Myrs for the system's membership of the AB Doradus moving group. We did not find the same mass discrepancy as \citet{rodet_dynamical_2018}; however, our method of testing dynamical against theoretical mass only used the $K$-band and the age range $120-200$ Myrs for a single theoretical model. For a younger age of 50 Myrs we obtain the same discrepancy of $\approx 15\,\%$ missing mass, with the models underpredicting the total mass of the system compared to the dynamical mass. \citet{rodet_dynamical_2018} explored the possibility of an unseen companion explaining the missing mass, which could remain hidden and stable if it were closer than 0.1 AU to one of the other components. From our flux-weighted RV analysis of the system we obtain a slightly higher mass fraction for the secondary component than for the primary, which is also something that would be expected from a higher-order hierarchical system such as a triplet. Resolved spectroscopic data could potentially discover such a companion. No RV data were available for the J1036 binary that could aid the orbital fit. Regardless, since the system is a known triplet, we could take advantage of the orbit of the outer binary pair around their common centre of mass along their common orbit around the primary A component of the system. This was previously done in \citet{calissendorff_discrepancy_2017}, and we adopt their result of the outer binary being of equal mass. 
Given the new parallax measurements for the system provided by Gaia EDR3, the uncertainty in the distance to the system was reduced. In turn, our dynamical mass estimate for the system is the most robust in our sample. The new dynamical mass from the improved orbit and distance is also well aligned with the prediction from the theoretical model isochrones, thereby resolving the former discrepancy reported by \citet{calissendorff_discrepancy_2017}. The recent efforts of the Gaia mission made it possible to provide updated dynamical masses and absolute magnitudes for some of these systems for which previous distance measurements had remained somewhat uncertain. In principle, Gaia astrometry could also help to further constrain the orbits of binaries by exploiting the instantaneous acceleration in proper motions, for example through comparison with Hipparcos measurements \citep[e.g.][]{calissendorff_improving_2018, brandt_hipparcos-gaia_2018, brandt_precise_2019}. However, only one system in our sample, J0728, exists in both the Gaia and Hipparcos catalogues, and the baseline of 24.5 years between Hipparcos and Gaia far exceeds the orbital period of the system of $\approx 7.8$ years, causing some degeneracy when including proper-motion acceleration in the orbital fit. Nonetheless, future Gaia data releases may provide more advantageous information for binary systems with tentative orbital constraints that are unresolved by the space telescope itself. At the same time, one of the challenges for Gaia is to properly identify the photo-centre of unresolved binaries \citep{lindegren_gaia_2018}, which ground-based astrometry could remedy to some extent. Some systems exhibit a discrepancy in mass when compared to the evolutionary models. 
A portion of this may be attributed to undetected close-in companions; given our sample of 20 M-dwarf binaries and the expected multiplicity frequency of $\approx 25\,\%$ and companion rate $\geq 30\,\%$ \citep{winters_solar_2019}, it is likely that some of these systems contain as yet unknown companions. Furthermore, systems which contain more mass, and thereby have faster orbits, are of notable interest for orbital monitoring programmes, as their orbits can be mapped out in a relatively shorter period of time, and the sample could be afflicted by a selection bias as a result. Observations with, for example, interferometry, which can achieve smaller angular resolutions, could place more stringent constraints on the parameter space for potential visual companions, while spectroscopic monitoring could rule out spectroscopic companions at very small separations. \section{Summary and conclusions}\label{sec:summary} We considered 20 systems of astrometric M-dwarf binaries, for which we present over 75 previously unpublished astrometric data points from AstraLux Norte/CAHA, AstraLux Sur/NTT, FastCam/NOT, NaCo/VLT and SPHERE/VLT. The new astrometric data allowed us to constrain Keplerian orbits and derive dynamical masses for the binaries. We constructed our own relative scale for grading the orbital fits, in an attempt to obtain a more objective assessment of the results, with grades improving with positional angle and phase coverage as well as with the number of revolutions made and epochs observed. Given our modest sample of 20 binaries, our grading criteria are likely to be skewed and not directly comparable to the orbital grading system demonstrated on the $\gtrsim 900$ binaries by \citet{worley_fourth_1983}. The grades provided some quantitative measurement of the orbital fit as a whole, but did not always reflect the uncertainties of the individual orbital parameters or the dynamical mass estimate. 
For example, the orbit of J0459 was labelled grade 4 (preliminary) but showed low errors for the dynamical mass. Thus, the grades should be interpreted as how much the orbital fit can be trusted, and whether we expect small or large improvements to the orbit, not how stringent the uncertainties are. We summarise the orbital period, semi-major axis, and total dynamical and theoretical masses from both orbital fitting methods, along with the grades for each orbit, in Appendix~\ref{appendixE}. This is the first time orbital constraints have been attempted and reported for 14 of our targeted systems. We found good orbital constraints for three systems that had not previously been published: J0613, J0916 and J232617. The PDFs of the orbital parameters obtained from the MCMC fitting procedure for these three systems, together with their associated orbit predictions and mass distributions, are presented in Appendix~\ref{appendixB2}. Six systems, J0008, J0437, J0532, J0728, J1036 and J2317, all had previous orbital parameters, and our additional data mainly confirmed the earlier results, with slight improvements for J0437 and J0728 from the updated Gaia parallaxes. J1036 and J2317 previously had more uncertain dynamical mass estimates owing to the lack of good distance measurements, which we have redressed in this work. The improved dynamical mass estimates for J1036 and J2317 now stand among the most robust in our sample. The remaining 11 systems still require further monitoring to provide reliable dynamical masses, and our new data pave the way for future orbital constraints for these systems. Out of the 20 binaries in our sample, the four systems J0008, J0728, J1036 and J2317 received reliable orbital constraints. We determined tentative orbits for six systems, J0111, J0245, J0907, J0916, J1014 and J232611, which are expected to yield more robust orbital parameter constraints in the near future if additional astrometric measurements can be procured. 
Six systems, J0459, J0532, J2016, J2137, J232611 and J2349, only had their orbits constrained to a preliminary level, which may indicate an approximate period, while most of the orbital parameters are still too uncertain to place robust dynamical masses. For the J0225 and J0611 systems the orbits were completely undetermined. We searched the literature for RV data for the target sample presented here, finding RV measurements consistent with Keplerian orbits for seven of our binaries. Since the binaries are not pure single-lined (SB1) spectroscopic binaries, we adopted a flux-weighted method to infer individual dynamical masses. The method proved unreliable for the J0916 binary, and dubious at best for J0437. We therefore argue that only their total dynamical masses are reliable, and that a different approach is necessary to disentangle the individual lines and masses. When we compared the derived individual dynamical masses with theoretical masses from the BHAC15 model isochrones, we found an overall good consistency between the empirical and theoretical masses. The largest discrepancy was found for J0459, for which the isochrones predict ages between 10-50 Myrs, whereas Bayesian membership probabilities suggest that the system more likely belongs to the field and to no known young association or group, and is thus expected to be much older. We can explain this discrepancy by the degeneracy in the orbital fit, and it is possible that the period is overestimated while the semi-major axis is underestimated. The relatively early near-IR spectral type of the primary, M$1.7\pm0.6$, also suggests that the system is older and more massive than our derived dynamical mass indicates. However, spectral analysis by \citet{calissendorff_characterising_2020} shows a discrepancy between the near-IR bands, where the $J$-band suggests a lower surface gravity, and thereby a younger age, compared to the other bands. 
Our grid-search method suggested a greater mass and confidence interval than the MCMC orbital fit for this system, and we are likely to see further improvements to the orbit with just a single additional astrometric data point, which may also help to constrain the age of the system through isochronal dating. Out of our sample of 20 low-mass binaries, eight have strong indicators of being young members of YMGs and associations (excluding J1036, whose Ursa Major affiliation is uncertain and whose age estimate of $\sim 400$ Myr is questionably young in this context). These systems will continue to serve as important calibrators for evolutionary models, and as orbital monitoring continues, better estimates of important formation diagnostics such as the semi-major axis and eccentricity distributions will be acquired. With the increasing sample size of young binaries with stringent orbital parameter constraints, we will soon obtain a set of empirical isochrones, which can then be utilised to evaluate more precise ages of nearby young moving groups.
\begin{acknowledgements}
The authors thank the anonymous referee for the comments which helped improve the paper. This project received funding from the Royal Swedish Academy of Sciences (KVA). M.J. gratefully acknowledges funding from the Knut and Alice Wallenberg foundation. S.D. gratefully acknowledges support from the Northern Ireland Department of Education and Learning. This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement.
This work has made use of the SPHERE Data Centre, jointly operated by OSUG/IPAG (Grenoble), PYTHEAS/LAM/CeSAM (Marseille), OCA/Lagrange (Nice), Observatoire de Paris/LESIA (Paris), and Observatoire de Lyon/CRAL.
\end{acknowledgements}
\bibliographystyle{aa}
\bibliography{references-binary_orbits.bib}
% \begin{appendix}
\clearpage
\onecolumn
\section{Astrometric data}\label{appendixA}
\begin{center}
\begin{longtable}{lccccccccc}
\caption{Observations and astrometry.}\label{tab:astrometry}\\
\hline \hline
\multicolumn{1}{l}{Target} & \multicolumn{1}{c}{Epoch} & \multicolumn{1}{c}{$s$} & \multicolumn{1}{c}{PA} & \multicolumn{1}{c}{\underline{$|\Delta s|$}} & \multicolumn{1}{c}{\underline{$|\Delta {\rm PA}|$}} & \multicolumn{1}{c}{Instrument} & \multicolumn{1}{c}{Filter} & \multicolumn{1}{c}{Contrast} & \multicolumn{1}{c}{Reference} \\
\multicolumn{1}{l}{2MASS} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{[mas]} & \multicolumn{1}{c}{[deg]} & \multicolumn{1}{c}{$\sigma_s$} & \multicolumn{1}{c}{$\sigma_{\rm PA}$} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{[$\Delta$mag]} & \multicolumn{1}{c}{} \\
\hline
\endfirsthead
\multicolumn{10}{c}%
{{\tablename\ \thetable{}.
continued.}} \\ \hline \hline \multicolumn{1}{l}{Target} & \multicolumn{1}{c}{Epoch} & \multicolumn{1}{c}{$s$} & \multicolumn{1}{c}{PA} & \multicolumn{1}{c}{\underline{$|\Delta s|$}} & \multicolumn{1}{c}{\underline{$|\Delta {\rm PA}|$}} & \multicolumn{1}{c}{Instrument} & \multicolumn{1}{c}{Filter} & \multicolumn{1}{c}{Contrast} & \multicolumn{1}{c}{Reference} \\ \multicolumn{1}{l}{2MASS} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{[mas]} & \multicolumn{1}{c}{[deg]} & \multicolumn{1}{c}{$\sigma_s$} & \multicolumn{1}{c}{$\sigma_{\rm PA}$} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{[$\Delta$mag]} & \multicolumn{1}{c}{} \\ \hline \endhead \multicolumn{1}{l}{{...}} & \multicolumn{1}{c}{{...}} & \multicolumn{1}{c}{{...}} & \multicolumn{1}{c}{{...}} & \multicolumn{1}{c}{{...}} & \multicolumn{1}{c}{{...}} & \multicolumn{1}{c}{{...}} & \multicolumn{1}{c}{{...}} & \multicolumn{1}{c}{{...}} & \multicolumn{1}{c}{{...}} \\ \hline \endfoot \hline \hline \endlastfoot J00085391+2050252 & 2001.60 & $111 \pm 5$ & $169.9 \pm 0.5$ & 0.58 & 0.62 & CFHT & $H$ & 0.46 & B04 \\ & 2003.94 & $138 \pm 4$ & $37.0 \pm 1.1$ & 0.84 & 0.93 & NaCo & NB$_{1.64}$ & $0.28 \pm 0.04$ & This work \\ & 2007.56 & $110 \pm 1$ & $167.1 \pm 2.5$ & 1.40 & 0.23 & NaCo & $H$ & $0.81 \pm 0.08$& This work \\ & 2012.02 & $133 \pm 5$ & $271.9 \pm 1.7$ & 0.80 & 0.11 & AstraLux & $z^{\prime}$ & $1.20 \pm 0.07$ & J14a \\ & 2012.02 & & & & & AstraLux & $i^{\prime}$ & $1.59\pm0.10$ & J14a\\ & 2014.61 & $147 \pm 2$ & $97.5 \pm 1.0$ & 0.24 & 5.03 & AstraLux & $z^{\prime}$ & & J14b \\ & 2016.63 & $130 \pm 3$ & $351.3 \pm 1.5$ & 0.72 & 2.02 & FastCam & $I$ & $0.19 \pm 0.09$ & This work \\ & 2016.87 & $122 \pm 4$ & $325.1 \pm 2.6$ & 1.59 & 3.10 & FastCam & $I$ & $0.17 \pm 0.24$ & This work \\ & 2019.54 & $111 \pm 1$ & $154.4 \pm 0.2$ & 0.59 & 0.70 & HRCam & $I$ & 0.0 & T21$^\ddagger$ \\ & 2019.86 & $123 \pm 1$ & $130.04 \pm 0.2$ & 0.25 & 1.54 & HRCam & $I$ & 0.3 & T21$^\ddagger$ \\ & 2020.83 & $152 \pm 
1$ & $77.5 \pm 0.5$ & 0.99 & 1.62 & HRCam & $I$ & 0.0 & T21$^\ddagger$ \\ & 2020.92 & $151 \pm 1$ & $74.2 \pm 1.2$ & 0.39 & 0.05 & HRCam & $I$ & 0.2 & T21$^\ddagger$ \\[.3em] J01112542+1526214 & 2000.62 & $409 \pm 3$ & $147.2 \pm 0.3$ & 0.59 & 1.13 & CFHT & $K$ & 0.69 & B04 \\ & 2006.86 & $309 \pm 3$ & $ 186.1 \pm 0.3$ & 0.79 & 1.63 & AstraLux & $i^{\prime}\,z^{\prime}$ & & J12 \\ & 2007.01 & $304 \pm 3$ & $ 188.0 \pm 0.3$ & 0.18 & 0.47 & AstraLux & $i^{\prime}\,z^{\prime}$ & & J12 \\ & 2008.03 & $297 \pm 3$ & $ 197.3 \pm 0.4$ & 1.50 & 0.93 & AstraLux & $i^{\prime}\,z^{\prime}$ & & J12 \\ & 2008.64 & $292 \pm 3$ & $ 203.1 \pm 0.3$ & 1.49 & 1.45 & AstraLux & $i^{\prime}\,z^{\prime}$ & & J12 \\ & 2008.88 & $289 \pm 3$ & $ 205.1 \pm 0.3$ & 0.97 & 0.43 & AstraLux & $z^{\prime}$ & $1.11 \pm 0.11$ & J12 \\ & 2008.88 & & & & & AstraLux & $i^{\prime}$ & $1.23 \pm 0.10$ & J12 \\ & 2011.85 & $303 \pm 5$ & $ 231.5 \pm 0.5$ & 1.83 & 4.34 & AstraLux & $i^{\prime}\,z^{\prime}$ & & J14a \\ & 2012.65 & $308 \pm 4$ & $ 238.4 \pm 0.3$ & 1.11 & 7.89 & AstraLux & $i^{\prime}\,z^{\prime}$ & & J14a \\ & 2012.89 & $327 \pm 15$ & $ 241.1 \pm 0.8$ & 1.34 & 2.13 & AstraLux & $z^{\prime}$ & $1.71 \pm 0.86$ & J14a \\ & 2012.89 & & & & & AstraLux & $i^{\prime}$ & $1.46\pm0.15$ & J14a \\ & 2014.61 & $340 \pm 4$ & $ 256.8 \pm 0.2$ & 0.91 & 3.91 & AstraLux & $z^{\prime}$ & & This work \\ & 2015.98 & $367 \pm 1$ & $ 266.9 \pm 0.4$ & 3.64 & 5.05 & AstraLux & $z^{\prime}$ & $1.12 \pm 0.08$ & This work \\ & 2016.63 & $369 \pm 2$ & $ 272.9 \pm 0.3$ & 3.67 & 14.21 & FastCam & $I$ & $0.89 \pm 0.03$ & This work \\ & 2018.73 & $412 \pm 1$ & $ 275.2 \pm 0.9$ & 2.97 & 4.49 & HRCam & $I$ & 0.4 & T21$^\ddagger$ \\ & 2018.98 & $416 \pm 5$ & $ 279.5 \pm 1.4$ & 0.58 & 0.62 & SINFONI & $K^{\prime}$ & $0.63 \pm 0.05$ & C20 \\ & 2020.84 & $441 \pm 1$ & $ 284.6 \pm 0.3$ & 0.53 & 12.19 & HRCam & $I$ & 0.3 & T21$^\ddagger$ \\ [.3em] J02255447+1746467 & 2008.63 & $106 \pm 1$ & $ 269.0 \pm 2.0$ & 0.00 & 0.92 & AstraLux 
& $i^{\prime}\,z^{\prime}$ & & J12 \\ & 2008.87 & $ 98 \pm 1$ & $ 278.2 \pm 1.9$ & 3.74 & 2.17 & AstraLux & $z^{\prime}$ & $1.27 \pm 0.05$ & J12\\ & 2008.87 & & & & & AstraLux & $i^{\prime}$ & $1.17\pm0.24$ & J12 \\ & 2014.61 & $145 \pm 5$ & $ 135.0 \pm 2.6$ & 0.86 & 1.04 & AstraLux & $z^{\prime}$ & &This work \\ & 2015.91 & $178 \pm 8$ & $ 139.9 \pm 3.4$ & 0.50 & 0.99 & AstraLux & $z^{\prime}$ & $1.34 \pm 0.06$ & This work \\ & 2015.98 & $174 \pm 3$ & $ 146.9 \pm 1.6$ & 0.32 & 1.95 & AstraLux & $z^{\prime}$ & $0.94 \pm 0.10$ & This work \\ & 2016.87 & $188 \pm 5$ & $ 150.1 \pm 0.2$ & 0.75 & 1.12 & FastCam & $I$ & $0.07 \pm 0.09$ & This work \\[.3em] J02451431-4344102 & 2008.88 & $254 \pm 3$ & $214.4 \pm 0.3$ & 0.03 & 0.77 & AstraLux & $z^{\prime}$ & $1.12 \pm 0.07$& B10 \\ & 2008.88 & & & & & AstraLux & $i^{\prime}$ & $0.87\pm0.03$ & B10 \\ & 2010.09 & $362 \pm 4$ & $ 184.2 \pm 0.3$ & 1.72 & 3.00 & AstraLux & $z^{\prime}$ & $0.75 \pm 0.03$ & J12 \\ & 2010.09 & & & & & AstraLux & $i^{\prime}$ & $0.84\pm0.04$ & J12 \\ & 2010.81 & $429 \pm 4$ & $ 175.2 \pm 0.3$ & 2.03 & 0.28 & AstraLux & $z^{\prime}$ & & J14b \\ & 2012.01 & $505 \pm 5$ & $ 162.5 \pm 0.3$ & 2.43 & 2.99 & AstraLux & $z^{\prime}$ & & J14b \\ & 2015.91 & $460 \pm 1$ & $ 134.3 \pm 0.4$ & 2.47 & 8.13 & AstraLux & $z^{\prime}$ & $0.55 \pm 0.01$ & This work \\ & 2018.63 & $342 \pm 1$ & $ 97.0 \pm 0.5 $ & 1.39 & 12.32 & AstraLux &$z^{\prime}$ & $1.59 \pm 0.13\,^{\dagger}$ & This work \\[.3em] & 2019.54 & $335 \pm 1$ & $ 72.6 \pm 0.2$ & 1.76 & 0.31 & HRCam & $I$ & 0.6 & T20$^\ddagger$ \\ & 2019.86 & $339 \pm 1$ & $ 66.1 \pm 0.1$ & 2.12 & 1.02 & HRCam & $I$ & 0.6 & T21$^\ddagger$ \\ & 2019.95 & $341 \pm 1$ & $ 64.4 \pm 0.2$ & 2.52 & 0.10 & HRCam & $I$ & 0.6 & T21$^\ddagger$ \\ & 2020.83 & $364 \pm 1$ & $ 48.1 \pm 0.1$ & 2.28 & 1.34 & HRCam & $I$ & 0.6 & T21$^\ddagger$ \\[.3em] J04373746-0229282 & 2001.91 & $286 \pm 1$ & $ 198.1 \pm 0.1$ & 5.13 & 0.49 & NIRC2 & $H_2$ & $1.00 \pm 0.02$ & M15 \\ & 2002.16 & $275 
\pm 2$ & $ 197.9 \pm 0.2$ & 1.27 & 0.42 & NIRC2 & $H$ &$ 1.02 \pm 0.02$ & M15 \\ & 2003.05 & $225 \pm 5$ & $ 195.0 \pm 1.0$ & 1.57 & 1.81 & NaCo & $K$ & $0.94 \pm 0.05$ & K07 \\ & 2003.20 & $217 \pm 1$ & $ 196.8 \pm 0.1$ & 7.72 & 1.98 & NIRC2 & $H$ & $0.99 \pm 0.01$ & M15 \\ & 2004.02 & $159 \pm 2$ & $ 194.0 \pm 1.0$ & 7.22 & 1.09 & NaCo & $L^{\prime}$ & & D12 \\ & 2004.95 & $ 93 \pm 2$ & $ 189.5 \pm 0.4$ & 5.26 & 4.72 & NaCo & $L^{\prime}$ & $0.88 \pm 0.28$ & K07 \\ & 2008.88 & $218 \pm 2$ & $ 20.3 \pm 0.3$ & 3.56 & 3.99 & AstraLux & $z^{\prime}$& $1.39\pm0.16$ & B10 \\ & 2008.88 & & & & & AstraLux & $i^{\prime}$ & $2.57\pm0.05$ & B10\\ & 2009.13 & $231 \pm 2$ & $ 19.2 \pm 0.3$ & 2.97 & 6.39 & AstraLux & $z^{\prime}\, i^{\prime}$ & & J12 \\ & 2009.90 & $269 \pm 3$ & $ 18.6 \pm 1.0$ & 2.55 & 1.58 & NaCo & $L^{\prime}$ & & D12 \\ & 2009.98 & $272 \pm 3$ & $ 19.2 \pm 1.0$ & 2.52 & 0.89 & NaCo & $L^{\prime}$ & & D12 \\ & 2010.10 & $280 \pm 3$ & $ 18.3 \pm 0.6$ & 3.71 & 2.78 & AstraLux & $z^{\prime}$ & $1.34 \pm 0.01$ & J12 \\ & 2010.10 & & & & & AstraLux & $i^{\prime}$ & $3.73\pm0.01$ & J12 \\ & 2010.81 & $297 \pm 3$ & $ 19.4 \pm 0.3$ & 2.61 & 0.29 & AstraLux & $z^{\prime}\, i^{\prime}$ & & J14\\ & 2011.67 & $303 \pm 3$ & $ 18.1 \pm 1.0$ & 0.81 & 0.50 & NaCo & $L^{\prime}$ & & D12 \\ & 2011.87 & $295 \pm 4$ & $ 18.5 \pm 0.3$ & 1.56 & 0.20 & AstraLux & $z^{\prime}\, i^{\prime}$ & & J14b \\ & 2012.01 & $307 \pm 3$ & $ 18.2 \pm 0.3$ & 1.92 & 0.43 & AstraLux & $z^{\prime}\, i^{\prime}$ & & J14b \\ & 2014.63 & $244 \pm 1$ & $ 16.8 \pm 0.1$ & 1.11 & 8.93 & NIRC2 & Br$\gamma$ & $0.92 \pm 0.01$ & M15 \\ & 2014.75 & $240 \pm 1$ & $ 16.3 \pm 0.3$ & 0.37 & 1.80 & DSSI & $R$ & $1.89 \pm 0.04$ & M15 \\ & 2015.65 & $199 \pm 1$ & $ 15.6 \pm 0.2$ & 1.26 & 5.93 & NIRC2 & $K$ & $0.93 \pm 0.01$ & M15 \\ & 2015.98 & $186 \pm 4$ & $ 19.3 \pm 0.6$ & 0.55 & 9.23 & AstraLux & $z^{\prime}$ & $1.25 \pm 0.07$ & This work \\ & 2016.88 & $105 \pm 25$& $ 15.2 \pm 2.8$ & 1.21 & 1.44 & FastCam & $I$ 
& $1.31 \pm 0.11$ & This work \\ & 2017.93 & $75 \pm 1$ & $ 2.6 \pm 0.1 $ & 1.15 & 9.53 & HRCam & $I$ & 1.3 & T19$^\ddagger$ \\ & 2020.11 & $57 \pm 2$ & $ 214.4 \pm 0.2$ & 5.95 &6.88 & HRCam & $I$ & 1.3 & T21$^\ddagger$ \\ & 2020.84 & $98 \pm 1$ & $ 208.2 \pm 0.1$ & & 5.85 & HRCam & $I$ & 1.2 & T21$^\ddagger$ \\[.3em] J04595855-0333123 & 2009.13 &$130\pm15$ & $294.0\pm2.5$& 0.56 & 1.59 & AstraLux & $z^{\prime}\, i^{\prime}$ & &J12\\ & 2010.08 & $139 \pm 1$ & $302.5 \pm 0.7$ & 0.58 & 2.26 & AstraLux & $z^{\prime}$ & $1.42\pm0.05$ & J12 \\ & 2010.08 & & & & & AstraLux & $i^{\prime}$ & $1.55\pm0.07$ & J12 \\ & 2012.01 & $141 \pm 1$ & $321.9 \pm 0.4$ & 0.83 & 1.49 & AstraLux & $z^{\prime}\, i^{\prime}$ & & J14b \\ & 2015.98 & $138 \pm 10$ & $10.6 \pm 3.3$ & 0.50 & 1.55 & AstraLux & $z^{\prime}$ & $1.18 \pm 0.10$ & This work \\ & 2016.88 & $133 \pm 6$ & $16.7 \pm 4.1$ & 1.50 & 0.35 & FastCam & $I$ & $1.25 \pm 0.26$ & This work \\ & 2017.92 & $141 \pm 1$ & $26.4 \pm 0.2$ & 0.99 & 2.25 & SPHERE & $H$ & $0.89 \pm 0.02$ & This work \\ & 2018.95 & $138 \pm 1$ & $40.3 \pm 0.4$ & 0.78 & 3.97 & SPHERE & $H23$ & $0.93 \pm 0.04$ & This work \\[.3em] J05320450-0305291 & 2009.13 & $232 \pm 4$ & $34.9 \pm 0.7$ & 0.84 & 0.47 & AstraLux & $z^{\prime}\,i^{\prime}$ & & J12 \\ & 2010.09 & $213 \pm 3$ & $40.8 \pm 1.1$ & 0.13 & 0.43 & AstraLux & $z^{\prime}$ & $1.05\pm 0.01$ & J12 \\ & 2010.09 & & & & & AstraLux & $i^{\prime}$ & $1.28\pm0.02$ & J12 \\ & 2010.82 & $202 \pm 2$ & $46.6 \pm 0.3$ & 0.37 & 0.51 & AstraLux & $z^{\prime}\,i^{\prime}$ & & J14b \\ & 2011.86 & $192 \pm 2$ & $55.6 \pm 0.7$ & 1.32 & 1.18 & AstraLux & $z^{\prime}\,i^{\prime}$ & & J14b \\ & 2012.01 & $189 \pm 2$ & $55.8 \pm 0.3$ & 0.69 & 0.89 & AstraLux & $z^{\prime}\,i^{\prime}$ & & J14b \\ & 2012.90 & $179 \pm 6$ & $61.6 \pm 2.5$ & 0.08 & 1.05 & NaCo & $L^{\prime}$ & $0.68 \pm 0.16$ & This work \\ & 2015.12 & $157 \pm 8$ & $86.2 \pm 1.3$ & 0.97 & 1.12 & NaCo & $L^{\prime}$ & $0.62 \pm 0.05$ & This work \\ & 2015.98 & 
$161 \pm 1$ & $99.5 \pm 0.6$ & 0.87 & 3.25 & AstraLux & $z^{\prime}$ & & This work \\ & 2018.63 & $131 \pm 6$ & $131.1 \pm 5.8$ & 0.66 & 0.40 & AstraLux & $z^{\prime}$ & $1.64 \pm 0.03$ & This work \\ & 2019.19 & $99 \pm 1$ & $146.7 \pm 0.1$ & 0.79& 0.35 & SPHERE & $K_{12}$ & $0.61 \pm 0.04$ & This work \\ & 2021.80 & $158 \pm 7$ & $ 291.3 \pm 2.4$ & 0.39 & 0.70 & HRCam & $I$ & & T22$^\ddagger$ \\ & 2021.89 & $166 \pm 7$ & $ 293.3 \pm 2.3$ & 0.61 & 0.76 & HRCam & $I$ & & T22$^\ddagger$ \\ & 2021.96 & $171 \pm 7$ & $ 292.9 \pm 2.2$ & 0.56 & 0.91 & HRCam & $I$ & & T22$^\ddagger$ \\[.3em] J06112997-7213388 & 2010.11 & $162 \pm 2$ & $316.4 \pm 0.3$ & 0.33 & 0.33 & AstraLux & $z^{\prime}$ & $1.21 \pm 0.04$ & J12 \\ & 2010.11 & & & & & AstraLux & $i^{\prime}$ & $1.16 \pm 0.08$ & J12 \\ & 2015.17 & $153 \pm 2$ & $270.6 \pm 0.8$ & 0.64 & 0.62 & AstraLux & $z^{\prime}$ & & This work \\ & 2015.98 & $153 \pm 3$ & $268.8 \pm 0.9$ & 1.38 & 5.74 & AstraLux & $z^{\prime}$ & $0.93 \pm 0.11$ & This work \\ & 2017.10 & $165 \pm 1$ & $253.9 \pm 0.1$ & 0.81 & 0.39 & SPHERE & $K_{12}$ & $0.30 \pm 0.01$ & This work \\ & 2018.81 & $179 \pm 2$ & $240.9 \pm 0.7$ & & & HRCam & $I$ & 0.0 & T21$^\ddagger$ \\ & 2018.91 & $178 \pm 3$ & $239.8 \pm 0.4$ & 1.43 & 1.45 & SINFONI & $K^{\prime}$ & $0.26 \pm 0.03$ & C20 \\ & 2019.17 & $186 \pm 1$ & $238.7 \pm 0.1$ & 0.54 & 0.31 & SPHERE & $K_{12}$ & $0.30 \pm 0.01$ & This work \\ & 2019.93 & $ 196 \pm 6$ & $ 233.2 \pm 6.1$ & 0.06 & 0.11 & HRCam & $I$ & 0.1 & T20$^\ddagger$ \\[.3em] J06134539-2352077 & 2010.10 & $143 \pm 2$ & $319.8 \pm 0.5$& 2.80 & 0.12 & AstraLux & $z^{\prime}$ & $1.37\pm0.02$ & J12\\ & 2010.10 & & & & & AstraLux & $i^{\prime}$ & $1.49\pm0.04$ & J12\\ & 2010.81 & $171 \pm 2$ & $302.3 \pm 0.7$& 0.19 & 0.10 & AstraLux & $z^{\prime}\,i^{\prime}$ & & J14b \\ & 2012.01 & $203 \pm 2$ & $276.2 \pm 0.5$& 0.40 & 0.56 & AstraLux & $z^{\prime}\,i^{\prime}$ & & J14b \\ & 2015.18 & $300 \pm 3$ & $240.1 \pm 0.3$& 0.71 & 0.22 & AstraLux & 
$z^{\prime}\,i^{\prime}$ & & This work \\ & 2015.98 & $312 \pm 2$ & $234.1 \pm 0.2$& 0.93 & 0.67 & AstraLux & $z^{\prime}$ & & This work \\ & 2016.88 & $301 \pm 1$ & $224.3 \pm 0.3$& 1.82 & 2.17 & FastCam & $I$ &$0.42\pm0.02$ & This work \\ & 2018.25 & $281 \pm 1$ & $216.5 \pm 0.2$ & 0.54 & 0.41 & HRCam & $I$ & 1.4 & T19$^\ddagger$ \\ & 2018.29 & $276 \pm 1$ & $216.0 \pm 0.1$& 0.14 & 0.71 & SPHERE & $K_{12}$ & $1.25 \pm 0.01$ & This work \\ & 2019.86 & $121 \pm 1$ & $187.7 \pm 0.2$ & 0.12 & 0.45 & HRCam & $I$ & 1.5 & T20$^\ddagger$ \\ & 2020.11 & $74 \pm 1$ & $166.4 \pm 2.4$ & 1.08 & 0.72 & HRCam & $I$ & 1.3 & T21$^\ddagger$ \\ & 2020.84 & $114 \pm 1$ & $30.4 \pm 2.0$ & 0.54 & 0.04 & HRCam & $I$ & 1.4 & T21$^\ddagger$ \\[.3em] J07285137-3014490 & 2002.99 & $425 \pm 4$ & $180.3 \pm 0.2$ & 0.71 & 1.99 & NIRC2 & $K_p$ & & J14b \\ & 2005.83 & $175 \pm 11$& $143.7 \pm 1.5$ & 0.84 & 1.55 & NIRI & $H$ & $0.44\pm0.30$ & D07 \\ & 2005.83 & & & & & NIRI & $K_s$ & $0.44\pm0.42$ & D07\\ & 2008.86 & $479 \pm 5$ & $169.7 \pm 0.3$ & 1.10 & 3.39 & AstraLux & $z^{\prime}$ & $1.29 \pm 0.11$ & B10 \\ & 2008.86 & & & & & AstraLux & $i^{\prime}$ & $1.50\pm0.18$ & B10 \\ & 2010.08 & $458 \pm 5$ & $176.2 \pm 0.3$ & 1.10 & 1.99 & AstraLux & $z^{\prime}$ & $1.20\pm0.02$ & J12 \\ & 2010.08 & & & & & AstraLux & $i^{\prime}$ & $1.47\pm0.03$ & J12 \\ & 2010.81 & $423 \pm 4$ & $181.1 \pm 0.3$ & 0.66 & 0.69 & AstraLux & $z^{\prime}$ & & J14b \\ & 2012.01 & $294 \pm 3$ & $191.6 \pm 0.3$ & 0.27 & 1.59 & AstraLux & $z^{\prime}$ & & J14b \\ & 2012.90 & $ 69 \pm 5$ & $232.3 \pm 3.0$ & 0.67 & 1.10 & NaCo & $H$ & & J14b \\ & 2014.92 & $377 \pm 1$ & $160.6 \pm 0.1$ & 0.30 & 1.36 & SPHERE & $K_{12}$ & $1.01 \pm 0.01$ & This work \\ & 2015.09 & $393 \pm 1$ & $161.8 \pm 0.1$ & 0.21 & 0.56 & SPHERE & $K_{12}$ & $0.94\pm0.01$ & R18 \\ & 2015.17 & $400 \pm 4$ & $162.5 \pm 0.5$ & 0.04 & 0.38 & AstraLux & $z^{\prime}$ & & R18 \\ & 2015.75 & $439 \pm 4$ & $166.3 \pm 0.2$ & 0.43 & 1.67 & NIRC2 & $K_c$ & & R18 \\ 
& 2015.88 & $447 \pm 4$ & $167.1 \pm 0.2$ & 0.16 & 1.99 & NIRC2 & $K_c$ & & R18 \\ & 2015.91 & $449 \pm 1$ & $166.9 \pm 0.1$ & 0.13 & 0.31 & SPHERE & $H_{23}$ & $0.98\pm0.01$ & R18 \\ & 2015.98 & $452 \pm 1$ & $167.2 \pm 0.1$ & 0.45 & 0.54 & SPHERE & $H_{23}$ & $0.93\pm0.01$ & R18 \\ & 2015.98 & $454 \pm 2$ & $167.5 \pm 0.2$ & 0.78 & 1.23 & AstraLux & $z^{\prime}$ & & R18 \\ & 2015.99 & $453 \pm 1$ & $167.3 \pm 0.1$ & 0.09 & 0.08 & AstraLux & $z^{\prime}$ & & R18 \\ & 2016.24 & $463 \pm 1$ & $168.6 \pm 0.1$ & 0.11 & 0.40 & SPHERE & $H_{23}$ & $1.01\pm0.01$ & R18 \\ & 2016.87 & $445 \pm 6$ & $169.8 \pm 0.2$ & 5.22 & 10.08 & FastCam & $I$ & $0.67 \pm 0.07$ & This work \\ & 2017.10 & $477 \pm 1$ & $173.1 \pm 0.1$ & 0.10 & 1.53 & SPHERE & $K_{12}$ & $0.90\pm0.01$ & R18\\[.3em] & 2018.09 & $456 \pm 1$ & $178.0 \pm 0.1$ & 1.03 & 0.54 & HRCam & $I$ & 1.4 & T19$^\ddagger$ \\ & 2019.79 & $295 \pm 1$ & $191.1 \pm 0.1$ & 0.73 & 0.62 & HRCam & $I$ & 1.2 & T20$^\ddagger$ \\ & 2020.11 & $237 \pm 1$ & $196.4 \pm 0.2$ & 0.77 & 1.31 & HRCam & $I$ & 1.2 & T21$^\ddagger$ \\[.3em] J09075823+2154111 & 2008.88 & $106 \pm 1$ & $192.3 \pm 1.1$ & 0.61 & 1.62 & AstraLux & $z^{\prime}$ & $0.92\pm0.10$ & J12\\ & 2008.88 & & & & & AstraLux & $i^{\prime}$ & $1.08\pm0.09$ & J12 \\ & 2009.13 & $108\pm1$ & $201.9\pm1.1$ & 1.01 & 1.75 & AstraLux & $i^{\prime}\,z^{\prime}$ & & J12\\ & 2015.18 & $135\pm3$ & $128.4 \pm 1.9$ & 0.25& 1.93& AstraLux & $z^{\prime}$ & & This work\\ & 2015.98 & $148\pm6$ & $137.1\pm1.9$ & 1.42& 0.70& AstraLux & $z^{\prime}$ & $1.38 \pm 0.12$ & This work\\ & 2015.99 & $139\pm1$ & $135.9\pm0.1$ & 0.40& 0.09& SPHERE & $H_{23}$ & $0.12 \pm 0.01$ & This work \\ & 2016.87 & $125\pm12$& $143.0\pm8.2$ & 0.32& 0.73& FastCam & $I$ & $1.01\pm0.13$ & This work\\[.3em] J09164398-2447428 & 2010.08 & $75\pm12$ & $160.3\pm1.5$ & 0.49& 2.30& AstraLux & $z^{\prime}$ & $0.92\pm0.07$ & J12\\ & 2010.08 & & & & & AstraLux & $i^{\prime}$ & $0.90\pm0.21$ & J12 \\ & 2012.01 & $60\pm3$ & 
$103.4\pm1.8$ & 3.42& 2.72& AstraLux &$z^{\prime}$ & & J14b\\ & 2015.17 & $73\pm2$ & $33.2\pm2.1$ & 3.88& 0.55& AstraLux &$z^{\prime}$ & & This work\\ & 2015.24 & $81\pm1$ & $34.4\pm1.3$ & 1.23& 1.20& SPHERE & $K_{12}$ & & This work\\ & 2018.07 & $83\pm2$ & $169.7\pm0.4$ & 0.17& 0.14& SPHERE & $K_{12}$ & $0.56\pm0.01$ & This work\\ & 2018.15 & $83\pm2$ & $168.1\pm0.3$ & 0.14& 0.27& SPHERE & $K_{12}$ & $0.56\pm0.02$ & This work\\ & 2019.18 & $79\pm1$ & $146.0\pm0.1$ & 1.59& 0.33& SPHERE & $K_{12}$ & $0.53\pm0.01$ & This work\\[.3em] J10140807-7636327 & 1996.25 & $91\pm7$ & $259.6\pm6.5$ & 0.03& 0.04& SHARP & $K$& $0.57\pm0.11$ & K01 \\ & 2010.10 & $223\pm2$ & $107.9\pm0.5$ & 2.50& 2.71 & AstraLux & $z^{\prime}$ & $1.45\pm0.05$ & J12\\ & 2010.10 & & & & & AstraLux & $i^{\prime}$ & $1.76\pm0.05$ & J12\\ & 2010.15 & $242\pm6$ & $111.0\pm1.6$ & 2.22 & 1.15 & NaCo & $K_s$ & $0.15\pm0.42$ & This work \\ & 2011.23 & $252\pm3$ & $108.0\pm0.8$ & 3.28 & 0.97& NaCo & $K_s$ & $0.18\pm0.40$ & This work \\ & 2015.17 & $277\pm3$ & $103.2\pm1.0$ & 0.57 & 1.70& AstraLux & $z^{\prime}$ & & This work\\ & 2015.99 & $283\pm2$ & $101.0\pm0.5$ & 0.40 & 1.05& AstraLux & $z^{\prime}$ & $1.05\pm0.09\,^{\dagger}$ & This work\\ & 2016.22 & $287\pm4$ & $100.8\pm0.9$ & 0.49 & 0.67& NaCo & $K_s$ & $0.04\pm0.16$ & This work \\ & 2016.38 & $282\pm2$ & $101.4\pm0.5$ & 1.95 & 2.80& AstraLux & $z^{\prime}$ & $1.27\pm0.14\,^{\dagger}$ & This work\\ & 2016.95 & $292\pm5$ & $99.5\pm0.9$ & 0.68 & 0.21& NaCo & $K_s$ & $0.01\pm0.05$ & This work\\ & 2017.36 & $292\pm8$ & $98.7\pm1.4$ & 0.21 & 0.09& NaCo & $K_s$ & $0.01\pm0.37$ & This work\\ & 2018.09 & $293\pm2$ & $97.5\pm0.9$ & 0.12 & 0.53& NaCo & $K_s$ & $0.03\pm0.08$ & This work\\ & 2018.25 & $296\pm1$ & $97.4\pm0.4$ & 2.78 & 0.96 & HRCam & $I$ & 0.1 & T19 \\ & 2018.34 & $293\pm1$ & $97.8\pm0.1$ & 0.45 & 1.14& SPHERE & $H_{23}$ & $0.04\pm0.01$ & This work \\ & 2018.39 & $287\pm2$ & $98.5\pm0.5$ & 3.29 & 1.74& AstraLux & $z^{\prime}$ & 
$0.90\pm0.07\,^{\dagger}$ & This work\\ & 2018.98 & $296\pm1$ & $96.4\pm0.3$ & 1.20 & 1.86 & HRCam & $I$ & 0.0 & T19$^\ddagger$ \\ & 2019.18 & $294\pm1$ & $96.6\pm0.1$ & 1.22 & 1.24 & SPHERE & $K_{12}$ & $0.06\pm0.01$ & This work\\[.3em] J10364483+1521394 & 2006.38 & $189\pm2$ & $310.6\pm0.14$ & 1.34 & 2.55 & NIRI & $H$ & $0.05\pm0.02$ & D07\\ & 2006.38 & & & & & NIRI & $K_s$ & $0.03\pm0.02$ & D07\\ & 2008.03 & $170\pm9$ & $348.2\pm1.1$ & 0.04 & 1.06 & AstraLux & $z^{\prime}$ & $0.12 \pm 0.03$ & J12 \\ & 2008.88 & $151\pm4$ & $13.8\pm2.6$ & 0.13 & 0.34 & AstraLux & $z^{\prime}$ & $0.06\pm0.05$ & J12 \\ & 2008.88 & & & & & AstraLux & $i^{\prime}$ & $0.00\pm0.15$ & J12 \\ & 2009.13 & $144\pm4$ & $16.4\pm0.7$ & 0.37 & 10.02 & AstraLux & $z^{\prime}$ & $0.00\pm0.03$ & J12 \\ & 2015.16 & $185\pm3$ & $316.4\pm0.7$ & 0.34 & 1.89 & AstraLux & $z^{\prime}$ & $0.01\pm0.03$ & C17 \\ & 2015.18 & $183\pm3$ & $316.2\pm3.0$ & 0.98 & 0.23 & AstraLux & $z^{\prime}$ & $0.10\pm0.04$ & C17 \\ & 2015.33 & $185\pm4$ & $319.0\pm0.2$ & 0.08 & 0.63 & AstraLux & $z^{\prime}$ & $0.05\pm0.04$ & C17 \\ & 2015.90 & $182\pm3$ & $330.2\pm7.2$ & 0.57 & 0.25 & AstraLux & $z^{\prime}$ & $0.30\pm0.05$ & C17 \\ & 2015.99 & $179\pm2$ & $335.8\pm1.3$ & 0.08 & 1.30 & AstraLux & $z^{\prime}$ & $0.05\pm0.02$ & This work \\ & 2016.01 & $181\pm1$ & $334.4\pm0.1$ & 2.10 & 1.94 & SPHERE & $K_{12}$ & $0.01\pm0.01$ & This work\\ & 2016.38 & $170\pm6$ & $343.7\pm2.1$ & 0.56 & 0.01 & AstraLux & $z^{\prime}$ & $0.04\pm0.02$ & C17 \\ & 2016.88 & $159\pm1$ & $355.8\pm1.8$ & 5.01 & 0.83 & FastCam & $I$ & $0.36\pm0.14$ & This work\\ & 2017.11 & $161\pm1$ & $4.3\pm0.1$ & 1.88 & 2.23 & SPHERE & $K_{12}$ & $0.02\pm0.01$ & This work\\ & 2018.29 & $130\pm1$ & $47.8\pm0.1$ & 0.12 & 0.69 & SPHERE & $K_{12}$ & $0.06\pm0.01$ & This work \\ & 2018.39 & $125\pm3$ & $54.0\pm1.5$ & 0.70 & 1.07 & AstraLux & $z^{\prime}$ & $0.03\pm0.04$ & This work\\ & 2019.15 & $100\pm3$ & $98.6\pm1.7$ & 1.43 & 1.30 & SINFONI & $K^{\prime}$ & 
$0.00\pm0.03$ & C20\\ & 2019.18 & $105\pm1$ & $98.6\pm0.1$ & 1.63 & 0.73 & SPHERE & $K_{12}$ & $0.02\pm0.01$ & This work\\ & 2019.95 & $86\pm1$ & $166.0\pm0.7$ & 4.43 & 0.25 & HRCam & $I$ & 0.1 & T20$^\ddagger$ \\ & 2021.00 & $132\pm1$ & $237.9\pm0.4$ & 3.04 & 3.24 & HRCam & $I$ & 0.1 & T21$^\ddagger$ \\[.3em] J20163382-0711456 & 2008.44 & $107\pm7$ & $352.4\pm2.1$ & 1.28 & 2.33 & AstraLux & $z^{\prime}$ & $0.63\pm0.15$ & J12\\ & 2011.85 & $176\pm2$ & $320.7\pm0.7$ & 2.16 & 0.82 & AstraLux & $z^{\prime}$ & & J14b\\ & 2014.61 & $187\pm1$ & $303.3\pm0.9$ & 0.35 & 2.63 & AstraLux & $z^{\prime}$ & & This work\\ & 2015.73 & $192\pm1$ & $293.2\pm0.1$ & 3.65 & 2.56 & SPHERE & $K_{12}$ & $0.36\pm0.01$ & This work\\ & 2016.63 & $194\pm4$ & $284.8\pm0.5$ & 1.16 & 3.58 & FastCam & $I$ & $0.36\pm0.12$ & This work\\ & 2016.87 & $191\pm4$ & $280.5\pm0.7$ & 0.32 & 6.30 & FastCam & $I$ & $0.20\pm0.06$ & This work\\ & 2017.92 & $188\pm1$ & $277.4\pm0.1$ & 3.73 & 2.33 & SPHERE & $H_{23}$ & $0.49\pm0.01$ & This work\\ & 2018.36 & $193\pm1$ & $274.1\pm0.1$ & 0.15 & 5.37 & SPHERE & $H_{23}$ & $0.43\pm0.01$ & This work\\ & 2018.39 & $189\pm3$ & $275.7\pm0.8$ & 1.31 & 1.58 & AstraLux & $z^{\prime}$ & $0.92\pm0.05\,^{\dagger}$ & This work\\ & 2018.63 & $196\pm1$ & $274.2\pm0.7$ & 2.38 & 1.98 & AstraLux & $z^{\prime}$ & $1.14\pm0.14\,^{\dagger}$ & This work\\ & 2018.71 & $193\pm1$ & $272.9\pm0.1$ & 0.76 & 6.21 & SPHERE & $K_{12}$ & $0.39\pm0.01$ & This work\\[.3em] J21372900-0555082 & 2008.63 & $245\pm2$ & $170.2\pm0.3$ & 2.38& 0.23& AstraLux & $z^{\prime}\,i^{\prime}$ & & J12\\ & 2008.88 & $219\pm2$ & $172.0\pm0.3$ & 1.65& 0.45& AstraLux & $z^{\prime}$ & $0.33\pm0.15$ & J12\\ & 2008.88 & & & & & AstraLux & $i^{\prime}$ & $0.55\pm0.09$ & J12 \\ & 2014.61 & $245\pm1$ & $318.5\pm0.4$ & 0.70& 3.37& AstraLux & $z^{\prime}$ & & This work\\ & 2015.74 & $292\pm1$ & $322.8\pm0.1$ & 0.85& 0.81& SPHERE & $K_{12}$ & $0.21\pm0.01$ & This work \\ & 2018.39 & $238\pm5$ & $336.6\pm0.6$ & 0.28& 2.76& 
AstraLux & $z^{\prime}$ & $0.85\pm0.13\,^{\dagger}$ & This work \\ & 2018.46 & $235\pm1$ & $335.3\pm0.1$ & 0.01& 1.06& SPHERE & $K_{12}$ & $0.18\pm0.01$ & This work \\ & 2018.63 & $225\pm6$ & $343.3\pm1.3$ & 0.20& 5.15& AstraLux & $z^{\prime}$ & $1.84\pm0.25\,^{\dagger}$ & This work\\[.3em] J23172807+1936469 & 2001.59 & $142\pm10$ & $209.0\pm1.0$ & 0.19 & 1.00 & CFHT & $J$ & $1.17$ & B04\\ & 2003.94 & $232\pm2$ & $39.3\pm0.3$ & 0.33 & 0.03 & NaCo & $NB_{1.64}$ & & J14b\\ & 2004.73 & $308\pm3$ & $34.5\pm0.3$ & 0.72 & 0.10 & NaCo & $NB_{1.64}$ & & J14b\\ & 2008.59 & $293\pm3$ & $19.2\pm0.3$ & 0.28 & 2.44 & AstraLux & $z^{\prime}$ & $1.50\pm0.12$ & J12\\ & 2008.59 & & & & & AstraLux & $i^{\prime}$ & $1.70\pm0.15$ & J12\\ & 2010.70 & $91\pm3$ & $347.2\pm0.3$ & 0.98 & 1.81 & NaCo & $H$ & $1.18\pm0.12$ & J14b\\ & 2012.65 & $145\pm2$ & $220.2\pm3.5$ & 0.64 & 0.67 & AstraLux & $z^{\prime}$ & & J14b\\ & 2014.61 & $109\pm4$ & $55.8\pm2.9$ & 0.56 & 0.75 & AstraLux & $z^{\prime}$ & & This work\\ & 2015.73 & $259\pm1$ & $37.9\pm0.4$ & 0.05 & 1.17 & SPHERE & $K_{12}$ & $1.25\pm0.02$ & This work\\ & 2015.91 & $277\pm2$ & $36.7\pm0.4$ & 0.27 & 0.95 & AstraLux & $z^{\prime}$ & $1.33\pm0.10$ & This work\\ & 2018.39 & $364\pm2$ & $27.2\pm0.4$ & 0.48 & 1.10 & AstraLux & $z^{\prime}$ & $1.47\pm0.03$ & This work\\ & 2018.79 & $357\pm1$ & $25.3\pm0.1$ & 0.02 & 1.07 & SPHERE & $H_{23}$ & $1.31\pm0.01$ & This work\\[.3em] J23261182+1700082 & 2008.63 & $195\pm2$ & $51.8\pm0.7$ & 0.02 & 0.19 & AstraLux & $z^{\prime}$ & $1.57\pm0.09$ & J12\\ & 2008.63 & & & & & AstraLux & $i^{\prime}$ & $1.90\pm0.12$ & J12\\ & 2011.85 & $273\pm4$ & $1.7\pm0.3$ & 4.62 & 3.71 & AstraLux & $z^{\prime}$ & & J14b \\ & 2014.61 & $253\pm3$ & $334.5\pm0.2$ & 3.53 & 6.90 & AstraLux & $z^{\prime}$ & & This work\\ & 2015.74 & $223\pm1$ & $315.6\pm0.1$ & 3.24 & 1.93 & SPHERE & $K_{12}$ & & This work\\ & 2015.91 & $221\pm9$ & $312.9\pm1.0$ & 0.93 & 0.36 & AstraLux & $z^{\prime}$ & $1.67\pm0.15$ & This work\\ & 2016.62 & 
$181\pm3$ & $295.6\pm0.9$ & 1.33 & 0.81 & FastCam & $I$ & $0.79\pm0.18$ & This work\\[.3em] J23261707+2752034 & 2008.59 & $151\pm2$ & $14.1\pm0.3$ & 0.68 & 0.29 & AstraLux & $z^{\prime}$ & $0.52\pm0.10$ & J12\\ & 2008.59 & & & & & AstraLux & $i^{\prime}$ & $0.47\pm0.10$ & J12\\ & 2011.86 & $109\pm2$ & $328.7\pm0.6$ & 0.58 & 0.73 & AstraLux & $z^{\prime}$ & & J14b\\ & 2014.61 & $128\pm1$ & $130.4\pm1.1$ & 0.03 & 0.67 & AstraLux & $z^{\prime}$ & & This work\\ & 2015.82 & $116\pm1$ & $102.2\pm0.2$ & 0.90 & 0.03 & SPHERE & $YJ$ & $0.06\pm0.01$ & This work\\ & 2016.62 & $103\pm7$ & $73.1\pm3.2$ & 0.89 & 2.31 & FastCam & $I$ & $0.53\pm0.18$ & This work\\[.3em] J23495365+2427493 & 2008.59 & $132\pm5$ & $316.9\pm0.7$ & 0.25 & 2.33 & AstraLux & $z^{\prime}\,i^{\prime}$ & &J12\\ & 2008.63 & $135\pm3$ & $317.9\pm1.0$ & 1.51 & 1.11 & AstraLux & $z^{\prime}\,i^{\prime}$ & & J12\\ & 2008.88 & $129\pm1$ & $324.4\pm1.6$ & 0.25 & 1.49 & AstraLux & $z^{\prime}$ & $1.13\pm0.04$ & J12\\ & 2008.88 & & & & & AstraLux & $i^{\prime}$ & $1.23\pm0.08$ & J12\\ & 2011.86 & $141\pm2$ & $359.3\pm1.8$ & 6.07 & 0.50 & AstraLux & $z^{\prime}$ & & J14b \\ & 2014.61 & $156\pm1$ & $28.9\pm0.2$ & 1.64 & 3.96 & AstraLux & $z^{\prime}$ & & This work\\ & 2015.91 & $162\pm5$ & $28.1\pm1.1$ & 2.59 & 8.66 & AstraLux & $z^{\prime}$ & & This work\\ & 2016.63 & $177\pm4$ & $41.0\pm0.6$ & 1.88 & 1.88 & FastCam & $I$ & $0.59\pm0.03$ & This work\\ & 2019.15 & $213\pm2$ & $53.7\pm0.5$ & 0.57 & 2.42 & SINFONI & $K^{\prime}$ & $1.07\pm0.02$ & C20\\ \hline \end{longtable} \begin{minipage}{\textwidth} {\small K01 = \citep{kohler_multiplicity_2001}; B04 = \citet{beuzit_new_2004}; D07 = \citet{daemgen_discovery_2007}; K07 = \citet{kasper_novel_2007} ; B10 = \citet{bergfors_lucky_2010}; D12 = \citet{delorme_high-resolution_2012}; J12 = \citet{janson_astralux_2012}; J14a = \citet{janson_noopsorta_2014}; J14b = \citet{janson_noopsortborbital_2014}; M15 = \citet{montet_dynamical_2015}; R18 = \citet{rodet_dynamical_2018}; 
T19 = \citet{tokovinin_speckle_2019}; C20 = \citet{calissendorff_characterising_2020}; T20 = \citet{tokovinin_speckle_2020}; T21 = \citet{tokovinin_speckle_2021}; T22 = \citet{tokovinin_family_2022} \\[.2em]
$^{\dagger}$ Photometry affected by lucky-imaging ghosts; the contrast magnitudes are therefore overestimated.\\
$^{\ddagger}$ Not included in the MCMC fitting. }
\end{minipage}
\end{center}
\section{Orbital fits}\label{appendixB}
\clearpage
\twocolumn
\clearpage
\subsection{MCMC}\label{appendixB2}
\begin{minipage}{\textwidth}
\begin{center}
\includegraphics[width=\linewidth]{J0613_CornerPlot}
\captionof{figure}{Distribution and correlations of each of the orbital elements fitted by the MCMC algorithm for J0613. The blue lines depict the probability peak, with the shaded light-blue area encompassing the $16-84\,\%$ confidence interval.}
\label{fig:J0613_MCMC_pams}
\end{center}
\end{minipage}
\clearpage
\section{Evaluating orbits}\label{appendixC}
The grades for the orbits were calculated in a similar manner to that of \citet{worley_fourth_1983} and \citet{hartkopf_2001_2001}, but scaled with respect to our sample of 20 binaries only. We did not apply any extra weights or uncertainties to a given system based on the quality of its observations. We determined the grade value for each orbit as
$$ {\rm Grade} = \frac{{\rm PA_{gap}} + t_{\rm gap}}{N_{\rm rev}\, N_{\rm obs}}, $$
where the position angle coverage gap ${\rm PA_{gap}}$ is the maximum gap in position angle between observed epochs, ${\rm PA}_{i} - {\rm PA}_{i-1}$; the phase coverage gap $t_{\rm gap}$ is the maximum gap in phase coverage, calculated from the period and the difference between the time of periastron and the observed epoch, $(t_0 - t_{\rm obs})/P$; the number of revolutions $N_{\rm rev}$ is the time between the last and first epochs divided by the orbital period, $(t_{\rm last} - t_{\rm first})/P$; and $N_{\rm obs}$ is the number of observed epochs.
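As a concrete sketch of this grading scheme (an illustration with hypothetical function names, not the implementation used in this work), the raw grade value and the subsequent base-10 logarithmic rescaling of the sample onto grades 1 to 5 can be written as:

```python
import math

def grade_value(pa_gap_deg, t_gap, n_rev, n_obs):
    # Raw grade value: (PA_gap + t_gap) / (N_rev * N_obs), where pa_gap_deg is
    # the largest position-angle gap between epochs [deg], t_gap the largest
    # phase-coverage gap (fraction of P), n_rev = (t_last - t_first) / P, and
    # n_obs the number of observed epochs.
    return (pa_gap_deg + t_gap) / (n_rev * n_obs)

def assign_grades(raw_values):
    # Map the raw values logarithmically (base 10) onto [0, 5], anchored on
    # the best (lowest) and worst (highest) systems of the sample, then bin:
    # scaled values in (0, 1] become grade 1, (1, 2] grade 2, and so on.
    logs = [math.log10(v) for v in raw_values]
    lo, hi = min(logs), max(logs)
    scaled = [5.0 * (x - lo) / (hi - lo) for x in logs]
    return [max(1, min(5, math.ceil(s))) for s in scaled]
```

A well-covered orbit (small position-angle and phase gaps, many epochs spanning several revolutions) yields a small raw value and hence grade 1, while a sparsely covered orbit ends up at grade 5.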
The values were then mapped logarithmically (base 10) onto the range 0 to 5, running from the lowest (best, J0008) to the highest (worst, J0611) value, and we designated orbits with scaled values between 0 and 1 as grade 1, values between 1 and 2 as grade 2, and so on. The resulting grades for our systems are presented in Table~\ref{tab:grades}.
\begin{table}[h!]
\renewcommand{\arraystretch}{1.3}
\centering
\caption{Orbital grades.}
\label{tab:grades}
\begin{tabular}{lccc}
\hline \hline
Name & $\chi^2_{\rm Grid}$ & $\chi^2_{\rm MCMC}$ & Grade\\
\hline
J0008 & 3.6 & 4.11 & 1\\
J0111 & 24.1 & 9.75 & 3\\
J0225 & 6.7 & 3.53 & 5\\
J0245 & 21.5 & 6.14 & 3\\
J0437 & 23.3 & 3.78 & 2\\
J0459 & 5.5 & 4.2 & 4\\
J0532 & 1.3 & 1.62 & 4\\
J0611 & 5.9 & 11.31 & 5\\
J0613 & 1.4 & 17.39 & 2\\
J0728 & 4.8 & 1.18 & 1\\
J0907 & 2.8 & 4.24 & 3\\
J0916 & 6.5 & 4.87 & 3\\
J1014 & 3.2 & 3.23 & 3\\
J1036 & 6.7 & 3.99 & 1\\
J2016 & 13.0 & 15.26 & 4\\
J2137 & 8.2 & 8.50 & 4\\
J2317 & 1.2 & 1.52 & 1\\
J232611 & 22.6 & 17.98 & 4\\
J232617 & 2.9 & 4.52 & 3\\
J2349 & 18.0 & 15.19 & 4\\
\hline
\end{tabular}
{\small\\ Orbital grades: (1) Reliable, (2) Good, (3) Tentative, (4) Preliminary, (5) Undetermined. }
\end{table}
\section{Individual systems}\label{appendixD}
{\bf 2MASS J00085391+2050252} was first reported as a binary by \citet{beuzit_new_2004} and has been monitored for almost 20 years, by now far exceeding the estimated period of $P = 5.94^{+0.02}_{-0.01}$ years, making it the binary with the shortest period in our sample. It is presumably a field binary that is unlikely to belong to any known young moving group or association. The orbital fit is the most robust in our sample according to our grading scale criteria, receiving grade 1 (reliable). \citet{vrijmoet_solar_2022} reported an orbital period of $P = 5.9$ years and a semi-major axis of $2.6$ for the system, corresponding to a total system dynamical mass of $M_{\rm tot} = 0.5$, consistent with our results.
We found no resolved spectral analysis of the system. {\bf 2MASS J01112542+1526214} does not have its parallax or proper motions measured by Gaia yet, and we obtained the distance of $d = 17.24 \pm 2.17$ pc from \citet{dittmann_trigonometric_2014} and proper motions from the fourth US Naval Observatory CCD Astrograph Catalog \citep[UCAC4;][]{zacharias_fourth_2013}. All three YMG tools agreed that the system is a likely BPMG member. We noted that the astrometric data point for J0111 in \citet{calissendorff_characterising_2020} was reported for the wrong spatial scale for the instrument. They reported on a spatial scale of $125 \times 250$ mas/pixel, which overestimated the astrometry for the data point. We corrected for this error here and recalculated the separation at the SINFONI epoch using the $50 \times 100$ mas/pixel spatial scale which was used during the J0111 observations, obtaining a projected separation of $416 \pm 5\,$mas, which we used in our orbital fitting. We found a few radial velocity measurements for the system in the literature, but they were spread across several different instruments and exhibited large jitter, and were thus not useful for constraining the orbit in this case. We found the orbit to be of grade 3 (tentative); it is likely to improve in the coming years, especially with better distance measurements, whose uncertainty propagates into the dynamical mass estimate. The estimated spectral types differ substantially between optical photometry and near-IR spectra for the two components, with the secondary component being of much later type than expected from the photometric estimate. This could be an effect of the secondary component being an unresolved binary itself, or an effect of the spectral type relation derived in \citet{calissendorff_characterising_2020} from the relation in \citet{rojas-ayala_m_2014} not being adequately constrained for young M dwarfs. 
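The total dynamical masses quoted throughout this appendix follow from Kepler's third law, which in Solar units ($a$ in AU, $P$ in years, $M$ in $M_\odot$) reduces to $M_{\rm tot} = a^3/P^2$. A quick check against the literature values quoted above for J0008 (helper name ours):

```python
def mtot_msun(a_au, p_yr):
    """Total mass from Kepler's third law, with a in AU and P in years."""
    return a_au**3 / p_yr**2

# P = 5.9 yr and a = 2.6 AU, the Vrijmoet et al. (2022) values for J0008:
print(round(mtot_msun(2.6, 5.9), 2))  # 0.5, matching the quoted dynamical mass
```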
{\bf 2MASS J02255447+1746467} lacks Gaia parallax and proper motions, and we adopted the distance measurement of $d = 31\pm1.9$ pc from \citet{dittmann_trigonometric_2014} along with UCAC4 catalogue proper motions. The BANYAN $\Sigma$-tool suggested YMG memberships of $36.7\,\%$ for Carina-Near and $44\,\%$ for Argus, while the convergence point tool implied a $92.6\,\%$ probability for Carina-Near. \citet{zuckerman_nearby_2019} estimated the 40-50 Myr old Argus group to have a mean distance from Earth of 72.4 pc, which suggests the system is more likely associated with the $\sim 200\,$Myr old Carina-Near group, which shares similar UVW values but is located closer to Earth at $\sim 30\,$pc \citep{zuckerman_carina-near_2006}, in line with the adopted distance of the binary system. The orbit received a grade of 5 (indetermined) and is likely to be much better constrained with additional epochs. Both methods preferred orbits which implied a total dynamical mass of $0.09-0.17\,M_\odot$ for the system, which is extremely low for the observed spectral types when compared to the rest of our sample. If we restricted the orbital period to less than 30 years, we obtained a best-fit $\chi^2_{\nu}$ four times larger than for the longer periods (cf. $\chi^2_{\nu} = 1.5$ and $6.3$), but with dynamical masses more compatible with those expected from the theoretical models. It is more likely that the distance to the system is inaccurate; a distance of $d \approx 55$ pc would bring the dynamical and theoretical masses closer together, as well as be more in line with the expected mass from the derived spectral types. {\bf 2MASS J02451431-4344102} is a likely field binary according to all three YMG tools applied. We mainly sampled the orbit around periastron, and despite the small uncertainty in dynamical mass we obtained a grade 3 (tentative) for the orbit, more than half of which remains uncharted. 
The MCMC method did not include the astrometric data points from 2019-2020, which explains the difference in orbital parameters determined by the two methods. The RV data in the literature \citep{durkan_radial_2018} had too short a baseline to aid the orbital fitting and did not match the astrometry in this case, and they were therefore excluded from the fitting procedure. The spectral types are similar to those of the J0008 system, which had a reliable orbital fit, also belongs to the field, and has a similar dynamical mass. This could suggest that the orbit we obtained for J0245 is actually better constrained than expected from our grading criteria. {\bf 2MASS J04373746-0229282} has had its orbit constrained previously by \citet{montet_dynamical_2015}, who obtained a dynamical mass of $M = 1.11 \pm 0.04\,M_\odot$. We found a good orbital fit for the system, obtaining a grade 2 on our scale, and a slightly lower mass of $M = 0.92\,^{+0.06}_{-0.04}\,M_\odot$. This discrepancy is explained by the different distance measurements adopted for the system, where \citet{montet_dynamical_2015} resorted to using the Hipparcos distance to the comoving system 51 Eri of $29.43 \pm 0.30$ pc \citep{van_leeuwen_validation_2007}, while we had access to the Gaia EDR3 parallax corresponding to a distance of $27.77 \pm 0.37$ pc. The RV measurements allow individual dynamical masses to be derived for the system, where we scaled the flux according to the relative brightness of the components. However, the flux ratio between the two components shows additional discrepancies. Indeed, we obtained a ratio of $F_B/F_A = 0.03$ in the $i^{\prime}$-band \citep{janson_noopsorta_2014}, corresponding to a mass-ratio of $M_{\rm B}/M_{\rm tot} = 0.44$. Given the magnitude difference displayed in other bands \citep[e.g. 
Figure 2 in ][]{montet_dynamical_2015}, the flux ratio is more likely to be $F_{\rm B}/F_{\rm A} = 0.2 \pm 0.1$, which translates to a mass-ratio of $M_{\rm B}/M_{\rm tot} = 0.62$. As a comparison, \citet{montet_dynamical_2015} found a mass-ratio of $\approx 0.4$. However, they find that the secondary B component is missing mass when compared to evolutionary models, which could be attributed to an unseen companion. Such a companion could also be the cause of the oddity that the mass-ratio we find is higher for the secondary than for the primary component. While all three of the YMG tools suggest different groups for the system, shown in Table~\ref{tab:YMGprob}, the LACEwING tool also produced a $\approx 37\,\%$ probability for BPMG membership, and most of the literature agrees that the system is a member of the BPMG. {\bf 2MASS J04595855-0333123} received a grade 4 (preliminary) from our orbital fitting. The MCMC orbital fitting did not exclude the low probability of a high-mass system, and we cut the mass distribution at $\leq 2\,M_\odot$. The BANYAN $\Sigma$-tool suggests the system to be in the field while LACEwING proposes the system to be a HYA member; either way, the system would be old in comparison to the younger systems in our sample. Although the system likely belongs to the field, the isochrones in the theoretical models predict a much higher mass for the system, which further points to the poor orbital constraints and the need for more information to obtain a better mass estimate. The models could however explain the lower dynamical mass if the age of the system were $\leq 50$ Myr. Nevertheless, the optical spectral types are similar to those of J0008 and J0245, which should indicate similar masses given their approximate ages. The near-IR spectra, on the other hand, imply an earlier spectral type for the primary component, whose mass could potentially be heavily underestimated. 
{\bf 2MASS J05320450-0305291} is part of a higher-order hierarchical sextuple system \citep{tokovinin_family_2022} and a strong candidate for membership of the BPMG according to the BANYAN $\Sigma$-tool, but also a potential member of ABDMG according to the convergence point tool. The orbital fit is still preliminary according to our grading scale, receiving a grade of 4 (preliminary), with most of the orbital phase not yet observed. Despite the poor orbital constraints, we obtained low minimised $\chi_{\nu,\,{\rm grid}}^2 = 1.3$ and $\chi^2_{\nu,\,{\rm MCMC}} = 1.6$, suggesting that the astrometry is well described by the orbital fit. However, we find some degeneracy, and the estimated period ranges from 20 to 100 years, where the shorter periods would suggest dynamical masses above $2\,M_\odot$, which are thus dubious. A longer period of $\sim 80$ years is more consistent with the results from \citet{tokovinin_family_2022}, where they provide two potential orbital solutions for either $P = 80$ or $P = 143$ years. We did not include the astrometric epochs from \citet{tokovinin_family_2022} in the MCMC fit, which explains the discrepancy compared to the grid model. {\bf 2MASS J06112997-7213388} received the worst grade of the orbital fits in our sample and is designated as indetermined. The system is however likely young, with its highest probability being a CAR member according to the BANYAN $\Sigma$-tool, with LACEwING suggesting either a COL or CAR member with the same probabilities. The convergence point tool however suggests the system to be a CARN member, which is likely just because the tool does not include CAR in its calculations. The spectral types derived from optical photometry and near-IR spectra are consistent with each other. The secondary component is estimated to move close to its apastron in the coming years and expected to show limited motion in its orbit. {\bf 2MASS J06134539-2352077} is a likely ARG member, supported by both the BANYAN $\Sigma$ and LACEwING tools. 
The astrometric data from the epochs between 2019-2020 were only included in the grid-search approach, which explains the discrepancy in the orbital parameters obtained by the two methods and the different masses obtained. However, both procedures obtained masses greater than the evolutionary-model prediction, suggesting that perhaps there is some missing mass and an unseen companion in the system. The SPHERE observations should have been able to detect massive companions of $\sim 0.2\,M_\odot$, or at least a non-uniform PSF, down to $\sim 0.5$ AU, which is not apparent from the only epoch taken with SPHERE for the system. The orbit from the grid-search method obtained a grade 2 (good) on our scale. {\bf 2MASS J07285137-3014490} is a well-studied system in the ABD moving group, to which we mainly contributed by adding additional astrometric data points and an updated parallax compared to the previous results in \citet{rodet_dynamical_2018}. The updated Gaia EDR3 parallax measurement helped to reduce the distance uncertainty to the system by a factor of $\approx 4$, resulting in a more precise mass estimate. Our results were consistent with those of \citet{rodet_dynamical_2018} for the most part, with the exception of a slightly higher mass-fraction in our case of $\frac{M_{\rm B}}{M_{\rm tot}} = 0.51\,^{+0.10}_{-0.08}$ compared to $\frac{M_{\rm B}}{M_{\rm tot}} = 0.46 \pm 0.10$ in \citet{rodet_dynamical_2018}, where we applied the same flux ratio of $0.2 \pm 0.01$. The missing mass in the secondary B component that \citet{rodet_dynamical_2018} found is accounted for in our case, which potentially could be caused by the different distances adopted, but the discrepancy in mass-fractions we found instead could also be explained by the same argument of an unseen companion. The system also possesses a notably high eccentricity of $e = 0.90$, which points towards significant dynamical interactions. 
Nevertheless, the existence of a third unseen companion is likely uncorrelated with the eccentricity, and the close encounters required to dynamically enhance the eccentricity would make the configuration of the system unstable \citep{rodet_dynamical_2018}. {\bf 2MASS J09075823+2154111} is a likely field binary, with no consensus on YMG membership from the different tools applied. Despite the astrometric data covering almost an entire period of 10-11 years, most of the data points are spread over a small change in position angle of $\approx 70^\circ$, resulting in only a grade 3 (tentative) orbital fit. The MCMC method found a reasonable total mass for the system, whereas the grid-search method obtained an orbit corresponding to a dynamical mass above $14\,M_\odot$, which we rule out based on the spectral type and photometry of the system. There is nothing obvious in the Gaia EDR3 data that would point to something being wrong with the estimated parallax of the system that could explain the high mass obtained from the grid-search orbital fit, and it would require a distance of $\approx 14$ pc to reduce the dynamical mass estimate from the grid-search to the same value as for the MCMC method. The astrometric data have rather large uncertainties compared to the rest of the sample, especially so for the epochs observed with NOT/FastCam. We are likely to see improvements for the orbit and better dynamical mass constraints for the system in the coming years; the period is well known, and even a single new epoch would greatly benefit new orbital parameter estimations for the system. {\bf 2MASS J09164398-2447428} received a grade of 3 (tentative) on our grading scale for its orbit, and we have astrometric epochs that cover more than one full revolution, allowing for a more accurate period estimation. The binary is likely to belong to the field rather than any known YMG. 
The RV data were consistent with Keplerian motion and the orbit, but the adopted flux-ratio suggested a much greater mass for the secondary component compared to the primary, with a mass ratio of $\frac{M_{\rm B}}{M_{\rm tot}} = 0.79\,^{+0.06}_{-0.07}$. The total mass of the system was consistent with the prediction from the theoretical models, with a slight overestimation of the photometric mass, which could imply that the system is actually younger than anticipated. However, since the RV-weighted flux-ratio was too dubious, we excluded the system from the mass-magnitude diagram in Figure~\ref{fig:isochrones}. {\bf 2MASS J10140807-7636327} is a likely CAR member according to the YMG tools, where the convergence point tool suggested CARN but does not include the nearby CAR group in its calculations. The optical photometric and near-IR spectral types were consistent with each other, with a slight preference towards earlier types according to the near-IR spectra. The astrometric data span more than 20 years, but the orbital period is expected to be much longer, and we obtained a grade 3 (tentative) for our orbital fits for the system, with the main uncertainties stemming from the period and the distance to the system. The dynamical mass was however consistent with the photometric mass obtained from the evolutionary models when adopting the age of the CAR moving group as suggested by the YMG tools. The system does not have a measured Gaia parallax, and instead we adopted the distance of $d = 69 \pm 2$ pc from \citet{malo_bayesian_2013} based on group member statistics, which is greater than the spectroscopic distance of $d \approx 14$ pc measured by \citet{riaz_identification_2006}. 
Our orbital fit favoured the greater distance, which corresponded to a higher mass, as the brightness and spectral types of the binary are incompatible with the dynamical mass estimated using the smaller distance, with the mass being well below the hydrogen-burning limit in that case. We did not assume any distance to the system when assessing YMG membership probabilities. However, the BANYAN $\Sigma$-tool places the system in the field if assuming the shorter distance of $d \approx 14$ pc from \citet{riaz_identification_2006}. {\bf 2MASS J10364483+1521394} is a well-studied triple system whose previous dynamical mass estimates showed a $\approx 30\,\%$ discrepancy between dynamical and photometric masses \citep{calissendorff_discrepancy_2017}. Only the outer BC binary pair was considered for our orbital fit here, which was well-constrained and obtained the grade 1 (reliable). The orbital period of the outer binary around the primary A star is likely hundreds of years, and that orbit is thus not yet ready for constraints. We found no obvious YMG membership for the system according to the YMG tools utilised; however, studies have suggested the system to be a UMA candidate member, and we therefore adopted the age of $400\pm 100$ Myr for the system. We did not procure RV data for the system; however, \citet{calissendorff_discrepancy_2017} previously measured the mass-ratio between the B and C components from the relative motion around the common centre of mass of the pair on the orbit around the primary, revealing the outer binary to be of both equal brightness and mass. The previous mass estimate had most of its error budget dominated by the uncertainty in the distance to the system, which has now been remedied by the Gaia EDR3 parallax. The updated distance also reduced the mass-discrepancy observed in \citet{calissendorff_discrepancy_2017}. 
The near-IR spectral types of the individual components derived in \citet{calissendorff_characterising_2020} were surprisingly off by more than $1\sigma$ from each other for the BC pair, but still within the error of the optical photometric spectral types from \citet{janson_astralux_2012}. The discrepancy in the near-IR spectral types could be due to one of the components being close to the edge of the detector and the PSF not being fully sampled. {\bf 2MASS J20163382-0711456} received a grade 4 (preliminary) orbital fit and is poorly constrained. The system has an entry in Gaia EDR3, but the parallax is dubious, with a distance above 1374 pc. An older entry in Gaia DR2 suggested the system to be at a distance of $d = 34.25 \pm 1.41$ pc, which we adopted for our calculations. Photometric spectral type analysis in the optical indicated early M0 and M2 types for the binary pair, which suggests that our dynamical mass estimate is an underestimate; this is also in agreement with the photometric mass, which is about 2-3 times higher than the dynamical mass from the preliminary orbital fit. RV data exist for the system but exhibit high jitter and were not helpful for constraining the orbit. {\bf 2MASS J21372900-0555082} had no Gaia parallax available, and we adopted the photometric parallax from \citet{lepine_all-sky_2011} for the system, which has a corresponding distance uncertainty of $30\,\%$. The orbital fit grade was 4 (preliminary), and the dynamical mass estimate has its uncertainty budget dominated by the distance error. It is likely that the orbital parameters are not constrained well enough, as the dynamical mass is lower than expected from the photometric mass from the models, but within the errors because of the uncertain distance. A distance of $\approx 18$ pc would bring the dynamical and photometric masses closer together. All three YMG tools suggested the system to be part of the field. 
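Several of the systems above (J0225, J0907, J2137) illustrate how strongly the dynamical mass depends on the adopted distance: the fit constrains the angular semi-major axis and the period, so $a \propto d$ and hence $M_{\rm tot} = a^3/P^2 \propto d^3$. A minimal sketch of this rescaling (helper name ours):

```python
def rescale_dynamical_mass(m_dyn, d_old, d_new):
    """Rescale a dynamical mass when the adopted distance changes.

    The orbital fit constrains the angular semi-major axis theta and the
    period P; since a = theta * d, the mass M = a^3 / P^2 scales as d^3.
    """
    return m_dyn * (d_new / d_old) ** 3

# Halving the adopted distance lowers the dynamical mass by a factor of 8:
print(rescale_dynamical_mass(0.8, 20.0, 10.0))  # 0.1
```

This cubic sensitivity is why a 30% distance uncertainty, as for J2137, dominates the mass error budget.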
{\bf 2MASS J23172807+1936469} has previously been suggested to be part of the BPMG \citep{malo_bayesian_2013, janson_noopsortborbital_2014}, but updated space velocity parameters with the BANYAN $\Sigma$-online tool suggest it to be a field system. The convergence point tool gave a weak indication that the system may be part of the THA group. Our orbital fit of the system was one of the more robust in our sample, receiving a grade 1 (reliable) on our scale, and the resulting orbital parameters were consistent with the previous orbit fitted in \citet{janson_noopsortborbital_2014}. The earlier results did not have access to Gaia parallaxes, and the distance estimate by \citet{lepine_new_2005} of $d = 11.6 \pm 2.4$ pc to the system was insufficient for precise mass estimates; its uncertainty alone would translate into a mass error of $50\,\%$ of the total mass. The two new astrometric data points included here did not change the results from the previous orbital fit, but here we also incorporated RV data into the fit and had access to the Gaia DR2 parallax, which allowed robust dynamical masses to be derived. {\bf 2MASS J23261182+1700082} displayed a discrepancy between the two orbital fitting procedures employed, where the grid-search favoured higher masses. We estimated the grade of the orbit as 4 (preliminary); the fits showed surprisingly small uncertainties in mass despite the two methods yielding such different masses. The mass from the MCMC orbit was more in line with the photometric mass from the evolutionary models, compared to the grid-search, which predicted a $\approx 30\,\%$ higher total dynamical mass in the system. Most of the YMG tools agreed that the system is more likely in the field, except for the convergence point tool, which gave some probability for it being a THA member. 
If we were to assume the system to have the same age as the THA group of $45\pm4$ Myr, the photometric mass would be reduced to $0.19 - 0.27\,M_\odot$, intensifying the discrepancy. The optical photometric spectral types were similar to those of J0111, and we would expect J232611 to have a mass not too dissimilar from it. New observations would greatly help reduce the large upper uncertainty in the orbital period that the grid-search obtained for the system. {\bf 2MASS J23261707+2752034} likely belongs to the field according to all three YMG tools. The system has the fewest observed epochs in our sample, just five, spanning almost an entire orbital revolution, and its orbital fit received grade 3 (tentative). The evolutionary models predicted a higher total mass than our dynamical mass estimates, and it is possible that we overestimated the field age for the system. New observations closer to periastron could help constrain the orbital parameters further. {\bf 2MASS J23495365+2427493} received a grade 4 (preliminary) for its orbital fit, and it remains too uncertain to determine whether the observed epochs are close to periastron or apastron, causing a large uncertainty in the fitted orbital period. The system had previously been estimated to be part of either the BPMG or Columba \citep{malo_bayesian_2013, janson_noopsortborbital_2014}, but updated Gaia EDR3 parameters and the BANYAN $\Sigma$-online tool suggest the system to belong to the field instead. However, the convergence point tool suggests the system to be a strong TWA candidate member. The dynamical masses from the grid-search and MCMC differed for the system, where the grid-search preferred higher masses and the MCMC lower masses compared to the photometric mass from the theoretical models. If we instead adopt the young TWA age of $\approx 10$ Myr, the photometric mass is similar to the mass obtained from the MCMC method. 
Nevertheless, the orbit was only constrained to a preliminary level and not yet good enough to make an adequate comparison. \section{Summary}\label{appendixE} \clearpage \begin{table*}[t] \renewcommand{\arraystretch}{1.3} \centering \caption{Summary} \begin{tabular}{l|cc|cc|ccc|c} \hline \hline Name$^{\rm grade}$ & \multicolumn{2}{|c|}{$P\,[{\rm yrs}]$} & \multicolumn{2}{|c|}{$a\,[{\rm AU}]$} & \multicolumn{3}{|c|}{$M_{\rm tot}\,[M_\odot]$} & $[M_{\rm B}/M_{\rm tot}]$ \\ & Grid & MCMC & Grid & MCMC & Grid & MCMC & Theoretical &\\ \hline J0008$^1$ & $5.92\pm0.01$ & $5.94^{+0.02}_{-0.01}$ & $2.60\,^{+0.05}_{-0.04}$ & $2.61 \pm0.01$ & $0.50\pm0.02$ & $0.50\,^{+0.04}_{-0.03}$ & $0.45-0.53$ &\\ J0111$^3$ & $41\,^{+ 20}_{ -10}$ & $56\,^{+1}_{-15}$ & $7.8\pm1.0$ & $7.5\,^{+1.2}_{-1.1}$ & $ 0.28\,^{+0.10}_{-0.11}$ & $0.15\,^{+0.11}_{-0.05}$ & $0.11-0.20$ &\\ J0225$^5$ & $20\,^{+ 20}_{ -3}$ & $49\,^{+43.6}_{-14.5}$ & $ 4.8\pm 0.3$ & $6.94\,^{+3.36}_{-1.75}$ & $0.27\,^{+0.05}_{-0.06}$ & $0.12\,^{+0.04}_{-0.03}$ & $0.22 - 0.26$ &\\ J0245$^3$ & $30\pm2$ & $69\,^{+43}_{-19}$ & $ 7.7\pm0.2$ & $14.22\,^{+0.73}_{-0.55}$ & $0.51\pm0.03$ & $0.55\,^{+0.04}_{-0.03}$ & $0.53 - 0.60$ &\\ J0437$^2$ & $26\,^{+6}_{-1}$ & $29.4\,^{+0.5}_{-0.4}$ & $ 8.6\,^{+0.1}_{-0.2}$ & $9.3\pm0.2$ & $ 0.92\,^{+0.06}_{-0.04}$ & $0.93\pm0.04$ & $0.91-1.09$ & $0.44\pm0.02$\\ J0459$^4$ & $ 28\,^{+ 6}_{ -16}$ & $32\,^{+1}_{-5}$ & $6.0\,^{+0.4}_{-0.3}$ & $6.38\,^{+0.27}_{-0.72}$ & $0.28\,^{+0.05}_{-0.03}$ & $0.26\,^{+0.06}_{-0.04}$ & $0.75-0.78$ & $0.24\,^{+0.29}_{-0.04}$\\ J0532$^4$ & $87\,^{+12}_{-4}$ & $26\,^{+15}_{-7}$ & $19.7\pm0.5$ & $13.92\,^{+2.65}_{-2.02}$ & $1.01\,\pm0.07$ & $1.87\,^{+0.10}_{-0.49}$ & $0.93-1.14$ & $0.41\,^{+0.05}_{-0.04}$\\ J0611$^5$ & $121\,^{+3304}_{ -99}$ & $57\,^{+41}_{-21}$ & $54\,^{+5635}_{ -40}$ & $19\,^{+8}_{-5}$ & $11\,^{+11}_{-8}$ & $1.1\,^{+0.60}_{-0.39}$ & $0.78 - 0.93$ &\\ J0613$^2$ & $13.2\,^{+0.2}_{-0.4}$ & $11.56\,^{+0.90}_{-0.73}$ & $4.62\,^{+0.06}_{-0.04}$ & 
$3.91\,^{+0.83}_{-0.11}$ & $0.57\pm0.02$ & $0.42\,^{+0.38}_{-0.15}$ & $0.28-0.34$ & $0.37\,^{+0.06}_{-0.05}$\\ J0728$^1$ & $7.79\pm0.03$ & $7.76\pm0.02$ & $ 4.03\pm0.04$ & $3.99\pm0.01$ & $1.08\,\pm0.03$ & $1.06\,\pm0.03$ & $1.05-1.11$ & $0.51\,^{+0.10}_{-0.08}$\\ J0907$^3$ & $10.2\,^{+2.0}_{-0.8}$ & $10.8\,^{+0.8}_{-0.6}$ & $ 11.3\,^{+0.3}_{-0.7}$ & $4.64\,^{+1.36}_{-0.81}$ & $14\,\pm1$ & $0.78\,^{+0.60}_{-0.35}$ & $0.69 - 0.85$ &\\ J0916$^3$ & $8.6\,^{+0.1}_{-0.2}$ & $8.76\,^{+0.09}_{-0.08}$ & $4.24\,^{+0.12}_{-0.11}$ & $4.27\,^{+0.06}_{-0.17}$ & $1.02\,^{+0.08}_{-0.07}$ & $1.01\,^{+0.12}_{-0.09}$ & $1.12 - 1.13$ & $0.79\,^{+0.06}_{-0.07}$\\ J1014$^3$ & $ 48\,^{+ 24}_{ -9}$ & $48.5\,^{+16.3}_{-6.5}$ & $13.5\,^{+1.3}_{-0.6}$ & $13.59\,^{+1.72}_{-0.41}$ & $1.10\,\pm0.10$ & $0.87\,^{+0.20}_{-0.13}$ & $0.83 - 1.12$&\\ J1036$^1$ & $8.56\pm0.02$ & $8.47\pm0.02$ & $2.92\pm0.03$ & $2.98\pm0.03$ & $0.34\,\pm0.01$ & $0.37\,\pm0.01$ & $0.37-0.38$ & $0.50\,\pm0.02$ \\ J2016$^4$ & $ 41\,^{+4371}_{ -26}$ & $36\,^{+26}_{-13}$ & $ 9\,^{+ 13}_{ -3}$ & $8.43\,^{+3.15}_{-1.48}$ & $0.45\,^{+0.10}_{-0.09}$ & $0.32\,^{+0.09}_{-0.07}$ & $0.97 - 1.03$ &\\ J2137$^4$ & $ 44\pm8$ & $69\,^{+29}_{-16}$ & $ 9\,^{+ 11}_{ -3}$ & $13.04\,^{+6.97}_{-6.00}$ & $0.4\,\pm0.4$ & $0.40\,^{+0.38}_{-0.37}$ & $0.53 - 0.56$ &\\ J2317$^1$ & $ 11.53\,^{+0.05}_{-0.03}$ & $11.55\pm0.02$ & $ 4.32\,^{+0.05}_{-0.06}$ & $4.32\,^{+0.08}_{-0.06}$ & $0.60\,\pm0.02$ & $0.61\pm0.03$ & $0.63-0.64$ & $0.45\,\pm0.03$\\ J232611$^4$ & $ 20\,^{+ 34}_{ -10}$ & $16.4\,^{+3.1}_{-1.9}$ & $ 6.3\,^{+0.4}_{-0.6}$ & $5.18\,^{+0.70}_{-0.59}$ & $0.63\,^{+0.12}_{-0.09}$ & $0.49\,\pm0.04$ & $0.42 - 0.49$ &\\ J232617$^3$ & $ 11.0\pm0.2$ & $10.95\,^{+0.16}_{-0.11}$ & $ 4.33\pm0.06$ & $4.33\,^{+0.13}_{-0.14}$& $0.67\,\pm0.03$ & $0.68\,^{+0.06}_{-0.05}$ & $0.77 - 0.87$ &\\ J2349$^4$ & $ 49\,^{+ 8}_{ -31}$ & $40\,^{+26}_{-13}$ & $ 11\,^{+ 1}_{ -2}$ & $8.53\,^{+2.28}_{-1.10}$ & $0.52\,^{+0.07}_{-0.05}$ & $0.23\,^{+0.17}_{-0.04}$ & $0.42 - 
0.44$ & \\ \hline \end{tabular} \label{tab:results} {\small\\ Orbital grades: (1) Reliable, (2) Good, (3) Tentative, (4) Preliminary, (5) Indetermined. } \end{table*} \end{appendix}
Title: Do chaotic field lines cause fast reconnection in coronal loops?
Abstract: Over the past decade, Boozer has argued that three-dimensional (3D) magnetic reconnection fundamentally differs from two-dimensional (2D) reconnection due to the fact that the separation between any pair of neighboring field lines almost always increases exponentially over distance in a 3D magnetic field. According to Boozer, this feature makes 3D field-line mapping chaotic and exponentially sensitive to small non-ideal effects; consequently, 3D reconnection can occur without intense current sheets. We test Boozer's theory via ideal and resistive reduced magnetohydrodynamic simulations of the Boozer-Elder coronal loop model driven by sub-Alfvenic footpoint motions [A. H. Boozer and T. Elder, Physics of Plasmas 28, 062303 (2021)]. Our simulation results significantly differ from their predictions. The ideal simulation shows that Boozer and Elder under-predict the intensity of current density due to missing terms in their reduced model equations. Furthermore, resistive simulations of varying Lundquist numbers show that the maximal current density scales linearly rather than logarithmically with the Lundquist number.
https://export.arxiv.org/pdf/2208.06965
\title{ Do chaotic field lines cause fast reconnection in coronal loops? } \author{Yi-Min Huang} \email{yiminh@princeton.edu} \affiliation{Department of Astrophysical Sciences and Princeton Plasma Physics Laboratory, New Jersey 08543, USA} \author{Amitava Bhattacharjee} \affiliation{Department of Astrophysical Sciences and Princeton Plasma Physics Laboratory, New Jersey 08543, USA} \section{Introduction} Magnetic reconnection is a fundamental mechanism that changes the topology of magnetic field lines and converts magnetic energy to plasma thermal and non-thermal energy. \citep{Biskamp2000,PriestF2000,ZweibelY2009,YamadaKJ2010,ZweibelY2016,JiDJLSY2022,PontinP2022} It is generally believed that this mechanism drives explosive phenomena in astrophysical, space, and laboratory plasmas, including solar flares, coronal mass ejections, geomagnetic substorms, and sawtooth crashes in fusion devices. Magnetic reconnection can be classified into two-dimensional (2D) and three-dimensional (3D) reconnection. Real-world magnetic reconnection takes place in 3D, but 2D reconnection is commonly employed as an approximation by assuming that the whole process depends only on two spatial coordinates. Two-dimensional reconnection occurs at an X-point (or X-line when extending along the direction of symmetry) where separatrices, which separate topologically different magnetic field lines, intersect. When magnetic stress builds up at the X-point prior to reconnection, a thin current sheet forms. Then, during reconnection, the field line velocity (i.e., a velocity field that carries field lines from one time to another) diverges at the X-point. \citep{PriestHP2003} In other words, the field line is cut and rejoined with another field line at the X-point. 
Compared with 2D problems, magnetic reconnection in 3D remains a conceptual challenge, especially when topological structures, such as magnetic null points and closed magnetic field lines, are absent.\citep{Pontin2011} In this situation, all magnetic field lines are topologically equivalent, and a continuous velocity field that preserves magnetic field line connectivity can always be found.\citep{Greene1993} That raises a fundamental question: how and where does magnetic reconnection occur if all field lines are topologically identical? Several ideas have been proposed to address this question, including the general magnetic reconnection theory\citep{SchindlerHB1988,HesseS1988,SchindlerHB1991} that uses parallel voltage as a metric for 3D reconnection rate and the concept of quasi-separatrix-layers (QSLs) defined as regions with high squashing factors of the field line mapping.\citep{TitovH2002,Titov2007,TitovFPML2009} Even though 3D reconnection is a vast and ongoing research topic, most theories share one common aspect: Just like in 2D reconnection, intense thin current sheets play a critical role in 3D reconnection. Through a series of publications over the past decade, Boozer has advocated a paradigm shift regarding 3D reconnection.\citep{Boozer2012,Boozer2012a,Boozer2013,Boozer2014,Boozer2018,Boozer2019,Boozer2021,Boozer2022} The gist of Boozer's proposal can be described as follows. In a 3D magnetic field, neighboring magnetic field lines generically exponentiate away from each other. The field-line flow, which is Hamiltonian, becomes chaotic. If we follow two field lines initially separated by an infinitesimal distance $\delta r(0)$, in most cases the separation grows exponentially as $\delta r(\ell)=e^{\sigma(\ell)}\delta r(0)$, where $\ell$ is the distance along the field line, and $\sigma(\ell)$ is a function that increases with distance overall, though in general non-monotonically. 
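As a toy numerical illustration of this field-line exponentiation (our own sketch, not from the paper, and using the standard chaotic ABC field as a stand-in for a generic 3D field rather than the Boozer--Elder loop field):

```python
import numpy as np

def abc_field(x, A=1.0, B=1.0, C=1.0):
    """Arnold-Beltrami-Childress field, a standard chaotic test field."""
    X, Y, Z = x
    return np.array([A * np.sin(Z) + C * np.cos(Y),
                     B * np.sin(X) + A * np.cos(Z),
                     C * np.sin(Y) + B * np.cos(X)])

def rk4_step(x, h):
    """One RK4 step of the field-line equation dx/dl = B/|B|."""
    def f(p):
        b = abc_field(p)
        return b / np.linalg.norm(b)
    k1 = f(x); k2 = f(x + 0.5 * h * k1)
    k3 = f(x + 0.5 * h * k2); k4 = f(x + h * k3)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def exponentiation(x0, delta0=1e-8, ell_max=50.0, h=0.01):
    """Accumulate sigma(l) = ln(delta r / delta r0), renormalising each step."""
    x = np.asarray(x0, dtype=float)
    y = x + np.array([delta0, 0.0, 0.0])
    sigma = 0.0
    for _ in range(int(ell_max / h)):
        x, y = rk4_step(x, h), rk4_step(y, h)
        d = np.linalg.norm(y - x)
        sigma += np.log(d / delta0)
        y = x + (y - x) * (delta0 / d)   # renormalise to avoid overflow
    return sigma

sigma = exponentiation([0.3, 0.2, 0.1])
print(sigma)  # positive for a stretching (chaotic) field-line trajectory
```

The accumulated $\sigma$ grows roughly linearly with $\ell$ on a chaotic trajectory, which is the exponentiation Boozer's argument relies on.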
Boozer has argued that under the condition of large field-line exponentiation, an exponentially small non-ideal effect will completely scramble the field-line mapping, leading to fast reconnection without the necessity of intense thin current sheets. Indeed, that fast reconnection can occur without intense thin current sheets is what sets Boozer's theory apart from ``traditional'' theories, to use Boozer's terminology.\citep{Boozer2022} Recently, Boozer and Elder \citep{BoozerE2021} have proposed a simple coronal loop model to test this new paradigm. In their model, the coronal loop is enclosed in a perfectly conducting cylinder of radius $a$ and finite length $0\le z\le L$. The initial magnetic field is uniform and points along the $\boldsymbol{\hat{z}}$ direction. The magnetic field lines are line-tied to the top and the bottom boundaries, which represent the photosphere. On the top boundary at $z=L$, a time-dependent flow is imposed; all other boundaries are static. The imposed boundary flow mimics photospheric convection and gradually entangles (or ``braids'') the field lines in the coronal loop. The braided magnetic field lines eventually reconnect and release the stored magnetic energy into plasma kinetic energy and heat. On the surface, the Boozer--Elder model is similar to Parker's model\citep{Parker1972,Parker1988} of coronal heating, but their predictions for the scenarios of magnetic reconnection and energy release are starkly different. Parker's scenario predicts that intense thin current sheets will develop throughout the coronal loop as a consequence of field line braiding. These thin current sheets cause numerous small-scale reconnection events, or ``nanoflares'' (as Parker called them), that heat the solar corona to millions of degrees. Thin current sheets play a crucial role in Parker's scenario of coronal heating. 
In fact, Parker argued that the thin current sheets will become singular (i.e., take the form of Dirac $\delta$-functions) in the ideal-MHD limit when the resistivity vanishes.\citep{Parker1994} Parker's prediction of ideal singular current sheets, often dubbed the ``Parker problem,'' has remained controversial for several decades, continuing to this day.\citep{VanBallegooijen1985,ZweibelL1987,LongcopeC1996,LongbottomRCS1998,NgB1998,CraigS2005,Low2006a,Low2007,JanseL2009,HuangBZ2009,HuangBZ2010,AlyA2010,Low2010,Low2010a,JanseLP2010,Low2011,PontinH2012,CraigP2014,CandelaresiPH2015,ZhouHQB2018,PontinH2020} Contrary to Parker's nanoflare scenario, intense thin current sheets play no significant role in the Boozer--Elder scenario. While Boozer and Elder also predict that the electric current distribution will form thin ribbons, the current density does not become very intense and increases only linearly in time. In their scenario, the separation of neighboring field lines, which increases exponentially in time, plays a dominant role in triggering the onset of fast reconnection. The Boozer--Elder model is a welcome new development of Boozer's theory because it provides concrete and testable predictions. The primary objective of this study is to test some of these predictions. Specifically, we will focus on two distinct predictions, one related to ideal evolution and the other to resistive evolution. For the ideal evolution, the model predicts that the current density will increase linearly in time while the separation of neighboring field lines will grow exponentially in time. 
With a small but finite resistivity, because the exponential field-line separation amplifies the field line velocity, thereby speeding up reconnection, the time scale for the onset of fast reconnection and therefore the current density will scale logarithmically with the Lundquist number.\footnote{A.~H.~Boozer, private communication (2022).} This logarithmic scaling relation for the current density is perhaps the most striking difference between Boozer's theory and traditional reconnection theories. This paper is organized as follows. Section \ref{sec:Reduced-Magnetohydrodynamics-Mod} outlines the reduced magnetohydrodynamic (RMHD) model and the imposed footpoint motions. Section \ref{sec:Quasi-Static-Evolution} lays out the governing equations for the ideal and resistive quasi-static evolution of a coronal loop driven by footpoint motions. In Section \ref{sec:Ideal-Evolution}, we test Boozer and Elder's current density calculation with an ideal RMHD simulation. In Section \ref{sec:Resistive-Evolution}, we present resistive simulations to test the scaling of current density with the Lundquist number and investigate whether chaotic field line separation causes the onset of reconnection. We conclude in Section \ref{sec:Conclusion}. \section{Reduced Magnetohydrodynamics Model \label{sec:Reduced-Magnetohydrodynamics-Mod}} We employ the standard reduced magnetohydrodynamics (RMHD) model \citep{KadomtsevP1974,Strauss1976,VanBallegooijen1985} in this study. The RMHD model assumes a strong uniform guide field along the $z$ direction and that spatial scales along the guide field are much longer than those in the perpendicular directions. 
Under these assumptions, the MHD equations can be simplified to a set of two equations \begin{equation} \partial_{t}\Omega+\left[\phi,\Omega\right]=\partial_{z}J+\left[A,J\right]+\nu\nabla_{\perp}^{2}\Omega-\lambda\Omega,\label{eq:RMHD-momentum} \end{equation} \begin{equation} \partial_{t}A=\partial_{z}\phi+\left[A,\phi\right]+\eta\nabla_{\perp}^{2}A.\label{eq:RMHD-faraday} \end{equation} Here, we normalize the strength of the guide field to unity. The operator $\nabla_{\perp} \equiv \boldsymbol{\hat{x}}\partial_x + \boldsymbol{\hat{y}}\partial_y$ denotes the gradient operator in the directions perpendicular to the guide field. The magnetic field is expressed in terms of the flux function $A$ through the relation $\boldsymbol{B}=\boldsymbol{\hat{z}}+\nabla_{\perp}A\times\boldsymbol{\hat{z}}$. The plasma velocity $\boldsymbol{u}$ is expressed in terms of the stream function $\phi$ as $\boldsymbol{u}=\nabla_{\perp}\phi\times\boldsymbol{\hat{z}}$. The vorticity and the electric current density along the $z$ direction are given by $\Omega\equiv-\nabla_{\perp}^{2}\phi$ and $J\equiv-\nabla_{\perp}^{2}A$, respectively. The Poisson bracket is defined as $\left[f,g\right]=\partial_{y}f\partial_{x}g-\partial_{x}f\partial_{y}g$. Dissipation is introduced by including the resistivity $\eta$, the viscosity $\nu$, and a friction coefficient $\lambda$. 
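The perpendicular operators above are easy to evaluate pseudospectrally on a doubly periodic box. The following NumPy sketch (grid size and test function are arbitrary choices, not the production setup) implements $\nabla_{\perp}^{2}$ and the Poisson bracket in the convention above, and verifies $J=-\nabla_{\perp}^{2}A$ on an analytic flux function.

```python
import numpy as np

N = 64
x = np.arange(N) / N                          # unit box, periodic
X, Y = np.meshgrid(x, x, indexing='ij')
k = 2 * np.pi * np.fft.fftfreq(N, d=1.0 / N)  # wavenumbers for the unit box
KX, KY = np.meshgrid(k, k, indexing='ij')

def ddx(f):
    return np.real(np.fft.ifft2(1j * KX * np.fft.fft2(f)))

def ddy(f):
    return np.real(np.fft.ifft2(1j * KY * np.fft.fft2(f)))

def laplacian(f):
    # spectral evaluation of the perpendicular Laplacian
    return np.real(np.fft.ifft2(-(KX**2 + KY**2) * np.fft.fft2(f)))

def bracket(f, g):
    # Poisson bracket [f, g] = f_y g_x - f_x g_y (convention of the text)
    return ddy(f) * ddx(g) - ddx(f) * ddy(g)

# analytic flux function: J = -laplacian(A) should equal 8*pi^2*A
A = np.sin(2 * np.pi * X) * np.cos(2 * np.pi * Y)
J = -laplacian(A)
```

For single-mode test functions these spectral derivatives are exact to round-off, which makes the identities easy to check.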
The RMHD model is widely used in analytic and numerical studies of Parker's coronal heating model \citep{VanBallegooijen1985,StraussO1988,LongcopeS1994a,LongcopeS1994b,NgB1998,DmitrukG1999,DmitrukGM2003,RappazzoVED2007,RappazzoVED2008,NgB2008,NgLB2012,RappazzoP2013} and also in the study of Boozer and Elder.\citep{BoozerE2021} From the definitions of $\boldsymbol{B}$ and $\boldsymbol{u}$ and the Poisson bracket, we have the following useful relations \begin{equation} \left[\phi,f\right]=\boldsymbol{u}\cdot\nabla f\label{eq:udg} \end{equation} and \begin{equation} \partial_{z}f+\left[A,f\right]=\boldsymbol{B}\cdot\nabla f\label{eq:bdg} \end{equation} for an arbitrary variable $f$. The RMHD equations are solved with the DEBSRX code,\citep{HuangBB2014} which is a reduced version of the compressible MHD code DEBS.\citep{SchnackBMHCN1986} The $x$--$y$ plane is discretized with a Fourier pseudospectral method. The $z$ direction is discretized with a finite-difference scheme where $\phi$ and $A$ reside on staggered grids. The timestepping scheme is a semi-implicit, predictor-corrector leapfrog method where $\phi$ and $A$ are staggered in time. While the semi-implicit scheme allows time-steps to be larger than the limit set by the Courant--Friedrichs--Lewy (CFL) condition,\citep{CourantFL1928} too large a time-step may compromise accuracy. For this reason, we have been conservative in setting the time-step. We determine the time-step dynamically by using the CFL condition of the Alfv\'en wave along the guide field and that of the perpendicular flow speeds, whichever gives a smaller time-step. We have tested the accuracy of this choice by reducing the time-step by a factor of two for selected runs and have seen no significant difference. We assume that the system is bounded in the $z$ direction by two conducting plates at $z=0$ and $z=L$. The $x$--$y$ plane is a $1\times1$ box ($0\le x,y\le1$) with doubly periodic boundary conditions. 
We take $L=10$ in this study. The bottom boundary at $z=0$ is stationary for all time; i.e., we impose the boundary condition $\phi=\phi_{b}=0$. At the top boundary of the simulation box ($z=L$), we impose a time-dependent flow given by the stream function \begin{align} \phi_{t}= & \left(\cos\left(2\pi x\right)-1\right)\left(\cos\left(2\pi y\right)-1\right)\nonumber \\ & \left[c_{0}\sin\left(\omega_{0}t\right)+c_{1}\sin\left(2\pi x\right)\sin\left(\omega_{1}t\right)\right.\nonumber \\ & \left.+c_{2}\sin\left(2\pi y\right)\sin\left(\omega_{2}t\right)\right.\nonumber \\ & \left.+c_{3}\sin\left(2\pi x\right)\sin\left(2\pi y\right)\sin\left(\omega_{3}t\right)\right],\label{eq:boundary_phi} \end{align} where $c_{i}$ and $\omega_{i}$ are constants. This boundary flow is similar, but not identical, to the flow employed by Boozer and Elder.\citep{BoozerE2021} Because the DEBSRX code is limited to doubly-periodic systems in the perpendicular directions whereas Boozer and Elder consider a cylindrical domain, we cannot impose the same boundary conditions as they do. The difference in the boundary conditions, however, is not germane to the issue we will discuss. Note that the boundary flow is localized to the middle and vanishes at the edges of the simulation box in the perpendicular directions. This feature is qualitatively similar to the flow employed by Boozer and Elder. We start the simulations with only the guide field. The imposed boundary flow drags the footpoints and gradually entangles the field lines. From solar observations, it is known that the time scales of footpoint motions are much longer than the Alfv\'en transit time along the coronal loop. We set the constants to $c_{0}=0$, $c_{1}=c_{2}=c_{3}=0.00025$, $\omega_{1}=0.0125\sqrt{2}$, $\omega_{2}=0.0125\sqrt{3}$, and $\omega_{3}=0.0125\sqrt{5}$. For this set of parameters, the maximal footpoint speed is typically of the order of $0.01$, corresponding to an advection time scale on the order of $100$. 
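As a sanity check on the quoted footpoint speed, one can evaluate Eq.~(\ref{eq:boundary_phi}) with these constants (recall $c_{0}=0$) and estimate $\boldsymbol{u}=\nabla_{\perp}\phi\times\boldsymbol{\hat{z}}=(\partial_{y}\phi,-\partial_{x}\phi)$ by central differences. The grid resolution and sampling times in this sketch are arbitrary choices.

```python
import numpy as np

# constants quoted in the text (c0 = 0, so the omega_0 term drops out)
c1 = c2 = c3 = 0.00025
w1, w2, w3 = 0.0125*np.sqrt(2), 0.0125*np.sqrt(3), 0.0125*np.sqrt(5)

def phi_top(x, y, t):
    # stream function of the imposed boundary flow with c0 = 0
    env = (np.cos(2*np.pi*x) - 1.0) * (np.cos(2*np.pi*y) - 1.0)
    return env * (c1*np.sin(2*np.pi*x)*np.sin(w1*t)
                  + c2*np.sin(2*np.pi*y)*np.sin(w2*t)
                  + c3*np.sin(2*np.pi*x)*np.sin(2*np.pi*y)*np.sin(w3*t))

# u = grad(phi) x z-hat = (d phi/dy, -d phi/dx); central differences
N = 256
h = 1.0 / N
x = np.arange(N) * h
X, Y = np.meshgrid(x, x, indexing='ij')

def max_speed(t):
    phi = phi_top(X, Y, t)
    ux = (np.roll(phi, -1, axis=1) - np.roll(phi, 1, axis=1)) / (2*h)   # d phi/dy
    uy = -(np.roll(phi, -1, axis=0) - np.roll(phi, 1, axis=0)) / (2*h)  # -d phi/dx
    return float(np.max(np.hypot(ux, uy)))

peak = max(max_speed(t) for t in range(0, 1000, 50))
```

Sampling the peak speed over a range of times yields values between a few times $10^{-3}$ and a few times $10^{-2}$, consistent with the order-of-magnitude estimate quoted above.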
In contrast, the Alfv\'en transit time along the coronal loop is $10$. Therefore, the condition for the separation of time scales is met. We also make the ratios between frequencies $\omega_{i}/\omega_{j}$ irrational so that the flow pattern will not repeat itself, but this choice is not crucial for the purpose of this study. \section{Quasi-Static Evolution of Coronal Loops\label{sec:Quasi-Static-Evolution}} Under the condition that the characteristic time scales of footpoint motions are much longer than the Alfv\'en transit time, the coronal loop will evolve quasi-statically, provided that the coronal loop is not close to an instability threshold. For a quasi-static coronal loop, we may assume that the plasma inertia as well as the viscous and frictional forces are negligible; therefore, the magnetic force-free condition (within the RMHD approximation) \begin{equation} \boldsymbol{B}\cdot\nabla J=\partial_{z}J+\left[A,J\right]=0\label{eq:force-balance} \end{equation} is satisfied for all time. The governing equations for quasi-static evolution are now the force-free condition (\ref{eq:force-balance}) together with the induction equation (\ref{eq:RMHD-faraday}). In this set of equations, the time derivative only appears in the induction equation (\ref{eq:RMHD-faraday}), whereas the force-free condition plays a role analogous to the incompressibility constraint, $\nabla\cdot\boldsymbol{u}=0$. The quasi-static evolution of coronal loops driven by slow footpoint motions can be determined as follows. 
First, taking the time derivative $\partial_{t}$ of Eq.~(\ref{eq:force-balance}) yields \begin{equation} \boldsymbol{B}\cdot\nabla\partial_{t}J+\left[\partial_{t}A,J\right]=0.\label{eq:dt_force_balance} \end{equation} We can obtain an equation for $\partial_{t}J$ by applying the $-\nabla_{\perp}^{2}$ operator on Eq.~(\ref{eq:RMHD-faraday}), yielding \begin{equation} \partial_{t}J=-\nabla_{\perp}^{2}(\boldsymbol{B}\cdot\nabla\phi)+\eta\nabla_{\perp}^{2}J.\label{eq:dtJ1} \end{equation} Now, we can use Eq.~(\ref{eq:RMHD-faraday}) and Eq.~(\ref{eq:dtJ1}) to eliminate the time derivatives in Eq.~(\ref{eq:dt_force_balance}) and obtain the following equation for $\phi$ \begin{equation} \mathcal{L}\phi=\eta\boldsymbol{B}\cdot\nabla\left(\nabla_{\perp}^{2}J\right),\label{eq:qs3} \end{equation} where the linear operator $\mathcal{L}$ is defined as \begin{equation} \mathcal{L}\phi\equiv\boldsymbol{B}\cdot\nabla\left(\nabla_{\perp}^{2}(\boldsymbol{B}\cdot\nabla\phi)\right)-\left[\boldsymbol{B}\cdot\nabla\phi,J\right].\label{eq:L} \end{equation} At a given instant, if the operator $\mathcal{L}$ is invertible subject to the boundary conditions $\phi|_{z=0}=\phi_{b}$ and $\phi|_{z=L}=\phi_{t}$, we can in principle obtain the stream function $\phi$ (and the flow $\boldsymbol{u}=\nabla_{\perp}\phi\times\boldsymbol{\hat{z}}$) that will carry the system to another force-free equilibrium at the next instant. In the ideal limit $\eta\to0$, Eq.~(\ref{eq:qs3}) is identical to the equation derived by van Ballegooijen in his study of the Parker problem.\citep{VanBallegooijen1985,NgB1998} The operator $\mathcal{L}$ is the one that appears, unsurprisingly, also in the ideal linear stability problem of the instantaneous equilibrium, which takes the form \begin{equation} \mathcal{L}\phi=\gamma^{2}\nabla_{\perp}^{2}\phi.\label{eq:linear_stability} \end{equation} Here, a $\phi\sim e^{\gamma t}$ time-dependence and homogeneous boundary conditions $\phi|_{z=0}=0$ and $\phi|_{z=L}=0$ are assumed. 
The operators $\mathcal{L}$ and $\nabla_{\perp}^{2}$ in Eq.~(\ref{eq:linear_stability}) are self-adjoint; therefore, the eigenvalues $\gamma^{2}$ are real and the eigenfunctions form a complete set; functions $\phi_{m}$ and $\phi_{n}$ of different eigenvalues satisfy the orthogonality condition \begin{equation} \int\nabla_{\perp}\phi_{m}^{*}\cdot\nabla_{\perp}\phi_{n}d^{3}x=0,\label{eq:orthogonal} \end{equation} while those with the same eigenvalues can be orthogonalized as well.\citep{ArfkenWH2013} Suppose the complete set of eigenfunctions and eigenvalues $\{\phi_{n},\gamma_{n}^{2}\}$ is known. We can formally solve Eq.~(\ref{eq:qs3}) by making an eigenfunction expansion \footnote{The eigenfunction expansion should include the contribution of continuous spectra if they exist. For continuous spectra, the summation in Eq.~(\ref{eq:eigen_expansion}) should be replaced by an integral. However, continuous spectra are often associated with toroidal magnetic fields with nested flux surfaces. They are likely to be absent in the coronal loop model of this study due to the line-tied boundary condition. } \begin{equation} \phi=\sum_{n}a_{n}\phi_{n}+\tilde{\phi}.\label{eq:eigen_expansion} \end{equation} Here, $\tilde{\phi}$ is an arbitrary smooth function satisfying the inhomogeneous boundary conditions $\phi|_{z=0}=\phi_{b}$ and $\phi|_{z=L}=\phi_{t}$ [e.g., $\tilde{\phi}=\phi_{b}+\left(\phi_{t}-\phi_{b}\right)z/L$]. Using the orthogonality of eigenfunctions, we can determine the coefficients $a_{n}$ as \begin{equation} a_{n}=-\frac{\int\phi_{n}^{*}\left(\eta\boldsymbol{B}\cdot\nabla\left(\nabla_{\perp}^{2}J\right)-\mathcal{L}\tilde{\phi}\right)d^{3}x}{\gamma_{n}^{2}\int\nabla\phi_{n}^{*}\cdot\nabla\phi_{n}d^{3}x}.\label{eq:an} \end{equation} From Eq.~(\ref{eq:an}), we conclude that Eq.~(\ref{eq:qs3}) is solvable provided that the system is not at marginal stability, i.e., none of the eigenvalues $\gamma_{n}^{2}$ vanish. 
\section{Quasi-Static Ideal Evolution\label{sec:Ideal-Evolution}} The above discussion shows that determining the quasi-static evolution of this model requires solving Eq.~(\ref{eq:qs3}) over the whole domain at each time step. On the other hand, Boozer and Elder\citep{BoozerE2021} take a different approach for the quasi-static ideal evolution that only involves solving an equation for the current density at the top boundary. Their approach is as follows. First, using Eqs.~(\ref{eq:udg}), (\ref{eq:bdg}), and the definition of the Poisson bracket, the term $-\nabla_{\perp}^{2}(\boldsymbol{B}\cdot\nabla\phi)$ in Eq.~(\ref{eq:dtJ1}) can be written as \begin{align} -\nabla_{\perp}^{2}(\boldsymbol{B}\cdot\nabla\phi)= & -\partial_{z}\nabla_{\perp}^{2}\phi+\left[A,-\nabla_{\perp}^{2}\phi\right]+\left[-\nabla_{\perp}^{2}A,\phi\right]\nonumber \\ & -\partial_{y}\nabla_{\perp}A\cdot\partial_{x}\nabla_{\perp}\phi+\partial_{x}\nabla_{\perp}A\cdot\partial_{y}\nabla_{\perp}\phi\nonumber \\ = & \boldsymbol{B}\cdot\nabla\Omega-\boldsymbol{u}\cdot\nabla J\nonumber \\ & -\partial_{y}\boldsymbol{B}_{\perp}\cdot\partial_{x}\boldsymbol{u}+\partial_{x}\boldsymbol{B}_{\perp}\cdot\partial_{y}\boldsymbol{u}.\label{eq:del2_Bdphi} \end{align} Boozer and Elder ignore the terms $-\partial_{y}\boldsymbol{B}_{\perp}\cdot\partial_{x}\boldsymbol{u}+\partial_{x}\boldsymbol{B}_{\perp}\cdot\partial_{y}\boldsymbol{u}$, whereupon Eq.~(\ref{eq:dtJ1}) becomes (here, we set $\eta=0$ for the ideal evolution) \begin{equation} \partial_{t}J+\boldsymbol{u}\cdot\nabla J=\boldsymbol{B}\cdot\nabla\Omega.\label{eq:BE1} \end{equation} Because of the force-free condition $\boldsymbol{B}\cdot\nabla J=0$ and the ideal frozen-in condition, the left-hand-side of Eq.~(\ref{eq:BE1}) is constant along a field line. Consequently, the right-hand-side $\boldsymbol{B}\cdot\nabla\Omega$ is also constant along a field line. 
For the Boozer--Elder model, the vorticity at the bottom boundary vanishes and the vorticity at the top boundary is prescribed. Therefore, along a field line, $\boldsymbol{B}\cdot\nabla\Omega=\Omega|_{z=L}/L$ at any instant. Boozer and Elder then consider Eq.~(\ref{eq:BE1}) at $z=L$, yielding \begin{equation} \left(\partial_{t}J+\boldsymbol{u}\cdot\nabla J\right)_{z=L}=\frac{\Omega|_{z=L}}{L}.\label{eq:BE} \end{equation} This equation (hereafter the BE equation) plays a critical role in the study of Boozer and Elder.\footnote{The derivation of the BE equation given by Boozer and Elder is slightly different from ours. They start by writing Eq.~(\ref{eq:RMHD-faraday}) (with $\eta=0$) in Lagrangian coordinates as $\left(\partial_{t}A\right)_{L}=\left(\partial_{z}\phi\right)_{L}$. Here, the subscript $L$ implies the use of Lagrangian coordinates. Next, they apply $-\nabla_{\perp}^{2}$ (also expressed in Lagrangian coordinates) on both sides and obtain $\left(\partial_{t}J\right)_{L}=\left(\partial_{z}\Omega\right)_{L}$, which is equivalent to Eq.~(\ref{eq:BE1}). However, this derivation neglects the fact that the Laplacian $\nabla_{\perp}^{2}$ has explicit time-dependence when it is expressed in Lagrangian coordinates because the Lagrangian coordinates themselves are time-dependent. As such, the Laplacian $\nabla_{\perp}^{2}$ and the time derivative $\partial_{t}$ do not commute. Consequently, $\left(\partial_{t}J\right)_{L}=\left(-\partial_{t}\nabla_{\perp}^{2}A\right)\neq\left(-\nabla_{\perp}^{2}\partial_{t}A\right)_{L}$, implying that some terms are missing in their derivation. This same issue of missing terms also appears in some other publications, e.g., Eqs.~(D4) and (D5) of Ref.~[\onlinecite{Boozer2018}] and Eq.~(52) of Ref.~[\onlinecite{Boozer2022}].} The BE equation takes the form of an advection equation for the current density $J$ on the left-hand-side, whereas the right-hand-side serves as a source term. 
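Because the advection term only rearranges $J$ while the source adds at most $\max|\Omega|/L$ per unit time along each trajectory, the BE equation implies the bound $|J|\le\max|\Omega|\,t/L$, i.e., at most linear growth of the current density. A minimal pseudospectral integration of an advection equation of this form illustrates the bound; the steady stream function, grid, and time span here are hypothetical choices and not those of the simulations in this paper.

```python
import numpy as np

# hypothetical setup: steady cellular footpoint flow on a periodic box
Ngrid, L, dt, T = 128, 10.0, 0.05, 40.0
x = np.arange(Ngrid) / Ngrid
X, Y = np.meshgrid(x, x, indexing='ij')
k = 2*np.pi*np.fft.fftfreq(Ngrid, d=1.0/Ngrid)
KX, KY = np.meshgrid(k, k, indexing='ij')
mask = (np.abs(KX) < 2*np.pi*Ngrid/3) & (np.abs(KY) < 2*np.pi*Ngrid/3)  # 2/3 dealiasing

def ddx(f):
    return np.real(np.fft.ifft2(1j*KX*np.fft.fft2(f)*mask))

def ddy(f):
    return np.real(np.fft.ifft2(1j*KY*np.fft.fft2(f)*mask))

phi = 0.01*np.sin(2*np.pi*X)*np.sin(2*np.pi*Y)   # stream function (arbitrary)
ux, uy = ddy(phi), -ddx(phi)                     # u = grad(phi) x z-hat
Omega = 8*np.pi**2*phi                           # Omega = -lap(phi) for this phi
src = Omega / L                                  # source term of the BE equation

def rhs(J):
    # dJ/dt = -u.grad(J) + Omega/L
    return -(ux*ddx(J) + uy*ddy(J)) + src

J = np.zeros((Ngrid, Ngrid))
for _ in range(int(T/dt)):                       # classical RK4 time stepping
    k1 = rhs(J)
    k2 = rhs(J + 0.5*dt*k1)
    k3 = rhs(J + 0.5*dt*k2)
    k4 = rhs(J + dt*k3)
    J += dt*(k1 + 2*k2 + 2*k3 + k4)/6

bound = np.max(np.abs(Omega))*T/L                # |J| <= Omega_max * t / L
```

At the stagnation points of this cellular flow the current density grows at exactly the maximal source rate, so the numerical solution approaches the linear-in-time bound without exceeding it (up to discretization error).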
Because the flow velocity $\boldsymbol{u}$ (and therefore the vorticity $\Omega$) is prescribed at the top boundary, the BE equation can be solved to determine the current density distribution at the top boundary without further knowledge of what occurs in the remaining part of the system. That is not the case if the missing terms are included, because $\boldsymbol{B}\cdot\nabla\Omega$ will no longer be a constant along a field line. In other words, it is not possible to amend the BE equation simply by including those terms. The advection term on the left-hand-side of the BE equation only redistributes the current density without changing its magnitude. The peak current density can increase only through the source term on the right-hand-side. Hence, the current density is bounded by $\left|J\right|\le\Omega_{\text{max}}t/L$, where $\Omega_{\text{max}}$ is an upper bound of $\left|\Omega\right|$ at the top boundary. In other words, the current density increases at most linearly in time according to the BE equation. Because some terms are neglected when deriving the BE equation, the question now is: are those terms negligible? We address this question by comparing solutions of the BE equation with those of the RMHD equations. The BE equation is solved using a pseudospectral method implemented with the Dedalus framework;\citep{BurnsVOLB2020} the grid resolution is $1024^{2}$. The RMHD equations are solved with the DEBSRX code with a grid resolution of $1024^{3}$. We set $\eta=0$ in the DEBSRX simulation, but a small viscosity $\nu=10^{-6}$ is applied for numerical stability. The numerical algorithms of the Dedalus implementation for the BE equation and the DEBSRX code for the RMHD equations are similar. Both use a Fourier pseudospectral method dealiased by the Orszag two-thirds rule\citep{Boyd2001} in the $x$--$y$ plane, so the numerical errors should be similar. Both simulations have been compared with numerical solutions at lower resolutions ($256^2$ and $512^2$ for BE; $256^3$ and $512^3$ for RMHD) to ensure that the results presented here are well-resolved and converged. Even though the time scales of footpoint motions are longer than the Alfv\'en transit time by approximately one order of magnitude, the RMHD calculation is not exactly force-free. To ensure that the RMHD solution remains close to force-free, we restart from a few selected snapshots, set the plasma speed to zero at the footpoints and across the entire domain, then turn on the friction force and let the system relax to a force-free equilibrium. This experiment shows that the RMHD solution remains approximately force-free up to $t=200$, thereby ensuring a fair comparison between the BE and the RMHD solutions. We now compare the RMHD solution at the top boundary with the BE solution and summarize the results in Figure \ref{fig:jz-comparison}. Panel (a) shows the time histories of the maximum current density obtained from both sets of solutions. The maximum current densities from both solutions agree until $t=150$ and then depart significantly afterward. Furthermore, the current density in the RMHD solution increases significantly faster than predicted by the BE equation. Panel (b) shows snapshots from both sets of solutions at two representative times, $t=140$ and $t=200$. Although the two solutions give essentially the same maximum current density at $t=140$, we can already see differences between the two solutions. The differences become quite pronounced at $t=200$. By that time, thin current sheets have developed in the RMHD solution but are absent in the BE solution. Our results show that the neglected terms in the derivation of the BE equation are not negligible. 
Therefore, to determine the current density at the top boundary, we must solve for it over the entire 3D domain. Importantly, the BE equation significantly under-predicts the current density of the RMHD solution. As we will see in the next section, these intensifying current sheets eventually lead to the onset of reconnection. \section{Resistive Evolution\label{sec:Resistive-Evolution}} \begin{table*}[t] \begin{centering} \begin{tabular}{cccccccccccc} \toprule Run & $S$ & $P_m$ & Resolution & $W_{\eta}$ & $W_{\nu}$ & $W_{\eta}+W_{\nu}$ & $W_{P}$ & $E_{M}$ & $E_{K}$ & $J_{1/2}$ & $V_{1/2}/V$\tabularnewline \midrule A & $10^{4}$ & $1$ & $512^{3}$ & $1.36\times10^{-2}$ & $2.66\times10^{-4}$ & $1.38\times10^{-2}$ & $1.46\times10^{-2}$ & $7.96\times10^{-4}$ & $7.74\times10^{-6}$ & 0.272 & $5.13\%$\tabularnewline B & $4\times10^{4}$ & $1$ & $512^{3}$ & $1.24\times10^{-2}$ & $9.37\times10^{-5}$ & $1.25\times10^{-2}$ & $1.38\times10^{-2}$ & $1.32\times10^{-3}$ & $7.70\times10^{-6}$ & 0.531 & $4.81\%$\tabularnewline C & $10^{5}$ & $1$ & $512^{3}$ & $1.18\times10^{-2}$ & $3.07\times10^{-4}$ & $1.21\times10^{-2}$ & $1.43\times10^{-2}$ & $2.12\times10^{-3}$ & $7.57\times10^{-6}$ & 0.803 & $3.61\%$\tabularnewline D1 & $4\times10^{5}$ & $1$ & $512^{3}$ & $1.13\times10^{-2}$ & $1.22\times10^{-3}$ & $1.25\times10^{-2}$ & $1.64\times10^{-2}$ & $3.73\times10^{-3}$ & $8.03\times10^{-6}$ & 2.74 & $0.618\%$\tabularnewline D2 & $4\times10^{5}$ & $1$ & $768^{3}$ & $1.14\times10^{-2}$ & $1.21\times10^{-3}$ & $1.26\times10^{-2}$ & $1.64\times10^{-2}$ & $3.66\times10^{-3}$ & $8.39\times10^{-6}$ & 2.91 & $0.552\%$\tabularnewline D3 & $4\times10^{5}$ & $1$ & $1024^{3}$ & $1.14\times10^{-2}$ & $1.21\times10^{-3}$ & $1.26\times10^{-2}$ & $1.63\times10^{-2}$ & $3.64\times10^{-3}$ & $8.34\times10^{-6}$ & 2.90 & $0.550\%$\tabularnewline E & $10^{6}$ & $1$ & $1024^{3}$ & $1.13\times10^{-2}$ & $2.13\times10^{-3}$ & $1.34\times10^{-2}$ & $1.79\times10^{-2}$ & $4.37\times10^{-3}$ & $8.97\times10^{-6}$ & 9.69 & $0.163\%$\tabularnewline \bottomrule \end{tabular} \par\end{centering} \caption{Parameters and energy diagnostic results of simulations reported in this paper. 
Here, $W_{\eta}=\int \eta J^2 \, d^3x \, dt$ and $W_{\nu}=\int \nu \Omega^2 \, d^3x \, dt$ are the resistive and viscous dissipation during the whole period of each simulation, respectively. The Poynting energy input through the top boundary is $W_{P}=\int \boldsymbol{B}_\perp \cdot \boldsymbol{u} \, d^2x \, dt$. The magnetic energy $E_{M}=\int B_\perp^2/2 \, d^3x$ and the kinetic energy $E_{K}=\int u^2/2 \, d^3x$ are evaluated at the end of each simulation. The error of the energy conservation relation $W_P = W_\eta + W_\nu + E_K + E_M$ is within 1\% for all cases. The current density $J_{1/2}$ indicates that regions with $\left|J\right|>J_{1/2}$ contribute half of the resistive dissipation, and $V_{1/2}/V$ is the time-averaged volumetric ratio between regions with $\left|J\right|>J_{1/2}$ and the entire domain.\label{tab:Parameters}} \end{table*} We continue with the resistive evolution of the model problem. Our objectives are to test the prediction that the peak current density will scale logarithmically with respect to the Lundquist number $S$, as well as to assess whether the ``exponential'' field line separation causes the onset of reconnection. To address these questions, we perform a series of simulations with the Lundquist number $S$ varying from $10^{4}$ to $10^{6}$. Here, the Lundquist number $S\equiv aV_{A}/\eta$ is defined through the box size $a$ in the perpendicular direction, the Alfv\'en speed $V_{A}$ of the guide field, and the resistivity $\eta$. In our normalized units, the box size $a=1$ and the Alfv\'en speed $V_{A}=1$; therefore, the Lundquist number is simply $S=1/\eta$. The viscosity is another free parameter. For simplicity, we set the viscosity $\nu=\eta$ for all cases; i.e., the magnetic Prandtl number $P_m\equiv \nu/\eta=1$. (See Appendix for a discussion of the effect of viscosity.) The friction coefficient $\lambda$ is set to zero. Table \ref{tab:Parameters} lists the Lundquist numbers and grid resolutions for all the simulations we have performed. The simulation time is $t=1000$ for all cases, corresponding to 100 Alfv\'en transit times and approximately ten footpoint advection times. Table \ref{tab:Parameters} also shows the total resistive dissipation $W_{\eta}$ and viscous dissipation $W_{\nu}$ during the whole period of each simulation. Over the range of Lundquist numbers $S$ that spans two orders of magnitude, the total dissipation $W_{\eta}+W_{\nu}$ stays remarkably close to constant. The dissipation is predominantly due to resistivity, although the portion of viscous dissipation slightly increases as the Lundquist number increases. To ensure that our simulations have sufficient numerical resolution, we perform three simulations (D1--D3) for the case $S=4\times10^{5}$ with resolutions ranging from $512^{3}$ to $1024^{3}$ and find no significant difference among them. Therefore, a grid resolution of $512^{3}$ appears to be adequate for $S=4\times10^{5}$. This finding gives us some confidence that the highest Lundquist number case, Run E with $S=10^{6}$, should be reasonably resolved by a grid resolution of $1024^{3}$. We may also assess the accuracy of our numerical simulations by how precisely the energy is conserved. Energy conservation requires that the total Poynting energy input through the top boundary $W_{P}$ be equal to the sum of the increase in magnetic energy $E_{M}$, the increase in kinetic energy $E_{K}$, and the total dissipation. For all the cases, the error in energy conservation is less than 1\% of the total Poynting energy input over the entire simulation period. 
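This energy bookkeeping can be reproduced directly from the rounded entries of Table~\ref{tab:Parameters}; with three-significant-figure rounding the residuals come out at the percent level or below. A short sketch:

```python
# Rounded Table values: (W_eta + W_nu, E_M, E_K, W_P) for each run
runs = {
    'A':  (1.38e-2, 7.96e-4, 7.74e-6, 1.46e-2),
    'B':  (1.25e-2, 1.32e-3, 7.70e-6, 1.38e-2),
    'C':  (1.21e-2, 2.12e-3, 7.57e-6, 1.43e-2),
    'D1': (1.25e-2, 3.73e-3, 8.03e-6, 1.64e-2),
    'D2': (1.26e-2, 3.66e-3, 8.39e-6, 1.64e-2),
    'D3': (1.26e-2, 3.64e-3, 8.34e-6, 1.63e-2),
    'E':  (1.34e-2, 4.37e-3, 8.97e-6, 1.79e-2),
}

def rel_error(dissipation, E_M, E_K, W_P):
    # residual of the balance W_P = (W_eta + W_nu) + E_M + E_K, relative to W_P
    return abs(W_P - (dissipation + E_M + E_K)) / W_P

worst = max(rel_error(*v) for v in runs.values())
```

Note that the kinetic energy $E_{K}$ is four orders of magnitude smaller than the other terms, so the budget is essentially a balance between Poynting input, dissipation, and stored magnetic energy.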
The column $J_{1/2}$ in Table \ref{tab:Parameters} indicates that regions with $\left|J\right|>J_{1/2}$ contribute half of the resistive dissipation over the entire simulation period, and $V_{1/2}/V$ is the time-averaged volumetric ratio between regions with $\left|J\right|>J_{1/2}$ and the entire domain. These numbers show that resistive dissipation is increasingly concentrated in small regions of high current density as the Lundquist number increases. For example, approximately $5\%$ of the total volume contributes one half of the resistive dissipation at $S=10^{4}$, whereas less than $0.2\%$ of the total volume accounts for the same portion at $S=10^{6}$. We plot the probability distributions $P\left(\left|J\right|\right)$ of the current density over the four-dimensional spacetime during the period $0\le t\le1000$ for various Lundquist numbers $S$ in Figure \ref{fig:Probability-distribution-of-J}. The probability distribution exhibits a strong dependence on the Lundquist number. Furthermore, the maximal current density roughly scales linearly with $S$. This $J\sim S$ scaling relation is stronger than the Sweet--Parker\citep{Sweet1958a,Parker1957} scaling relation $J\sim S^{1/2}$ and is on par with that of plasmoid-mediated reconnection.\citep{HuangB2010} Therefore, the prediction that the current density will depend logarithmically on $S$ appears to be inconsistent with our numerical findings. Now we address the question of whether the exponential separation of neighboring field lines causes the onset of fast reconnection by taking a closer look at the highest-$S$ simulation, Run E. 
During the early phase of the simulation when $t\le190$, the system evolves quasi-statically while the current density gradually increases as in the ideal evolution. After $t=190$, which is approximately twice the footpoint advection time, an onset of activity leads to fast plasma flow and further intensifies the thin current sheets. The peak plasma flow speed after the onset is comparable to the typical in-plane Alfv\'en speed and is faster than the typical footpoint speed by an order of magnitude. In contrast, the plasma speed in the interior is typically slower than or comparable to the typical footpoint speed during the quasi-static phase. The flow pattern and current density distribution after the onset clearly show signatures of magnetic reconnection, such as outflow jets within current sheets, as can be seen in the 2D slice at $t=260$ shown in Figure \ref{fig:2D-slice}. Figure \ref{fig:Visualization-of-Run-E} shows a 3D visualization of the entire domain as well as a zoom-in view of a sub-domain. From the zoom-in view in panel (b), we can see that the primary current sheet with $J<0$ (the one with dark colors) has become unstable to the plasmoid (or tearing) instability and developed flux-rope-like structures despite the stabilizing effects of the line-tied boundary conditions. The plasmoid instability has continued to be present in numerous reconnecting current sheets throughout this simulation. Similar to systems without line-tied boundary conditions, the Lundquist number needs to exceed a threshold for the plasmoid instability to occur.\citep{BhattacharjeeHYR2009,HuangB2010,HuangCB2017,HuangCB2019} In all other simulations with lower Lundquist numbers, the plasmoid instability has not been observed. 
Previous studies indicate that the stabilizing effect of the line-tied boundary condition for the tearing mode is negligible when the ``geometric'' width $\delta_{\text{geo}}\sim (\lambda_{\text{tearing}}B_z/L B_{\text{up}})\delta$ is thinner than the inner layer width of the tearing mode, $\delta_{\text{tearing}}$.\citep{DelzannoF2008,HuangZ2009,RichardsonF2012,FinnBDZ2014} Here, $\delta$ is the current sheet thickness, $B_{\text{up}}$ is the upstream in-plane magnetic field of the current sheet, and $\lambda_{\text{tearing}}$ is the wavelength of the tearing mode. Taking the current sheet at $t=250$, immediately prior to the onset of the plasmoid instability, we estimate $B_{\text{up}} \simeq 0.1$ and $\delta\simeq 0.005$. For this set of parameters, the fastest growing mode wavelength $\lambda_{\text{tearing}} \simeq 0.15$ and the corresponding inner layer width $\delta_{\text{tearing}}\simeq0.001$,\citep{HuangCB2017} whereas the geometric width $\delta_{\text{geo}}\simeq 0.00075$. Therefore, the geometric width is comparable to the inner layer width, and the stabilizing effect of line-tying should be marginal. This estimate is consistent with the fact that the current sheet becomes unstable. In contrast, for cases of lower Lundquist numbers, the corresponding current sheets at the same time do not satisfy the criterion $\delta_{\text{geo}}<\delta_{\text{tearing}}$ and remain stable. To disentangle the relationship between reconnection and the exponential separation of neighboring field lines, we have implemented a suite of diagnostics together with field line tracing. Along each field line, we calculate (a) the squashing factor $Q$,\citep{TitovH2002,Titov2007,FinnBDZ2014} (b) the parallel voltage, (c) the plasma flow velocity, and (d) the velocity of the field line relative to the plasma. 
The squashing factor $Q$ quantifies the extent of neighboring field line separation; the parallel voltage is often employed as a metric for the 3D reconnection rate;\citep{SchindlerHB1988,SchindlerHB1991} finally, a large separation between the field line velocity and the plasma velocity may also indicate magnetic reconnection. We calculate the squashing factor $Q$ by simultaneously integrating the equation for magnetic field line flow, \begin{equation} \frac{d\boldsymbol{x}_{\perp}}{dz}=\boldsymbol{B}_{\perp},\label{eq:field_line} \end{equation} and the equation for an infinitesimal separation $\delta\boldsymbol{x}_{\perp}$ between neighboring field lines, \begin{equation} \frac{d\delta\boldsymbol{x}_{\perp}}{dz}=\delta\boldsymbol{x}_{\perp}\cdot\nabla_{\perp}\boldsymbol{B}_{\perp}.\label{eq:separation} \end{equation} Equation (\ref{eq:separation}) can be written in matrix form as \begin{equation} \frac{d}{dz}\left[\begin{array}{c} \delta x\\ \delta y \end{array}\right]=\left[\begin{array}{cc} \partial_{x}B_{x} & \partial_{y}B_{x}\\ \partial_{x}B_{y} & \partial_{y}B_{y} \end{array}\right]\left[\begin{array}{c} \delta x\\ \delta y \end{array}\right]\equiv M(z)\left[\begin{array}{c} \delta x\\ \delta y \end{array}\right].\label{eq:matrix_form} \end{equation} Because Eq.~(\ref{eq:matrix_form}) is linear in $\delta\boldsymbol{x}$, it is sufficient to integrate it with respect to two linearly independent initial conditions. 
For that purpose, we integrate the equation \begin{equation} \frac{dN}{dz}=MN\label{eq:matrix_eq} \end{equation} with the initial condition \begin{equation} N|_{z=0}=\left[\begin{array}{cc} 1 & 0\\ 0 & 1 \end{array}\right].\label{eq:initial_cond} \end{equation} Then, for any initial separation $\delta\boldsymbol{x}|_{z=0}$ we can obtain the separation at an arbitrary $z$ as \begin{equation} \left[\begin{array}{c} \delta x\\ \delta y \end{array}\right]=N(z)\left[\begin{array}{c} \delta x\\ \delta y \end{array}\right]_{z=0}.\label{eq:solution} \end{equation} The singular value decomposition (SVD) \citep{TrefethenB1997} of $N(z)$ is of the form \begin{equation} N(z)=U(z)\left[\begin{array}{cc} \lambda_{\text{max}}(z) & 0\\ 0 & \lambda_{\text{min}}(z) \end{array}\right]V^{T}(z),\label{eq:SVD} \end{equation} where both $U(z)$ and $V(z)$ are unitary matrices. The singular values $\lambda_{\text{max}}$ and $\lambda_{\text{min}}$ have the following geometrical interpretation. If we follow an infinitesimally thin flux tube starting with a circular cross section of radius $\delta r$ at $z=0$, the cross section becomes an ellipse at $z>0$, with semi-major axis $\delta r_{\text{max}}=\lambda_{\text{max}}\delta r$ and semi-minor axis $\delta r_{\text{min}}=\lambda_{\text{min}}\delta r$. Because the field line mapping in RMHD preserves area, the singular values $\lambda_{\text{max}}$ and $\lambda_{\text{min}}$ satisfy the relation $\lambda_{\text{max}}\lambda_{\text{min}}=1$. The squashing factor $Q$ is then defined as \begin{equation} Q=\frac{\lambda_{\text{max}}}{\lambda_{\text{min}}}+\frac{\lambda_{\text{min}}}{\lambda_{\text{max}}}\label{eq:squashing} \end{equation} evaluated at the top plate. 
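The integration of Eq.~(\ref{eq:matrix_eq}) and the extraction of the singular values can be sketched as follows. The in-plane field $\boldsymbol{B}_{\perp}=(a y, a x)$ used here is purely illustrative (a stand-in for the simulation fields); its constant gradient matrix admits the exact solution $N(z)=\exp(Mz)$ with singular values $e^{\pm a z}$, which makes the sketch easy to verify:

```python
import numpy as np

# Integrate dN/dz = M N (Eq. matrix_eq) with N(0) = I for the illustrative
# field B_perp = (a*y, a*x), whose gradient matrix M is z-independent.
a, L, nz = 0.5, 10.0, 10000
dz = L / nz
M = np.array([[0.0, a], [a, 0.0]])   # [[dBx/dx, dBx/dy], [dBy/dx, dBy/dy]]

N = np.eye(2)
for _ in range(nz):
    # classical RK4 step for the linear matrix ODE
    k1 = M @ N
    k2 = M @ (N + 0.5 * dz * k1)
    k3 = M @ (N + 0.5 * dz * k2)
    k4 = M @ (N + dz * k3)
    N = N + dz / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

lam = np.linalg.svd(N, compute_uv=False)    # singular values, descending
Q = lam[0] / lam[1] + lam[1] / lam[0]       # squashing factor, Eq. (squashing)
```

For this field the exact answer is $Q=e^{2aL}+e^{-2aL}$, and the sketch also reproduces the area-preservation relation $\lambda_{\text{max}}\lambda_{\text{min}}=1$ and the Frobenius-norm identity $\left\Vert N\right\Vert=\sqrt{Q}$ noted below.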
In the limit $Q\gg1$, the squashing factor is approximately the ratio between the semi-major and the semi-minor axes; i.e., $Q\simeq\lambda_{\text{max}}/\lambda_{\text{min}}$.\footnote{Boozer and Elder employ the Frobenius norm of $N$, defined as $\left\Vert N\right\Vert =\sqrt{N_{xx}^{2}+N_{xy}^{2}+N_{yx}^{2}+N_{yy}^{2}}$, to characterize the neighboring field line separation. The Frobenius norm and the squashing factor are related by $\left\Vert N\right\Vert =\sqrt{Q}$. Therefore, we can calculate $Q$ without invoking SVD.} Next, we calculate the field line velocity as follows. The electric field in our RMHD model is given by the resistive Ohm's law \begin{equation} \boldsymbol{E}=-\boldsymbol{u}\times\boldsymbol{B}+\eta J\boldsymbol{\hat{z}}.\label{eq:Ohm} \end{equation} If we can express the electric field as \begin{equation} \boldsymbol{E}=-\boldsymbol{v}\times\boldsymbol{B}+\nabla\Phi\label{eq:ideal} \end{equation} for some velocity field $\boldsymbol{v}$ and a scalar potential $\Phi$, then the evolution of $\boldsymbol{B}$ is formally governed by the ideal equation $\partial_{t}\boldsymbol{B}=\nabla\times\left(\boldsymbol{v}\times\boldsymbol{B}\right)$, such that the magnetic field lines are frozen-in to the velocity field $\boldsymbol{v}$. The velocity field $\boldsymbol{v}$ is not uniquely determined, because we can add to $\boldsymbol{v}$ an arbitrary component parallel to the magnetic field $\boldsymbol{B}$ without changing Eq.~(\ref{eq:ideal}). Moreover, taking the inner product of Eq.~(\ref{eq:ideal}) and $\boldsymbol{B}$ yields \begin{equation} \boldsymbol{B}\cdot\nabla\Phi=\eta J,\label{eq:BdPhi} \end{equation} which can be integrated along magnetic field lines to determine the potential $\Phi$ up to a free function $\Phi(\boldsymbol{x}_{\perp})|_{z=0}$ defined on the bottom boundary. To uniquely specify $\boldsymbol{v}$, we impose the conditions $v_{z}=0$ and $\Phi(\boldsymbol{x}_{\perp})|_{z=0}=0$. 
The field line velocity $\boldsymbol{v}_{\perp}$ is then determined by the relation \begin{equation} -\boldsymbol{\hat{z}}\times\boldsymbol{E}=\boldsymbol{u}_{\perp}=\boldsymbol{v}_{\perp}-\boldsymbol{\hat{z}}\times\nabla_{\perp}\Phi,\label{eq:v1} \end{equation} and the relative velocity between the field line and the plasma is given by \begin{equation} \boldsymbol{w}_{\perp}=\boldsymbol{v}_{\perp}-\boldsymbol{u}_{\perp}=\boldsymbol{\hat{z}}\times\nabla_{\perp}\Phi.\label{eq:w} \end{equation} The imposed boundary condition $\Phi(\boldsymbol{x}_{\perp})|_{z=0}=0$ makes the plasma velocity and the field line velocity coincide at the bottom boundary. If the two velocities deviate significantly from each other as we follow the field lines, it may indicate that reconnection is ongoing. To calculate the field line velocity together with field-line tracing, we adopt a method similar to the calculation of $Q$. Applying the $\nabla_{\perp}$ operator to Eq.~(\ref{eq:BdPhi}) yields \begin{equation} \partial_{z}\nabla_{\perp}\Phi+\boldsymbol{B}_{\perp}\cdot\nabla_{\perp}\nabla_{\perp}\Phi=-\nabla_{\perp}\boldsymbol{B}_{\perp}\cdot\nabla_{\perp}\Phi+\eta\nabla_{\perp}J.\label{eq:Bdggphi} \end{equation} The left-hand side of Eq.~(\ref{eq:Bdggphi}) corresponds to the variation of $\nabla_{\perp}\Phi$ along field lines. Using Eq.~(\ref{eq:w}) to replace $\nabla_{\perp}\Phi$ by $\boldsymbol{w}_{\perp}$, we can rewrite Eq.~(\ref{eq:Bdggphi}) in matrix form as \begin{equation} \frac{d}{dz}\left[\begin{array}{c} w_{x}\\ w_{y} \end{array}\right]=\left[\begin{array}{cc} -\partial_{y}B_{y} & \partial_{y}B_{x}\\ \partial_{x}B_{y} & -\partial_{x}B_{x} \end{array}\right]\left[\begin{array}{c} w_{x}\\ w_{y} \end{array}\right]+\eta\left[\begin{array}{c} -\partial_{y}J\\ \partial_{x}J \end{array}\right].\label{eq:dzw} \end{equation} Here, the derivative $d/dz$ on the left-hand side is a derivative along the magnetic field lines. 
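A minimal sketch of integrating Eq.~(\ref{eq:dzw}) along a field line is given below. The field $\boldsymbol{B}_{\perp}=(\sin y, \sin x)$ and the resistivity $\eta=10^{-4}$ are illustrative assumptions, not the simulation fields; the point is only the coupled integration of the field line (Eq.~\ref{eq:field_line}) and the slippage velocity $\boldsymbol{w}_{\perp}$ starting from $\boldsymbol{w}_{\perp}=0$ at the bottom plate:

```python
import numpy as np

# Trace one field line (dx/dz = B_perp) and integrate Eq. (dzw) for the
# slippage velocity w, for the illustrative field B_perp = (sin y, sin x),
# which has J = dBy/dx - dBx/dy = cos x - cos y.  eta is an assumed value.
eta, L, nz = 1e-4, 10.0, 20000
dz = L / nz

def rhs(s):
    x, y, wx, wy = s
    Bx, By = np.sin(y), np.sin(x)
    # matrix in Eq. (dzw): [[-dBy/dy, dBx/dy], [dBy/dx, -dBx/dx]]
    A = np.array([[0.0, np.cos(y)], [np.cos(x), 0.0]])
    # forcing eta * (-dJ/dy, dJ/dx) with J = cos x - cos y
    f = eta * np.array([-np.sin(y), -np.sin(x)])
    dw = A @ np.array([wx, wy]) + f
    return np.array([Bx, By, dw[0], dw[1]])

s = np.array([0.3, 1.1, 0.0, 0.0])   # footpoint (x0, y0); w = 0 at z = 0
for _ in range(nz):                  # RK4 along the field line
    k1 = rhs(s); k2 = rhs(s + 0.5 * dz * k1)
    k3 = rhs(s + 0.5 * dz * k2); k4 = rhs(s + dz * k3)
    s = s + dz / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

w_top = np.hypot(s[2], s[3])         # slippage speed at the top plate
```

Even for this smooth field the resistive forcing, seeded at $O(\eta)$, is amplified along the field line by the shear terms in the matrix, which is the mechanism behind the large field-line relative speeds reported below.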
We can now integrate Eq.~(\ref{eq:dzw}) together with field line tracing to obtain the field line velocity. Figures \ref{fig:time1}, \ref{fig:time2}, and \ref{fig:time3} show the results of the diagnostics at three representative times. In each figure, the upper-left panel shows the time histories of the maximum current density $J_{\text{max}}$ and the maximum plasma speed $u_{\text{max}}$, with the vertical line indicating the time of the snapshot. We label the field lines by their footpoints at the bottom boundary. The lower-left panel shows the projected image of $5\times5$ squares at the top boundary to the bottom boundary following the field lines. The remaining four panels show the squashing factor $Q$, the parallel voltage $\int_{0}^{L}\eta J\,dz$, the maximum plasma speed $\left|\boldsymbol{u}\right|$, and the maximum field line relative speed $\left|\boldsymbol{w}\right|$ along each field line. At each time slice, we trace $1000\times1000$ field lines uniformly distributed at the bottom boundary. Additionally, we provide the entire time history of these diagnostics for Run E, together with that for Runs A, B, C, and D3, as animations (see multimedia view in Fig.~\ref{fig:time3} and Supplementary Material). Figure \ref{fig:time1} shows that the maximum $Q$ approaches $10^{5}$ at $t=190$. The ``slippage'' speed between the field lines and the plasma also becomes approximately one order of magnitude larger than the plasma speed for field lines with high $Q$ values. The maximum slippage of footpoint mapping relative to that of the ideal run is approximately $30\%$ of the simulation box size in the perpendicular direction. Although one might interpret this as evidence that significant magnetic reconnection has already occurred, it may be more appropriate to attribute the footpoint slippage to diffusion rather than reconnection, because the evolution remains close to quasi-static up to this time. 
After $t=190$, there is a rapid onset of activity, with the current density and plasma speed both increasing substantially. Notably, the onset occurs in approximately two footpoint advection times, much sooner than the ten advection times predicted by Boozer and Elder. Subsequently, at $t=290$ shown in Fig.~\ref{fig:time2}, the plasma speed is approximately one order of magnitude higher than that during the quasi-static phase, the squashing factor $Q$ goes above $10^{10}$, and the field line relative speed is above $10^{2}$, which is more than three orders of magnitude higher than the plasma speed. The correlations between the squashing factor $Q$, the parallel voltage, and the field line relative speed are also evident. The same general features are also present in Fig.~\ref{fig:time3} for $t=620$. At this time, the squashing factor $Q$ goes beyond $10^{15}$ at some locations, and the maximal field line relative speed is above $10^{4}$, more than five orders of magnitude higher than the plasma speed. Based on the result that both the squashing factor and the field line speed become extremely high, would one conclude that chaotic field line separation causes fast reconnection? Upon close examination, our simulation results do not appear to support this viewpoint. If chaotic field-line separation were the cause of fast reconnection, we would expect the squashing factor to reach a maximum when the coronal loop ``breaks loose,'' i.e., when it starts to deviate from quasi-static evolution; subsequently, reconnection would simplify the field-line mapping and the squashing factor would decrease. That is not what the simulation shows. As we can see from the time sequence shown in Figures \ref{fig:time1} -- \ref{fig:time3} and the associated animation, after the coronal loop ``breaks loose'' at $t=190$, thin current sheets intensify and the squashing factor increases tremendously around them. 
Contrary to Boozer's claim that chaotic field-line separation can speed up reconnection without intense current sheets, the simulation shows that current sheets must intensify to release the built-up magnetic stress; the intensified current sheets further enhance chaotic field-line separation, as reflected in an even higher squashing factor $Q$.\footnote{Note that even though the existence of intense current sheets is not a necessary condition for a high squashing factor $Q$, the former naturally leads to the latter because of the strongly sheared magnetic field associated with thin current sheets.} For a complicated 3D evolution with reconnection occurring at multiple locations and at different times, it is difficult, if not impossible, to quantify the speed of the reconnection process with a single reconnection rate. However, we may quantify the effectiveness of reconnection through its effect of converting magnetic energy into plasma heating through dissipation. Because the dissipation rate is nearly independent of the Lundquist number $S$, it is not unreasonable to think that reconnection proceeds at approximately the same rate for different $S$. This conclusion is consistent with the fact that the peak values of the parallel voltage, often employed as a metric of the 3D reconnection rate, mostly fluctuate between $2\times10^{-4}$ and $6\times10^{-4}$, regardless of the Lundquist number. In contrast, the field-line speed exhibits a strong dependence on the Lundquist number $S$. At $S=10^{4}$, the field line relative speed stays below $10^{-2}$, whereas at $S=10^{6}$ it can go above $10^{4}$. Our findings suggest that the parallel voltage is a more reliable indicator of the reconnection rate than the field-line speed. \section{Discussion and Conclusions\label{sec:Conclusion}} In conclusion, we have tested Boozer's reconnection theory using the Boozer--Elder model for a coronal loop driven by footpoint motions. 
Our simulation results significantly differ from their predictions in both the ideal and the resistive evolution. For the ideal evolution, we show that Boozer and Elder significantly under-predict the intensity of electric current in the coronal loop due to missing terms in their equations. For the resistive evolution, our simulations show that the maximal current density roughly scales linearly with the Lundquist number $S$, in stark contrast to the prediction of a logarithmic dependence on $S$. Because of the formation of intense thin current sheets, the onset of fast reconnection occurs much sooner than predicted by Boozer and Elder. Therefore, our simulation results do not appear to support Boozer's theory. Moreover, thin current sheets become unstable to the plasmoid instability when the Lundquist number is sufficiently high. A precise definition of 3D reconnection remains an open question. Boozer's definition of reconnection relies entirely on the connections between fluid elements, and he attributes any changes in the connections to reconnection. This definition, while precise, is overly general and blurs the distinction between reconnection and diffusion. As an illuminating example, let us first consider a stable line-tied screw pinch\citep{ZweibelB1985, HuangZS2006} undergoing resistive diffusion. Because the footpoint mapping between the boundaries changes as time progresses, this process counts as reconnection according to Boozer's definition, although it occurs on a slow, resistive time scale. Next, let us consider the same process in a stable coronal loop with chaotic field lines. The time evolution of the magnetic field remains slow, but the field-line velocity is fast due to the amplification caused by the chaotic field-line mapping. This enhanced field-line speed is fast reconnection according to Boozer's definition. Examples of slow resistive diffusion with rapid changes in field-line connections have been demonstrated in Ref.~[\onlinecite{HuangBB2014}]. However, the same study also shows that when the slow resistive diffusion leads to an unstable configuration, the time evolution becomes dynamic, intense thin current sheets form, and reconnection outflow jets ensue. At that time, field-line connections change even faster. These results of Ref.~[\onlinecite{HuangBB2014}] and the present study indicate that field-line velocity is not a reliable metric for the reconnection rate. We should keep in mind that field-line velocity is a mathematical construct rather than a physical velocity. Furthermore, the field-line speed has no upper limit and can even exceed the speed of light! Therefore, we should be cautious about using field-line velocity to draw conclusions. In comparison, the parallel voltage, which is the integral of the parallel electric field, appears to be a better indicator of the reconnection rate. Because the parallel voltage $\Phi\sim\eta JL$ and the maximal current density $J_{\text{max}}\propto S\propto1/\eta$, the maximal voltage $\Phi_{\text{max}}$ is approximately independent of $S$. Therefore, the reconnection rate in the coronal loop is approximately independent of $S$. This conclusion is consistent with the dissipation rate being insensitive to $S$. Our simulation results show that intense thin current sheets are a natural outcome of a coronal loop forced by slow footpoint motions. This conclusion appears to be more consistent with Parker's nanoflare scenario, where thin current sheets play a crucial role, than with Boozer's theory. However, Boozer's theory does provide some new perspectives and poses new challenges to Parker's scenario. In particular, Boozer points out the fragility of the ideal MHD frozen-in constraint in the presence of chaotic field lines. 
This important aspect warrants broader attention and further investigation. Traditionally, the Parker problem is posed as a question of whether singular current sheets can form in a coronal loop under the frozen-in constraint and line-tied boundary conditions. This viewpoint attributes the formation of current sheets solely to an ideal MHD effect. However, because the footpoint motion at the photosphere naturally renders the field lines chaotic, the ideal MHD frozen-in constraint may be overly restrictive for the Parker problem. In this study, we have observed that resistive simulations can develop more intense current sheets than an ideal simulation at the same time when both are still in a quasi-static phase, indicating that non-ideal effects may play an active role in the formation of current sheets.\citep{HuangBB2014} A similar suggestion has been made by Bhattacharjee and Wang, who proposed that helicity-conserving reconnection processes might facilitate the formation of current sheets without rendering the footpoint velocity discontinuous.\citep{BhattacharjeeW1991} The roles of non-ideal effects in current sheet formation and the onset of reconnection in coronal loops will be a topic of future study. \section{Supplementary Material} Diagnostic results for Runs A, B, C, and D3, similar to those shown in Fig.~\ref{fig:time3}, are available as animations. \begin{acknowledgments} This research was supported by the U.S. Department of Energy, grant number DE-SC0021205, and the National Aeronautics and Space Administration, grant numbers 80NSSC18K1285 and 88NSSC21K1326. Computations were performed on facilities at the National Energy Research Scientific Computing Center. We thank Professor Allen Boozer for numerous beneficial discussions about his view on 3D reconnection, as well as the anonymous referees for insightful comments. 
This paper is dedicated to the memory of Aad van Ballegooijen, who made incisive contributions to the problem of current sheets in coronal loops and whose untimely passing in 2021 has deprived the community of a gentleman and a scholar. \end{acknowledgments} \section*{Data Availability} The data that support the findings of this study are available from the corresponding author upon reasonable request. \appendix* \section*{Appendix: The Effects of Viscosity} \begin{table*}[t] \begin{centering} \begin{tabular}{cccccccccccc} \toprule Run & $S$ & $P_m$ & Resolution & $W_{\eta}$ & $W_{\nu}$ & $W_{\eta}+W_{\nu}$ & $W_{P}$ & $E_{M}$ & $E_{K}$ & $J_{1/2}$ & $V_{1/2}/V$\tabularnewline \midrule F & $10^{4}$ & $10$ & $512^{3}$ & $1.37\times10^{-2}$ & $2.51\times10^{-3}$ & $1.62\times10^{-2}$ & $1.70\times10^{-2}$ & $7.96\times10^{-4}$ & $7.77\times10^{-6}$ & 0.273 & $5.10\%$\tabularnewline G & $10^{5}$ & $10^{2}$ & $512^{3}$ & $1.28\times10^{-2}$ & $3.65\times10^{-3}$ & $1.65\times10^{-2}$ & $1.85\times10^{-2}$ & $2.04\times10^{-3}$ & $7.72\times10^{-6}$ & 0.885 & $3.52\%$\tabularnewline H & $10^{6}$ & $10^{3}$ & $1024^{3}$ & $1.29\times10^{-2}$ & $6.50\times10^{-3}$ & $1.94\times10^{-2}$ & $2.43\times10^{-2}$ & $4.78\times10^{-3}$ & $7.71\times10^{-6}$ & 4.98 & $0.81\%$\tabularnewline \bottomrule \end{tabular} \par\end{centering} \caption{Parameters and energy diagnostic results of the simulations reported in the Appendix. See the caption of Table \ref{tab:Parameters} for the definition of each column. These runs keep a constant value of the viscosity $\nu=10^{-3}$ and vary the resistivity $\eta$, whereas the runs in Table \ref{tab:Parameters} have $P_m=1$. \label{tab:Parameters1}} \end{table*} After the initial submission of this paper, Boozer suggested that the observed signatures of reconnection, in particular the intense thin current sheets, may be attributed to damping of Alfv\'en waves after changes of the field-line connections release the stored magnetic energy as plasma kinetic energy. He further suggested that by increasing the viscosity to damp the kinetic energy, the intense thin current sheets may disappear.\citep{Boozer2022a} This hypothesis is interesting and has practical relevance as well, because observational evidence suggests that the magnetic Prandtl number $P_m \equiv \nu/\eta$ of coronal loops may be orders of magnitude larger than unity.\citep{Aschwanden2005} We test this hypothesis by performing additional simulations with $P_m \gg 1$. We keep the viscosity at a constant value $\nu=10^{-3}$, which is as high as possible without significantly compromising the force-free approximation of the magnetic field as the coronal loop evolves quasi-statically. The values of $\eta$ vary from $10^{-4}$ to $10^{-6}$. Table \ref{tab:Parameters1} summarizes the parameters and diagnostic results of the high-$P_m$ runs. Remarkably, resistive dissipation still dominates over viscous dissipation even when $P_m\gg 1$. The resistive dissipation remains nearly constant as $\eta$ varies. Moreover, compared with the results in Table \ref{tab:Parameters}, the resistive dissipation for $P_m\gg 1$ is similar to that for $P_m=1$. Figure \ref{fig:Probability-distribution-of-J-visc} shows the probability distribution of the current density for various Lundquist numbers $S$. Similar to Figure \ref{fig:Probability-distribution-of-J} for $P_m=1$, the probability distribution again exhibits a strong dependence on the Lundquist number $S$, although the maximal current density is lower than the corresponding value with the same $S$ and $P_m=1$. A 2D slice of Run H with $S=10^6$ and $P_m=10^3$ is shown in Fig.~\ref{fig:2D-slice-visc}. Compared with Fig.~\ref{fig:2D-slice}, the plasma velocity is significantly smoother and slower due to the higher viscosity, whereas the current density is only slightly smoother. Even though increasing the magnetic Prandtl number does smooth the current sheets, the maximal current density still exhibits a strong dependence on the Lundquist number $S$. Therefore, intense thin current sheets appear to be a natural consequence of this coronal loop model at high Lundquist numbers, regardless of the magnetic Prandtl number. \bibliographystyle{apsrev4-2} \bibliography{ref}
Title: Inferred Properties of Planets in Mean-Motion Resonances are Biased by Measurement Noise
Abstract: Planetary systems with mean-motion resonances (MMRs) hold special value in terms of their dynamical complexity and their capacity to constrain planet formation and migration histories. The key towards making these connections, however, is to have a reliable characterization of the resonant dynamics, especially the so-called "libration amplitude", which qualitatively measures how deep the system is into the resonance. In this work, we identify an important complication with the interpretation of libration amplitude estimates from observational data of resonant systems. Specifically, we show that measurement noise causes inferences of the libration amplitude to be systematically biased to larger values, with noisier data yielding a larger bias. We demonstrate this through multiple approaches, including using dynamical fits of synthetic radial velocity data to explore how the libration amplitude distribution inferred from the posterior parameter distribution varies with the degree of measurement noise. We find that even modest levels of noise still result in a slight bias. The origin of the bias stems from the topology of the resonant phase space and the fact that the available phase space volume increases non-uniformly with increasing libration amplitude. We highlight strategies for mitigating the bias through the usage of particular priors. Our results imply that many known resonant systems are likely deeper in resonance than previously appreciated.
https://export.arxiv.org/pdf/2208.05423
\title{Inferred Properties of Planets in Mean-Motion Resonances are Biased by Measurement Noise} \author{David Jensen} \affiliation{Department of Physics, Princeton University, Princeton, NJ 08544, USA} \author[0000-0003-3130-2282]{Sarah C. Millholland} \affiliation{Department of Physics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA} \affiliation{MIT Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA} \affiliation{Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544, USA} \email{sarah.millholland@mit.edu} \section{Introduction} \label{sec: Introduction} Mean-motion resonance (MMR) refers to an orbital configuration in which two bodies have orbital periods that form a ratio of small integers. These resonances were first studied in the context of the Solar System \citep{1976ARA&A..14..215P}, most notably the Galilean satellites around Jupiter, which form a 4:2:1 resonant chain. Most known extrasolar planets do not display resonant relationships \citep{2014ApJ...790..146F}, although resonances are not particularly rare either. Resonant chains like the famous TRAPPIST-1 system are intriguing examples \citep{2017Natur.542..456G}. Compared to planets found in transit surveys, MMRs are more common in systems discovered with radial velocities (RVs), which contain predominantly giant planets \citep{2011ApJ...730...93W}. Resonant planetary systems are valuable because they provide a relic of the system's formation history. That is, MMRs are established through convergent migration, particularly migration arising from planet-disk interactions while the planets are still embedded in the protoplanetary disk \citep{1980ApJ...241..425G, 2007ApJ...654.1110T}. 
The current state of a resonant system can thus, in principle, provide some insight into the formation conditions and other details of the migration process \citep[e.g.][]{2015AJ....149..167B, 2016Natur.533..509M, 2017MNRAS.469.4613S, 2017A&A...605A..96D, 2019NatAs...3..424M, 2020AJ....160..106H}. One of the most important diagnostics of a resonance is the ``libration amplitude'', which is qualitatively related to the energy of the resonance and conveys the proximity of the system to ``exact'' resonance. Specifically, the libration amplitude reflects the range of oscillations of the planetary conjunctions around their equilibria. The libration amplitude is thought to provide constraints on how smooth or turbulent the migration was \citep[e.g.][]{2018ApJ...867...75D, 2021A&A...656A.115H}. Resonances with very small libration amplitudes indicate very smooth and dissipative formation \citep{2020AJ....160..106H}, whereas large libration amplitudes could be a consequence of stochastic forcing \citep[e.g.][]{2008ApJ...683.1117A, 2009A&A...497..595R}. A large libration amplitude could also indicate a history of perturbations by another planet \citep[e.g.][]{2021AJ....161..161D} or overstable librations \citep{2014AJ....147...32G, 2022ApJ...925...38N}. Given the value of the libration amplitude as a tracer of various formation processes, it is crucial to be able to obtain accurate estimates of this quantity from observations of resonant systems. Unfortunately, \cite{2018AJ....155..106M} showed that this may be in jeopardy. They found preliminary evidence (as detailed in Section \ref{sec: libration amplitude bias}) that the libration amplitude inferred from data of resonant systems may be systematically biased to larger values due to measurement uncertainties, in a similar way as eccentricity inferences are affected by measurement noise \citep[e.g.][]{1971AJ.....76..544L, 2008ApJ...685..553S, 2010ApJ...725.2166H}. 
Though this was suggested, it has not yet been confirmed in detail. In this paper, we use multiple approaches of confirming the bias (Sections \ref{sec: exploring the bias} and \ref{sec: synthetic data experiments}) and understanding its origin (Section \ref{sec: discussion}). \section{The Resonant Libration Amplitude and Bias from Measurement Noise} \label{sec: libration amplitude bias} One of the key measures of mean-motion resonance is the ``critical angle'' (also called ``critical argument'' or ``resonant argument''). For two planets in a first-order $p+1:p$ MMR, there are two critical angles, \begin{equation} \begin{split} \phi_{12,1} &= (p+1)\lambda_2 - p\lambda_1 - \varpi_1 \\ \phi_{12,2} &= (p+1)\lambda_2 - p\lambda_1 - \varpi_2, \end{split} \end{equation} where $\lambda_1$ and $\lambda_2$ are the mean longitudes of the inner and outer planets and $\varpi_1$ and $\varpi_2$ are the longitudes of pericenter. The critical angles describe the evolution of the planetary conjunctions with respect to the pericenters of the two orbits. When a system is in resonance, one or more of the critical angles undergo librations (bounded oscillations) about their equilibria. The ``resonant libration amplitude'' is the amplitude of these oscillations and is related to the total energy of the system, with smaller amplitudes corresponding to lower energies \citep{1999ssd..book.....M}. Systems with zero libration amplitude are maximally damped to their resonant fixed points. \cite{2018AJ....155..106M} first identified a possible bias of libration amplitude estimates as part of their detailed characterization of the GJ 876 system. GJ 876 is a nearby M4V dwarf hosting four known planets, the outer three of which are locked in a 4:2:1 Laplace resonance \citep{2001ApJ...556..296M, 2005ApJ...622.1182L, 2005ApJ...634..625R, 2010ApJ...719..890R, 2016MNRAS.455.2484N, 2018AJ....155..106M}. 
The 2:1 MMR of planets c and b (the second and third planets from the star with $P_c \sim 30$ days and $P_b \sim 61$ days) was the first resonance discovered in an exoplanetary system. This MMR has two critical angles, \begin{equation} \begin{split} \phi_{cb,c} &= 2\lambda_b - \lambda_c - \varpi_c \\ \phi_{cb,b} &= 2\lambda_b - \lambda_c - \varpi_b. \end{split} \end{equation} In Figure \ref{fig: GJ 876 resonant angles}, we plot the evolution of $\phi_{cb,c}$ and $\phi_{cb,b}$ for a 100 year $N$-body integration of the GJ 876 system using the best-fit parameters identified in \cite{2018AJ....155..106M} as initial conditions. We use the REBOUND gravitational dynamics software package \citep{2012A&A...537A.128R} with the ``WHFast'' Wisdom-Holman symplectic integrator \citep{1991AJ....102.1528W, 2015MNRAS.448L..58R}. The angles undergo low amplitude librations around $0^{\circ}$. There are additional angles analogous to $\phi_{cb,c}$ and $\phi_{cb,b}$ for the 2:1 resonance of planets b and e (the third and fourth planets from the star). Beyond the individual two-body critical angles, the three-body 4:2:1 Laplace resonance of planets c, b, and e is further defined by the libration of the critical angle, \begin{equation} \phi_{\mathrm{Lap}} = \lambda_c - 3\lambda_b + 2\lambda_e. \end{equation} Given the long history of explorations of GJ 876 by different research teams, one can explore how the libration amplitude estimates have changed with the size of the RV datasets. \cite{2018AJ....155..106M} showed that the reported libration amplitudes decreased monotonically with each successive characterization of the system. Figure \ref{fig: libration amplitude vs number of RVs} shows the amplitude estimates of $\phi_{cb,c}$, $\phi_{cb,b}$, and $\phi_{\mathrm{Lap}}$ from different publications as a function of the number of RV measurements used in the analyses. Each study deemed the system to be deeper in resonance than all previous studies. 
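As an illustration of how a libration amplitude estimate is extracted in practice, the sketch below builds the 2:1 critical angle from synthetic time series (the periods and pericenter wobble are purely illustrative, not GJ 876 fit values) and takes half the peak-to-peak range of the wrapped angle:

```python
import numpy as np

# Toy sketch: compute phi = 2*lam_b - lam_c - pomega_c for an exact 2:1 pair
# (P_b = 2 P_c), so the mean-longitude terms cancel and phi librates with the
# assumed pericenter wobble.  All numbers are illustrative.
t = np.linspace(0.0, 100.0, 5000)                        # time in years
n_c = 2.0 * np.pi / 0.0836                               # inner mean motion (~30.5 d period)
n_b = 2.0 * np.pi / 0.1672                               # outer mean motion (exactly half)
lam_c, lam_b = n_c * t, n_b * t
pomega_c = 0.1 * np.sin(2.0 * np.pi * t / 9.0)           # slow pericenter wobble (rad)

phi = 2.0 * lam_b - lam_c - pomega_c
phi_wrapped = np.degrees(np.angle(np.exp(1j * phi)))     # wrap to (-180, 180] deg
amp = 0.5 * (phi_wrapped.max() - phi_wrapped.min())      # libration amplitude (deg)
```

Here the angle librates about $0^{\circ}$ with an amplitude of $\approx 5.7^{\circ}$, set entirely by the assumed $0.1$ rad wobble; for real data the same wrap-and-range (or a harmonic fit) is applied to the angle series from the $N$-body integration.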
Since a larger dataset corresponds to a higher signal-to-noise ratio, this finding may indicate that measurement noise causes resonant systems to appear to have larger libration amplitudes than they actually do. The true state of the system may be even lower energy than the latest measurements indicate. The above hypothesis -- that measurement noise biases libration amplitude estimates -- needs to be confirmed with further analyses. One reason for this is that the different publications referenced in Figure \ref{fig: libration amplitude vs number of RVs} used a variety of analysis techniques, including both Bayesian and non-Bayesian methods, so the comparisons between them cannot be directly mapped to differences in signal-to-noise ratios. It would be more instructive to use the same analysis methods on the same dataset and systematically vary either the signal-to-noise ratio or the size of the dataset. Moreover, this would allow us to not only confirm the existence of the bias but also understand its origin. We will explore these concepts in the following sections. Before proceeding, however, we must clarify exactly what we mean by the term ``bias''. In frequentist statistics, an estimator $\hat{\theta}$ of a parameter $\theta$ is ``unbiased'' if $\mathrm{Bias}(\hat{\theta}) = \mathrm{E}(\hat{\theta}) - \theta = 0$, or in other words, if the expected value of the estimator is equal to the true value of the parameter being estimated. The concept of bias is ill-defined in Bayesian statistics, in part because the parameter itself is not considered to be fixed, but rather it is a random variable whose probability distribution we wish to estimate with the inclusion of the prior probability. In this paper we use the term ``bias'' loosely in order to refer to the phenomenon in which progressively larger measurement noise leads the posterior parameter distribution to be increasingly weighted towards larger (or smaller) values.
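The flavor of this phenomenon can be reproduced with a toy model: estimating a non-negative amplitude from noisy measurements. In this hypothetical illustration (not the RV analysis itself), the true amplitude is zero, yet the mean estimate grows with the noise level:

```python
import numpy as np

rng = np.random.default_rng(0)
true_amplitude = 0.0  # a maximally damped (zero-libration) system

# A non-negative amplitude estimated from noisy data can only scatter upward
# when the true value sits at the boundary of the allowed range, so the mean
# estimate grows with the noise level even though the truth never changes.
noise_levels = [0.1, 0.5, 1.0]
mean_estimates = [
    np.abs(true_amplitude + rng.normal(0.0, sigma, 100_000)).mean()
    for sigma in noise_levels
]  # each mean is ~ sigma * sqrt(2/pi), not zero
```

This is the same boundary effect that inflates small eccentricity estimates.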
The libration amplitude ``bias'' discussed herein is closely analogous to that which plagues the inference of orbital eccentricities \citep[e.g.][]{1971AJ.....76..544L, 2008ApJ...685..553S, 2010ApJ...725.2166H}, although an important difference is that the eccentricity is generally a parameter of the orbit model, whereas the libration amplitude is not. \section{Exploring the Bias} \label{sec: exploring the bias} Although the hypothesized libration amplitude bias was first identified in the GJ 876 system, it is helpful to explore the bias further in a simpler resonant system. For this purpose, we consider the HD 128311 system, which contains two eccentric gas giant planets in a 2:1 MMR \citep{2003ApJ...582..455B, 2005ApJ...632..638V, 2014ApJ...795...41M, 2015MNRAS.448L..58R}. The system was most recently studied by \cite{2015MNRAS.448L..58R}, who performed a dynamical fit and determined the system to be locked in resonance with a libration amplitude of $\sim37^{\circ}$. Some of the relevant best-fit system parameters from \cite{2015MNRAS.448L..58R} are provided in Table \ref{tab: HD 128311 parameters}. In this section, we use the posterior distributions of \cite{2015MNRAS.448L..58R} (H. Rein, private communication) to explore the effects of measurement noise on estimates of the libration amplitude. In general, lower signal-to-noise data result in broader posterior distributions. Accordingly, we can impose an artificial broadening of the posterior distribution as a means of simulating the effects of added measurement noise without ever touching the raw data. (Later in Section \ref{sec: synthetic data experiments}, we will work directly with the data.) To perform the simulated broadening, we will first demonstrate that the posterior distribution can be well-described by a multivariate Gaussian distribution. \begin{table}[t!]
\caption{Best-fit parameters of the HD 128311 system from the dynamical fit by \cite{2015MNRAS.448L..58R}.} \begin{tabular}{c c} \hline \hline Parameter & Value and $2\sigma$ confidence interval \\ \hline Epoch & 2450983.83 (fixed) \\ $M_{\star}$ & 0.828 $M_{\odot}$ \\ $i$ & ${63.8^{\circ}}^{+23.7^{\circ}}_{-35.9^{\circ}}$ \\ \\ \multicolumn{2}{c}{Planet 1} \\ $M_{p1}\sin i$ & $1.83^{+0.15}_{-0.18} \ M_{\mathrm{Jup}}$ \\ $P_1$ & $460.1^{+4.2}_{-3.6}$ days \\ $e_1$ & $0.30^{+0.03}_{-0.04}$ \\ $\omega_1$ & ${-76.2^{\circ}}^{+6.4^{\circ}}_{-9.2^{\circ}}$ \\ $M_1$ & ${259.2^{\circ}}^{+11.9^{\circ}}_{-12.6^{\circ}}$ \\ \\ \multicolumn{2}{c}{Planet 2} \\ $M_{p2}\sin i$ & $3.20^{+0.08}_{-0.08} \ M_{\mathrm{Jup}}$ \\ $P_2$ & $910.7^{+7.6}_{-6.0}$ days \\ $e_2$ & $0.12^{+0.08}_{-0.06}$ \\ $\omega_2$ & ${-19.7^{\circ}}^{+23.2^{\circ}}_{-12.0^{\circ}}$ \\ $M_2$ & ${184.2^{\circ}}^{+20.0^{\circ}}_{-10.7^{\circ}}$ \\ \hline \label{tab: HD 128311 parameters} \end{tabular} \end{table} We parameterize the posterior distribution as \begin{equation} \begin{split} \label{eq: X_post} \mathbf{X}_{\mathrm{post}} &= (\log_{10}M_{p1}\sin i, P_1, \sqrt{e_1}\cos\omega_1, \sqrt{e_1}\sin\omega_1, M_1, \\ &\log_{10}M_{p2}\sin i, P_2, \sqrt{e_2}\cos\omega_2, \sqrt{e_2}\sin\omega_2, M_2, \cos{i}). \end{split} \end{equation} Next, we calculate the mean vector $\boldsymbol{\mu}_{\mathrm{post}}$ and covariance matrix $\boldsymbol{\Sigma}_{\mathrm{post}}$ of $\mathbf{X}_{\mathrm{post}}$ such that the distribution can be closely approximated by a simulated distribution drawn according to the multivariate Gaussian, \begin{equation} \mathbf{X}_{\mathrm{sim}} \sim \mathcal{N}(\boldsymbol{\mu}_{\mathrm{post}}, \boldsymbol{\Sigma}_{\mathrm{post}}). \end{equation} We use visual inspection of corner plots of the true distribution and a simulated distribution with an equal sample size to verify that the distributions are similar. 
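The Gaussian approximation and the simulated draws can be sketched with NumPy alone. Here `samples` is a stand-in two-column array; in the actual analysis it would hold the eleven columns of $\mathbf{X}_{\mathrm{post}}$, and the mean vector and covariance matrix would be estimated in exactly the same way:

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in posterior samples (2 of the 11 dimensions; the means and covariances
# here are illustrative, loosely inspired by sqrt(e2)cos(w2) and P2).
samples = rng.multivariate_normal([0.3, 910.7],
                                  [[0.010, 0.05], [0.05, 9.0]], size=20_000)

mu_post = samples.mean(axis=0)             # estimate of mu_post
cov_post = np.cov(samples, rowvar=False)   # estimate of Sigma_post

# Simulated posterior of equal size drawn from the fitted multivariate Gaussian;
# a corner plot of `simulated` vs `samples` should look nearly identical.
simulated = rng.multivariate_normal(mu_post, cov_post, size=len(samples))
```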
An example of one sub-plot of this broader corner plot is shown in the left panel of Figure \ref{fig: subplot of corner plot}. To approximate the broadening of the posterior distribution that would result from noisier data, we simply scale the covariance matrix by a constant factor, $\gamma > 1$, such that the simulated distribution is now given by \begin{equation} \mathbf{X}_{\mathrm{sim}} \sim \mathcal{N}(\boldsymbol{\mu}_{\mathrm{post}}, \gamma\boldsymbol{\Sigma}_{\mathrm{post}}). \end{equation} We explore $\gamma$ values ranging from 1 through 8. Examples of broadened distributions are shown in the middle and right panels of Figure \ref{fig: subplot of corner plot}. We now calculate the distributions of libration amplitudes resulting from both the true and simulated posterior distributions. For each posterior sample, we use the system parameters as initial conditions and run a $1,000$-year $N$-body integration using the REBOUND gravitational dynamics software package \citep{2012A&A...537A.128R} with the ``WHFast'' Wisdom-Holman symplectic integrator \citep{1991AJ....102.1528W, 2015MNRAS.448L..58R}. We calculate the critical angle $\phi_1 = 2\lambda_2 - \lambda_1 - \varpi_1$ and numerically estimate its amplitude using ${A_{\mathrm{lib}} = 0.5(\max{\phi_1} - \min{\phi_1})}$.\footnote{We note that calculating $A_{\mathrm{lib}}$ from the osculating orbital elements in this manner may be another source of overestimation. This is because the osculating elements are affected by high-frequency variations at the synodic period, thus creating a non-zero minimum $A_{\mathrm{lib}}$. This is probably only relevant in systems with very massive planets and tightly spaced MMRs.} The resulting distributions of libration amplitudes are shown in Figure \ref{fig: libration amplitude distributions}. 
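The amplitude estimate can be written as a short helper; the wrapping step matters because an angle librating about $0^{\circ}$ otherwise jumps between values near $0^{\circ}$ and $360^{\circ}$. A sketch with a synthetic librating signal (the $30^{\circ}$ amplitude is arbitrary):

```python
import numpy as np

def libration_amplitude(phi_deg):
    """A_lib = 0.5 * (max(phi) - min(phi)) after wrapping phi to [-180, 180) deg."""
    phi = (np.asarray(phi_deg) + 180.0) % 360.0 - 180.0
    return 0.5 * (phi.max() - phi.min())

# Synthetic critical angle librating about 0 deg with a 30 deg amplitude:
t = np.linspace(0.0, 20.0, 4001)
phi = 30.0 * np.sin(2.0 * np.pi * t)
a_lib = libration_amplitude(phi)  # ~30 deg
```

As the footnote cautions, applying this to osculating elements picks up short-period (synodic) wiggles, so the returned value is an upper envelope of the resonant libration proper.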
Here we observe, first, that the distribution resulting from the simulated posterior with $\gamma = 1$ (no broadening) closely resembles that of the true posterior, which provides further confirmation that the multivariate Gaussian approximation is appropriate. Moreover, we observe that the distributions corresponding to the simulated posteriors with $\gamma > 1$ are shifted to progressively larger libration amplitudes as the broadening factor increases. This result supports our primary hypothesis: the libration amplitude distribution does indeed appear to be systematically biased high as a result of noisier data, when simulated in terms of broader posterior distributions. \section{Synthetic Data Experiments} \label{sec: synthetic data experiments} While the previous exploration of simulated posterior distributions supported our hypothesis, a more thorough examination would involve systematically varying the signal-to-noise of the data. In this section, we perform dynamical fits to synthetic RV data with various levels of added noise and compare the resulting libration amplitude distributions. We use the general parameters of the HD 128311 system. Specifically, we examine the true posterior distribution and extract the parameters of the single sample that we determined to have the lowest libration amplitude, which turns out to be $\sim20^{\circ}$. We use the lowest libration amplitude configuration because we want to explore progressively larger amounts of measurement noise and see how the libration amplitude distribution shifts. We use REBOUND with the IAS15 integrator \citep{2015MNRAS.446.1424R} to generate the synthetic RV measurements with 200 data points randomly spaced over a period of 15 years. We add Gaussian noise with varying standard deviations, $\sigma_{\mathrm{noise}}$.
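The structure of the synthetic dataset can be sketched as follows. Note that this is a hypothetical stand-in: two superposed circular Keplerian signals at roughly the HD 128311 periods replace the actual IAS15-generated $N$-body RV model, and the semi-amplitudes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

n_obs, baseline_days = 200, 15 * 365.25
t = np.sort(rng.uniform(0.0, baseline_days, n_obs))  # randomly spaced epochs

# Stand-in for the N-body RV model: two circular Keplerian signals with the
# approximate HD 128311 periods (semi-amplitudes and phases are illustrative).
rv_model = (50.0 * np.sin(2.0 * np.pi * t / 460.0)
            + 75.0 * np.sin(2.0 * np.pi * t / 910.0 + 1.0))

sigma_noise = 5.0  # m/s; one of the varied noise levels
rv_obs = rv_model + rng.normal(0.0, sigma_noise, n_obs)
```

Repeating the fit for several `sigma_noise` values gives the family of libration amplitude distributions compared in this section.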
Next, we employ the affine-invariant ensemble sampler \texttt{emcee} \citep{2010CAMCS...5...65G, 2013PASP..125..306F} to estimate the posterior distributions of the parameters consistent with the synthetic data. For simplicity, we assume uninformative priors for all parameters. We use 50 walkers and sample the parameters in the same coordinate system as indicated in equation \ref{eq: X_post}. We run the sampler for 750 iterations and check for convergence by visual inspection of the log-probability. Finally, we take a random subset of 500 posterior samples from the chains (post burn-in) and use the procedure described in the previous section to calculate the corresponding libration amplitude distributions. The resulting libration amplitude distributions are shown in Figure \ref{fig: libration amplitude distributions for synthetic data}. Similar to Figure \ref{fig: libration amplitude distributions}, the distributions are shifted to larger libration amplitudes as $\sigma_{\mathrm{noise}}$ increases. Moreover, even the distribution corresponding to the lowest level of noise is still systematically shifted to larger values than $20^{\circ}$, which is the true libration amplitude of the synthetic system. This experiment thus offers a strong confirmation of our hypothesis. That is, any amount of measurement noise will tend to systematically bias the libration amplitude distribution inferred from the posterior distribution, and the degree of the bias increases with the measurement noise. \section{Discussion} \label{sec: discussion} \subsection{Origin of the bias} \label{sec: origin of the bias} The last two sections confirmed that the bias is indeed real. However, we have not yet discussed its physical origin. The bias is likely caused by the fact that the available phase space volume increases non-uniformly with increasing libration amplitude.
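The non-uniform growth of the phase-space volume can be checked on the simplest librating system, the pendulum Hamiltonian $H = p^2/2 - \cos\phi$, used here as a schematic stand-in for the resonant Hamiltonian. The area enclosed by the trajectory with turning points at $\pm A_{\mathrm{lib}}$ is $V(A_{\mathrm{lib}}) = 2\int_{-A_{\mathrm{lib}}}^{A_{\mathrm{lib}}} \sqrt{2(\cos\phi - \cos A_{\mathrm{lib}})}\, d\phi$, and well inside the separatrix its derivative is positive and increasing:

```python
import numpy as np

def enclosed_area(a_lib, n=20_000):
    """Phase-space area enclosed by a pendulum libration of amplitude a_lib (rad)."""
    phi = np.linspace(-a_lib, a_lib, n)
    p = np.sqrt(np.maximum(2.0 * (np.cos(phi) - np.cos(a_lib)), 0.0))
    # trapezoidal rule for the upper branch, doubled to include the lower branch
    return 2.0 * np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(phi))

amps = np.linspace(0.2, 1.5, 14)  # amplitudes well inside the separatrix at pi
areas = np.array([enclosed_area(a) for a in amps])
dV = np.diff(areas) / np.diff(amps)  # dV/dA_lib: positive and increasing here
```

For small amplitudes the area grows like $\pi A_{\mathrm{lib}}^2$, so equal-width shells at larger amplitude contain more phase-space volume, which is the geometric content of the argument above.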
That is, if the dynamics are expressed in terms of an integrable one-degree-of-freedom approximation using Hamiltonian perturbation theory \citep[e.g.][]{1983CeMec..30..197H, 2013A&A...556A..28B, 2016ApJ...823...72N}, then the resonant trajectories in phase space are those that are librating inside a finite resonant domain. Only a subset of this domain is available for libration amplitudes below a certain threshold. If we denote $V(A_{\mathrm{lib}})$ as the total phase space volume occupied by resonant trajectories with libration amplitudes $\leq A_{\mathrm{lib}}$, then $dV/dA_{\mathrm{lib}}$ is a positive and increasing function of $A_{\mathrm{lib}}$. Thus, when the posterior distribution of system parameters is broader due to the effects of noise, a uniform sampling of the available resonant phase space will be increasingly skewed to larger libration amplitudes. Figure \ref{fig: resonant phase space} provides a schematic representation of this. It shows a phase space portrait of the first-order resonant Hamiltonian derived by \cite{2013A&A...556A..28B} and applied to the parameters of the HD 128311 system. We superimpose trajectories resulting from $N$-body integrations of posterior samples, both from the true posterior distribution and the simulated posterior distribution with $\gamma=2$ (Section \ref{sec: exploring the bias}). This illustration indicates that the trajectories of initial system parameters from the broadened posterior extend to a larger region of the resonant domain (and a correspondingly larger range of libration amplitudes) than the trajectories resulting from the true posterior. Given a set of initial orbital elements, a single trajectory would ideally fall upon a single level curve. There are several reasons why the gray and brown regions are blurred out and do not follow the topology exactly.
First, the analytic approximation assumes small eccentricities, but the eccentricities of the HD 128311 system ($e_1\sim0.3$, $e_2\sim0.12$) are moderate. Second, we are plotting multiple trajectories with different initial conditions, each associated with different libration amplitudes. Finally, the topology of the phase space (as indicated by the level curves) is conserved in time but varies with respect to different sets of initial system parameters. Thus, the level curves in Figure \ref{fig: resonant phase space} can only be thought of as an average representation of the system parameters. Despite these caveats, this schematic illustration helps provide a geometric understanding of how broader posterior distributions of system parameters translate into broader available volumes in the resonant phase space and larger libration amplitudes. \subsection{Potential remedies} \label{sec: potential remedies} There are some potential strategies for remedying the bias. One approach would be to set a prior on $A_{\mathrm{lib}}$ to counteract the effect of the bias. The key is to realize that the bias itself is a result of the particular choice of priors. When using the uninformative priors conventionally adopted in RV models, we found in Section \ref{sec: synthetic data experiments} that the marginalized posterior distributions of libration amplitudes are weighted to progressively larger values as a function of increasing measurement noise. The prior on $A_{\mathrm{lib}}$ is implicit in the conventional framework, but it can still be modified. For example, when computing the posterior of a proposed MCMC sample, we can include a Gaussian-like penalty term in the prior, such that the prior on $A_{\mathrm{lib}}$ becomes the product of the conventional implicit prior and the penalty term. 
We test this approach by repeating our synthetic data experiments from Section \ref{sec: synthetic data experiments}, identical in every respect except for the inclusion of the penalty term, $\exp[-A_{\mathrm{lib}}^2/(2\sigma^2)]$, which is multiplied by the usual uninformative prior. In order to calculate $A_{\mathrm{lib}}$ for each proposed sample, we perform an additional ``WHFast'' integration with a timestep equal to 40 days, approximately 8.7\% of $P_1$,\footnote{We note that this timestep is larger than what is generally recommended \citep{2015AJ....150..127W}. We adopted it for computational speed and verified that the resulting $A_{\mathrm{lib}}$ calculations are not significantly affected.} and a duration of 350 years. Performing this extra integration increases the computation time of the posterior by a factor of $\sim3$. As for the $\sigma$ in the penalty term, we find through trial and error that $\sigma\approx0.2$ (when $A_{\mathrm{lib}}$ is in radians) is an appropriate value, in the sense that it yields the desired effect on the libration amplitude distribution, as we will next show. Figure \ref{fig: libration amplitude distributions for synthetic data with prior} shows the resulting libration amplitude distributions when the penalty term is used. Compared to the previous case, the distributions agree much better with the true libration amplitude of the synthetic system. The distributions are broadened with increasing $\sigma_{\mathrm{noise}}$, but they do not have the strong systematic shifts seen earlier. Accordingly, this approach is a reasonable ``quick fix'' to approximately counteract the bias. We caution that the optimal value of $\sigma$ in the penalty term can depend on the system at hand, so in terms of an application to observed systems, one should explore a range of different $\sigma$ values and examine the resulting sensitivity of the inferred system parameters to the prior.
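In code, the modification amounts to one extra term in the log-prior. In this sketch, `a_lib_rad` would come from the additional short WHFast integration of each proposed sample, and `log_prior_conventional` is a placeholder for the usual uninformative prior:

```python
SIGMA_PENALTY = 0.2  # radians; found by trial and error, and system-dependent

def log_prior_with_penalty(theta, a_lib_rad, log_prior_conventional):
    """Conventional log-prior plus ln exp[-A_lib^2 / (2 sigma^2)]."""
    return log_prior_conventional(theta) - a_lib_rad**2 / (2.0 * SIGMA_PENALTY**2)

flat = lambda theta: 0.0  # stand-in uninformative log-prior

lp_zero = log_prior_with_penalty(None, 0.0, flat)  # zero amplitude: no penalty
lp_wide = log_prior_with_penalty(None, 0.4, flat)  # 0.4 rad (~23 deg): penalized
```

Because the penalty enters the log-prior additively, it plugs into any sampler (e.g. \texttt{emcee}) without other changes to the likelihood.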
A more formal approach to address the bias would be to construct the RV model with specific assumptions about the libration amplitude. For instance, \cite{2020AJ....160..106H} developed an approach for modeling RV data of resonant systems in which the system is assumed to reside in a particular MMR configuration called an ``apsidal corotation resonance'' (ACR). The ACR is the expected outcome of resonant capture under the influence of smooth convergent migration and eccentricity damping, and it has zero libration amplitude. \cite{2020AJ....160..106H} used Bayesian model comparison to assess the evidence in RV data for the ACR model versus a conventional model, and they identified several systems in which the zero-$A_{\mathrm{lib}}$ ACR model is preferred. It is possible that such a model could be extended to include finite libration amplitude configurations as well. In the case where the conventional approach is to be adopted with no modifications, it is important to be aware of the existence of the bias, particularly when making broader statements on the basis of the inferred libration amplitude. One could explore tests such as varying the size of the dataset going into the fit and seeing how the resulting libration amplitude distributions differ. This could help determine whether or not the distribution is converging. It is also generally appropriate to assume that the true libration amplitude is more closely approximated by values towards the low end of the inferred distribution, as opposed to the mean or median (e.g. Figure \ref{fig: libration amplitude distributions for synthetic data}). \section{Conclusion} Exoplanetary systems containing mean-motion resonances offer valuable tests of planet migration and protoplanetary disk properties. 
However, a key ingredient in an accurate characterization of a resonance is a reliable estimate of the libration amplitude, or the range of oscillations of the critical angle, which indicates how ``deep'' in resonance a system actually is. Motivated by prior work on the GJ 876 system by \cite{2018AJ....155..106M}, here we showed that the reliability of libration amplitude inferences from observations of resonant systems depends sensitively on the data quality and the degree of measurement noise. Specifically, when using conventionally-adopted uninformative priors, progressively larger measurement noise causes the libration amplitude distribution inferred from the posterior distribution of model parameters to be strongly biased towards larger values. We showed this using two complementary analyses of a representative resonant system, HD 128311, which contains two eccentric super-Jupiter-mass planets in a 2:1 MMR with orbital periods equal to $\sim460$ days and $\sim910$ days \citep{2015MNRAS.448L..58R}. The first approach involved a simulated broadening of the posterior distribution of system parameters, mimicking the effects of increasing measurement noise. The second approach involved performing dynamical fits of synthetic data with various levels of noise. In both approaches, the resulting libration amplitude distributions were found to systematically shift to larger values with more noise. Moreover, the synthetic data experiments revealed that even low levels of measurement noise result in inferred libration amplitude distributions that are systematically larger than the ``true'' value. We highlighted some strategies for mitigating the bias. Specifically, we showed how the simple inclusion of a Gaussian-like penalty term in the prior can avoid the posterior distribution being weighted to large libration amplitudes. Other modifications to the prior are also possible.
If the conventional approach is still to be adopted with no modifications, one can examine the extent of the inevitable bias by observing the width of the libration amplitude distribution. (A broader distribution generally indicates a stronger bias.) Another approach is to vary the size of the dataset that is going into the parameter inference and observe whether the libration amplitude distribution is converging or varying strongly with the size of the dataset. In general, an awareness of the existence of the bias will strengthen our ability to characterize resonant systems and to decipher their formation histories. \section{Acknowledgements} We are very grateful to the referee, Sam Hadden, for his insightful comments and valuable suggestions, particularly with regard to the content in Sections \ref{sec: origin of the bias} and \ref{sec: potential remedies}. We are also grateful to Hanno Rein for sharing the posterior samples of HD 128311 from \cite{2015MNRAS.448L..58R}. We also thank Eric Ford, Dan Foreman-Mackey, and Greg Laughlin for helpful conversations, as well as Neta Bahcall for her help in facilitating this work through Princeton's Junior Project requirement. S.C.M. was supported by NASA through the NASA Hubble Fellowship grant \#HST-HF2-51465 awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS5-26555. \bibliographystyle{aasjournal} \bibliography{main}
Title: Carbon abundance of stars in the LAMOST-Kepler field
Abstract: The correlation between host star iron abundance and the exoplanet occurrence rate is well-established and arrived at in several studies. Similar correlations may be present for the most abundant elements, such as carbon and oxygen, which also control the dust chemistry of the protoplanetary disk. In this paper, using a large number of stars in the Kepler field observed by the LAMOST survey, it has been possible to estimate the planet occurrence rate with respect to the host star carbon abundance. Carbon abundances are derived using synthetic spectra fit of the CH G-band region in the LAMOST spectra. The carbon abundance trend with metallicity is consistent with the previous studies and follows the Galactic chemical evolution (GCE). Similar to [Fe/H], we find that the [C/H] values are higher among giant planet hosts. The trend between [C/Fe] and [Fe/H] in planet hosts and single stars is similar; however, there is a preference for giant planets around host stars with a sub-solar [C/Fe] ratio and higher [Fe/H]. Higher metallicity and sub-solar [C/Fe] values are found among younger stars as a result of GCE. Hence, based on the current sample, it is difficult to interpret the results as a consequence of GCE or due to planet formation.
https://export.arxiv.org/pdf/2208.10057
\title{Carbon abundance of stars in the LAMOST-Kepler field} \author[0000-0001-6093-5455]{Athira Unni} \affil{Indian Institute of Astrophysics, Koramangala 2nd Block, Bangalore 560034, India} \affil{Pondicherry University, R.V. Nagar, Kalapet, 605014, Puducherry, India} \author[0000-0002-0554-1151]{Mayank Narang} \affiliation{Department of Astronomy and Astrophysics, Tata Institute of Fundamental Research, Homi Bhabha Road, Colaba, Mumbai 400005, India} \author[0000-0003-0891-8994]{Thirupathi Sivarani} \affiliation{Indian Institute of Astrophysics, Koramangala 2nd Block, Bangalore 560034, India} \author[0000-0002-3530-304X]{Manoj Puravankara} \affiliation{Department of Astronomy and Astrophysics, Tata Institute of Fundamental Research, Homi Bhabha Road, Colaba, Mumbai 400005, India} \author[0000-0003-0799-969X]{Ravinder K Banyal} \affiliation{Indian Institute of Astrophysics, Koramangala 2nd Block, Bangalore 560034, India} \author[0000-0002-9967-0391]{Arun Surya} \affiliation{Department of Astronomy and Astrophysics, Tata Institute of Fundamental Research, Homi Bhabha Road, Colaba, Mumbai 400005, India} \author[0000-0003-0003-4561]{S.P. Rajaguru} \affiliation{Indian Institute of Astrophysics, Koramangala 2nd Block, Bangalore 560034, India} \author[0000-0003-1371-8890]{C. Swastik} \affiliation{Indian Institute of Astrophysics, Koramangala 2nd Block, Bangalore 560034, India} \affil{Pondicherry University, R.V. Nagar, Kalapet, 605014, Puducherry, India} \keywords{techniques: spectroscopy --- methods: observational --- planets and satellites: formation --- stars: solar-type --- catalogs --- surveys: Kepler, LAMOST} \section{Introduction} \label{sec:intro} Planets and their host stars are formed together from the same molecular cloud. Naturally, the planet's chemical composition is expected to correlate with that of the host star.
Hence, studies of the host star's chemical abundances could constrain the planet's bulk abundance and the planet formation process. The connection between host star metallicity and giant planets was first observed by \citet{planet_metalicity_gonzalez, gonzalez1998} and confirmed by \citet{Santos2001, Santos2004b} with a larger sample; these authors also showed that the frequency of giant planet hosts increases steeply above solar metallicity. This rapid rise in giant planet (R$_{p}>4$R$_{\oplus}$) occurrence, from 3\% at solar metallicity to 25\% at [Fe/H]=0.3, was again shown by \citet{planet_metalicity}. \citet{planet_metalicity1} observed the giant planet-metallicity correlation over a wide range of stellar masses, with the occurrence at solar metallicity increasing from 3\% in M dwarfs to 14\% in A dwarfs. Although the metallicity trend was absent for stars that host smaller planets, a large spread in metallicities is observed among them \citep{Sousa2008,Neves2009}, and the low-mass planet-bearing stars at low metallicity were found to be rich in $\alpha$ elements \citep{Adibekyan2012a,Adibekyan2012c,Adibekyan2012b,fgk2}. \citet{Adibekyan2012a} suggested terrestrial planets could form early in the Galaxy among the thick disk stars due to their enhanced $\alpha$ abundances. A recent study by \citet{swastik_2022} showed that the [$\alpha$/Fe] ratio has a negative trend with planetary mass, indicating possible conditions for the formation of low-mass planets before Jupiter-like planets. The host star metallicity trend was also found to reverse for planet masses M $>$ 4M$_{J}$ \citep{mayank}. Directly imaged planets also show a large scatter in metallicity among super Jupiters, indicating that higher metallicity may not be necessary to form super Jupiters \citep{Swastik_2021}.
The enhanced abundance of volatile elements compared to refractory elements was first observed in the solar atmosphere \citep{melendez}; this could be used as a possible signature of the solar system among solar twins \citep{ramirez2009,melendez2012}. However, high-precision differential abundances of solar analogs and stellar twins in binary systems did not show a significant difference in the trend of stellar abundance with condensation temperature between planet hosts and non-hosts \citep{gonzalez_1,hernandez_2010,hernandez_2013,planet_li_eu}. In fact, \citet{adibekyan2014f} noticed a significant correlation of the slope of stellar abundance versus condensation temperature with stellar age and Galactocentric distance among Sun-like stars, which could be a cause for the observed difference in the volatile and refractory element abundances. Stellar lithium abundance could be a sensitive indicator of planet pollution; however, the results were inconclusive, showing a large spread in Li even among stars of very similar stellar parameters \citep{pollak,Israelian2009,gonzalez_1,Figueira_2014,gonzalez_2014,Li_Eu,Delgado_Mena_2014,Delgado_Mena_2015}. Carbon is produced in massive stars, similar to the $\alpha$ elements, at low metallicities, but low-mass AGB (Asymptotic Giant Branch) stars can also produce carbon at higher metallicities \citep{origin_carbon,Kobayashi_2020}, and hence the C/O ratio can change with time. \citet{bond_2010a, delgado_mena_2010} showed the importance of the C/O ratio in the formation of carbides and silicates during planet formation, which determines the planet mineralogy \citep{co_nikku}. \citet{c_n_o_s} studied 91 planet hosts and 31 non-host solar-type dwarf stars using atomic carbon lines and found no significant difference in [C/Fe] between the planet host and non-host stars. \citet{delgado_mena_2010} also found no difference in carbon abundance between giant planet hosts and non-host stars.
\citet{cno_ch_band} used the CH band at 4300\,\AA\ for deriving the carbon abundance, instead of the atomic lines at 5380.3\,\AA\ and 5052.2\,\AA, to study the carbon abundance of HARPS FGK stars with 112 giant planet hosts and 639 stars without known planets. Furthermore, they found that [C/Fe] does not vary as a function of planetary mass, indicating the absence of a significant contribution of carbon to the formation of planets. In this paper, we present an occurrence rate analysis with respect to carbon abundance, based on a large LAMOST (The Large Sky Area Multi-Object Fiber Spectroscopic Telescope) sample of main-sequence FGK stars in the Kepler field, using the CH G band at 4300\,\AA, to understand the importance of carbon abundance in the context of the planet formation process as well as GCE. The sample contains 825 confirmed planet host stars and 214 stars with planet candidates from the Kepler catalog, and 49215 stars without detected planets so far. \section{Data and target selection} \label{sec:data} LAMOST is a wide-field spectroscopic survey facility using a telescope with a 4 m clear aperture and a $5^{\circ}$ field of view. The survey obtains 4000 spectra in a single exposure to a limiting magnitude of r=19 at a resolution of R=1800 with simultaneous wavelength coverage of 370 - 900 nm \citep{Zhao2012}. We have used the LAMOST-Kepler project \citep{Zong_2018} Public Data Release 4 (DR4)\footnote{LAMOST DR4 complete data: \href{link:}{http://dr4.lamost.org/}} data for the current study. The observations were carried out between 2012 and 2017 and covered the entire Kepler field. A total of 227870 spectra belonging to 156390 stars were available in the database, of which the spectroscopic parameters for 126172 stars were available from the LASP pipeline \citep{lamost_data_reduction}. The spectra and the corresponding stellar parameters (e.g., T$_{eff}$, log$\,g$, $[Fe/H]$ and radial velocity) were obtained from the LAMOST database.
Additional parameters such as the mass and the radius of the planets are taken from the NASA Exoplanet Archive\footnote{NASA exoplanet archive: \href{link:}{https://exoplanetarchive.ipac.caltech.edu}} \citep{exoplanet_archive}. We restricted the analysis to main sequence stars ($4800 \leq T_{eff} \leq 6500$ K and \logg\ $\geq 4.0$), leading to a final sample of 49215 field stars and 1039 host stars with confirmed exoplanets or planet candidates. Figure \ref{hr1} shows the parameter range of the final LAMOST-Kepler sample. Figure \ref{teff_logg_snr} shows the SNR (Signal to Noise Ratio), \logg\ and $T_{eff}$ histogram distributions of the final sample. \section{Estimation of carbon abundances} The methodology for estimating the carbon abundance uses a grid of synthetic spectra of varying carbon abundances across various stellar parameters and interpolates the model spectra to match the observed spectra. In this work we used Kurucz ATLAS9 NEWODF \citep{kurucz_model} stellar atmospheric models by \citet{castelli2003} and the Turbospectrum \citep{turbo_spectrum_1} spectrum synthesis code V19.1 \citep{plez2012} for generating the synthetic spectra. The atomic and molecular line lists are the same as those of \citet{lee2008} and \citet{carollo2012}, with minor updates to the hyperfine structure and the inclusion of isotopes for the heavy elements. The synthetic grid covers the wavelength range 4200-4400\,\AA, which includes the CH G-band region sensitive to the carbon abundance. The synthetic spectra cover effective temperatures \teff $= 3500 - 7000$ K with an increment of $250$ K, \logg\ from $0.0$ to $5.0$ dex with an increment of 0.5 dex, and $[Fe/H]$ from $-1.0$ to $+0.5$ dex with an increment of 0.5 dex. The carbon abundance was varied over this stellar parameter range with a step size of $0.1$ dex. We used a Python script for the interpolation and the $\chi^{2}$ minimization between the observed and the model spectra.
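The grid-fitting step can be sketched as a $\chi^{2}$ minimization over the carbon-abundance axis of the grid. The following is a toy stand-in: the ``grid'' is a fabricated Gaussian G-band absorption whose depth scales with carbon abundance, not the actual Turbospectrum output described above:

```python
import numpy as np

# Toy stand-in for the synthetic grid: flux vs wavelength at 0.1 dex carbon steps.
c_grid = np.arange(-0.5, 0.51, 0.1)
wave = np.linspace(4200.0, 4400.0, 500)  # Angstrom, the G-band window

def toy_model(c_abund):
    """Hypothetical CH G band: absorption depth grows with carbon abundance."""
    depth = 0.3 + 0.2 * c_abund
    return 1.0 - depth * np.exp(-0.5 * ((wave - 4300.0) / 15.0) ** 2)

grid_flux = np.array([toy_model(c) for c in c_grid])

# "Observed" spectrum with noise; recover the abundance by chi^2 over the grid.
rng = np.random.default_rng(3)
obs = toy_model(0.2) + rng.normal(0.0, 0.005, wave.size)
chi2 = ((grid_flux - obs) ** 2).sum(axis=1)
best_c = c_grid[np.argmin(chi2)]  # recovers ~0.2
```

In the real pipeline the model fluxes are additionally interpolated between grid nodes at the LAMOST stellar parameters before the $\chi^{2}$ comparison.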
Since the wavelength coverage of the grid is limited, the stellar parameters from LAMOST were adopted and only the carbon abundance was varied when estimating the best fit between the observed and synthetic spectra. Figure \ref{spe1} shows an example of a best-fit spectrum. Solar-scaled abundances were used for the stellar model atmospheres and synthetic spectra in the range $[Fe/H] = +0.0$ to $+0.5$ dex, while for the metal-poor range, $[Fe/H] = -1.0$ to $-0.5$ dex, alpha-enhanced abundances of $[\alpha/Fe] = 0.4$ dex were used. Solar abundance values are taken from \citet{grevesse2007}, where $log(N(C)/N(H))+12= 8.39$ and $log(N(O)/N(H))+12=8.66$. The synthetic spectra grid uses an oxygen abundance ($[O/H]$) that scales with the metallicity for the metal-rich models ($0.0 < [Fe/H] < 0.5$) and follows the alpha abundance in the metal-poor models ($[Fe/H] < 0.0$), as expected from GCE\@. We checked the sensitivity of the derived carbon abundances to the assumed oxygen abundance and found the impact to be small for the current sample, as the targets have $T_{eff}>$ 4800 K and C/O $<$ 1.0. We also visually inspected the goodness of the spectral fit for all planet host stars, using plots similar to Figure \ref{spe1}. Figure \ref{spe2} shows the goodness of the fit at the two extreme $T_{eff}$ regimes. \section{Carbon abundances} \label{sec:carbon_abu} The carbon abundances derived in this work come from low-resolution spectra, fitting the strong CH feature. We corrected the LAMOST carbon abundances using common samples from the California Kepler Survey (CKS)\footnote{The CKS sample: \url{https://doi.org/10.3847/1538-4365/aad501}} \citep{Brewer18}. We compared the derived carbon abundances with previous studies and found that the trend of carbon abundance with respect to $[Fe/H]$ is consistent with the APOGEE (Apache Point Observatory Galactic Evolution Experiment) \citep{apogee-kepler} and HARPS (High Accuracy Radial velocity Planet Searcher) \citep{delgado_mena_2010} data.
We used 1025 common targets from CKS \citep{Brewer18} to derive the corrections. As shown in Figure \ref{teff_teff}, the temperature scales of the CKS and LAMOST common samples match well after removing 5$\sigma$ outliers. First we derived corrections between the CKS and LAMOST $[Fe/H]$ estimates (from the LAMOST catalog), which do not differ significantly (Figure \ref{cks-lamost_fit1}). The LAMOST and CKS $[C/H]$ values show some dependence on effective temperature (Figure \ref{cks-lamost_fit3}). So, in the next step, we derived corrections for the $[C/H]$ values as a function of $T_{eff}$ (Figure \ref{cks-lamost_fit2}). We verified the correction for the Sun using the HARPS solar (Sun-as-a-star) spectrum, convolved and re-binned to the LAMOST resolution. We also added Gaussian noise corresponding to an SNR=76, the mean SNR of the final sample. The stellar parameters adopted for the Sun are $T_{eff}=$5774 K, \logg\ = 4.3 dex and [Fe/H]=0.0. We found an offset of $[C/H]_{LAMOST}=-0.12$ at the solar temperature, which is consistent with the CKS corrections; the derived solar carbon abundance with the CKS correction applied is $[C/H]_{LAMOST}=0.09$ (Figure \ref{spe1}). In the following sections, we use only the CKS-corrected LAMOST $[Fe/H]$ and $[C/H]$ values. The CKS-corrected $[Fe/H]$ and $[C/H]$, along with the stellar parameters of the complete sample, are given in Table \ref{complete_sample}. We plotted the derived carbon abundances against the stellar parameters to check for any systematic trends. Figure \ref{teff_ch_logg} shows no obvious correlation between the derived carbon abundances and T$_{eff}$ or log$\,g$. Figure \ref{teff_ch_logg} (a) and (b) also show no systematic difference in the T$_{eff}$ and log$\,g$ distributions between giant planet hosts, small planet hosts, and the field stars. The mean errors in the carbon abundances across the stellar parameter range are also shown in the plots.
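A temperature-dependent correction of this kind can be derived with a simple polynomial fit after outlier rejection. The sketch below is our own illustration (the fitting order, single clipping pass, and function name are assumptions, not the actual procedure behind Figure \ref{cks-lamost_fit2}):

```python
import numpy as np

def teff_correction(teff, delta_ch, order=1, clip=5.0):
    """Fit Delta[C/H] = [C/H]_CKS - [C/H]_LAMOST as a polynomial in Teff
    for the common targets, rejecting clip-sigma outliers once."""
    teff, delta_ch = np.asarray(teff, float), np.asarray(delta_ch, float)
    coeffs = np.polyfit(teff, delta_ch, order)
    resid = delta_ch - np.polyval(coeffs, teff)
    keep = np.abs(resid) < clip * np.std(resid)
    return np.polyfit(teff[keep], delta_ch[keep], order)

# The corrected abundance is then
#   ch_corr = ch_lamost + np.polyval(coeffs, teff_star)
```

With exactly linear mock data plus one catastrophic outlier, the clipped fit recovers the input slope.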
The error in the carbon abundance is estimated from the $\chi^{2}$ difference for a fixed offset of $\delta[C/H] = \pm 0.1$ dex around the minimum $\chi^{2}$. Figure \ref{teff_ch_logg} (c) shows $[C/H]$ as a function of $[Fe/H]$; the positive trend between $[Fe/H]$ and $[C/H]$ is expected from the GCE effect. Both $[Fe/H]$ and $[C/H]$ increase linearly from low metallicity up to close to the solar value and then flatten. This is the typical behavior of $\alpha$ elements, indicating that carbon is primarily produced in massive stars. Carbon may start to increase slightly at the very metal-rich end due to carbon production in AGB stars; however, this is not very clear. Figure \ref{teff_ch_logg} (d) shows the trend of $[C/Fe]$ as a function of $[Fe/H]$, which again reflects the GCE of carbon with respect to iron. Both field stars and host stars follow a similar trend. From Figure \ref{teff_ch_logg} (d), the mean value of [C/Fe] as a function of [Fe/H] shows that small planet host stars are preferentially found at higher [C/Fe] values on the metal-poor side ($[Fe/H]< -0.2$) compared to the field stars. \section{Results} \label{sec:results} We study the distribution of carbon abundance for planets of different radii and the occurrence rates with respect to metallicity and carbon abundance. Using the Galactic velocity dispersion, we infer the ages of the sample independently of the chemical abundances, to understand the role of planet formation in the chemical composition. \subsection{Elemental abundance of the host stars as a function of planet population} We examined the elemental abundance distributions of three distinct stellar populations: (i) host stars of small planets (R$_{p} \leq 4R_{\oplus}$), (ii) host stars of giant planets (R$_{p} > 4R_{\oplus}$) and (iii) Kepler field stars with no known planet detection.
Distributions of $[Fe/H]$, $[C/H]$ and $[C/Fe]$ as a function of planetary radius are shown in Figure \ref{feh_distri}. We find that (a) giant planet hosts, on average, have a higher $[Fe/H]_{mean}$ than the hosts of small planets and the field stars. This indicates that giant planets are preferentially found around metal-rich host stars, consistent with previous studies \citep{Mulders16, Petigura18, mayank}. Even the small planet hosts have a slightly higher $[Fe/H]_{mean}$ than the field stars, perhaps indicating that $[Fe/H]$ plays some role in the formation of small planets as well \citep{small_radius_planet_feh1}. Similarly, in Figure \ref{feh_distri} (b), the distribution of $[C/H]$ follows a trend similar to that of $[Fe/H]$: the giant planet host stars are carbon-rich compared to the field stars and the small planet hosts. This $[C/H]$ trend is expected because $[C/H]$ increases with $[Fe/H]$ due to GCE\@. However, the difference between the $[C/H]$ distributions of small planet hosts and field stars is insignificant. Figure \ref{feh_distri} (c) shows the distribution of $[C/Fe]$ for host stars of different planet radii. We find that $[C/Fe]$ peaks at a higher value for the field stars than for the planet hosts, which could again be an effect of GCE\@. Since most of the field stars are $[Fe/H]$-poor compared to the planet hosts, their $[C/Fe]$ values are expected to be higher: most of the carbon in the Galaxy appears to have come from massive stars, and hence $[C/Fe]$ is above the solar value at lower metallicities \citep{origin_of_C_to_U}. Beyond solar metallicity, iron increases at a higher rate than carbon; hence the $[C/Fe]_{mean}$ value for the giant planet hosts is lower than for stars hosting small planets and for the field stars. The results are summarised in Table \ref{hist_result}.
\begin{table}[htb] \begin{center} \small \setlength\tabcolsep{4pt}\caption{Main results from the histogram distributions}\begin{tabular}{cccc} \hline\hline category&$[Fe/H]_{mean}$&$[C/H]_{mean}$&$[C/Fe]_{mean}$ \\ \hline field star&$-0.034\pm0.001$&$-0.036\pm0.001$&$-0.006\pm0.001$\\ small planet host&$-0.006\pm0.005$&$-0.025\pm0.005$&$-0.019\pm0.004$\\ giant planet host& $0.068\pm0.016$&$0.023\pm0.016$&$-0.044\pm0.012$\\ \hline \label{hist_result} \end{tabular} \end{center} \end{table} \subsection{Occurrence rate of planets as a function of host star abundance} The analysis described in the previous sections does not take into account the completeness of the Kepler survey, the detector efficiency, or the probability of detecting a planet. The real trend cannot be inferred from histograms alone. In order to derive a correlation between the host star elemental abundance and the planet radius that is free of selection effects and observational biases, we use the final $Kepler$ data release DR25\footnote{Kepler DR25 data: \url{https://doi.org/10.3847/1538-4365/229/2/30}} catalog \citep{Mathur_2017} to compute the occurrence rate of exoplanets as a function of radius and host star $[Fe/H]$ and $[C/H]$. We updated the Kepler DR25 catalog with revised stellar and planetary radii based on Gaia DR2 from \citet{Berger18}. Since the LAMOST metallicities and the derived carbon abundances are calibrated against the CKS values, we combined our sample with the CKS samples \citep{cks2,Brewer18}, which have both metallicities and carbon abundances; this also added samples for the occurrence rate estimation. To compute the occurrence rate as a function of planetary radius, we followed the prescription presented in \citet{Youdin11,Howard12,Burke15, Mulders16}. Similar to \citet{mayank}, we divided the sample into three $[Fe/H]$ bins: (i) sub-solar $[Fe/H]$ ($-0.8<[Fe/H]<-0.2$), (ii) solar $[Fe/H]$ ($-0.2<[Fe/H]<0.2$) and (iii) super-solar $[Fe/H]$ ($0.2<[Fe/H]<0.8$).
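The core of the inverse detection efficiency prescriptions cited above can be sketched as follows. This is a schematic of the weighting only, with illustrative numbers; it is not our full completeness model.

```python
import numpy as np

def occurrence_rate(p_transit, completeness, n_stars):
    """Each detected planet is weighted by 1/(geometric transit
    probability x pipeline completeness); the occurrence rate is the
    sum of weights per searched star."""
    w = 1.0 / (np.asarray(p_transit, float) * np.asarray(completeness, float))
    f = w.sum() / n_stars
    err = np.sqrt((w ** 2).sum()) / n_stars  # Poisson-like uncertainty
    return f, err

# e.g. two detections in one radius--[Fe/H] bin around 1000 searched stars
f, err = occurrence_rate([0.05, 0.10], [0.5, 1.0], 1000)  # f = 0.05
```

Each bin in planet radius and host-star abundance is treated this way, so a poorly detectable planet (grazing geometry, low completeness) counts for proportionally more than an easy detection.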
In Figure \ref{fig7} (a), the occurrence rate of the sample is shown as a function of planet radius and host star $[Fe/H]$. We also calculated the occurrence rate for the same exoplanet sample as a function of planet radius alone (Figure \ref{fig7} (b)); Figure \ref{fig7} (a) is thus a function of both host star $[Fe/H]$ and radius. Similar to \citet{mayank}, we normalized the occurrence rate in Figure \ref{fig7} (a) by the total occurrence rate as a function of radius (Figure \ref{fig7} (b)) to produce the normalized occurrence rate, which is a function of the host star $[Fe/H]$ only (Figure \ref{fig7} (c)). From Figures \ref{fig7} (a) and (b) it can be seen that smaller planets ($R_P \leq 4R_{\oplus}$) have similar occurrence rates in all three $[Fe/H]$ ranges, while giant planets ($R_P > 4R_{\oplus}$) have a higher occurrence rate at solar and super-solar $[Fe/H]$. This is consistent with previous works in the literature \citep[e.g.,][]{Mulders16, Petigura18, mayank}. To compute the occurrence rate of planets as a function of $[C/H]$, we divided the sample into three $[C/H]$ bins. Since $[C/H]$ is a strong function of $[Fe/H]$ (see Figure \ref{fig9} (c)), we converted the $[Fe/H]$ bin edges to $[C/H]$ using the linear relation \begin{equation} [C/H]=0.657\,[Fe/H]-0.165, \end{equation} and define the bins as (i) sub-solar $[C/H]$ ($-0.7<[C/H]<-0.3$), (ii) solar $[C/H]$ ($-0.3<[C/H]<0.0$) and (iii) super-solar $[C/H]$ ($0.0<[C/H]<0.2$). In Figure \ref{fig8} (a), the occurrence rate as a function of host star carbon abundance and planetary radius is shown. As in Figure \ref{fig7} (a), it is a strong function of both planetary radius and $[C/H]$. In Figure \ref{fig8} (b), the normalized occurrence rate of planets (normalized using Figure \ref{fig7} (b)) as a function of $[C/H]$ is shown.
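As a quick consistency check, the quoted $[C/H]$ bin edges follow from mapping the $[Fe/H]$ edges through the linear relation above and rounding to one decimal (a sketch of our own; note that the super-solar upper edge is set to 0.2 rather than mapped):

```python
def feh_to_ch(feh):
    # Linear [C/H]--[Fe/H] relation fitted to the sample (equation above).
    return 0.657 * feh - 0.165

# Map the lower three [Fe/H] bin edges onto [C/H]:
edges = [round(feh_to_ch(x), 1) for x in (-0.8, -0.2, 0.2)]  # [-0.7, -0.3, -0.0]
```

These reproduce the sub-solar, solar and super-solar $[C/H]$ boundaries used in the text.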
From Figure \ref{fig8}, we find that, as in Figure \ref{fig7}, the occurrence rate of giant planets is higher for stars with solar and super-solar $[C/H]$. We further analyzed the occurrence rate of planets as a function of $[C/Fe]$, dividing the sample again into three bins: (i) $-0.4 < [C/Fe] < -0.1$, (ii) $-0.1 < [C/Fe] < 0.1$, and (iii) $0.1 < [C/Fe] < 0.4$. In Figure \ref{fig9} (a), the occurrence rate as a function of host star $[C/Fe]$ and planetary radius is shown. We find that the occurrence rate of smaller planets ($R_P \leq 4R_{\oplus}$) is similar in all three $[C/Fe]$ bins, while the occurrence rate of giant planets ($R_P> 4R_{\oplus}$) is much higher for $[C/Fe] < 0.1$. This might indicate that volatile elements such as carbon do not play a significant role in the formation of giant planets. \subsection{Galactic space velocity dispersion} \label{sec:velocity_dis} The increase in the (normalized) occurrence rate as a function of $[C/H]$ indicates that carbon enhancement is a necessary ingredient for planet formation in the Galactic context, though it may not play as strong a role as $[Fe/H]$ in determining the size/radius of a planet. To further understand the planet population in the Galactic context, we need to understand the dependence and evolution of planetary and host star properties as a function of Galactic age. In Narang et al. (under review), we established that the critical threshold of $[Fe/H]$ in the ISM necessary to form Jupiter-like planets was only reached in the last 5-6 Gyr, indicating that Jupiters only started forming in the last 5-6 Gyr. Since the $[C/Fe]$ values are expected to change over the timescale of the Galactic thin disk, we further investigated whether probing the Galactic evolution of $[C/H]$ and/or $[C/Fe]$ might provide clues about the Galactic evolution of planetary systems. Similar to \citet{Binney00,Manoj05,Hamer19} and Narang et al.
(under review), we used the dispersion in the peculiar velocities of the stars as a proxy for the ages of stars in the Kepler field. We estimated the velocity dispersion (a proxy for age) as a function of $[Fe/H]$, $[C/H]$ and $[C/Fe]$. To compute the velocity dispersion, we first calculated the Galactic space velocity in terms of the $U$, $V$ $\&$ $W$ components following \citet{Johnson87,Ujjal}. The total velocity dispersion ($\sigma_{tot}$) for a given ensemble of stars is the quadratic sum of the individual components of the velocity dispersion in that ensemble, \begin{equation} \sigma_{tot} = \sqrt{\sigma_U^2+\sigma_V^2+\sigma_W^2} \end{equation} where $\sigma_U$, $\sigma_V$, and $\sigma_W$ are the velocity dispersions of the $U$, $V$, and $W$ components, each defined in the same manner as \begin{equation} \sigma_U^2 = \frac{1}{N}\sum_{i=1}^{N} (U_i-\Bar{U})^2, \end{equation} where $N$ is the number of stars. The velocity dispersion can then be converted to an average stellar age following the formalism of Narang et al. (under review): \begin{equation} \sigma_{tot}(\tau) = A \times \tau^{\beta} \label{eq11} \end{equation} {where $\tau$ is the average age of the host stars in a bin, and $A = 21.5 \, km\, s^{-1} \, Gyr^{-0.53}$ and $\beta =0.53$ are constants.} In Figure \ref{fig10}, we show the velocity dispersion of the Kepler field as a function of $[Fe/H]$. As the average field $[Fe/H]$ increases, the total velocity dispersion $\sigma_{tot}$ decreases, indicating that $[Fe/H]$-rich stars ($[Fe/H] > -0.2$) are younger. Similar behavior is seen in $\sigma_{U}$, $\sigma_{V}$, and $\sigma_{W}$ as well. Using equation \ref{eq11}, we can further convert $\sigma_{tot}$ to the average age of the stars. We find that $[Fe/H]$-rich stars ($[Fe/H] > -0.2$) have average ages between $\sim$4-6 Gyr. Further, from Figure \ref{fig7}, we find that most giant planets are found around $[Fe/H]$-rich stars.
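In code, the dispersion and age estimates above amount to the following minimal sketch (note that np.var uses the same 1/N normalisation as the dispersion equation):

```python
import numpy as np

def total_dispersion(U, V, W):
    """sigma_tot as the quadratic sum of the U, V, W dispersions."""
    return np.sqrt(np.var(U) + np.var(V) + np.var(W))

def age_from_dispersion(sigma_tot, A=21.5, beta=0.53):
    """Invert sigma_tot = A * tau**beta for the mean age tau in Gyr."""
    return (sigma_tot / A) ** (1.0 / beta)
```

For example, an ensemble with $\sigma_{tot} \approx 50$ \kms\ corresponds to an average age of $\tau \approx 5$ Gyr under the adopted $A$ and $\beta$.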
Hence, from Figure \ref{fig7} and Figure \ref{fig10}, we conclude that most of the giant planets ($R_P>$ 4 $R_\oplus$) in the Kepler field have an average age between $\sim$4-6 Gyr, while smaller planets have a much larger spread in host star $[Fe/H]$ and hence in age. Similarly, by combining the velocity dispersion as a function of $[C/H]$ from Figure \ref{fig11} with the occurrence rate of planets in the Kepler field as a function of $[C/H]$, we find that the average age of the host stars of giant planets ($R_P>4R_\oplus$) is between 4-5 Gyr. Similar age ranges are obtained based on $[C/Fe]$ as well (Figure \ref{fig12}). \section{Discussion} \label{sec:discussion} We have calculated the planet occurrence rate as a function of host star metallicity and carbon abundance. The distributions of $[Fe/H]$ and $[C/H]$ with respect to planet radius show that planets with $R_{p}>4R_{\oplus}$ are preferentially found around stars with solar and super-solar metallicities. At these preferred high metallicities, the GCE trend gives lower $[C/Fe]$ ratios, and the planet hosts follow the same trend as the field stars, as shown in Figure \ref{teff_ch_logg}. With the current sample, we do not find a significant difference in the $[Fe/H]$ versus $[C/Fe]$ trend above solar metallicity between the field stars and the planet hosts. We explored the difference in $[C/Fe]$ within a narrow metallicity bin to remove the GCE trend; however, this reduced the sample size significantly. A simple mean gives $[C/Fe] = -0.09$ for field stars and $-0.13$ for giant planet hosts at $[Fe/H] > 0.28$. However, at lower metallicities (where mostly low-mass planet hosts are present), the planet hosts may have slightly higher $[C/Fe]$ values than the field stars, similar to what is observed for the alpha elements \citep{Adibekyan2012a}. Hence, there may be a general preference for planet hosts to have higher abundances of metals.
Since planet hosts and field stars both follow the GCE trends in elemental abundance, it is difficult to test for a preference for higher $[C/Fe]$ among planet hosts at solar and super-solar metallicities: at high metallicity there are no overlapping stellar populations with different abundance ratios, analogous to the thick and thin disks. Measurements of the giant planet frequency at different Galactic distances from future microlensing surveys could cover a range of stellar metallicities, and possibly different abundance ratios. \section{Conclusion} We have used LAMOST-Kepler data for main-sequence dwarfs to derive carbon abundances and compared the planet hosts with the field stars. We restricted the sample to main-sequence dwarf stars to avoid effects due to stellar evolution. The distributions of carbon and iron with planet radius and the occurrence rate analysis show that giant planet hosts are metal-rich and carbon-rich compared to the field stars and the hosts of smaller planets. However, at super-solar metallicities, the $[C/Fe]$ values are lower than the solar ratio: at the metal-rich end, iron increases at a faster rate than carbon, which may be crucial for increasing the abundance of the refractory elements. Based on the Galactic space velocity dispersion, we found that the Jupiter host stars are younger, only about 4-5 Gyr old. From the detailed occurrence rate analysis, we conclude that carbon may not be as significant a contributor as iron to the mineralogy of planet formation. \section{Acknowledgement} We used LAMOST archival data and the NASA Exoplanet Archive for this study. The Guoshoujing Telescope (the Large Sky Area Multi-Object Fiber Spectroscopic Telescope, LAMOST) is a National Major Scientific Project built by the Chinese Academy of Sciences. Funding for the project has been provided by the National Development and Reform Commission.
LAMOST is operated and managed by the National Astronomical Observatories, the Chinese Academy of Sciences. We thank and acknowledge the immense contributions of Castelli in generating an extensive grid of stellar atmospheric models that this work used. \section{Appendix} \begin{table*}[h] \centering \begin{tabular}{@{}ccccccc@{}} \hline RA(degree) & DEC(degree) & $T_{eff}$ & \logg\ & [Fe/H]&[C/H]\\ \hline 12.9681 & 10.1142 & 5843.0 & 4.09& -0.5487& -0.2814 \\ 13.1028 & 8.7679 & 6058.0 & 4.34 & 0.0207 &-0.0390 \\ 13.1220 & 9.1860& 5044.0 & 4.57& -0.4775&-0.2193 \\ 13.1915 & 9.5228 & 5562.0 & 4.25 & 0.0681 & 0.0960 \\ 13.2336 & 8.7201 & 5401.0& 4.57& -0.4854& -0.1415 \\ 13.2383 & 10.6629 & 5024.0 & 4.46& -0.0979& 0.2636 \\ 13.2399 & 9.5898 & 5444.0 & 4.44& 0.0444&-0.0075 \\ 13.2497 & 9.5943& 5803.0 & 4.04 &-0.5329& -0.3562\\ 13.2887 & 9.7260 & 6266.0& 4.11& -0.1928 & -0.1047 \\ 13.3265 & 9.8816 & 5361.0 & 4.42 & -0.4696 & -0.2758 \\ 13.3561 & 11.1456 & 5858.0 & 4.44 & 0.0207& -0.0334 \\ 13.3963 & 9.6087 & 6201.0 & 4.03& -0.0662& -0.0568 \\ 13.4179& 9.7623 & 6465.0 & 4.15 & -0.3668 &-0.3385 \\ 13.4460 & 9.7859 & 5720.0& 4.34 & -0.5566 &-0.5352 \\ .......&.......&.......&.......&.......&.......\\ .......&.......&.......&.......&.......&.......\\ .......&.......&.......&.......&.......&.......\\ 303.2472 & 46.1495 & 5325.0 &4.42 & 0.4082 & 0.3392 \\ 303.2586& 45.8985 & 6281.0 & 4.50 &-0.0662 & 0.1934 \\ 303.2754 & 45.9066 & 5685.0 & 4.21 & 0.1472 & 0.0194 \\ 303.2776 & 46.4944 & 5498.0 & 4.32 & -0.3826 &-0.3450\\ 303.2888 & 45.2001 & 6111.0 & 4.17 & 0.0365 & -0.0256 \\ 303.3010 & 45.4655 & 5548.0 & 4.21& 0.1551 & 0.1180\\ 303.3096 & 45.9596 & 5933.0 & 4.36 & 0.0365 & -0.0331 \\ 303.3115 & 45.8891 & 6059.0 & 4.26 & -0.0109 & -0.0391\\ 303.3131 & 46.2382 & 5750.0 & 4.09 & 0.1156 &0.0007 \\ 303.3182 & 46.2866 & 6246.0 & 4.00& -0.1058 & -0.0723 \\ 303.3222 & 45.8463 & 5833.0 & 4.00 & -0.0662 & -0.1701 \\ 303.3513 & 46.2210 & 6059.0 & 4.07 & 0.0760 & -0.0091 \\ 303.3543 & 
46.0936 & 5697.0 & 4.42 & -0.0188 & -0.0421 \\ 303.3627 & 45.4936 & 5605.0 & 4.05 & -0.3193 & -0.1597 \\ 303.3804 & 45.6495 & 5788.0& 4.40 & 0.1077 & -0.1142 \\ 303.3810 & 45.7701 & 6286.0 & 4.23 &-0.0188 & -0.2871 \\ 303.4039 & 45.6290 & 5910.0 & 4.25 & -0.0109 & 0.1798\\ \hline \end{tabular} \caption{Sample data; the complete table is available.} \label{complete_sample} \end{table*} \bibliographystyle{aasjournal} \bibliography{c_h_main}
Title: HIPASS study of southern ultradiffuse galaxies and low surface brightness galaxies
Abstract: We present results from an HI counterpart search using the HI Parkes All Sky Survey (HIPASS) for a sample of low surface brightness galaxies (LSBGs) and ultradiffuse galaxies (UDGs) identified from the Dark Energy Survey (DES). We aimed to establish the redshifts of the DES LSBGs to determine the UDG fraction and understand their properties. Out of 409 galaxies investigated, none were unambiguously detected in HI. Our study was significantly hampered by the high spectral rms of HIPASS and thus in this paper we do not make any strong conclusive claims but discuss the main trends and possible scenarios our results reflect. The overwhelming number of non-detections suggest that: (A) Either all the LSBGs in the groups, blue or red, have undergone environment aided pre-processing and are HI deficient or the majority of them are distant galaxies, beyond the HIPASS detection threshold. (B) The sample investigated is most likely dominated by galaxies with HI masses typical of dwarf galaxies. Had there been Milky Way (MW) size (R_e) galaxies in our sample, with proportionate HI content, they would have been detected, even with the limitations imposed by the HIPASS spectral quality. This leads us to infer that if some of the LSBGs have MW size optical diameters, their HI content is possibly in the dwarf range. More sensitive observations using the SKA precursors in future may resolve these questions.
https://export.arxiv.org/pdf/2208.08640
\label{firstpage} \pagerange{\pageref{firstpage}--\pageref{lastpage}} \begin{keywords} galaxies: groups: general -- galaxies: evolution -- galaxies: dwarf -- radio lines: galaxies \end{keywords} \section{Introduction} A relatively unexplored area of extragalactic astronomy is the study of low mass (dwarf) and low surface brightness galaxies (LSBGs). Understanding the fainter end of the galaxy mass spectrum holds the key to questions related to galaxy formation, evolution and the mass budgets in these structures, and thus to improving cosmology models. Since their reporting in 2015, a class of fainter LSBGs, called ultradiffuse galaxies (UDGs; \citealt{vdokkum15}), has become a topic of interest to the astronomy community. To qualify as an UDG, a galaxy has to meet two criteria: \textcolor{black}{ it must have a central surface brightness ($\mu_{\rm g}$) of $\ge$ 24 mag arcsec$^{-2}$ and an effective radius\footnote{The effective radius of a galaxy is the radius at which half of the total light is emitted} (\re) $\ge$ 1.5 kpc \citep{vdokkum15}.} While LSBGs themselves are not a recent discovery \citep{impey88, dalc97, conselice18}, the 1000+ UDGs found \textcolor{black}{ projected} around the Coma cluster \citep{koda15} indicated for the first time their relative ubiquity in dense environments \citep{vaderBurg17}. This fact suggested that UDG studies had the potential to add new insights to our knowledge of galaxy and structure formation. Despite the relatively large number of reported UDGs, little is known about their properties and formation. Various secular and environmentally driven formation scenarios have been proposed, but detailed observations are needed to \textcolor{black}{ determine which ones are valid}. UDGs, and LSBGs in general, \textcolor{black}{ are optically faint galaxies with mostly low star formation rates \citep{wyder09}}. As a result, establishing their optical/UV and infrared (IR) properties is observationally expensive.
They are typically metal poor, limiting the practicality of molecular gas observations. However, outside cluster cores, UDGs and LSBGs are usually \hi\ rich, making \hi\ line observations a high priority tool to study these galaxies. Despite this, very few UDG \hi\ studies exist in the literature, mainly \textcolor{black}{ because} the field is new. Single dish targeted \hi\ UDG surveys yielding statistically significant results are so far limited to only a handful of studies \citep[i.e.][]{Leisman17, karunakaran20}. A few more \hi\ studies of UDGs have focused on \hi\ in isolated UDGs \citep{papastergis17}, \hi\ rich field UDGs \citep{Leisman17}, and UDGs in groups \citep{spekkens18, poulain22}. There are even fewer resolved \hi\ studies of UDGs \citep{sengupta19,mancera19,scott21, gault21, mancera21}. More extensive \hi\ studies of these galaxies are thus timely and relevant, as their abundance in different environments has important implications \textcolor{black}{ for} our knowledge of galaxy and large-scale structure formation. Using optical imaging from the Dark Energy Survey (DES; \citealt{abbott18}), \cite{tanoglidis21} reported a large number ($\sim$23790) of LSBGs over an area of $\sim$ 5000 deg$^{2}$, mainly in the southern hemisphere sky, with a fraction of them being UDG candidates. The \cite{tanoglidis21} LSBG catalogue was based on imaging data and thus lacked the redshift information necessary to determine the UDG fraction in the catalogue. Unlike the northern hemisphere, where a number of \hi\ surveys have been carried out, principally with the Arecibo 305\,m telescope, the \hi\ Parkes All Sky Survey (HIPASS) is the only extensive southern \hi\ survey available. Thus HIPASS provides an excellent opportunity to search for \hi\ counterparts to the LSBGs/UDGs in the \cite{tanoglidis21} catalogue and determine their redshifts.
In this paper we present the results of our \hi\ counterpart search for a subset of the southern hemisphere \citet{tanoglidis21} catalogue LSBGs, using \hi\ spectra extracted from HIPASS data cubes \citep{barnes01, meyer04, zwaan04, wong06}. We aim to understand what fraction of our sample has detectable \hi, their \hi\ properties, and, most importantly, the fraction of the \hi\ detected LSBGs that qualify as UDGs. \section{Sample and Methodology} \subsection{Sample Selection} \label{sec:selection} Our sample was selected from the southern LSBGs in the \citet{tanoglidis21} LSBG catalogue compiled from Dark Energy Survey (DES) optical imaging. According to the authors' definition, galaxies qualified as LSBGs if they had g--band effective radii $\ge$ 2.5$^{\prime\prime}$ and a mean surface brightness (in g band) $\ge$24.2 mag arcsec$^{-2}$. While the \citet{tanoglidis21} LSBGs were found to be distributed all across the southern sky, they also showed projected clustering around prominent known galaxy groups and clusters. About 80 such \textcolor{black}{ concentrations} were reported in \cite{tanoglidis21}. On the assumption that the clustering of the LSBGs around known galaxy groups holds in velocity space as well, and not just in projection, we selected the LSBGs associated with groups and clusters. Assuming that a large dwarf population dominates this catalogue, we selected primarily nearby groups. We expect that selecting the groups/clusters provides an approximate constraint on the distances to our targets. While a fraction of the LSBGs projected around the groups and clusters could be foreground or background galaxies, choosing nearby groups and clusters increases the probability that the targeted galaxies are at a similar redshift. As the \hi\ detection threshold increases with redshift, this approach tends to maximise the probability of detecting \hi\ in the LSBGs while minimising the search distance.
\textcolor{black}{ A fraction of the reported \cite{tanoglidis21} LSBGs in nearby groups and clusters are also UDG candidates.} In defining their UDG sample, those authors followed the standard definition of an UDG, i.e., g--band \re\ $\ge$ 1.5 kpc and central surface brightness $\mu_{\rm g}$ $\ge$24.0 mag arcsec$^{-2}$ \citep{vdokkum15}. \cite{tanoglidis21} used the distances to the groups or clusters with which the UDG candidates were presumed to be associated to estimate their \re. Detection of an \hi\ counterpart to these optical candidates would thus allow us to determine whether they are truly UDGs. We used the HIPASS spectra extracted from the online data release (\url{https://www.atnf.csiro.au/research/multibeam/release/}) to search for \hi\ counterparts to a sample of 409 \cite{tanoglidis21} LSBG candidates, with the aim of estimating the \hi\ content of the LSBGs. \textcolor{black}{ The same exercise was repeated using spectra extracted directly from the HIPASS cubes as a cross check.} Given the HIPASS spectral {\rm rms} of $\sim$13 mJy beam$^{-1}$ and velocity resolution of 18 \km\ \citep{meyer04}, and assuming the \hi\ emission appears over at least three consecutive channels, a galaxy with an \hi\ mass of $\sim 1.9\times 10^{8}\,{\rm M}_{\odot}$ at a distance of 20 Mpc should be detected at 3$\sigma$ significance with HIPASS\@. However, had we restricted our sample to distances $\le$ 20 Mpc, our sample size would have been very small. Therefore, we increased our distance limit, aware that with increasing distance the possibility of detecting galaxies with dwarf \hi\ masses is significantly reduced. However, not all LSBGs are dwarf galaxies; several LSBGs are known to be \hi\ rich and relatively optically extended \citep{sprayberry95, deblok96, Impey96}, and we therefore extended our search to groups with luminosity distances $\le$ 70 Mpc.
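The detection thresholds quoted here follow from the standard single-dish \hi\ mass relation, $M_{\rm HI} = 2.356\times10^{5}\,D^{2}\int S\,dv$ (in ${\rm M}_{\odot}$, with $D$ in Mpc and the integrated flux in Jy \km). A quick numerical check, as a sketch of our own using the HIPASS numbers above:

```python
def hi_mass_limit(dist_mpc, rms_jy=0.013, dv_kms=18.0, nchan=3, nsigma=3.0):
    """3-sigma HI mass detection limit (M_sun), assuming an
    nsigma*rms signal spanning nchan channels of width dv_kms,
    for a source at distance dist_mpc (in Mpc)."""
    s_int = nsigma * rms_jy * nchan * dv_kms  # integrated flux, Jy km/s
    return 2.356e5 * dist_mpc ** 2 * s_int

# hi_mass_limit(20) -> ~1.9e8 M_sun;  hi_mass_limit(70) -> ~2.4e9 M_sun
```

Both numbers match the limits quoted in the text at 20 Mpc and 70 Mpc.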
\textcolor{black}{ At 70 Mpc, a galaxy with an \hi\ mass of $\sim 2.4\times 10^{9}{\rm M}_{\odot}$, still in the dwarf galaxy \hi\ mass range, would be detected at the 3$\sigma$ level in a HIPASS spectrum}. Thus, even at 70 Mpc, a few LSBGs could potentially be detected, and we therefore included all \cite{tanoglidis21} LSBG candidates in clusters/groups and overdensities with distances $\le$ 70 Mpc in our sample of 409 LSBGs. Using the archival HIPASS data, we searched for \hi\ along the lines of sight to the 409 LSBGs associated with 18 groups and overdensities (15 known groups and 3 central galaxies) with luminosity distances $\le$ 70 Mpc. Table \ref{table1} lists the group names, coordinates, redshifts, luminosity distances, \textcolor{black}{ as well as the} number of \textcolor{black}{ associated} LSBGs and UDGs (in brackets). The redshifts of these groups/galaxy clusters are taken from \cite{tanoglidis21}. \begin{table*} \begin{minipage}{150mm} \caption{Groups searched for H{\sc i}. } \label{table1} \begin{center} \begin{tabular}{|l|l|l|l|l|l|l|} \hline (1)&(2)&(3)&(4)&(5)&(6)& (7)\\ Sl. no.\footnote{Serial number.} &Group/cluster name &R.A.\footnote{All group co-ordinates are from SIMBAD, except RXC J0152.9-1345 and RXC J0340.1-1835 which are from \cite{Piffaretti2011}.}& Dec.& Redshift\footnote{Redshift of the groups/clusters from \cite{tanoglidis21}.} & Lum. dist.\footnote{Luminosity distance of the groups/clusters from \cite{tanoglidis21}.}& No.
LSBGs (UDGs)\footnote{Number of LSBGs (UDGs) in each group/cluster from \cite{tanoglidis21}.}\\ &&[h\,m\,s]&[d\,m\,s]&&[Mpc]&\\ \hline 1&Abell S373 (Fornax) & 03:38:30.0 & $-$35:27:18.0 & 0.0046 & 19.0 & 59 (3) \\ 2&NGC 1401 & 03:39:21.9 & $-$22:43:29.0 & \textcolor{black}{0.0050} & 20.3 & 26 (1) \\ 3&RXC J0152.9-1345 & 01:52:59.0 & $-$13:45:12.0 & 0.0058 & 21.9 & 13 (0) \\ 4&RXC J0340.1-1835 & 03:40:11.4 & $-$18:35:15.0 & 0.0057 & 23.4 & 45 (1) \\ 5&NGC 1316 & 03:22:41.8 & $-$37:12:29.5 & \textcolor{black}{0.0059} & 24.4 & 17 (1) \\ 6&Abell 3820 & 21:52:32.0 & $-$48:23:54.0 & 0.0064 & 25.6 & 14 (0) \\ 7&NGC 7041 & 21:16:32.4 & $-$48:21:48.8 & \textcolor{black}{0.0065} & 26.0 & 14 (1) \\ 8&Abell S989 & 22:04:25.0 & $-$50:04:24.0 & \textcolor{black}{0.0098} & 40.3 & 25 (3) \\ 9&NGC 1162 & 02:58:56.0 & $-$12:23:54.8 & \textcolor{black}{0.0131} & 55.3 & 12 (2) \\ 10&NGC 145 & 00:31:45.7 & $-$05:09:09.6 & \textcolor{black}{0.0138} & 56.0 & 10 (0) \\ 11&NGC 829 & 02:08:42.2 & $-$07:47:26.9 & \textcolor{black}{0.0135} & 56.1 & 17 (5) \\ 12&NGC 1200 & 03:03:54.5 & $-$11:59:30.7 & \textcolor{black}{0.0135} & 57.0 & 30 (10)\\ 13&Abell 2964 & 02:01:06.4 & $-$25:04:31.7 & 0.0144 & 60.3 & 18 (5)\\ 14&NGC 1521 & 04:08:18.9 & $-$21:03:07.3 & \textcolor{black}{0.0142} & 61.4 & 14 (4)\\ 15&NGC 1208 & 03:06:11.9 & $-$09:32:29.4 & \textcolor{black}{0.0145} & 61.6 & 18 (5)\\ 16&NGC 199 & 00:39:33.2 & +03:08:18.8 & 0.0154 & 62.8 & 39 (12) \\ 17&NGC 7396 & 22:52:22.6 & +01:05:33.3 & \textcolor{black}{0.0166} & 68.0 & 18 (7)\\ 18&Abell S924 & 21:07:53.0 & $-$47:10:54.0 & \textcolor{black}{0.0162} & 68.9 & 20 (8)\\ \hline \end{tabular} % % \end{center} \end{minipage} \end{table*} \subsection{Search for \hi\ counterparts and comparison with spectra from the HIPASS cubes} Line-of-sight spectra were extracted from the HIPASS online archive for each of the 409 LSBGs in our sample in an attempt to detect \hi\ in them. Some caveats to this process need to be discussed. 
The FWHM of the HIPASS beam is large ($\sim$15$^{\prime}$), and in most cases the galaxy coordinates, although within the FWHM of the HIPASS beam, differed significantly from the HIPASS beam pointing centre. Additionally, while the canonical {\rm rms} for HIPASS is 13 mJy beam$^{-1}$, depending on sky position it varies from 13 -- 20 mJy beam$^{-1}$ \citep{zwaan04}. These {\rm rms} variations are often convolved with baseline ripples, which adds to the difficulty of detecting galaxies with low \hi\ mass. The pointing offset and the presence of other large group galaxies in the same redshift range within the HIPASS FWHM lead to the risk that the \hi\ signal from our intended target is confused with \hi\ emission from other galaxies within or slightly beyond the HIPASS beam. To minimise this risk for targets associated with groups, we restricted our search to only nearby groups ($D \le 70\,{\rm Mpc}$), while acknowledging that we may have missed several \hi\ counterparts due to this restriction. Figure \ref{fig1} demonstrates the effect of the high spectral {\rm rms} and baseline issues mentioned above. While NGC\,7398 ($D = 67.4\,{\rm Mpc}$), the galaxy in the figure, is not in our sample, it belongs to one of the groups we are investigating. It is a large \hi\ rich spiral, in contrast to the dwarf-dominated LSBG population of our sample. Thus the figure indicates that there is a low probability of detecting our targets with HIPASS unless they are \hi\ rich. The HIPASS cubes cover a $\sim 8^{\circ}\times 8^{\circ}$ sky area, with each pixel covering an area of $\sim 4^{\prime}\times 4^{\prime}$, and the HIPASS FWHM beam is $\sim$15$^{\prime}$ \citep{meyer04}. The spectra available from the website\footnote{\url{https://www.atnf.csiro.au/research/multibeam/release/}} are extracted using a single-pixel box at the location of the source, where \textcolor{black}{ the} pixel size is 8$^{\prime}$ $\times$ 8$^{\prime}$. 
\textcolor{black}{ While extracting spectra directly from the HIPASS cubes, we used a $3\,{\rm pixels}\times 3\,{\rm pixels}$ box (with pixel sizes of 4$^{\prime}$), closer to the HIPASS FWHM, for each source. We compared the entire set of HIPASS spectra available from the HIPASS website to the spectra extracted directly from the cubes. We found no significant difference; however, for our analysis we used the spectra from the $3\,{\rm pixels}\times 3\,{\rm pixels}$ boxes extracted from the HIPASS cubes.} \section{Results} \textcolor{black}{ Our search for \hi\ in the HIPASS cubes for the target galaxies along 409 lines of sight, associated with 18 groups/clusters, yielded no clear detection. There were four tentative detections, two associated with the Fornax cluster and one each with the NGC 1316 and NGC 145 groups (see Figures \ref{fornaxgc1}, \ref{fornaxgc2}, \ref{ngc1316gc3} and \ref{ngc145gc1}). The rest were all clear non-detections. All four tentative detections had the following common features: they were all narrow line features, spanning 2--4 channels, and appeared at velocities near 4500 \km\ and 2600 \km. The W$_{20}$ of our tentative detections ranged from 30--50 \km. Narrow line signals at these velocities can potentially be radio frequency interference (RFI). The HIPASS Data Release Help Page offers information on the frequencies at which known RFI signals can be seen. According to this page, the prime interfering line is the 11th harmonic of the 128 MHz sampler clock at 1408 MHz ($cz$ = 2640 \km). The page further states that while this is a narrow line, Doppler corrections may broaden it by up to 30 \km. Additionally, other residual narrow-band signals may be present in the HIPASS cubes, notably near 1400 MHz, or 4400 \km. 
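The correspondence between these RFI frequencies and the apparent velocities follows from the optical velocity convention for the 21\,cm line, $cz = c\,(f_{0}/f_{\rm obs} - 1)$ with $f_{0} = 1420.4058$ MHz. A minimal, purely illustrative sketch:

```python
# Apparent recession velocity at which a fixed-frequency interfering
# signal appears in an HI spectrum (optical convention):
# cz = c * (f_rest / f_obs - 1), for the 21 cm HI line.
C_KMS = 299792.458      # speed of light [km/s]
F_HI_MHZ = 1420.4058    # HI rest frequency [MHz]

def cz_optical(f_obs_mhz):
    """Velocity at which a signal at f_obs_mhz appears in an HI spectrum."""
    return C_KMS * (F_HI_MHZ / f_obs_mhz - 1.0)

print(f"{cz_optical(1408.0):.0f} km/s")  # sampler-clock harmonic, ~2640 km/s
print(f"{cz_optical(1400.0):.0f} km/s")  # residual narrow-band signal, ~4400 km/s
```

This reproduces the quoted velocity of the 1408 MHz sampler-clock harmonic ($cz \simeq 2640$ \km) and places the residual signals near 1400 MHz at $cz \simeq 4400$ \km, overlapping the velocities of our tentative detections.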
Since some of these RFI signatures match our tentative detections, we carried out the RFI check prescribed in the Data Release Help Page, i.e., extracting several spectra along a 1\degree\ radius around the candidate source position. Of our tentative sources, Fornax-C1, Fornax-C2 and NGC145-C1 showed narrow signals at the same velocity in their 1\degree\ radius tests, confirming that these tentative detections were in fact RFI. Sometimes spectrometer saturation may cause a sign bit inversion, which could explain the negative amplitudes seen at RFI frequencies. For NGC 1316-C1 we see no such feature, but NGC 1316-C1 was the weakest of the four tentative signals, at barely 2$\sigma$ significance. Thus we conclude that we have a complete non-detection of \hi\ signals in this search for \hi\ counterparts using the HIPASS data. We note that a few groups in our sample overlap with the ALFALFA\footnote{\url{http://egg.astro.cornell.edu/alfalfa/data/}} survey areas: two of our groups (NGC\,199 and NGC\,7396) overlap with the ALFALFA sky coverage, but the LSBGs in those groups were \hi\ non-detections in both HIPASS and ALFALFA.} \textcolor{black}{ Assuming the \cite{tanoglidis21} LSBGs to be group members, we next performed a spectral stacking experiment. Due to the lack of redshifts for the LSBGs, we assumed all the LSBGs had velocities equal to that of the nearest group in projection. Additionally, we only stacked the spectra of the blue galaxies, because these are expected to be \hi\ rich. The caveat here is that \textcolor{black}{ groups can have \hi\ velocity dispersions of up to 200 \km, with group members having a range of radial velocities, whereas the LSBGs are assumed to be at the group systemic velocity. 
Thus stacking in this case is likely to miss a major fraction of the galaxies.} However, given that these are nearby groups where individual galaxies \textcolor{black}{ with \hi\ masses $\geq 10^{8}\,\msolar$ should} be detected, the stacked signal would at least detect the LSBGs close to the systemic velocity of the group. Thus the systemic velocity of the host group was considered the zero velocity for all the blue LSBG spectra, and a $\pm$1000 \km\ range about the zero velocity was extracted for stacking. However, we did not detect any signal in the stacked spectra.} \textcolor{black}{The \cite{tanoglidis21} LSBG catalogue is based on DES DR1, from the first three years of data from the DES. Their paper contains a link (https://desdr-server.ncsa.illinois.edu/despublic/other\_files/y3-lsbg/) to their LSBG catalogues. We used the original version of the catalogue for our analysis. We note, however, that the above website also contains a second version of the catalogue, possibly a recent update of the original. Comparing the two catalogue versions for our sample showed that galaxies from six groups in our sample were reclassified as field LSBGs rather than group members in version two of the catalogue. The differences between the two versions of the Tanoglidis LSBG catalogues add additional uncertainty to the group memberships. However, whether we include or exclude these six groups, our complete \hi\ non-detection result remains unchanged, as do the conclusions.} \section{Discussion} \textcolor{black}{ Analysis of imaging data from the Dark Energy Survey (DES) provided a large sample of new LSBGs and UDGs, mainly in the southern hemisphere \citep{tanoglidis21}. They report a 2D clustering for the red LSBGs, where the galaxies appear preferentially near known groups and clusters. The authors report $\sim$80 such groupings. 
For a subset of that sample, 18 groups in total, we used the only available large-scale single-dish \hi\ survey in the southern hemisphere, HIPASS, to search for \hi\ counterparts. In the absence of spectroscopic redshifts, projected proximity to a group or cluster provided the initial distance constraint for our sample.} According to \cite{tanoglidis21}, the majority of the LSBGs associated with overdensities are redder than g -- i $\ge$ 0.60, and the redder LSBGs are more strongly clustered than the bluer ones. This introduces a bias in our sample, as bluer galaxies are more likely to be detected in \hi\ than red ones \citep{Leisman17,spekkens18,sengupta19}. \textcolor{black}{However, as a first step, we chose to probe the groups because this provides a better redshift constraint on the sample. Though rare, it is not impossible for redder LSBGs or dwarfs to contain substantial \hi\ \citep{Leisman17, papastergis17, karunakaran20, poulain22}, and thus we did expect \hi\ detections in at least a fraction of them. In addition, choosing groups does not imply that our sample is completely devoid of blue galaxies: while the dominant population in our 409 LSBG sample has a red colour, 108 are blue galaxies (g--i $<$ 0.6).} \textcolor{black}{ Our study resulted in \hi\ non-detections for all of the 409 lines of sight in 18 groups. For the HIPASS data, the upper limit on a galaxy's \hi\ mass ranges from $\sim 1.9\times 10^{8}\,{\rm M}_{\odot}$ (for 20 Mpc) to $\sim 2.4\times 10^{9}\,{\rm M}_{\odot}$ (for 70 Mpc).} Our 70 Mpc distance cut-off was chosen to ensure we did not miss higher \hi\ mass but more distant LSBGs, if any: while our best candidates are projected close to the nearest six groups in our sample (Table \ref{table1}), we extended our distance limit to 70 Mpc. 
Although the recently reported UDGs \citep{sengupta19, scott21} are predominantly dwarf-mass galaxies, several LSBGs have been reported to be \hi\ rich with moderate to large stellar disks \citep{bothun90, sprayberry93}. \textcolor{black}{ So if such galaxies with proportionally large \hi\ masses are present in the \cite{tanoglidis21} LSBG sample, extending the distance limit to 70 Mpc would help us detect them in those more distant groups. Here we discuss a few factors that could explain the \hi\ non-detections in our study.} According to \cite{tanoglidis21}, of the 409 target LSBGs in our sample, 108 have a blue DES colour (g -- i $<$ 0.6) and the majority, 301, are red (g -- i $\ge$ 0.6). While red galaxies can contain detectable \hi\ mass \citep[e.g.][]{Leisman17, papastergis17, karunakaran20, poulain22}, at least in the nearby groups, the chances of \hi\ detection in them are lower than in bluer galaxies \citep{bouchard05, grossi09, karunakaran20}. Additionally, if these galaxies are genuinely group members, the chance of them being \hi\ deficient is high. \hi\ deficiency from galaxy pre-processing in groups is a known phenomenon, and LSBGs with nominal stellar disk mass are more vulnerable to gas-stripping physical processes such as tidal interactions, harassment and ram pressure stripping than higher-mass galaxies \citep{vm01, seng06, kilborn09, odekon16}. \textcolor{black}{These group physical processes could make even the blue fraction of the LSBGs \hi\ deficient. However, this scenario alone appears insufficient to explain the complete non-detection of the 108 blue galaxies in the sample. Even with pre-processing active in groups, at least a small fraction of the blue galaxies should have been detected at HIPASS sensitivity. 
\hi\ deficient dwarf galaxies have been detected previously with HIPASS data in groups at similar distances \citep{seng06}.} An alternative explanation for these non-detections could be that the LSBGs, while projected close to the groups, are in fact background galaxies which fall below the HIPASS detection threshold. \textcolor{black}{ HIPASS's} \hi\ sensitivity falls off rapidly with distance, and if a large fraction of our sample are dwarfs and/or in the background of their Tanoglidis-assigned group, they would not be detected in HIPASS. \textcolor{black}{ The result from the spectral stacking of the blue galaxies supports this hypothesis. If our LSBGs are group members, statistically at least a fraction of them should have had velocities close to the group systemic velocity. \textcolor{black}{ Since the groups are at various redshifts, the rms of the total blue stacked spectrum cannot be used to quote upper limits on the \hi\ masses for groups at different distances; instead, each individual group's stacked spectral rms was used. The 3$\sigma$ upper limits on the \hi\ mass for the nearest ($\sim$20 Mpc) and the farthest ($\sim$70 Mpc) groups are $\sim 3.6\times 10^{7}\,\msolar$ and $\sim 7.3\times 10^{8}\,\msolar$, respectively.} For individual galaxies, this limit varies from $\sim 1.9\times 10^{8}\,\msolar$ to $\sim 2.4\times 10^{9}\,\msolar$ for the nearest and the farthest groups, respectively. These are normal \hi\ masses for dwarf galaxies and should have been easily detected in HIPASS, either individually or in the stacked spectra. } While \textcolor{black}{ our study only results in} non-detections, this exercise, carried out with the best available data at our disposal, \textcolor{black}{ provides a statistical trend for} \hi\ in the \cite{tanoglidis21} LSBGs. In that context, our results reveal two important trends. \textcolor{black}{ Of our sample of 409 targets, 68 are designated as UDG candidates in \cite{tanoglidis21} and the rest as LSBGs. 
This classification, however, assumes that the galaxies are at the same distances as the groups or clusters they are projected near. Our \hi\ results suggest that a large fraction of our sample galaxies might not in fact be clustered \textcolor{black}{ near the groups} they are projected close to. This effect is almost certainly impacting the estimate of \textcolor{black}{ the true number of} UDGs in the \cite{tanoglidis21} LSBG catalogue. Additionally, our work demonstrates the \textcolor{black}{ critical} importance of spectroscopic observations for these galaxies, since redshift confirmation is the only way to understand the true fraction of UDGs in this sample. This result, together with the low \hi\ detection rates of UDGs in clusters \citep{karunakaran20}, challenges our perceived idea of the clustering properties of UDGs.} UDGs are optically selected galaxies, and thus the UDG literature is dominated by optical imaging studies~\citep{vdokkum15, koda15, yagi16, roman17, shi17}. They were first reported in the Coma cluster, and subsequent reports of their discoveries also came mainly from groups and clusters, giving the impression of an enhanced population of these galaxies in such overdensities \citep{vaderBurg17}. \cite{tanoglidis21} also reported a similar clustering for red LSBGs and UDGs in the southern sky. Our overwhelming number of non-detections, \textcolor{black}{ even for dwarf LSBGs or UDGs with typical \hi\ masses}, raises doubts about the reported clustering properties. The 108 blue galaxies in our sample of 409 LSBGs have an even higher probability of being non-cluster or non-group members, because galaxies in groups undergo pre-processing, causing gas loss and redder colours. \textcolor{black}{ Deeper spectroscopic, optical or \hi, } observations are required to confirm or refute the association of UDGs and LSBGs with the groups/clusters. 
Our project was designed to detect \hi\ rich LSBGs of all sizes, \textcolor{black}{ including distant \hi\ rich dwarfs}, out to a distance of about 70 Mpc. \textcolor{black}{ The} lack of even a single clear detection of \textcolor{black}{ an LSBG or UDG with the \hi\ mass of the Milky Way (MW) suggests that our sample only contains galaxies with dwarf \hi\ masses}. Among the UDGs reported in recent years, a substantial fraction have \re\ $\ge$ 3.7 kpc (similar to or larger than that of the MW) \citep{2019ApJS..240....1Z}. The stellar masses of these galaxies may be equivalent to those of small dwarfs, but their \re\ mimics that of much larger galaxies. While these UDGs are considerably more extended than dwarf galaxies, it is not yet clear whether their \hi\ line widths, \hi\ masses and dark matter content are consistent with dwarf or more massive galaxies. Recently, \cite{gault21} imaged \hi\ in about ten UDGs and found the \hi\ mass and the \hi\ disk diameter to follow the correlation of \cite{wang16}; however, the \hi\ mass range covered in that work is less than $2\times 10^{9}\msolar$, in the range of dwarf galaxies. A scaling relation between the UDG \re\ and the DM halo mass was proposed by \cite{Zaritsky17} and is consistent with a globular cluster count study of six Coma UDGs with \re\ $\ge$ 3 kpc by \cite{saif22}. However, the \re\ -- DM halo mass relation is yet to be confirmed with DM halo mass estimates based on \hi\ rotation curves. Moreover, if this relation is established for cluster UDGs, it is not clear whether it would also hold for gas-rich field UDGs, where the formation mechanism may also be different. The lack of \textcolor{black}{spectroscopically confirmed distances} for our sample makes it impossible to ascertain how many of our 409 target LSBGs have \re\ $\ge$ 3.7 kpc. The \textcolor{black}{LSBGs in our sample with the } largest angular \re\ are in the range of 14 to 21$^{\prime\prime}$ \citep{tanoglidis21}. 
\textcolor{black}{In the absence of redshift measurements, these larger angular \re\ LSBGs could be at any redshift along the line of sight. If these larger angular \re\ galaxies, or a fraction of them, are at a distance of 70 Mpc, then their \re\ would be 4 -- 7 kpc, i.e. larger than that of the MW. LSBGs, or more specifically UDGs, with \re\ larger than that of the MW are not unusual and have been detected in \hi\ by \cite{Leisman17}. The non-detection of even a single extended galaxy (\re\ $\ge$ 3.7 kpc) in our study thus suggests two possible scenarios: (A) the sample consists entirely of LSBGs with \hi\ masses in the range of dwarf galaxies and is devoid of any higher \re\ galaxies; (B) if LSBGs with \re\ $\ge$ that of the MW are present in the sample, their non-detection in \hi\ suggests that they have dwarf-like \hi\ content and perhaps even dwarf-like dark matter content. Scenario (B) is consistent with recent results from \hi\ studies of faint LSBGs and UDGs. For example, \cite{gault21} studied a sample of UDGs with \re\ ranging from 1.9 to 6.3 kpc; irrespective of \re, the detected \hi\ mass was $\le 2\times 10^{9}\msolar$, the \hi\ mass typically found in dwarf galaxies. The lack of \hi\ detections in our study is consistent with the low \hi\ detection rates in other studies of UDGs and LSBGs. A recent \hi\ study of moderately extended (\re\ $\ge$ 2.5 kpc at the distance of Coma) UDGs from the SMUDGES survey \citep{karunakaran20} resulted in a low detection rate: about 70 UDG candidates were observed using the Green Bank Telescope (GBT), and about 9 UDGs were detected in \hi. The region surveyed was around the Coma cluster; however, none of the \hi\ detected UDGs were cluster members.} 
All of them belonged to the low-density environment in the foreground or background of the Coma cluster, which probably resulted in a better detection rate as opposed to a search inside a group or a cluster, where higher \hi\ deficiencies are expected. Additionally, the \hi\ masses of the detected galaxies were $\le 1.7\times 10^{9}\msolar$ irrespective of \re, \textcolor{black}{ which again seems to reinforce our findings of Scenario (B) above}. Compared to our 409 targets, \cite{gault21} and \cite{karunakaran20} had smaller sample sizes; however, both of those studies show a trend similar to our results with respect to the absence of \hi\ rich and large \re\ UDGs. \textcolor{black}{ While the sample is insufficient to make any strong claims, Scenario (B), combined with other studies in the literature showing that, irrespective of \re, the \hi\ masses of UDGs are typical of dwarf galaxies \citep{gault21,karunakaran20},} most likely suggests that a scaling relation as suggested by \cite{2019ApJS..240....1Z} may not be valid for UDGs. However, we clearly need more data and a statistically significant sample to confirm this. \textcolor{black}{The SKA precursors MeerKAT and ASKAP are located in the southern hemisphere. Both telescopes offer higher sensitivity and resolution than HIPASS and could therefore be used in future studies of LSBGs and UDGs with a higher probability of detecting \hi.} \section{Conclusions} Using archival HIPASS data, we searched for \hi\ counterparts to 409 LSBGs from the~\cite{tanoglidis21} catalogue of southern hemisphere LSBGs. \textcolor{black}{ We found no convincing \hi\ counterparts for any of the} 409 LSBGs in our sample. While our study was \textcolor{black}{ significantly hampered} by the high spectral {\rm rms} of HIPASS, the non-detections are not entirely a result of this. 
\textcolor{black}{ Our project was designed to detect \hi\ rich LSBGs of all sizes, including distant \hi\ rich dwarfs, out to a distance of about 70 Mpc. For example, at a distance of 20 Mpc the HIPASS data would allow us to detect an \hi\ mass of $\sim 1.9\times 10^{8}\,{\rm M}_{\odot}$, and at 70 Mpc, the distance of the farthest group in our sample, the detection limit would be $\sim 2.4\times 10^{9}\,{\rm M}_{\odot}$. These limits correspond to the \hi\ content of typical dwarf galaxies and small LSBGs up to that of gas-rich small spirals. Thus a complete non-detection cannot be due only to the limitation of the HIPASS spectral {\rm rms}.} Our non-detections suggest the following \textcolor{black}{ likely} scenarios: \textcolor{black}{ (I)} The majority of LSBGs are group members, but nearly all of them are \hi\ deficient due to pre-processing in those groups. While many of the red LSBGs could be highly \hi\ deficient and thus below the HIPASS detection limit, this scenario cannot explain the non-detection of all 108 blue galaxies in our sample. \textcolor{black}{ (II) Our perceived idea of UDG clustering may be incorrect. The majority of the \cite{tanoglidis21} LSBGs could be distant background galaxies to the groups and thus beyond the detection threshold of HIPASS. Without more sensitive spectroscopic measurements this cannot be confirmed. Our study highlights the crucial need for spectroscopy, optical or \hi, to estimate the redshifts and to understand whether LSBGs or UDGs are genuine group members. (III) The sample investigated by us appears to be dominated by galaxies with \hi\ masses in the dwarf range. Had there been LSBGs or UDGs in our sample with \re\ $\ge$ that of the MW and proportional \hi\ masses, even with the high spectral {\rm rms} of HIPASS, the detection rate would have been higher. 
We did not detect even a single MW-\re\ LSBG with an \hi\ mass of the order of a few times 10$^{9}\msolar$, as typically seen in extended UDGs \citep{Leisman17, karunakaran20}. This may imply that LSBGs or UDGs with stellar disks as extended as that of the MW probably have an \hi\ content similar to dwarf galaxies. Clearly, more sensitive observations using the SKA precursors may answer these questions in the future. } \section*{Acknowledgements} \textcolor{black}{ We thank the anonymous referee, whose comments have significantly improved the paper.} We thank Jayanta Roy and Bhaswati Bhattacharyya of NCRA-TIFR for useful discussions about the paper. The Parkes telescope is part of the Australia Telescope, which is funded by the Commonwealth of Australia for operation as a National Facility managed by CSIRO. YC acknowledges the support from the NSFC under grant No. 12050410259, the Center for Astronomical Mega-Science, Chinese Academy of Sciences, for the FAST distinguished young researcher fellowship (19-FAST-02), and MOST for grant no. QNJ2021061003L. TS acknowledges support by Funda\c{c}\~{a}o para a Ci\^{e}ncia e a Tecnologia (FCT) through national funds (UID/FIS/04434/2013), FCT/MCTES through national funds (PIDDAC) by grant UID/FIS/04434/2019, and by FEDER through COMPETE2020 (POCI--01--0145--FEDER--007672). TS also acknowledges support from DL 57/2016/CP1364/CT0009. YZM acknowledges the support of the National Research Foundation with grants no. 120385 and 120378. HC is supported by the Key Research Project of Zhejiang Lab (No. 2021PE0AC03). \section*{Data Availability} This project has used publicly available archival data: Koribalski, Baerbel; Staveley-Smith, Lister (2004): The HI Parkes All Sky Survey (HIPASS) image cubes. v1. CSIRO. Data Collection. https://doi.org/10.25919/5c36de6d37141. The spectra can also be downloaded from https://www.atnf.csiro.au/research/multibeam/release/.
\bibliographystyle{mnras}
\bibliography{cig}
% \bsp
% \label{lastpage}
Title: Extending empirical constraints on the SZ-mass scaling relation to higher redshifts via HST weak lensing measurements of nine clusters from the SPT-SZ survey at $z\gtrsim1$
Abstract: We present a Hubble Space Telescope (HST) weak gravitational lensing study of nine distant and massive galaxy clusters with redshifts $1.0 \lesssim z \lesssim 1.7$ ($z_\mathrm{median} = 1.4$) and Sunyaev Zel'dovich (SZ) detection significance $\xi > 6.0$ from the South Pole Telescope Sunyaev Zel'dovich (SPT-SZ) survey. We measured weak lensing galaxy shapes in HST/ACS F606W and F814W images and used additional observations from HST/WFC3 in F110W and VLT/FORS2 in $U_\mathrm{HIGH}$ to preferentially select background galaxies at $z\gtrsim 1.8$, achieving a high purity. We combined recent redshift estimates from the CANDELS/3D-HST and HUDF fields to infer an improved estimate of the source redshift distribution. We measured weak lensing masses by fitting the tangential reduced shear profiles with spherical Navarro-Frenk-White (NFW) models. We obtained the largest lensing mass in our sample for the cluster SPT-CLJ2040$-$4451, thereby confirming earlier results that suggest a high lensing mass of this cluster compared to X-ray and SZ mass measurements. Combining our weak lensing mass constraints with results obtained by previous studies for lower redshift clusters, we extended the calibration of the scaling relation between the unbiased SZ detection significance $\zeta$ and the cluster mass for the SPT-SZ survey out to higher redshifts. We found that the mass scale inferred from our highest redshift bin ($1.2 < z < 1.7$) is consistent with an extrapolation of constraints derived from lower redshifts, albeit with large statistical uncertainties. Thus, our results show a similar tendency as found in previous studies, where the cluster mass scale derived from the weak lensing data is lower than the mass scale expected in a Planck $\nu\Lambda$CDM (i.e. $\nu$ $\Lambda$ Cold Dark Matter) cosmology given the SPT-SZ cluster number counts.
https://export.arxiv.org/pdf/2208.10232
\title{Extending empirical constraints on the SZ--mass scaling relation to higher redshifts via \textit{HST} weak lensing measurements of nine clusters from the SPT-SZ survey at $z\gtrsim1$} \author{ Hannah Zohren$^{1}$\thanks{E-mail: hzohren@astro.uni-bonn.de}, Tim Schrabback$^{1}$, Sebastian Bocquet$^{2,3}$, Martin Sommer$^{1}$, Fatimah Raihan$^{1}$, Beatriz Hern\'andez-Mart\'in$^{1}$, Ole Marggraf$^{1}$, Behzad Ansarinejad$^{4}$, Matthew B. Bayliss$^{5}$, Lindsey E. Bleem$^{6,7}$, Thomas Erben$^{1}$, Henk Hoekstra$^{8}$, Benjamin Floyd$^{9}$, Michael D. Gladders$^{7,10}$, Florian Kleinebreil$^{1}$, Michael A. McDonald$^{11}$, Mischa Schirmer$^{12}$, Diana Scognamiglio$^{1}$, Keren Sharon$^{13}$, and Angus H. Wright$^{14}$} \institute{ Argelander Institut f\"ur Astronomie, Rheinische Friedrich-Wilhelms-Universit\"at Bonn, Auf dem H\"ugel 71, D-53121 Bonn, Germany \and Faculty of Physics, Ludwig-Maximilians University, Scheinerstr. 1, D-81679 M\"unchen, Germany \and Excellence Cluster ORIGINS, Boltzmannstr. 
2, D-85748 Garching, Germany \and School of Physics, University of Melbourne, Parkville, VIC 3010, Australia \and Department of Physics, University of Cincinnati, Cincinnati, OH 45221, USA \and High Energy Physics Division, Argonne National Laboratory, Argonne, IL, USA 60439 \and Kavli Institute for Cosmological Physics, University of Chicago, Chicago, IL, USA 60637 \and Leiden Observatory, Leiden University, PO Box 9513, 2300 RA, Leiden, the Netherlands \and Department of Physics and Astronomy, University of Missouri--Kansas City, 5110 Rockhill Road, Kansas City, MO 64110, USA \and Department of Astronomy and Astrophysics, University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637, USA \and Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139, USA \and Max-Planck-Institut f\"ur Astronomie (MPIA), K\"onigstuhl 17, 69117 Heidelberg, Germany \and Department of Astronomy, University of Michigan, 1085 S. University Ave, Ann Arbor, MI 48109, USA \and Ruhr University Bochum, Faculty of Physics and Astronomy, Astronomical Institute (AIRUB), German Centre for Cosmological Lensing, 44780 Bochum, Germany } \date{Received 23 December 2021; accepted 13 August 2022} \abstract{}{}{}{} \keywords{Gravitational lensing: weak -- Cosmology: observations -- Galaxies: clusters: general } \titlerunning{\textit{HST} WL study of nine high-$z$ SPT clusters} \authorrunning{H. Zohren et al.} \section{Introduction} Galaxy clusters trace the densest regions of the large-scale structure in the Universe. Studying their number density as a function of mass and redshift, therefore, provides insights into the cosmic expansion and structure formation histories, allowing for constraints of cosmological models \citep[e.g. ][]{Haiman2001,Allen2011}{}. 
The expected number of dark matter haloes at a given mass and redshift is predicted by the halo mass \mbox{function (HMF)}, which can be obtained from numerical simulations \citep[e.g. ][]{Tinker2008,McClintock2019,Bocquet2020}{}. A comparison of these predictions to observations of galaxy clusters as representatives of these haloes and their abundance serves as a probe, which is particularly sensitive to a combination of the cosmological parameters $\Omega_\mathrm{m}$, the matter energy density of the Universe, and $\sigma_8$, the standard deviation of fluctuations in the linear matter density field at scales of \mbox{8\,Mpc/$h$}. At the same time, cluster studies can constrain the dark energy equation of state parameter $w$. Such studies require samples of galaxy clusters with a well-defined selection function and covering a large redshift range. Common methods for the assembly of such samples include detection via the overdensity of galaxies in the optical/near-infrared (NIR) regime \citep[e.g. ][]{Rykoff2016}{}, via the X-ray flux \citep[e.g. ][]{Piffaretti2011,Pacaud2018,Liu2021}{}, or via the signal from the Sunyaev Zel'dovich (SZ) effect \citep[e.g. ][]{Bleem2015,Planck2016clustercounts,Hilton2021}{}. The thermal SZ effect \citep[][]{Sunyaev1972}{} describes a distortion of the cosmic microwave background (CMB) blackbody spectrum towards higher energy, caused when CMB photons experience an inverse Compton scattering with the energetic electrons in the intracluster medium. Since the signal is independent of redshift, detecting clusters through the SZ effect enables the assembly of cluster catalogues, which are nearly mass-limited and extend out to very high redshifts. Additionally, the uncertainties in the selection function are relatively low because the SZ-observable provides a mass proxy with a comparably low intrinsic scatter \citep[$\sim$ 20 per cent, e.g. ][]{Angulo2012}{}. 
These are promising prerequisites for cosmological studies through the comparison of the observed cluster mass function and the predicted HMF. However, accurate and precise calibration of the scaling relations between the observable mass proxy and the underlying unobservable halo mass as predicted by the HMF over a wide redshift range is needed to obtain meaningful cosmological constraints. This is especially important since the remaining uncertainties in the observable-mass scaling relations are the limiting factor hampering progress towards tighter constraints \citep[e.g. ][]{Dietrich2019}. It is, therefore, imperative to improve the cluster mass calibration out to the highest redshifts that are now accessible in cluster samples \citep{Bocquet2019,Schrabback2018,Schrabback2021}. Mass measurements from weak gravitational lensing are frequently used to obtain an absolute calibration of the normalisation of these scaling relations \citep[e.g. ][]{Okabe2010,Kettula2015,Dietrich2019,Herbonnet2020,Chiu2021,Schrabback2021}. Weak gravitational lensing causes a systematic distortion of the shapes of background galaxies when their light travels through the gravitational field of a foreground mass distribution. The weak lensing reduced shear quantifies the tangential distortion with respect to the centre of the mass distribution. The differential projected cluster mass distribution can be inferred from measurements of the reduced shear, without the need for assumptions about the dynamical state of the clusters. This is especially advantageous for high-redshift clusters because these objects are still dynamically young and may not have settled into hydrostatic equilibrium yet. The assumption of hydrostatic equilibrium is an important ingredient for measurements of the cluster mass with X-ray observations.
A complementary method to weak lensing studies with optical/NIR data is CMB cluster lensing, which measures the (stacked) weak lensing signal by galaxy clusters in maps of the temperature and polarisation of the CMB \citep[e.g. ][]{Raghunathan2019,Zubeldia2019,Madhavacheril2020}. Due to the high redshift of the CMB, the mass scale for high-redshift clusters is more easily accessible with this method, and constraints will become increasingly competitive with upcoming instruments such as SPT-3G \citep[][]{Benson2014} and CMB-S4 \citep[][]{CMBs4Collab2019}. In the low to intermediate redshift regime, wide-field ground-based surveys such as the Kilo Degree Survey \citep[KiDS, ][]{Kuijken2015}, the Dark Energy Survey \citep[DES, ][]{DES2005} and the Hyper-Suprime-Cam Survey \citep[HSC, ][]{Miyazaki2012} can calibrate cluster masses at the few per cent level via weak lensing, but they are not suited to obtain the critically required cluster masses at high redshifts. Their limited depth and ground-based resolution are not sufficient to resolve the shapes of the small and faint background galaxies behind high-redshift clusters. The aforementioned optical lensing studies have been limited to low to intermediate redshift regimes up to $z\sim 1$. It is important to extend the calibration of scaling relations to higher redshifts because cluster properties (e.g. thermodynamic properties such as density, temperature, pressure, and entropy) evolve over time. Upcoming surveys conducted with \textit{Euclid} \citep{Laureijs2011}, the \textit{Nancy Grace Roman Space Telescope} \citep[formerly known as WFIRST, ][]{Spergel2015}, and the Vera C. Rubin Observatory \citep{LSST2009} will provide improved and critically required constraints on the cluster masses over a wide redshift range, where the exquisite depth of the \textit{Nancy Grace Roman Space Telescope} will be particularly valuable for the very high-redshift regime. 
However, until these surveys become available, pointed follow-up studies provide the best option to constrain the cluster mass scale out to high redshifts. With this work, we present the first weak lensing constraints on the mass scale of SZ-selected clusters extending to redshifts above $z \gtrsim 1.2$, using galaxy shape measurements from \textit{HST} imaging. The median redshift of the sample with nine clusters studied here is $z = 1.4$. This study is an extension of the works by \citet[][henceforth \citetalias{Schrabback2018}]{Schrabback2018}, \citet[][henceforth \citetalias{Dietrich2019}]{Dietrich2019}, \citet[][henceforth \citetalias{Bocquet2019}]{Bocquet2019}, and \citet[][henceforth \citetalias{Schrabback2021}]{Schrabback2021} to constrain the redshift evolution of the SZ mass scaling relation based on clusters from the 2500\,deg$^2$ South Pole Telescope SZ survey \citep[SPT-SZ survey, ][]{Bleem2015}{}. With our high-redshift sample, we aim to tighten the constraints on the scaling relation parameter $C_\mathrm{SZ}$, describing its redshift evolution, which in particular helps to break the degeneracy of $C_\mathrm{SZ}$ with the dark energy equation of state parameter $w$. The structure of this paper is as follows: we provide a summary of the studied cluster sample in Sect. \ref{Sec:SPT cluster sample}. We then present the data reduction of our optical observations and describe the photometric calibration steps in Sect. \ref{Sec:Data+Data reduction}. The selection of background galaxies based on four photometric bands and the estimation of the source redshift distribution from photometric redshift catalogues are detailed in Sect. \ref{Sec: full section, BG gal selection + redshift distrib}. We describe the weak lensing shape measurements in Sect. \ref{sec:shapes}. We present our weak lensing mass constraints including an estimation of the weak lensing mass bias in Sect. \ref{sec:wlconstraints}. 
We constrain the observable-mass scaling relation incorporating the new lensing results for our high-redshift SPT cluster sample in Sect. \ref{Sec:ScalingRelAnalysis}. Finally, we discuss our results in Sect. \ref{Sec:Discussion} and summarise and conclude in Sect. \ref{Sec:Summary+Conclusions}. Unless indicated otherwise, we assume a standard flat $\Lambda$ Cold Dark Matter ($\Lambda$CDM) concordance cosmology throughout this paper with $\Omega_\mathrm{m} = 0.3$, $\Omega_\Lambda = 0.7$, and $H_0 = 70$\,km\,s$^{-1}$\,Mpc$^{-1}$, as approximately consistent with CMB constraints \citep[e.g. ][]{Planck2020}. We express masses in terms of $M_{\Delta \mathrm{c}}$, corresponding to a sphere within which the mean density is $\Delta$ times the critical density at the given redshift. All reported magnitudes in this work are AB magnitudes. We correct all magnitude measurements for Galactic extinction with the extinction maps by \citet{2011ApJ...737...103S}. \section{The high-$z$ SPT cluster sample and previous studies} \label{Sec:SPT cluster sample} \begin{table*} \centering \caption{Properties of the galaxy cluster sample.} \label{tab:Cluster sample properties} \begin{threeparttable} \begin{tabular}{lcr rrrr c} \hline \hline Cluster name & $z_\mathrm{l}$ & $\xi$ & \multicolumn{4}{c}{Centre coordinates (deg, J2000)} & $M_{500\mathrm{c,SZ}}$ \\ & & & SZ $\alpha$ & SZ $\delta$ & X-ray $\alpha$ & X-ray $\delta$ & [$10^{14}\,\mathrm{M}_\odot / h_{70}$] \\ \hline SPT-CL{\thinspace}$J$0156$-$5541 & 1.288\tnote{$a$} & 6.98 & 29.04490 & $-55.69801$ & 29.0405 & $-55.6976$ & $3.96^{+0.57}_{-0.65}$\\ SPT-CL{\thinspace}$J$0205$-$5829 & 1.322\tnote{$b$} & 10.40 & 31.44282 & $-58.48521$ & 31.4459 & $-58.4849$ & $5.06^{+0.55}_{-0.68}$\\ SPT-CL{\thinspace}$J$0313$-$5334 & 1.474\tnote{$a$} & 6.09 & 48.48090 & $-53.57809$ & 48.4813 & $-53.5718$ & $3.31^{+0.55}_{-0.61}$\\ SPT-CL{\thinspace}$J$0459$-$4947 & 1.710\tnote{$d$} & 6.29 & 74.92693 & $-49.78724$ & 74.9240 & $-49.7823$ &
$3.08^{+0.53}_{-0.53}$\\ SPT-CL{\thinspace}$J$0607$-$4448 & 1.401\tnote{$a$} & 6.44 & 91.89841 & $-44.80333$ & 91.8940 & $-44.8050$ & $3.60^{+0.57}_{-0.63}$\\ SPT-CL{\thinspace}$J$0640$-$5113 & 1.316\tnote{$a$} & 6.86 & 100.06452 & $-51.22045$ & 100.0720 & $-51.2176$ & $3.89^{+0.58}_{-0.65}$\\ SPT-CL{\thinspace}$J$0646$-$6236 & 0.995\tnote{$e$} & 8.67 & 101.63906 & $-62.61360$ & -- & -- & $4.97^{+0.64}_{-0.76}$\tnote{$^f$}\\ SPT-CL{\thinspace}$J$2040$-$4451 & 1.478\tnote{$c$} & 6.72 & 310.24832 & $-44.86023$ & 310.2417 & $-44.8620$ & $3.76^{+0.58}_{-0.63}$\\ SPT-CL{\thinspace}$J$2341$-$5724 & 1.259\tnote{$a$} & 6.87 & 355.35683 & $-57.41580$ & 355.3533 & $-57.4166$ & $3.58^{+0.51}_{-0.59}$\\ \hline \end{tabular} \textbf{Notes.} We list cluster names, SZ significance $\xi$, SZ coordinates of the centre and SZ masses as presented in \citetalias{Bocquet2019}. The X-ray coordinates correspond to the centroid positions estimated by \citet{McDonald2017}.\newline $^a$ Spectroscopic redshifts by \citet{Khullar2019}. $^b$ Spectroscopic redshift from \citet{Stalder2013}. $^c$ Spectroscopic redshift from \citet{Bayliss2014}. $^d$ Best redshift constraint currently available \citep[based on a spectral analysis of \textit{XMM-Newton} data, using the 6.7\,keV Fe emission line complex, ][]{Mantz2020}{}. $^e$ Observation design and data reduction followed the same procedures as described in \citet{Khullar2019}. More general results will be discussed in a future paper on high-$z$ spectroscopic measurements of SPT clusters. $^f$ We list the SZ mass recalculated at the updated redshift of the cluster. \end{threeparttable} \end{table*} We investigate nine massive and distant galaxy clusters at redshifts \mbox{$1.0 \lesssim z \lesssim 1.7$} detected by the SPT via their SZ signal. They were originally selected to have $z > 1.2$ according to the best redshift estimate available at the time. 
However, our analysis of more recent spectroscopic observations places the cluster SPT-CL{\thinspace}$J$0646$-$6236 at a lower redshift, $z = 0.995$ (see also note $^e$ in Table \ref{tab:Cluster sample properties}). Therefore, only the remaining eight clusters constitute the complete sample of galaxy clusters at high redshifts \mbox{$z \geq 1.2$} with the strongest detection significance of \mbox{$\xi \geq 6$} from the 2500\,deg$^2$ SPT-SZ survey \citep[][see Table \ref{tab:Cluster sample properties} for cluster properties]{Bleem2015}. The sample has a median redshift of \mbox{$z_\mathrm{med} = 1.4$}. Our study is the first homogeneous weak lensing analysis of a cluster sample of this size with a clean SZ-based selection function in this high-redshift regime. \citetalias{Bocquet2019} derive cosmological constraints with galaxy clusters from the 2500\,deg$^2$ SPT-SZ survey and provide updated redshift and SZ mass estimates for the SPT cluster sample, including the clusters studied here \citep[redshift updates for clusters relevant to this work are from ][]{Khullar2019,Mantz2020}. The SZ mass estimates incorporate a weak lensing mass calibration using data from \citetalias{Dietrich2019} and \citetalias{Schrabback2018}. The nine clusters in this work are also part of several previous studies. \citet{McDonald2017} examine \textit{Chandra} X-ray data for eight of these clusters and investigate the redshift dependency and compatibility with self-similar evolution of the ICM in a large sample of galaxy clusters. Their study includes an estimation of the positions of the cluster X-ray centres (see also Table \ref{tab:Cluster sample properties}) and the X-ray-based masses \citep[derived from the $M_\mathrm{gas}-M$ relation from ][]{Vikhlinin2009}, as well as density profiles and morphologies of the clusters.
\citet{Ghirardini2021} investigate thermodynamic properties, for example, density, temperature, pressure, and entropy with combined \textit{Chandra} and \textit{XMM-Newton} X-ray observations of seven clusters in our sample and compare them with the corresponding properties of low-redshift clusters. Additionally, \citet{Bulbul2019} include two of the clusters in their analysis of X-ray properties of SPT-selected galaxy clusters observed with \textit{XMM-Newton}. They constrain the scaling relations between the X-ray observables of the ICM (luminosity $L_\mathrm{X}$, ICM mass $M_\mathrm{ICM}$, emission-weighted mean temperature $T_\mathrm{X}$, and integrated pressure $Y_\mathrm{X}$), redshift, and halo mass. Further X-ray studies investigating astrophysical properties featuring one or more clusters from our sample include \citet{McDonald2013}, \citet{Sanders2018}, and \citet{Mantz2020}. There have also been efforts to obtain precise spectroscopic redshifts for the majority of clusters in our sample \citep[][]{Stalder2013, Bayliss2014, Khullar2019, Mantz2020}{}, where some studies specifically investigate the galaxy kinematics and velocity distributions \citep[][]{Ruel2014,Capasso2019}{}. Several multi-wavelength studies of cluster samples (including one or more clusters from our sample) with varying size investigate different cluster components such as the baryon content \citep[][]{Chiu2016,Chiu2018}{}, the properties, growth and star formation in brightest cluster galaxies \citep[BCGs, ][]{McDonald2016,DeMaio2020,Chu2021}{}, the mass-richness relation \citep[][]{Rettura2018}{}, environmental quenching of the galaxy populations in clusters \citep[][]{Strazzullo2019}{}, and AGN-feedback \citep[][]{Hlavacek-Larrondo2015,Birzan2017}{}. The cluster SPT-CL{\thinspace}$J$2040$-$4451 was already studied in a weak lensing analysis by \citet{Jee2017}, using infrared imaging from the Wide Field Camera 3 (WFC3) on the \textit{HST} for shape measurements. 
We compare their analysis strategy and ours in detail in Sect. \ref{Sec:Discussion}. \section{Data and data reduction} \label{Sec:Data+Data reduction} \subsection{HST ACS and WFC3 data} \label{Sec:HSTData reduction} We used high-resolution imaging from the \textit{HST} to measure galaxy shapes for the weak lensing analysis as detailed in \mbox{Sect. \ref{sec:shapes}}. The observational data analysed in our study were obtained during Cycles 19, 21, 23, and 24 as part of the SPT follow-up GO programmes 12477 (PI: F. High), 13412 (PI: T. Schrabback), 14252 (PI: V. Strazzullo), and 14677 (PI: T. Schrabback) in the filters F606W and F814W with the ACS/WFC instrument and F110W with the WFC3/IR instrument. We measured the shapes of galaxies for our weak lensing analysis in the F606W and F814W imaging, which have a field of view of $202'' \times 202''$ at a pixel scale of $0\farcs05/\mathrm{pixel}$. The ACS (Advanced Camera for Surveys) observations were obtained in a single pointing except for SPT-CL{\thinspace}$J$0205$-$5829 for which an additional larger $2\times 2$ mosaic was obtained in F606W as part of GO programme 12477. The field of view of WFC3 is $136'' \times 123''$ with a pixel scale of roughly $0\farcs128/\mathrm{pixel}$ (the pixels are not exactly square shaped). We observed $2\times2$ mosaics in the F110W filter (with partial overlap between pointings), which roughly match the field of view of the ACS observations. We used the observations in the F110W filter exclusively for the photometric selection of the background galaxies carrying the weak lensing signal. The integration times range between \mbox{2.3 and 5.5\,ks} (F606W), \mbox{3.3 and 4.9\,ks} (F814W), and \mbox{2.4\,ks} (F110W, spread out over a $2\times2$ mosaic to reach a minimum depth of \mbox{0.6\,ks} over the full ACS footprint) (see \mbox{Table \ref{tab:exposure times}}). 
\begin{table*} \centering \caption{Summary of the integration times, image quality, and depth from our observations with \textit{HST}/ACS, \textit{HST}/WFC3, and VLT/FORS2. } \label{tab:exposure times} \begin{threeparttable} \begin{tabular}{lccc ccc ccc ccc} \hline \hline & & F606W & & & F814W & & & F110W & & & $U_\mathrm{HIGH}$ & \\ \cmidrule(lr){2-4}\cmidrule(lr){5-7}\cmidrule(lr){8-10}\cmidrule(lr){11-13} Cluster name & $t_\mathrm{exp}$& IQ & depth & $t_\mathrm{exp}$ & IQ & depth & $t_\mathrm{exp}$ $^b$ & IQ & depth$^b$ & $t_\mathrm{exp}$ & IQ & depth \\ & [ks] & [$''$] & [mag] & [ks] & [$''$] & [mag] & [ks] & [$''$] & [mag] & [ks] & [$''$] & [mag] \\ \hline SPT-CL{\thinspace}$J$0156$-$5541 & 5.5 & 0.10 & 27.0 & 4.9 & 0.10 & 26.6 & 0.6 & 0.29 & 26.3 & 4.8 & 0.73 & 26.9\\ SPT-CL{\thinspace}$J$0205$-$5829 & 3.7$^a$ & 0.10 & 27.1$^a$ & 3.7 & 0.08 & 26.5 & 0.6 & 0.29 & 26.3 & 4.8 & 0.85 & 26.8\\ SPT-CL{\thinspace}$J$0313$-$5334 & 3.7 & 0.10 & 26.9 & 3.7 & 0.09 & 26.1 & 0.6 & 0.29 & 26.3 & 4.8 & 0.80 & 27.1\\ SPT-CL{\thinspace}$J$0459$-$4947 & 2.3 & 0.11 & 26.7 & 4.8 & 0.10 & 26.5 & 0.6 & 0.28 & 26.3 & 6.0 & 0.81 & 26.9\\ SPT-CL{\thinspace}$J$0607$-$4448 & 2.3 & 0.10 & 26.7 & 4.8 & 0.10 & 26.3 & 0.6 & 0.28 & 26.4 & 4.8 & 0.97 & 26.4\\ SPT-CL{\thinspace}$J$0640$-$5113 & 5.6 & 0.10 & 26.7 & 3.3 & 0.10 & 26.2 & 0.6 & 0.26 & 26.1 & 2.4 & 0.97 & 26.3\\ SPT-CL{\thinspace}$J$0646$-$6236 & 4.0 & 0.10 & 26.8 & 4.0 & 0.10 & 26.1 & 0.6 & 0.27 & 26.1 & 4.8 & 1.07 & 26.3\\ SPT-CL{\thinspace}$J$2040$-$4451 & 2.1 & 0.10 & 26.6 & 4.8 & 0.10 & 26.1 & 0.6 & 0.28 & 26.1 & 4.8 & 0.88 & 26.5\\ SPT-CL{\thinspace}$J$2341$-$5724 & 5.3 & 0.10 & 26.5 & 4.8 & 0.10 & 26.2 & 0.6 & 0.29 & 26.1 & 4.8 & 0.92 & 26.9\\ HUDF & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & 6.6 & 1.03 & 26.6\\ \hline \end{tabular} \textbf{Notes.} For the image quality (IQ), we report the full width at half maximum of the PSF, based on measurements with \texttt{Source Extractor}. 
The depth corresponds to $5\sigma$ limiting magnitudes, computed from the standard deviation of 1000 non-overlapping apertures without flux from detected sources. We used apertures with diameters of $0\farcs7$ for \textit{HST} bands and $1\farcs2$ for $U_\mathrm{HIGH}$.\newline $^a$ For the cluster SPT-CL{\thinspace}$J$0205$-$5829 a $2\times2$ ACS mosaic from GO programme 12477 and one single ACS pointing from GO programme 14677 are available in the F606W band. We have eight overlapping exposures in the region with the biggest overlap with our observations in the other bands. We report the integration time and depth based on this region. \\ $^b$ The F110W stacks are mosaics of eight exposures. The highest/intermediate/lowest depth is achieved, where eight/four/two exposures overlap, respectively. Since regions with only two overlapping exposures make up the most area in the stacks, we report integration times and depths equivalent to two exposures. \end{threeparttable} \end{table*} We performed the basic image reduction steps for the \textit{HST}/ACS imaging data with the ACS calibration pipeline \texttt{CALACS}\footnote{\url{https://hst-docs.stsci.edu/acsdhb}, Chapter 3}. However, we deviated from the standard processing steps regarding the correction for charge transfer inefficiency (CTI). As in previous studies by \citetalias{Schrabback2018} and \citetalias{Schrabback2021}, we performed the CTI correction with the algorithm by \citet{Massey2014} and applied it to both the \textit{HST}/ACS imaging data and the respective dark frames. Furthermore, we performed a quadrant-based sky background subtraction, improved the bad pixel masks by manually flagging satellite trails and cosmic ray clusters, and computed accurately normalised RMS maps following the prescription by \citet{Schrabback2010}.\\ The \textit{HST}/WFC3 imaging data reduction was performed similarly to the \textit{HST}/ACS imaging data reduction. 
We downloaded the pre-reduced \texttt{flt} frames, which had already undergone basic image processing via the WFC3 calibration pipeline \texttt{calwf3}\footnote{\url{https://hst-docs.stsci.edu/wfc3dhb}, Chapter 3}, but we did not perform a quadrant-based sky background subtraction because it is not suitable for the parallel read-out mechanism of WFC3. Instead, we used \texttt{Source Extractor} \citep[version 2.23.1, ][]{BertinArnouts1996} to obtain a background model, which we subtracted. This allowed us to account properly for gradients in the background level. These occasionally occur in particular due to a variable airglow line of He I in the lower \mbox{exosphere} at 10830\,\AA, which mostly affects the bands F105W and F110W (see Chapter 7.9.5 of the WFC3 instrument handbook\footnote{\url{https://hst-docs.stsci.edu/wfc3ihb}} and WFC3 ISR 2014-03). We did not perform a CTI correction for the WFC3 data, as the WFC3/IR detector is not affected by CTI. Following the initial data reduction, we employed the software \texttt{DrizzlePac}\footnote{\url{https://www.stsci.edu/scientific-community/software/drizzlepac.html}} to align and combine the \textit{HST} images, in particular using the tasks \texttt{TweakReg} and \texttt{AstroDrizzle}. This typically involved combining 4 to 10 exposures for \textit{HST}/ACS imaging or 8 exposures in a $2\times2$ mosaic for WFC3 imaging. For the stacking with \texttt{AstroDrizzle}, we used the \texttt{lanczos3} kernel at the native pixel scale of $0\farcs05$ ($0\farcs128$) of the ACS (WFC3) images to distribute the flux onto the output image. Additionally, we employed the RMS image as the weighting image. We produced the stack for the imaging in the F606W band first and subsequently used this stack as the astrometric reference image for the stacks in the F814W and F110W bands to ensure optimal astrometric alignment between the stacks.
\subsection{Very Large Telescope (VLT) FORS2 data} \label{Sec:VLTData reduction} We used additional observations from VLT/FORS2 in the $U_\mathrm{HIGH}$ passband obtained as part of the ESO programme 0100.A-0204(A) (PI: Schrabback) between November 18 and November 20, 2017. Together with the \textit{HST} imaging, these observations facilitated a robust photometric selection of background galaxies. The images were taken with the two blue-sensitive $2\mathrm{k}\times4\mathrm{k}$ E2V CCDs in standard resolution with $2\times2$ binning, providing observations over a field of view of $6\farcm8 \times 6\farcm8$ at a pixel scale of $0\farcs25/\mathrm{pixel}$. We observed the nine galaxy clusters in our sample and additionally one pointing centred on the \textit{Hubble Ultra Deep Field} \citep[HUDF, ][]{Beckwith2006}, which we used to assess the photometric calibration of the $U_\mathrm{HIGH}$ band. The integration times per cluster ranged between \mbox{2.4\,ks} and \mbox{6.6\,ks} (see Table \ref{tab:exposure times}). We reduced the data with the software \texttt{THELI}\footnote{\url{https://www.astro.uni-bonn.de/theli/}} \citep{Erben2005,Schirmer2013}. We performed a bias subtraction, flat-field correction, and a subtraction of a background model. The latter was obtained by taking advantage of the dither pattern applied between exposures. The images were median combined, resulting in the background model. This enabled us to distinguish features at a fixed detector position from sky-related signals. The background model was rescaled to the illumination level of the individual exposures and then subtracted from them. We applied a sky background subtraction using \texttt{Source Extractor} \citep{BertinArnouts1996}. We obtained the astrometric calibration based on the Gaia DR1 catalogue \citep{GaiaCollab2016a, GaiaCOllab2016b} as reference. Finally, the images were co-added. We did not match the astrometry of the VLT observations to the one of the \textit{HST} data. 
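The dither-based background modelling used in the \texttt{THELI} reduction above can be sketched as follows. This is a minimal illustration with \texttt{numpy}, assuming the dithered exposures are available as 2D arrays; the function names are our own and not part of \texttt{THELI}.

```python
import numpy as np

def dither_background_model(exposures):
    """Median-combine dithered exposures to isolate detector-fixed
    structure from sky-related signal (a sketch of the step described
    above; with sufficient dithering, sources land on different pixels
    in each exposure, so the pixel-wise median approximates the
    background pattern at a fixed detector position)."""
    return np.median(np.stack(exposures), axis=0)

def subtract_background(exposure, model):
    """Rescale the model to the exposure's illumination level
    (approximated here by the median), then subtract it."""
    scale = np.median(exposure) / np.median(model)
    return exposure - scale * model
```

The rescaling step mirrors the description above, where the background model is matched to the illumination level of each individual exposure before subtraction.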
Checking for offsets between \textit{HST} and VLT astrometry using sources detected with \texttt{Source Extractor}, we found small offsets of the order of 0.1\,arcsec, which we corrected for in the photometric analysis. \subsection{Photometry} \subsubsection{Photometric measurements with LAMBDAR} We performed photometric measurements on our fully reduced images with the Lambda Adaptive Multi-Band Deblending Algorithm in R \citep[\texttt{LAMBDAR}\footnote{\url{https://github.com/AngusWright/LAMBDAR}}, ][]{Wright2016}. This algorithm can perform consistent and matched aperture photometry across images with varying pixel scales and resolutions. Therefore, it is ideally suited for our analysis, which requires accurate and precise colour measurements between the \textit{HST} and VLT imaging with very different resolutions. In the following, we give a brief summary of the \texttt{LAMBDAR} algorithm. We refer the reader to \citet{Wright2016} for a more in-depth description. \texttt{LAMBDAR} requires at least two inputs: a FITS image and a catalogue of object locations and aperture parameters. Additionally, we provide a point-spread function (PSF) model for the FITS image. These files are read in as the first step, then the aperture priors from the catalogue are transferred onto the same pixel grid as the input FITS image, using the image's astrometric solution (stored in the FITS header). Subsequently, the aperture priors are convolved with the input PSF, and object deblending is executed based on the convolved aperture priors. Images are deblended via multiplication with a deblending function. For this, it is assumed that the total flux in a pixel equals the sum of the fluxes from sources with aperture models overlapping that pixel. The flux per source is distinguished with the help of the deblending function. This function is calculated using the second assumption that the PSF-convolved aperture model is a tracer of the emission profile of each source.
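The deblending assumption described above, namely that each pixel's total flux is the sum of the contributions of all sources whose PSF-convolved aperture models overlap that pixel, amounts to apportioning pixel fluxes according to the models' relative contributions. The following is our own simplified sketch of that idea, not \texttt{LAMBDAR}'s actual implementation:

```python
import numpy as np

def deblend_fractions(models):
    """Given PSF-convolved aperture models (one 2D array per source),
    return per-source deblending weights per pixel: each source's model
    value divided by the summed model, i.e. the assumed share of that
    pixel's total flux (a simplified sketch of the deblending function)."""
    stack = np.stack(models)             # shape (n_sources, ny, nx)
    total = stack.sum(axis=0)
    with np.errstate(invalid="ignore", divide="ignore"):
        frac = np.where(total > 0, stack / total, 0.0)
    return frac

def deblended_fluxes(image, models):
    """Apportion each pixel's flux among the sources and sum per source."""
    frac = deblend_fractions(models)
    return (frac * image).sum(axis=(1, 2))
```

In this picture, a pixel shared by two overlapping sources contributes to each source in proportion to the sources' model values at that pixel, which is exactly the role of the deblending function described above.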
Taking into account the local sky background and the noise correlation, both estimated using random/blank apertures, \texttt{LAMBDAR} calculates the object fluxes with the help of the deblended convolved aperture priors. Here, the code accounts for aperture weighting and/or missed flux through an appropriate normalisation of fluxes. Finally, flux uncertainties in relation to all of the above steps are determined. For our purposes, using \texttt{LAMBDAR} has two main advantages. Firstly, we can comfortably perform matched aperture photometry across our images with varying PSF sizes between $0\farcs08$ and $1\farcs07$. Secondly, the prior aperture definitions derived from high-resolution optical imaging allow for deblending of sources, leading to more accurate flux measurements, in particular in comparison to conventional fixed aperture photometry. For each cluster, we ran \texttt{Source Extractor} on the F606W image to obtain the input catalogue with object locations and aperture parameters. We set the detection and analysis thresholds to $1.4\sigma$. We required a minimum of 8 pixels for a source detection and set the deblending threshold to 32 with a minimum contrast parameter of 0.005. Before the detection, the images were smoothed with a Gaussian filter of 5 pixels in size with a full width at half maximum (FWHM) of 2.5 pixels. We checked for residual shifts in the astrometry of our images with respect to the F606W detection image and corrected for them with a linear shift if necessary to avoid biases in the flux measurements with \texttt{LAMBDAR}. For the \textit{HST} images, we used \texttt{TinyTim} \citep{Krist2011} to obtain a PSF model for the photometric analysis. For the ACS images (i.e. in F606W and F814W), we looked up the average focus from the duration of the observation at the \textit{HST} Focus Model tool\footnote{\url{http://focustool.stsci.edu/cgi-bin/control.py}}. Since this tool does not offer an estimate for WFC3/IR (i.e.
for the images in F110W), we assumed a focus offset of 0.0 microns as default\footnote{To cross-check this assumption, we measured the photometry with an alternative PSF model with a very different focus offset of 4.0 microns. We found that both measurements differ by \mbox{0.001\,mag} (median), which is negligible for our purposes.}. We used the central chip position as the position of reference for the estimation of the PSF model. In the case of the ACS instrument with two chips, we took the central pixel of chip 1 as a reference. For our VLT/FORS2 images, we obtained a PSF model with the help of the software \texttt{PSFEx} \citep{Bertin2011}. Some of our fully reduced images exhibited slight residual gradients in the background level. Therefore, we performed an initial run with \texttt{Source Extractor} to obtain a background-subtracted image. We used these as the FITS input images to be analysed with \texttt{LAMBDAR}. \subsubsection{Photometric zeropoints} \label{Sec: Photometric zeropoints} The photometric calibration for the \textit{HST} bands is straightforward. We obtained a photometric zeropoint (ZP) for each coadd from the header keywords \texttt{PHOTFLAM} and \texttt{PHOTPLAM}: \begin{equation} \begin{split} \mathrm{ZP}_\mathrm{AB} = & -2.5 \log_{10}(\mathrm{PHOTFLAM}) \\ & - 5.0 \log_{10}(\mathrm{PHOTPLAM}) -2.408\,. \end{split} \end{equation} \texttt{PHOTFLAM} is the inverse sensitivity, which facilitates the transformation from an instrumental flux in units of electrons per second to a physical flux density and \texttt{PHOTPLAM} denotes the pivot wavelength in units of \AA\footnote{\url{https://www.stsci.edu/hst/instrumentation/acs/data-analysis/zeropoints}}. Afterwards, we accounted for Galactic extinction with the extinction maps by \citet{2011ApJ...737...103S}\footnote{obtained from the website \url{https://irsa.ipac.caltech.edu/applications/DUST/}}. 
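The zeropoint relation above can be evaluated directly from the header keywords. In the sketch below, the numerical values are only of the typical order of magnitude for ACS/WFC data and are not taken from the actual image headers used in this work:

```python
import math

def acs_ab_zeropoint(photflam, photplam):
    """AB zeropoint from the HST header keywords PHOTFLAM (inverse
    sensitivity) and PHOTPLAM (pivot wavelength in Angstrom),
    following the relation given in the text."""
    return -2.5 * math.log10(photflam) - 5.0 * math.log10(photplam) - 2.408

# Illustrative input values (assumed, not from our headers):
zp = acs_ab_zeropoint(7.8e-20, 5921.0)
```

The resulting zeropoint converts instrumental fluxes in electrons per second into AB magnitudes; the Galactic extinction correction described above is then applied to the calibrated magnitudes.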
The challenge in the photometric calibration of the $U_{\mathrm{HIGH}}$ band is the lack of an adequate reference catalogue with well-calibrated $U$ band magnitudes for our cluster fields. In such a case, a common method for calibration is to use a stellar locus \citep{High2009}. It is based on the assumption that stars occupy a well-defined region, the stellar locus, in colour-colour space, independent of the line of sight. Then, the photometry can be calibrated by matching the photometry of stars in an observation to the universal stellar locus. However, we found that direct use of a stellar locus does not work well for our analysis due to the limited number of stars in the small fields of view. In addition, the large scatter resulted in substantial uncertainties in the stellar locus approach. We, therefore, developed a calibration strategy based on a galaxy locus, where we made use of the fact that galaxies have a characteristic distribution in colour-colour space, similar to stars occupying the stellar locus. We identified a reference galaxy locus from the 3D-HST photometric catalogues as presented in \citet{Skelton2014}. They summarised photometric measurements in the five CANDELS/3D-HST fields (AEGIS, COSMOS, GOODS-North [abbreviated GN], GOODS-South [abbreviated GS], and UDS) over a total area of $\sim 900$\,arcmin$^2$. Among other data, these include the following bands relevant for our reference galaxy locus: the \textit{HST} bands F606W and F814W and $U$ bands from various instruments such as CFHT/MegaCam (AEGIS, COSMOS, and UDS), KPNO 4\,m/Mosaic (GOODS-North), and VLT/VIMOS (GOODS-South). We describe in \mbox{Sect. \ref{Sec:common photometric system}} how we accounted for the differences in these effective band-passes. Compared to the CANDELS/3D-HST fields, our cluster fields are overdense at the cluster redshift, changing the local galaxy distribution in colour-colour space.
To account for this, we applied a preselection, which used the well-calibrated \textit{HST}-only colours to remove galaxies at the cluster redshift (see \mbox{Fig. \ref{Fig:Cuts to remove only cluster gals}}). In addition, the galaxy distribution varies locally due to line-of-sight variations. We reduced the impact of these by limiting the calibration with the galaxy locus to relatively faint galaxies in the regime \mbox{$24.2 < V_{606} < 27.0$}. We note that we optimised the calibration to be most accurate in this magnitude regime since it is the regime we used for the selection of background source galaxies (see Sect. \ref{Sec: full section, BG gal selection + redshift distrib}). Together, this allowed us to calibrate $U-V_{606}$ colour estimates in the cluster fields by matching the galaxy distribution of the $VIJ$-selected galaxies in the \mbox{$V_{606} - I_{814}$} versus \mbox{$U-V_{606}$} colour-colour space. For the calibration, we first accounted for Galactic extinction with the extinction maps by \citet{2011ApJ...737...103S}. We then smoothed the distribution of the galaxies in the $UVI$ colour-colour space with a Gaussian kernel\footnote{using scipy.stats.gaussian\_kde in python} both for the galaxies of the reference galaxy locus and the galaxies in our observation. We identified the peak position of the highest density and applied a shift to the $U_{\mathrm{HIGH}}$ magnitudes according to the difference in $U - V_{606}$ of the peak positions. We quantified the statistical uncertainty of our colour calibration scheme as 0.08\,mag and propagated it (see Appendix \ref{Appendix:ZP robustness with gal locus} for a robustness test of the $U_{\mathrm{HIGH}}$ band zeropoint calibration with the help of the reference galaxy locus; see Table \ref{Tab:Errorbudget of photometry,beta} for the effect of this statistical uncertainty on the average geometric lensing efficiency).
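The peak-matching step can be sketched with the same \texttt{scipy.stats.gaussian\_kde} tool mentioned above. The toy example below is our own construction, not the pipeline code: it smooths two galaxy distributions in colour-colour space, locates the density peaks on a grid, and recovers an artificial zeropoint offset between an observed and a reference galaxy locus.

```python
import numpy as np
from scipy.stats import gaussian_kde

def colour_peak(u_minus_v, v_minus_i, grid=120):
    """Locate the density peak of galaxies in (U - V606, V606 - I814)
    colour-colour space via a Gaussian KDE, evaluated on a regular
    grid (a simplified sketch of the calibration step above)."""
    kde = gaussian_kde(np.vstack([u_minus_v, v_minus_i]))
    uu = np.linspace(u_minus_v.min(), u_minus_v.max(), grid)
    vv = np.linspace(v_minus_i.min(), v_minus_i.max(), grid)
    U, V = np.meshgrid(uu, vv)
    dens = kde(np.vstack([U.ravel(), V.ravel()]))
    k = np.argmax(dens)
    return U.ravel()[k], V.ravel()[k]

def u_band_shift(obs_uv, obs_vi, ref_uv, ref_vi):
    """Zeropoint shift to apply to the observed U magnitudes: the
    difference of the reference and observed locus peaks in U - V606."""
    return colour_peak(ref_uv, ref_vi)[0] - colour_peak(obs_uv, obs_vi)[0]
```

Applying the returned shift to the observed $U$ magnitudes moves the observed locus peak onto the reference peak, which is the essence of the galaxy-locus calibration described above.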
\subsubsection{Defining a common photometric system} \label{Sec:common photometric system} When we investigate colour cuts for a suitable selection of background galaxies, we need to make sure to work in a consistent photometric framework. Regarding the $U$ bands, we have measurements from four different instruments at hand: $U_\mathrm{HIGH}$ from VLT/FORS2 (our observations), $U_\mathrm{MEGACAM}$ from CFHT/MegaCam, $U_\mathrm{KPNO}$ from KPNO 4\,m/Mosaic, and $U_\mathrm{VIMOS}$ from VLT/VIMOS \citep[the latter three filters are employed in different CANDELS/3D-HST fields in ][]{Skelton2014}. All of these have different effective filter curves. We, therefore, had to make sure that we employed these different bands to select consistent source populations, in particular regarding the \mbox{$U-V_{606}$} colour. Comparing the \mbox{$U-V_{606}$} colour of these populations, we found that there are small offsets among the CANDELS/3D-HST fields. We quantified these by identifying the peak position of the galaxy loci after smoothing the distribution with a Gaussian kernel (galaxies with \mbox{$24.2 < V_{606} < 27.0$}, where galaxies in the cluster redshift regime $1.2 \lesssim z \lesssim 1.7$ are excluded according to a cut in the $VIJ$ colour plane; see Sect. \ref{Sec: Photometric zeropoints}). We applied a shift to the $U$ bands to make the peak positions coincide with the peak position of the galaxy locus in GOODS-South as an anchor. We used this field as an anchor because we have observations of our own in $U_\mathrm{HIGH}$ in the HUDF situated within GOODS-South. We list the applied shifts in Table \ref{Tab:Galloc_comparisons} in the Appendix. As a cross-check, we compared the peak positions in the \mbox{$U-V_{606}$} colour distribution for differently selected galaxy subsamples in \mbox{Fig. \ref{Fig:UV-offsets in galaxy populations}}. Here, we generally found good agreement. 
For example, for the full population of galaxies with \mbox{$24.2 < V_{606} < 27.0$}, we measured a standard deviation of the density peak positions between the five CANDELS/3D-HST fields of 0.045\,mag. We conclude that the photometry is sufficiently comparable as a basis for the selection of background galaxies (we summarise systematic and statistical uncertainties connected to the photometry at the end of Sect. \ref{Sec: Background galaxy selection}). In addition to these considerations for the $U$ bands, we used \textit{HST} bands for which we have available observations for our cluster fields, that is, F606W, F814W, and F110W. Since not all reference catalogues have magnitude information on the galaxies in all of these bands, we needed to apply a few interpolations to estimate the fluxes and magnitudes of galaxies in our photometric system of filters. In this case, we performed an interpolation based on the closest available filters in effective wavelength, where one filter is redder (R) and one is bluer (B) than the missing filter (X): \begin{equation} \begin{split} F_\mathrm{X} &= s (\lambda_\mathrm{eff,X} - \lambda_\mathrm{eff,B}) + F_\mathrm{B}\,,\\ m_\mathrm{X} &= -2.5\log_{10}(F_\mathrm{X}) + \mathrm{ZP}\,,\\ \mathrm{with} \quad s &= \frac{(F_\mathrm{R} - F_\mathrm{B})}{(\lambda_\mathrm{eff,R} -\lambda_\mathrm{eff,B})}\,, \end{split} \end{equation} where $F$ denotes the flux, $m$ denotes the magnitude, $\mathrm{ZP}$ is the zeropoint (it is fixed to \mbox{$\mathrm{ZP} = 25.0$} for all bands in the \citet{Skelton2014} CANDELS/3D-HST photometric catalogues), and $\lambda_\mathrm{eff}$ is the effective wavelength of the respective filter. In a catalogue that covers the sources in all bands, we can gauge how well the interpolation typically represents the measured magnitude. Overall, there is a good match between the interpolated and the measured magnitudes. 
We do, however, see that the interpolation becomes increasingly noisy and asymmetric for fainter magnitudes. This is likely related to the (potentially different) depths of the available bands. None of the available reference catalogues provides measurements in the band F110W. Options for interpolation are to use a combination of either F105W and F125W, or F850LP and F125W, or F814W and F125W. Depending on the method used, we found that a small median offset of the order of $0.04$\,mag with a standard deviation of $0.07$\,mag can be introduced. We did not attempt to correct for such differences but we investigated the impact of systematic photometric offsets on the estimate of the average lensing efficiency in Appendix \ref{Appendix:Impact of syst. shifts in photom}, finding that the impact of such a systematic offset can well be neglected given our current statistical uncertainties. We also checked how well our photometry compares to measurements from \citet{Skelton2014} in Appendix \ref{Appendix:Comparison of S14 and LAMBDAR photometry}. From this, we concluded that slight offsets in photometry can occur, and we included the expected uncertainties in the overall error budget of our analysis (summarised at the end of Sect. \ref{Sec: Background galaxy selection}). \section{Photometric selection of source galaxies and estimation of the source redshift distribution} \label{Sec: full section, BG gal selection + redshift distrib} For a robust weak lensing analysis, it is important to preferentially select the galaxies at redshifts higher than the cluster redshifts. Only these galaxies carry the weak lensing signal that we are interested in. We need to estimate the expected source redshift distribution of the selected galaxies to quantify the average geometric lensing efficiency $\langle \beta \rangle$ defined as \begin{equation} \langle \beta \rangle = \frac{\sum \beta(z_i)w_i}{\sum w_i} \,, \end{equation} with the shape weights $w_i$ \citep[see ][ and \mbox{Sect. 
\ref{sec:shapes}}]{Schrabback2018} and \begin{equation} \beta = \frac{D_\mathrm{ls}}{D_\mathrm{s}} H(z_\mathrm{s} - z_\mathrm{l}) \,, \end{equation} where $D_\mathrm{l}$, $D_\mathrm{s}$, and $D_\mathrm{ls}$ denote the angular diameter distances to the lens at redshift $z_\mathrm{l}$, the source at redshift $z_\mathrm{s}$, and between lens and source, respectively. The Heaviside step function is defined as $H(x) = 1$ if $x>0$ and $H(x) = 0$ if $x \leq 0$. It is sufficient to estimate the averages $\langle \beta \rangle$ and $\langle \beta^2 \rangle$ to tie the measured weak lensing shear signal to the cluster mass \citep[e.g. ][]{Bartelmann2001}. A straightforward but observationally prohibitive way to identify the background galaxies is based on their spectroscopic redshifts. High-quality photometric redshifts can also be helpful if examined carefully for systematic outliers. Such redshift information is, however, not available for the galaxies in our observed cluster fields. Instead, we aim to use only the photometry from our observations to identify background galaxies. For this, we need reference catalogues of galaxies providing redshift and magnitude information in different bands. This allows us to understand how to distinguish background galaxies from contaminating foreground and cluster galaxies solely based on their colours. This is a commonly used strategy for weak lensing studies covering various redshift regimes \citep[e.g. ][]{Klein2019-APEX-WL,Schrabback2018,Schrabback2021}. In the following section, we first describe the reference catalogues used in this work. After that, we present suitable cuts in colour space to preferentially select background galaxies for the weak lensing analyses. These cuts, identified in photometric redshift reference catalogues, can be safely applied to the cluster fields because gravitational lensing is achromatic.
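As an illustration of the two equations above, a minimal pure-\texttt{numpy} sketch, assuming a flat $\Lambda$CDM cosmology with illustrative parameters ($H_0 = 70$\,km\,s$^{-1}$\,Mpc$^{-1}$, $\Omega_\mathrm{m} = 0.3$; in a flat universe $D_\mathrm{ls}/D_\mathrm{s}$ reduces to $1 - D_\mathrm{C}(z_\mathrm{l})/D_\mathrm{C}(z_\mathrm{s})$ with the comoving distance $D_\mathrm{C}$):

```python
# Sketch of beta = (D_ls / D_s) * H(z_s - z_l) and the shape-weighted
# average <beta>, for a flat LCDM cosmology (illustrative parameters).
import numpy as np

C_KM_S, H0, OMEGA_M = 299792.458, 70.0, 0.3

def comoving_distance(z, n=2000):
    """Line-of-sight comoving distance in Mpc (trapezoidal integration)."""
    zz = np.linspace(0.0, z, n)
    inv_e = 1.0 / np.sqrt(OMEGA_M * (1.0 + zz) ** 3 + 1.0 - OMEGA_M)
    return C_KM_S / H0 * np.sum((inv_e[1:] + inv_e[:-1]) * np.diff(zz)) / 2.0

def beta(z_s, z_l):
    """In a flat universe D_ls/D_s = 1 - D_C(z_l)/D_C(z_s); the Heaviside
    factor sets beta = 0 for sources in front of (or at) the lens."""
    if z_s <= z_l:
        return 0.0
    return 1.0 - comoving_distance(z_l) / comoving_distance(z_s)

def mean_beta(z_sources, weights, z_l):
    """<beta> = sum_i beta(z_i) w_i / sum_i w_i with shape weights w_i."""
    b = np.array([beta(z, z_l) for z in z_sources])
    w = np.asarray(weights, dtype=float)
    return float(np.sum(b * w) / np.sum(w))
```

Foreground galaxies enter the weighted average with $\beta = 0$, which is why a residual foreground contribution is tolerable as long as the redshift distribution is modelled accurately.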
\subsection{Redshift catalogues} \label{Sec:Referenct Redshift cats} \subsubsection{UVUDF} The HUDF is a region of the sky that has been studied extensively both spectroscopically and in various photometric filters by the \textit{HST}. \citet[][henceforth \citetalias{Rafelski2015}]{Rafelski2015} conducted a joint analysis of imaging ranging from near-ultraviolet (NUV) bands F225W, F275W, and F336W \citep[UVUDF,][]{Teplitz2013}, over optical bands F435W, F606W, F775W, and F850LP \citep{Beckwith2006}, to near-infrared (NIR) bands F105W, F125W, F140W, and F160W \citep[UDF09 and UDF12, ][]{Oesch2010a,Oesch2010b,Bouwens2011,Koekemoer2013, Ellis2013}. These data sets cover an area of $12.8$\,arcmin$^2$, but only $4.6$\,arcmin$^2$ have full NIR coverage. \citetalias{Rafelski2015} provide photometric redshifts obtained with the code \texttt{BPZ} \citep[][]{Benitez2000}, which are highly robust due to the exquisite depth and high wavelength coverage of the data sets \citep[e.g. demonstrated in][who found a median of $|(z_\mathrm{MUSE}-z_\mathrm{p}) / (1 + z_\mathrm{MUSE})| < 0.05$ from a comparison of photometric redshifts to high quality redshifts from the MUSE integral field spectrograph]{Brinchmann2017}. Given their accuracy, the \citetalias{Rafelski2015} photo-$z$s provide an important benchmark for our computation of the average lensing efficiency. However, the small area covered in the sky leads to a substantial impact of sampling variance. Consequently, we also need to incorporate other data sets, which are shallower but cover a larger footprint in the sky (see Sect. \ref{Sec:3D-HST cat description}). \subsubsection{3D-HST} \label{Sec:3D-HST cat description} \citet[][henceforth \citetalias{Skelton2014}]{Skelton2014} present catalogues with photometric measurements in filters covering a wide wavelength range and photometric redshifts for galaxies from the CANDELS/3D-HST fields over a total area of $\sim 900$\,arcmin$^2$. 
Their aim is to homogeneously combine various data sets available for these fields. Firstly, this includes the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey \citep[CANDELS,][]{Grogin2011,Koekemoer2011}. It is an imaging survey conducted with \textit{HST}/WFC3 and \textit{HST}/ACS in five fields of the sky, namely AEGIS, COSMOS, GOODS-North, GOODS-South, and UDS. Secondly, the 3D-HST program \citep{Brammer2012} provides slitless spectroscopy obtained with the WFC3 G141 grism for galaxies across nearly 75 per cent of the CANDELS area and thus includes redshifts and spatially resolved spectral lines. Additionally, the WFC3 G141 grism spectroscopy data products are presented in \citet{Momcheva2016}, who also developed software to optimally extract spectra for the objects from the \citetalias{Skelton2014} photometric catalogues. \citetalias{Skelton2014} combined the photometric data sets from the CANDELS and 3D-HST programmes with available ancillary data sets in the five CANDELS/3D-HST fields by using a common WFC3 detection image, conducting consistent PSF-homogenised aperture photometry, and estimating photometric redshifts and redshift probability distributions with the code \texttt{EAZY} \citep[][]{Brammer2008}. The \citetalias{Skelton2014} photometric redshift catalogues form an excellent basis to estimate the redshift distribution for our weak lensing study. They cover a large area on the sky distributed over five independent lines-of-sight. This helps to combat sampling variance when estimating the average lensing efficiency. Additionally, the wide wavelength coverage, especially including deep NIR observations, is particularly valuable for robust redshift measurements out to high redshifts, as required for this study. 
However, \citetalias{Schrabback2018} and \citet[][henceforth \citetalias{Raihan2020}]{Raihan2020} show that the photometric redshifts by \citetalias{Skelton2014} suffer from catastrophic outliers, which can significantly bias weak lensing mass measurements. Through the comparison of photometric redshift measurements from \citetalias{Skelton2014} and \citetalias{Rafelski2015}, \citetalias{Raihan2020} found that these outliers led to a systematic underestimation of the mean geometric lensing efficiency by $-13.2$ per cent (for clusters at a redshift of 0.9) with a catastrophic outlier fraction of 5\,per cent. \citetalias{Raihan2020} were able to mitigate this by recomputing the photometric redshifts using the code \texttt{BPZ} instead of \texttt{EAZY}. In particular, the interpolation of the implemented spectral energy distribution (SED) template set helped reduce the bias\footnote{When recomputing the photo-$z$s, \citetalias{Raihan2020} employed an approximately homogeneous subset of broad-band filters (between $U$ and $H$ band), which are available for all five CANDELS fields. Since they dropped additional bands, this may increase the scatter in some of the photo-$z$ estimates compared to the \citetalias{Skelton2014} catalogue. However, for our analysis it is more important to have accurate estimates of the overall redshift distribution of colour-selected high-$z$ lensing source galaxies, as provided by the \citetalias{Raihan2020} catalogues.}. For our weak lensing study, we used the updated \citetalias{Raihan2020} photometric redshift catalogues in the five CANDELS/3D-HST fields to estimate the average redshift distribution and lensing efficiency of our samples of selected background galaxies. Additionally, \citetalias{Schrabback2018} found some systematic deviations between the \citetalias{Rafelski2015} photometric redshifts and the grism redshifts \citep{Brammer2012,Momcheva2016}. 
Upon revisiting this comparison, now including MUSE spectroscopic redshifts \citep[][see Sect. \ref{Sec:MUSE} below for details]{Inami2017}, \citetalias{Raihan2020} identified the affected redshift regimes and corrected the respective bias by subtracting the median offset. This bias amounts to 0.081 (0.162) for the photo-$z$ regime \mbox{$1.0 < z < 1.7$} \mbox{($2.6 < z < 3.2$)}. The resulting `fixed' redshift catalogues do not suffer from issues with catastrophic redshift outliers and are denoted as R15\_fix catalogues. \subsubsection{HDUV} The Hubble Deep UV Legacy Survey \citep[HDUV,][henceforth \citetalias{Oesch2018}]{Oesch2018} is an imaging programme that expands on the \citetalias{Skelton2014} catalogues with deeper UV observations in the WFC3/UVIS bands F275W and F336W. It targets $\sim 100$\,arcmin$^2$ within the GOODS-North and GOODS-South fields. \citetalias{Oesch2018} conducted photometry consistent with \citetalias{Skelton2014} regarding the detection image and flux measurements and recalculated photometric redshifts with the \texttt{EAZY} code including their deeper UV images. \subsubsection{MUSE} \label{Sec:MUSE} The MUSE Hubble Ultra Deep Field Survey \citep{Bacon2015,Inami2017,Brinchmann2017} comprises spectroscopic redshift measurements of almost 1400 sources in the HUDF region. This increases the number of available spectroscopic redshifts in this region by a factor of eight. It was conducted with the Multi Unit Spectroscopic Explorer (MUSE) at the Very Large Telescope. \citet{Inami2017} provide spectroscopic redshifts for sources with a completeness of 50 per cent at 26.5\,mag in F775W. The redshift distribution includes sources beyond \mbox{$z > 3$} and up to an F775W magnitude of $\sim 30$\,mag. This spectroscopic redshift catalogue is an excellent reference to judge the reliability of the photometric redshift catalogues used for the colour selection of background galaxies.
\subsection{Selection of background galaxies through colour cuts} \label{Sec: Background galaxy selection} \subsubsection{Defining the colour and magnitude cuts} \label{Sec:Colour_selection, defining mag and colour cuts} We aim to find criteria based on colours and magnitudes that help us distinguish the background galaxies of interest from the contaminating foreground and cluster galaxies. To this end, we made use of the \citetalias{Skelton2014}/\citetalias{Raihan2020} catalogues providing photometry and photometric redshifts for the largest number of galaxies. First, we decided to focus on the magnitude regime \mbox{$24.2 < V_{606} < 27.0$} for the selection strategy. Inspecting the redshift distributions of galaxies in the CANDELS/3D-HST fields, we found that there is no significant number of background galaxies at redshifts \mbox{$z \gtrsim 1.8$} at magnitudes brighter than \mbox{$ V_{606} < 24.2$}. By focusing on galaxies fainter than this limit, we could exclude many bright foreground galaxies. Additionally, our cluster fields roughly reach limiting magnitudes of 27\,mag in the F606W band. Second, we inspected colour-colour plots of different combinations of colours to identify a suitable strategy. We found that a combination of the colour plane including $V_{606}$, $I_{814}$, and $J_{110}$ and the colour plane including $U$, $V_{606}$, and $J_{110}$ provided a useful basis for a selection of background galaxies, that is, galaxies at redshifts higher than the cluster redshifts of \mbox{$1.2 \lesssim z \lesssim 1.7$}. We developed a selection consisting of two steps. For the first step, a strategic cut in the colour plane \mbox{$V_{606} - I_{814}$} and \mbox{$I_{814} - J_{110}$} (short $VIJ$ plane) allowed us to remove a significant fraction of foreground galaxies at \mbox{$0.0 < z < 1.1$}. We discarded all galaxies to the right of this cut (redder in \mbox{$V_{606} - I_{814}$}, see the black line in the upper panels of \mbox{Fig.
\ref{Fig:Low-z colour selection cuts}}). With this cut, however, we still retained many galaxies at the cluster redshift while discarding a substantial fraction of background galaxies at \mbox{$z > 2.2$}. The second step, using the colour plane \mbox{$U - V_{606}$} and \mbox{$V_{606} - J_{110}$} (short $UVJ$ plane), helped us refine the selection. Here, we could remove almost all galaxies at the cluster redshift (galaxies that are blue in \mbox{$U - V_{606}$} and red in \mbox{$V_{606} - J_{110}$}, occupying the upper left corner of the $UVJ$ plane in \mbox{Fig. \ref{Fig:Low-z colour selection cuts}}), and at the same time recover high-redshift sources we had discarded in the first selection step (galaxies that are red in \mbox{$U - V_{606}$}, occupying the lower right corner of the $UVJ$ plane in \mbox{Fig. \ref{Fig:Low-z colour selection cuts}}). Additionally, we slightly varied these cuts depending on whether the galaxies were bright \mbox{($24.2 < V_{606} < 25.75$)} or faint \mbox{($25.75 < V_{606} < 27.0$)}. Fainter galaxies typically exhibit a larger photometric scatter than brighter galaxies. We could, therefore, apply slightly tighter cuts for brighter galaxies without a high risk of contamination by cluster galaxies due to scatter. \mbox{Fig. \ref{Fig:Low-z colour selection cuts}} illustrates our cuts in the two colour planes and for the bright and faint magnitude regimes for clusters at redshift \mbox{$1.2 \lesssim z \lesssim 1.7$}. For clarity, we summarise the selection strategy as follows: we selected all galaxies below the grey line in the $UVJ$ plane and all galaxies that are both to the left of the black line in the $VIJ$ plane \textit{and} to the right of the black line in the $UVJ$ plane. We also investigated whether it is possible to optimise the selection depending on the cluster redshift.
For instance galaxies at redshift \mbox{$1.3 < z < 1.7$} could be used for a cluster at redshift \mbox{$z = 1.2$}, but have to be removed for a cluster at redshift \mbox{$z = 1.7$}. Unfortunately, such an optimisation was not possible with the available filters because all the galaxies in the redshift regime \mbox{$1.2 \lesssim z \lesssim 1.7$} occupy a similar location in the $UVJ$ plane (see red and purple symbols in \mbox{Fig. \ref{Fig:Low-z colour selection cuts}}). We investigated two alternative selection strategies in Appendix \ref{Appendix:Coloursel_alternatives}, which did not improve the signal-to-noise ratio of the lensing analysis. We, therefore, decided to use common selection criteria for background galaxies, independent of the cluster redshift for the majority of our cluster sample. The only exception is the cluster SPT-CL{\thinspace}$J$0646$-$6236 at the lowest redshift of $z =0.995$. We used an optimised selection strategy for this particular cluster, which we describe in Appendix \ref{Appendix:Coloursel_alternatives0995}. Additionally, we investigated how beneficial the use of the $U$ band is for an efficient source selection since it is the band introducing the largest uncertainties. We found that it is possible to select sources with a similar average geometric lensing efficiency only based on the bands F606W, F814W, and F110W. However, the resulting source density of such a selection is significantly lower. In conclusion, the signal-to-noise ratio of the lensing measurement (proportional to the product of the average lensing efficiency and the square root of the source density) is about 1.4 times higher when the $U$ band is included for the source selection. 
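The logical structure of the two-step selection summarised above can be sketched as follows. The positions of the cut lines are defined in the corresponding figure; the numeric thresholds and the linear form of the grey line below are purely illustrative placeholders, not the cuts used in the analysis.

```python
# Schematic of the two-step colour selection: keep all galaxies below the
# grey line in the UVJ plane, plus all galaxies that are both left of the
# black line in the VIJ plane AND right of the black line in the UVJ plane.
# All cut values here are placeholder assumptions.
import numpy as np

def select_background(v606, i814, j110, u):
    """Boolean mask of candidate background galaxies from magnitude arrays."""
    vi = v606 - i814   # V606 - I814
    uv = u - v606      # U - V606
    vj = v606 - j110   # V606 - J110
    below_grey = vj < 0.5 * uv + 0.4   # placeholder grey line (UVJ plane)
    left_black_vij = vi < 0.7          # placeholder black line (VIJ plane)
    right_black_uvj = uv > 1.3         # placeholder black line (UVJ plane)
    return below_grey | (left_black_vij & right_black_uvj)
```

In the analysis itself, two such sets of cuts are used, one for the bright and one for the faint magnitude regime.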
\subsubsection{Comparison of selections based on the S14 and the LAMBDAR photometry} \label{Sec:Colour_selection, compare S14 + LAMBDAR} We calculated the average lensing efficiency $\langle \beta \rangle$ for the selection based on the \citetalias{Skelton2014} photometry and for five catalogues with photometric redshift information, namely the original \citetalias{Skelton2014} redshifts, the updated redshifts by \citetalias{Raihan2020}, the redshifts given in \citetalias{Rafelski2015}, a modified version of the \citetalias{Rafelski2015} redshifts from \citetalias{Raihan2020} called R15\_fix, and the redshifts from \citetalias{Oesch2018}. Throughout this section, we used the median lens redshift of our cluster sample of \mbox{$z_\mathrm{l} = 1.4$} for the calculation of $\langle \beta \rangle$. In addition to the selection as described in Sect. \ref{Sec:Colour_selection, defining mag and colour cuts}, we employed a signal-to-noise ($S/N$) threshold of $S/N_\mathrm{flux,606} > 10$ as applied for the shape measurements of galaxies (the signal-to-noise ratio is defined via the ratio of \texttt{FLUX\_AUTO} and \texttt{FLUXERR\_AUTO} from \texttt{Source Extractor}; see also Sect. \ref{sec:shapes}). We note that \citetalias{Raihan2020} optimised the redshifts for a source selection targeting background galaxies behind clusters at \mbox{$0.6 \lesssim z \lesssim 1.1$} (the cluster sample from \citetalias{Schrabback2018}). They applied a cut in \mbox{$V-I$} colour at \mbox{$V-I < 0.3$} and a magnitude cut of \mbox{$V_{606} < 26.5$}. Even though these settings differ from ours, we found that the \citetalias{Raihan2020} catalogues are still applicable for our analysis because on average \mbox{84 per cent} of the galaxies in our selection in the cluster fields also fulfil the condition \mbox{$V-I < 0.3$}.
Additionally, we found that the average lensing efficiency calculated based on \citetalias{Raihan2020} photo-$z$s for our colour-selected galaxies in the HUDF was not significantly affected by a change of the magnitude limit from \mbox{$V_{606} < 27.0$} to \mbox{$V_{606} < 26.5$}. The five redshift catalogues (denoted \citetalias{Rafelski2015}, R15\_fix, \citetalias{Raihan2020}, \citetalias{Skelton2014}, and \citetalias{Oesch2018}) overlap in the HUDF region. We matched the sources from our five reference catalogues based on their coordinates through the function \texttt{associate} from the \texttt{LDAC} tools\footnote{\url{https://marvinweb.astro.uni-bonn.de/data_products/THELIWWW/LDAC/}}. For a match, we required a distance smaller than $0\farcs3$. In \mbox{Fig. \ref{Fig:Low-z redshift distribution}}, we show the redshift distribution of the galaxies that we selected with our strategy. We note that the \citetalias{Skelton2014} $U$ band \mbox{($5\sigma\, \mathrm{depth} = 27.9$)} is considerably deeper than our observations in the $U_\mathrm{HIGH}$ band in the HUDF \mbox{($5\sigma\, \mathrm{depth} = 26.6$)}. To account for this difference, we added Gaussian noise to the \citetalias{Skelton2014} $U$ band photometry and show the average redshift distribution derived from 50 noise realisations of galaxies in the HUDF for a $U_\mathrm{HIGH}$ band depth of 26.6\,mag in Fig.\thinspace {\ref{Fig:Low-z redshift distribution}}. We note that, when we estimated the average lensing efficiency for the cluster fields, we added Gaussian noise to both the $U$ band and \textit{HST} photometry from the \citetalias{Skelton2014} catalogues to account for the difference between the depths in the respective cluster fields and in the CANDELS/3D-HST fields. When we calculated the average lensing efficiency, we employed the shape weights from \citetalias{Schrabback2018} that depend on the signal-to-noise ratio (\texttt{FLUX\_AUTO}/\texttt{FLUXERR\_AUTO}) in $V_{606}$. 
Since the \citetalias{Skelton2014} catalogues do not provide measurements of \texttt{FLUX\_AUTO}, we used the listed total fluxes and respective errors instead\footnote{As a cross-check, we calculated the average lensing efficiency with the shape weights based on the total fluxes in the \citetalias{Skelton2014} catalogues and the \texttt{AUTO} fluxes in catalogues by \citetalias{Schrabback2018}. They have analysed shallower stacks in the CANDELS/3D-HST fields, including measurements of \texttt{FLUX\_AUTO}, which allowed us to draw a direct comparison. We found that the difference between both options is less than 1 per cent.}. The redshift distributions show that \citetalias{Skelton2014} and \citetalias{Oesch2018} have an excess of galaxies at the cluster redshifts and in the foreground at \mbox{$z < 0.4$} compared to the other catalogues. This is connected to the reported contamination by catastrophic redshift outliers (see \mbox{Sect. \ref{Sec:3D-HST cat description}}). We can see this effect as well in \mbox{Fig. \ref{Fig:Low-z redshift distribution}} where the \citetalias{Skelton2014} and \citetalias{Oesch2018} redshift catalogues yield lower values of the average lensing efficiency than the other redshift catalogues. In contrast to that, the average lensing efficiency results from the \citetalias{Raihan2020} redshift catalogues are in good agreement with the robust photometric redshift catalogues \citetalias{Rafelski2015} and R15\_fix. According to these catalogues, we expect nearly no contamination by cluster galaxies for our selection strategy (only $\sim1$ per cent of selected galaxies are within the cluster redshift range). Fig. \ref{Fig:Low-z redshift distribution} displays a small residual contribution of foreground galaxies in our source selection. This is, however, not a concern as long as the redshift distribution is modelled accurately. 
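The depth matching described above, adding Gaussian noise to the deeper \citetalias{Skelton2014} photometry so that it mimics shallower observations, can be sketched as follows. The conversion of a $5\sigma$ limiting magnitude into a $1\sigma$ flux error is a simple noise-model assumption for illustration.

```python
# Sketch of degrading deep catalogue photometry to a shallower depth: add
# Gaussian flux noise such that the combined scatter matches the 1-sigma
# flux error implied by the shallower 5-sigma limiting magnitude.
import numpy as np

ZP = 25.0  # zeropoint of the Skelton et al. (2014) catalogues

def flux_error_from_depth(depth_5sigma, zeropoint=ZP):
    """1-sigma flux error implied by a 5-sigma limiting magnitude."""
    return 10.0 ** (-0.4 * (depth_5sigma - zeropoint)) / 5.0

def degrade_photometry(flux, deep_depth, target_depth, rng, zeropoint=ZP):
    """Add Gaussian flux noise so photometry with 5-sigma depth deep_depth
    mimics shallower observations with 5-sigma depth target_depth."""
    sig_deep = flux_error_from_depth(deep_depth, zeropoint)
    sig_target = flux_error_from_depth(target_depth, zeropoint)
    sig_extra = np.sqrt(max(sig_target ** 2 - sig_deep ** 2, 0.0))
    return flux + rng.normal(0.0, sig_extra, size=np.shape(flux))
```

For the HUDF case discussed above one would degrade the $U$ band from a $5\sigma$ depth of 27.9\,mag to 26.6\,mag, averaging over many noise realisations.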
From a comparison of the average lensing efficiency based on \citetalias{Raihan2020} and R15\_fix we infer a systematic uncertainty of \mbox{$\Delta(\langle \beta \rangle)/ \langle \beta \rangle_\mathrm{R15\_fix} = 5.6\%$}. Since we measured fluxes in our observations with LAMBDAR, we additionally inspected the redshift distributions that we obtained when we used the LAMBDAR photometry measured from our observations of the HUDF in $U_\mathrm{HIGH}$ and from the \citetalias{Skelton2014} stacks in the \textit{HST} filters F606W, F814W, F850LP and F125W\footnote{\url{https://archive.stsci.edu/prepds/3d-hst/} ; (F606W + F850LP: GO programme 9425 with PI M. Giavalisco, F814W: GO programme 12062 with PI S. Faber, F125W: GO programme 13872 with PI G. Illingworth).} (we interpolated between the latter two filters to estimate the magnitude in the filter F110W). The resulting distribution is shown on the right-hand side of \mbox{Fig. \ref{Fig:Low-z redshift distribution}}. This corresponds to a systematic uncertainty of \mbox{$\Delta(\langle \beta \rangle)/ \langle \beta \rangle_\mathrm{R15\_fix} = 3.5\%$}. Overall, the average lensing efficiency results based on \citetalias{Skelton2014} and LAMBDAR photometry agree within the uncertainties (see \mbox{Fig. \ref{Fig:Low-z redshift distribution}}). \subsubsection{Comparison of selections based on photo-$z$s and spec-$z$s} As a cross-check for the photometric redshift catalogues, we retrieved spectroscopic/grism redshifts from the MUSE and 3D-HST catalogues, respectively, for all galaxies matched by their coordinates in the HUDF field. As a reference, we then calculated the average lensing efficiency of the colour-selected sources based on the spectroscopic/grism redshifts. Here, we only used the MUSE spec-$z$s with the highest quality flags 3 (secure redshift, determined by multiple features) and 2 \citep[secure redshift, determined by a single feature, see][]{Inami2017}.
In the case of galaxies with both spectroscopic redshifts from MUSE and grism redshifts from 3D-HST, we used the former for the calculation of $\langle \beta_\mathrm{spec}\rangle$. To estimate the uncertainty, we bootstrapped the colour-selected galaxies and recalculated the average lensing efficiency 1000 times. \mbox{Fig. \ref{Fig:Low-z MUSE/grism beta comparisons}} shows how the average lensing efficiency calculated from the five photometric redshift catalogues compares to the one calculated based on spectroscopic/grism redshifts. We did not find a bias within the uncertainties, but we notice that the average lensing efficiencies based on R15\_fix, \citetalias{Raihan2020}, and \citetalias{Oesch2018} match the result based on the spectroscopic/grism redshifts most closely. We note, however, that compared to the full sample of matched galaxies in the HUDF region, the spectroscopic/grism redshifts are only complete up to a magnitude of \mbox{$V_{606} \lesssim 25.0$\,mag} (see \mbox{Fig. \ref{Fig:muse-z gals,completeness-histo+fraction}}). We still decided to correct our measurements of the average lensing efficiency by the roughly three per cent offset between the \citetalias{Raihan2020} redshift-based and the spectroscopic redshift-based lensing efficiency for all clusters except SPT-CL{\thinspace}$J$0646$-$6236. For the specific source selection used for this cluster, such an offset did not occur. \subsubsection{Differences between the five CANDELS/3D-HST fields} \label{Sect: Differences between CANDELS beta} \begin{table} \caption{ Summary of our systematic and statistical error budget. } \begin{center} \begin{threeparttable} \begin{tabular}{ l c c c} \hline\hline Source of \textbf{systematic} & Rel. error & Rel. error & Sect./ \\ uncertainties & signal & $M_{500c}$ & App. \\ \hline \textbf{Redshift distribution:} & & & \\ - \citetalias{Raihan2020} vs.
R15\_fix comp.& 5.6\,\% & 8.4\,\% & \ref{Sec:Colour_selection, compare S14 + LAMBDAR} \\ - Variations between & 5.7\,\% & 8.6\,\% & \ref{Sect: Differences between CANDELS beta} \\ CANDELS/3D-HST fields & & \\ - F110W band & 2.2\,\% & 3.3\,\% & \ref{Appendix:Comparison of S14 and LAMBDAR photometry}/\ref{Appendix:Impact of syst. shifts in photom} \\ (LAMBDAR/\citetalias{Skelton2014}, interp.) & & & \\ - $V - I$ colour & 2.2\,\% & 3.3\,\% & \ref{Appendix:Comparison of S14 and LAMBDAR photometry}/\ref{Appendix:Impact of syst. shifts in photom} \\ (LAMBDAR/\citetalias{Skelton2014}) & & & \\ \textbf{Shape measurements:} & & & \\ - Shear calibration & 2.3\,\% & 3.4\,\% & \ref{sec:shapes} \\ \textbf{Mass model:} & & & \\ - $c(M)$ relation & & 4.0\,\% & \\ - Miscentring for & & & \\ \quad \quad X-ray centres & & 3.8\,\% /& \ref{Sec:corr_for_mass_modelling_bias}\\ \quad \quad SZ centres & & 9.2\,\% & \ref{Sec:corr_for_mass_modelling_bias}\\ \hline total (added in quadrature) & & 14.4\,\% / & \\ & & 16.7\,\% & \\ \hline\hline Source of \textbf{statistical} & Rel. error & Rel. error & Sect./ \\ uncertainties & signal & $M_{500c}$ & App. \\ \hline \textbf{Redshift distribution:} & & &\\ - Line of sight variations & 6.9\,\% & 10.4\,\% & \ref{Sect: Differences between CANDELS beta}\\ - $U_\mathrm{HIGH}$ band calibration & 4.1\,\% & 6.2\,\% & \ref{Appendix:ZP robustness with gal locus}/\ref{Appendix:Impact of syst. shifts in photom} \\ \hline total (added in quadrature) & & 12.1\,\% & \\ \hline \end{tabular} \textbf{Notes.} In the upper part of the table, we list all systematic uncertainties, which ultimately translate into an uncertainty in the weak lensing mass measurement, where we added the individual contributions in quadrature to obtain an estimate for the total uncertainty. 
We report the relative uncertainties in per cent in the second column, the resulting relative uncertainty on the mass in the third column, and refer the reader to the respective sections or appendices listed in the last column for more detailed information about the contributions to the error budget. In the lower table, we list statistical uncertainties in the redshift distribution, which affect the calculation of the average geometric lensing efficiency $\langle \beta \rangle$ for individual cluster fields. We note that the final statistical uncertainties reported in Tables \ref{tab:mass-Xray} and \ref{tab:mass-SZ} do include additional contributions from shape noise and uncorrelated large-scale structure projections. \end{threeparttable} \end{center} \label{Tab:Errorbudget of photometry,beta} \end{table} Since we estimate the average lensing efficiency from all CANDELS fields, we want to evaluate the expected systematic uncertainties arising from differences in the depths, available filters, and calibrations in the five CANDELS/3D-HST fields. Additionally, we expect statistical sampling variance due to line of sight variations. We quantified the systematic uncertainties by measuring the average lensing efficiency for colour-selected galaxies independently in the five CANDELS/3D-HST fields (see \mbox{Fig. \ref{Fig:Low-z CANDELS redshift distribution}}). We obtained a mean of the average lensing efficiencies of \mbox{$\langle \beta \rangle_\mathrm{mean} = 0.242$} with a standard deviation of \mbox{$\sigma(\langle \beta \rangle) = 0.014$} between the \mbox{$N = 5$} fields (using the photometric redshifts from \citetalias{Raihan2020}). This translates into a systematic uncertainty of \mbox{$\sigma(\langle \beta \rangle)/\langle \beta \rangle_\mathrm{mean} = 5.7 \%$}. 
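The field-to-field estimate above amounts to taking the scatter of $\langle \beta \rangle$ between the five fields relative to their mean. A minimal sketch in Python; the per-field values below are hypothetical placeholders chosen only to roughly reproduce the quoted mean of 0.242 and scatter of 0.014 (the real values come from the colour-selected catalogues of each field):

```python
import numpy as np

# Hypothetical per-field average lensing efficiencies <beta> for the five
# CANDELS/3D-HST fields (placeholders; GOODS-South set notably higher).
beta_fields = np.array([0.267, 0.235, 0.234, 0.238, 0.236])

beta_mean = beta_fields.mean()
# Scatter between the N = 5 fields, deliberately NOT divided by
# sqrt(N - 1) to stay conservative: a single field may not be
# representative of the average of all fields.
sigma_beta = beta_fields.std(ddof=1)

rel_syst = sigma_beta / beta_mean  # compare the 5.7% quoted in the text
```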
We calculated this more conservative systematic uncertainty without dividing by $\sqrt{N-1}$ because we noticed that the value of the GOODS-South field is notably higher than those of the other fields, so that a single field is not necessarily representative of the average of all fields. We added this uncertainty in quadrature to our systematic error budget (see Table \ref{Tab:Errorbudget of photometry,beta}). We note that this uncertainty also contains a statistical contribution as each CANDELS/3D-HST field represents a different line of sight. However, since the fields are each much larger than the small sub-patches studied in the paragraph below, we conservatively assume that the variations between the CANDELS/3D-HST fields are dominated by systematic uncertainties. We gauged the expected statistical uncertainty from line of sight variations in the average lensing efficiency by placing non-overlapping apertures with the same area as the field of view of our observations (about 11 arcmin$^2$) in the CANDELS/3D-HST fields. We can fit exactly eight apertures in each of the fields. We calculated the average lensing efficiency independently for all of the apertures, where we obtained the mean \mbox{$\langle \beta \rangle_\mathrm{mean} = 0.243$} with a scatter of \mbox{$\sigma(\langle \beta \rangle) = 0.017$}. Hence, we added a statistical uncertainty of \mbox{$\sigma(\langle \beta \rangle)/\langle \beta \rangle_\mathrm{mean} = 6.9 \%$} to our statistical error budget (see Table \ref{Tab:Errorbudget of photometry,beta}). Regarding uncertainties in the source redshift distribution, we estimated a total statistical uncertainty of \mbox{8.0 per cent} on the average lensing efficiency, corresponding to 12.1 per cent on the mass scale. This includes uncertainties in the $U_\mathrm{HIGH}$ band calibration (see Appendix \ref{Appendix:ZP robustness with gal locus}) and line of sight variations (this section), which we added in quadrature.
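The quadrature totals quoted here and in the following paragraph can be checked directly from the individual contributions listed in the error-budget table (all values in per cent):

```python
import math

def quad_sum(contributions):
    """Add independent relative uncertainties in quadrature."""
    return math.sqrt(sum(x**2 for x in contributions))

# Statistical contributions to <beta>: line-of-sight variations (6.9%)
# and the U_HIGH band calibration (4.1%).
total_stat_beta = quad_sum([6.9, 4.1])

# The same two contributions propagated to the mass scale.
total_stat_mass = quad_sum([10.4, 6.2])

# Systematic redshift-distribution contributions to <beta>:
# R20 vs. R15_fix, field-to-field variations, F110W band, V-I colour.
total_syst_beta = quad_sum([5.6, 5.7, 2.2, 2.2])

print(round(total_stat_beta, 1),   # 8.0  (per cent on <beta>)
      round(total_stat_mass, 1),   # 12.1 (per cent on the mass)
      round(total_syst_beta, 1))   # 8.6  (per cent on <beta>)
```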
Furthermore, we estimated a total systematic uncertainty of \mbox{8.6 per cent} on the average geometric lensing efficiency. Here, we took into account systematics for the F110W band (interpolation versus direct measurement, aperture photometry versus LAMBDAR photometry, see Appendices \ref{Appendix:Comparison of S14 and LAMBDAR photometry} and \ref{Appendix:Impact of syst. shifts in photom}), uncertainties in the measurement of \mbox{$V-I$} colours (see Appendices \ref{Appendix:Comparison of S14 and LAMBDAR photometry} and \ref{Appendix:Impact of syst. shifts in photom}), uncertainties of the \citetalias{Raihan2020} redshift catalogues (see Sect. \ref{Sec:Colour_selection, compare S14 + LAMBDAR}), and variations between the CANDELS/3D-HST fields (differences in the filters, depths, availability of $U$ bands, and usage of different bands to interpolate the $J_{110}$ magnitudes, see this section). Again, we added these contributions in quadrature. All of these uncertainties are summarised in Table \ref{Tab:Errorbudget of photometry,beta}. \subsection{Check for cluster member contamination} We aim to preferentially select background galaxies with our magnitude and colour cuts both in the cluster fields and the CANDELS/3D-HST fields. Investigating the source density of the selected galaxies and its radial dependence allows us to test whether we have a substantial amount of contamination by cluster galaxies and whether our method provides a consistent selection in the cluster fields and the CANDELS/3D-HST fields in the presence of noise \citepalias{Schrabback2018}. To this end, we added Gaussian noise to the \citetalias{Skelton2014} photometric catalogues according to the difference between the depth of the cluster observations and the depth of the CANDELS/3D-HST fields. This difference varies with field and filter. We only added Gaussian noise if the CANDELS/3D-HST observations in a filter were deeper than the corresponding observation in the cluster field.
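The depth matching just described can be sketched as follows, under the assumption that depths are expressed as $5\sigma$ limiting magnitudes and that the noise is added to linear fluxes (the function names and the zero-point convention are illustrative, not taken from the original pipeline):

```python
import numpy as np

def sigma_from_depth(depth_5sigma_mag):
    # Flux error implied by a 5-sigma limiting magnitude
    # (linear flux units with a zero-point of 0 mag; an assumption).
    return 10.0 ** (-0.4 * depth_5sigma_mag) / 5.0

def degrade_fluxes(flux, depth_deep, depth_shallow, rng):
    """Add Gaussian noise to deep-field fluxes so that their effective
    depth matches a shallower cluster-field observation. If the cluster
    data are deeper in this filter, leave the fluxes unchanged."""
    s_deep = sigma_from_depth(depth_deep)
    s_shallow = sigma_from_depth(depth_shallow)
    if s_shallow <= s_deep:
        return flux
    s_add = np.sqrt(s_shallow**2 - s_deep**2)
    return flux + rng.normal(0.0, s_add, size=flux.shape)
```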
Occasionally, the cluster observations were slightly deeper than some of the CANDELS/3D-HST observations, but only by $\sim0.2$ mag. We considered this negligible for the validity of this test. We measured the source density of selected sources in both the cluster fields and the CANDELS/3D-HST fields, accounting for masks (for example, due to bright stars). We only considered photometrically selected galaxies and did not consider potential flags from the shape measurement pipeline. We also did not apply the signal-to-noise ratio cut $S/N_\mathrm{flux,606} > 10$ as mentioned in Sects. \ref{Sec: Background galaxy selection} and \ref{sec:shapes} for this test, since the quantities \texttt{FLUX\_AUTO} and \texttt{FLUXERR\_AUTO} required to calculate the signal-to-noise ratio are not available in the CANDELS/3D-HST catalogues. In \mbox{Fig. \ref{Fig:Radial+Magnitude number densitiy profiles}} (left panel), we show the average source density of selected galaxies as a function of the $V_{606}$ band magnitude. We found a good agreement over the full magnitude range of interest in this study. Additionally, we investigated the radial dependence of the source density of selected galaxies. In principle, an increase in the number density towards the cluster centre can indicate cluster member contamination. However, the profile can also be affected by blending and/or masking of background galaxies by cluster member galaxies, magnification, or selection effects. We accounted neither for blending and/or masking by cluster galaxies nor for magnification in our analysis. The blending/masking by cluster galaxies should be less important than for clusters at lower redshifts since the cluster galaxies are more cosmologically dimmed. Additionally, we conservatively excluded the core region ($r < 500\,\mathrm{kpc}$) when we measured the weak lensing masses, so that this effect should not play a significant role.
Regarding magnification, for \citetalias{Schrabback2021} the application of a magnification correction had only a minor impact on the source density profile. Given the higher redshifts of our clusters, the lensing strength and, therefore, the expected impact of magnification is even lower, which is why we ignore it here. \mbox{Fig. \ref{Fig:Radial+Magnitude number densitiy profiles}} (right panel) displays the source density as a function of the radial distance from the X-ray centre (except for the cluster SPT-CL{\thinspace}$J$0646$-$6236, where we used the SZ centre) in units of the radius $R_\mathrm{500c,SZ}$, which we derived from the SZ mass $M_\mathrm{500c,SZ}$. We found a very slight trend of a higher source density towards the centres of the clusters. However, the profile is consistent with flat within the uncertainties. Together, both measurements provide an important confirmation for the success of the photometric background selection and cluster member removal. \section{Shape measurements} \label{sec:shapes} The shape of a galaxy can be quantified by its ellipticity, expressed as a complex number \mbox{$\epsilon = \epsilon_1 + \mathrm{i}\epsilon_2$}. The observed ellipticity $\epsilon_\mathrm{obs}$ of a background galaxy can be related to the intrinsic ellipticity $\epsilon_\mathrm{orig}$ and reduced shear $g$ via \citep{Bartelmann2001} \begin{equation} \epsilon_\mathrm{obs} = \frac{\epsilon_\mathrm{orig} + g}{1 + g^*\epsilon_\mathrm{orig}} \,. \end{equation} According to the cosmological principle, the intrinsic orientation of galaxies should have no preferred direction\footnote{Despite this principle, intrinsic alignments of galaxies due to various physical effects can pose a challenge for weak lensing analyses, especially for cosmic shear studies. See for example \citet{Troxel2015} for a review. These intrinsic alignments are, however, not a concern for this work.}. Therefore, the expectation value of the intrinsic ellipticity, averaged over many galaxies, vanishes: \mbox{$\langle \epsilon_\mathrm{orig} \rangle = 0$}.
Consequently, we can estimate the reduced shear, that is, the main observable for weak lensing studies, from the ensemble-averaged PSF-corrected ellipticities of the background galaxies via \begin{equation} \langle \epsilon_\mathrm{obs} \rangle = g\,. \end{equation} We measured galaxy shapes in the ACS F606W ($V$) and \mbox{F814W ($I$)} images using the KSB+ formalism \citep{Kaiser1995,Luppino1997,Hoekstra1998} as implemented by \citet{Erben2001} and \citet{Schrabback2007}. We modelled the spatially and temporally varying ACS point-spread function using an interpolation based on principal component analysis, as calibrated on dense stellar fields \citep{Schrabback2010,Schrabback2018}. We corrected for shape measurement and selection biases as a function of the KSB+ galaxy signal-to-noise ratio from \citet{Erben2001}. This correction was derived by \citet{Hernandez-Martin2020}, who analysed custom \texttt{Galsim} \citep{Rowe2015} image simulations with ACS-like image characteristics. Importantly, \citet{Hernandez-Martin2020} tuned their simulated source samples such that the measured distributions in galaxy size, magnitude, signal-to-noise ratio, and ellipticity dispersion closely matched the corresponding measured distributions of the magnitude and colour-selected source samples from \citetalias{Schrabback2018}, while also incorporating realistic levels of blending. Varying the properties of the simulations, \citet{Hernandez-Martin2020} estimated a (post-correction) multiplicative shear calibration uncertainty of the employed KSB+ pipeline of \mbox{$\sim 1.5\%$}. Our data are very similar to those analysed by \citetalias{Schrabback2018}. Therefore, we expect the \citet{Hernandez-Martin2020} shear calibration to be directly applicable to our analysis. However, our colour cuts select galaxies at slightly higher redshifts on average compared to the \mbox{$V-I$} selection from \citetalias{Schrabback2018}. Some of our image stacks are also slightly deeper.
We, therefore, conservatively increased the shear calibration uncertainty in our systematic error budget by a factor of 1.5 (see Table \ref{Tab:Errorbudget of photometry,beta}). Given their greater average depth (see Table \ref{tab:exposure times}), we based our shear catalogue primarily on the F606W stacks. Here, we included galaxies with a measured flux signal-to-noise ratio \mbox{$S/N_\mathrm{flux,606}>10$}\footnote{Aiming to reduce statistical uncertainties in our analysis, we also computed results using an alternative signal-to-noise ratio cut of \mbox{$S/N_\mathrm{flux}>7$}. While this did increase the source number density, we found that it only marginally changed the constraints of our SZ--mass scaling relation analysis, likely due to the low shape weights and the increased photometric scatter of the additional faint galaxies. In the interest of consistency with previous studies, for example \citetalias{Schrabback2021}, we chose to use the cut of \mbox{$S/N_\mathrm{flux}>10$}.} (defined as the ratio of the \texttt{FLUX\_AUTO} and \texttt{FLUXERR\_AUTO} parameters from \texttt{Source Extractor}). This single-band selection matches the one employed in Sect. \ref{Sec: Background galaxy selection} in the computation of the average geometric lensing efficiency. For galaxies that additionally have \mbox{$S/N_\mathrm{flux,814}>10$}, we combined the shape measurements from both filters to reduce the impact of measurement noise.
\begin{table} \caption{Number densities of selected source galaxies measured in the cluster fields.} \begin{center} \begin{threeparttable} \begin{tabular}{l c} \hline\hline Cluster name & $n_\mathrm{gal}$ \\ & [arcmin$^{-2}$] \\ \hline SPT-CL{\thinspace}$J$0156$-$5541 & 14.3 \\ SPT-CL{\thinspace}$J$0205$-$5829 & 12.7 \\ SPT-CL{\thinspace}$J$0313$-$5334 & 20.1 \\ SPT-CL{\thinspace}$J$0459$-$4947 & 10.7 \\ SPT-CL{\thinspace}$J$0607$-$4448 & 13.3 \\ SPT-CL{\thinspace}$J$0640$-$5113 & 10.2 \\ SPT-CL{\thinspace}$J$2040$-$4451 & 11.2 \\ SPT-CL{\thinspace}$J$2341$-$5724 & 12.6 \\ \hline average & 13.1 \\ \hline SPT-CL{\thinspace}$J$0646$-$6236 & 26.9 \\ \end{tabular} \end{threeparttable} \end{center} \textbf{Notes.} We apply the source selection as described in Sect. \ref{Sec:Colour_selection, defining mag and colour cuts} including only sources that pass the lensing selections and have a signal-to-noise ratio $S/N_\mathrm{flux,606} > 10$, leading to lower numbers compared to Fig. \ref{Fig:Radial+Magnitude number densitiy profiles}. The cluster SPT-CL{\thinspace}$J$0646$-$6236 is listed separately because we applied a different selection strategy for this cluster (see Appendix \ref{Appendix:Coloursel_alternatives0995}). 
\label{Tab:Number densities} \end{table} In order to compute shape weights and filter-combined estimates of the reduced shear, we made use of the \mbox{$\log_{10}{S/N_\mathrm{flux}}$}-dependent fits computed by \citetalias[][see their appendix A]{Schrabback2018} for the total ellipticity dispersion $\sigma_{\epsilon,V/I}$, the intrinsic ellipticity dispersion $\sigma_{\mathrm{int},V/I}$, and the ellipticity measurement noise $\sigma_{\mathrm{m},V/I}$ of \mbox{$V-I$} colour selected galaxies in custom CANDELS \citep{Grogin2011} $V$ (F606W) and $I$ (F814W) band stacks of approximately single-orbit depth\footnote{We employ the \mbox{$\log_{10}{S/N_\mathrm{flux}}$}-dependent fits instead of the magnitude-dependent fits provided by \citetalias{Schrabback2018} in order to account for the slightly higher depth of some of our stacks and the significant dependence of the measurement noise on \mbox{$\log_{10}{S/N_\mathrm{flux}}$}. For comparison, the dependence of $\sigma_{\mathrm{int},V/I}$ on \mbox{$\log_{10}{S/N_\mathrm{flux}}$} is weak in the regime covered by most of our sources.}. With the complex reduced shear estimates $\epsilon_{V/I}$ obtained in the $V$ band and the $I$ band, respectively, and the shape weights \begin{equation} w_{V/I}=\left( \sigma_{\epsilon,V/I} \right)^{-2}\,, \end{equation} we computed the filter-combined reduced shear estimate as \begin{equation} \epsilon_\mathrm{comb}=\frac{w_V \epsilon_V + w_I \epsilon_I}{w_V+w_I} \,. \end{equation} The measurement noise is independent between the stacks in the different filters, which is why the combined ellipticity measurement variance reads \begin{equation} \sigma_\mathrm{m,comb}^2=\frac{(w_V \sigma_{\mathrm{m},V})^2 + (w_I \sigma_{\mathrm{m},I})^2}{(w_V+w_I)^2} \,. \end{equation} In the relevant $S/N$ or magnitude regime, differences are small between $\sigma_{\mathrm{int},V}$ and $\sigma_{\mathrm{int},I}$ for the colour-selected source samples from \citetalias{Schrabback2018}. 
In addition, \citet{Jarvis2008} found that intrinsic shapes are highly correlated between \textit{HST} images of galaxies in different optical filters. Therefore, as an approximation, we interpolated the intrinsic ellipticity dispersion between the filters \begin{equation} \sigma_\mathrm{int,comb}=\frac{w_V \sigma_{\mathrm{int},V} + w_I \sigma_{\mathrm{int},I}}{w_V+w_I}\,, \end{equation} allowing us to compute shape weights for the combined shear estimate as \begin{equation} w_\mathrm{comb}=\left( \sigma_\mathrm{int,comb}^2 + \sigma_\mathrm{m,comb}^2 \right)^{-1}\,. \end{equation} We reached an average final source density after all photometry and shape cuts of 13.1\,arcmin$^{-2}$ (see Table \ref{Tab:Number densities}) for the clusters with $1.2 \lesssim z \lesssim 1.7$. We note that this is substantially lower than the values shown in Fig. \ref{Fig:Radial+Magnitude number densitiy profiles} because we now included the signal-to-noise ratio and lensing cuts\footnote{While the number density is affected by a change of the signal-to-noise ratio cut, we found that the average geometric lensing efficiency is not sensitive to it. The change is smaller than $\sim 1$ per cent comparing the results with or without the cut at $S/N_\mathrm{flux,606}>10$.}. \section{Weak lensing results} \label{sec:wlconstraints} Our pipeline used to obtain weak lensing constraints largely follows \citetalias{Schrabback2018} and \citetalias{Schrabback2021} to which we refer the reader for more detailed descriptions. \subsection{Mass reconstructions} \label{Sec:Mass-maps} The weak lensing convergence $\kappa$ and shear $\gamma$ are both second-order derivatives of the lensing potential \citep[e.g.][]{Bartelmann2001}. As a result, it is possible to reconstruct the convergence distribution from the shear field up to a constant, which is also known as the mass-sheet degeneracy \citep{Kaiser1993,Schneider1995}. 
Here, we employed the Wiener-filtered reconstruction algorithm from \citet{McInnes2009} and \citet{Simon2009}, where we fixed the mass-sheet degeneracy by setting the average convergence inside the observed fields to zero. We computed $S/N$ maps of the reconstruction, where the noise map is computed as the root mean square (r.m.s.) image of the $\kappa$ field reconstructions of 500 noise shear fields, which were created by randomising the ellipticity phases in the real source catalogue. Given the limited field of view and our choice to set the average convergence to zero, we expect to slightly underestimate the true $S/N$ levels \citepalias{Schrabback2021}. The obtained $S/N$ reconstructions are shown as contours in the left panels of Figs. \ref{fi:wl_results_1} and \ref{fi:wl_results_2} -- \ref{fi:wl_results_4} in Appendix \ref{Appendix:WeakLensingResults}. SPT-CL{\thinspace}$J$0646$-$6236 and SPT-CL{\thinspace}$J$2040$-$4451 show clear peaks in the mass reconstruction signal-to-noise ratio maps with \mbox{$S/N_\mathrm{peak}>3$} (see Table \ref{tab:masspeaklocations} for details). We find tentative counterparts to the clusters with \mbox{$2<S/N_\mathrm{peak}<3$} for SPT-CL{\thinspace}$J$0156$-$5541, SPT-CL{\thinspace}$J$0459$-$4947, SPT-CL{\thinspace}$J$0640$-$5113, and SPT-CL{\thinspace}$J$2341$-$5724. The other clusters either show no significant peak in their corresponding mass reconstruction $S/N$ maps, or only a peak close to the edge of the field of view, which is less reliable and likely spurious. While some of the clusters remained undetected in the reconstructed mass maps, we note that these maps are only for illustration purposes. We still took the tangential reduced shear profiles of all clusters in our sample into account for the likelihood analysis (see Sect. \ref{Sec:ScalingRelAnalysis}). 
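The noise maps underlying these S/N estimates are built from shear catalogues whose ellipticity phases have been randomised, which destroys the coherent lensing signal while preserving the shape-noise amplitude. A minimal sketch of that randomisation step (toy catalogue values; the Wiener-filtered $\kappa$ reconstruction itself is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(42)

def randomise_phases(e1, e2, rng):
    """One noise realisation: keep each galaxy's |epsilon|,
    draw a new random orientation."""
    modulus = np.hypot(e1, e2)
    phase = rng.uniform(0.0, 2.0 * np.pi, size=e1.shape)
    return modulus * np.cos(phase), modulus * np.sin(phase)

# Toy ellipticity catalogue (synthetic, for illustration only).
e1 = rng.normal(0.0, 0.2, size=10_000)
e2 = rng.normal(0.0, 0.2, size=10_000)

# 500 realisations; in the analysis, the r.m.s. of the kappa maps
# reconstructed from these noise fields yields the noise map.
realisations = [randomise_phases(e1, e2, rng) for _ in range(500)]
```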
\begin{table*} \caption{Constraints on the peaks in the mass reconstruction signal-to-noise ratio maps including their locations (\mbox{$\alpha, \delta$}), positional uncertainties (\mbox{$\Delta\alpha, \Delta\delta$}) as estimated by bootstrapping the galaxy catalogue \citep[we note that this underestimates the true uncertainty as found by][]{Sommer2021}, and their peak signal-to-noise ratios \mbox{$(S/N)_\mathrm{peak}$}. We excluded unreliable peaks close to the edge of the field of view (compare Fig. \ref{fi:wl_results_1} and Figs. \ref{fi:wl_results_2}--\ref{fi:wl_results_4}). \label{tab:masspeaklocations}} \begin{center} \begin{tabular}{lccccccc} \hline\hline Cluster & $\alpha$ & $\delta$ & $\Delta\alpha$ & $\Delta\delta$ & $\Delta\alpha$ & $\Delta\delta$ &\mbox{$(S/N)_\mathrm{peak}$} \\ & [deg J2000] & [deg J2000] & [arcsec] & [arcsec] & [kpc] & [kpc] &\\ \hline SPT-CL{\thinspace}$J$0156$-$5541 & 29.04676 & $ -55.69426 $ & 9.1 & 4.8 & 76 & 41 & 2.0 \\ SPT-CL{\thinspace}$J$0459$-$4947 & 74.92771 & $ -49.77739 $ & 8.1 & 9.6 & 69 & 81 & 2.2 \\ SPT-CL{\thinspace}$J$0640$-$5113 & 100.08319 & $ -51.21488 $ & 6.4 & 5.5 & 53 & 46 & 2.6 \\ SPT-CL{\thinspace}$J$0646-6236 & 101.63130 & $ -62.62127 $ & 1.5 & 2.2 & 12 & 18 & 5.5 \\ SPT-CL{\thinspace}$J$2040$-$4451 & 310.24056 & $ -44.86349 $ & 4.6 & 3.7 & 39 & 31 & 3.4 \\ SPT-CL{\thinspace}$J$2341$-$5724 & 355.34768 & $ -57.41418 $ & 7.7 & 8.1 & 64 & 68 & 2.2 \\ \hline \end{tabular} \end{center} {\flushleft } \end{table*} \subsection{Fits to the tangential reduced shear profiles} \label{Sec: fits_to_tangential_reduced_shear_profiles} When measuring the reduced shear signal with respect to the centre of a mass concentration such as a cluster, it is helpful to distinguish the tangential component $g_\mathrm{t}$ and the cross component $g_\times$: \begin{equation} \begin{split} g_\mathrm{t} & = -g_1\cos{2\phi} - g_2\sin{2\phi} \,, \\ g_\times & = +g_1\sin{2\phi} - g_2\cos{2\phi} \,, \end{split} \end{equation} where $\phi$ 
indicates the azimuthal angle with respect to the centre. We computed the tangential component (`t') and the cross component (`$\times$') of the reduced shear in linear bins of width \mbox{100\,kpc} (see the right panels of Fig. \ref{fi:wl_results_1} and Figs. \ref{fi:wl_results_2} -- \ref{fi:wl_results_4} in Appendix \ref{Appendix:WeakLensingResults}) around both the X-ray centroids (when available) and the SZ centres of the targeted clusters. We fitted the tangential reduced shear profiles using spherical Navarro-Frenk-White \citep[NFW,][]{Navarro1997} models following \citet{WrightBrainerd2000}, employing the concentration--mass relation from \citet{Diemer2015} with updated parameters from \citet{Diemer2019}. When deriving mass constraints, we excluded the cluster cores \mbox{($r<500$\,kpc)}, since the inclusion of smaller scales would increase both the intrinsic scatter and the systematic uncertainties related to the mass modelling \citep[see e.g. ][]{Sommer2021,Grandis2021}. We note that weak lensing mass constraints can also be derived this way for clusters that were undetected in the reconstructed mass maps (see Sect. \ref{Sec:Mass-maps}). We summarise the resulting fit constraints in Tables \ref{tab:mass-Xray} and \ref{tab:mass-SZ}. For clusters with both X-ray and SZ centres, we regarded the X-ray-centred analysis as our primary result given the smaller expected mass modelling biases (see Sect. \ref{Sec:corr_for_mass_modelling_bias}). \begin{table*} \caption{Weak lensing mass constraints derived from the fit of the tangential reduced shear profiles around the X-ray centres using spherical NFW models assuming the $c(M)$ relation from \citet{Diemer2015} with updated parameters from \citet{Diemer2019} for two different over-densities \mbox{$\Delta \in \{200\mathrm{c}, 500\mathrm{c}\}$}.
} \begin{center} \begin{tabular}{crccrcc} \hline \hline Cluster& \multicolumn{1}{c}{$M_{200\mathrm{c}}^\mathrm{biased,ML}\,[10^{14}\mathrm{M}_\odot]$} & $\hat{b}_{200\mathrm{c,WL}}$& $\sigma(\mathrm{ln}\, b_\mathrm{\mathrm{200c,WL}})$ & \multicolumn{1}{c}{$M_{500\mathrm{c}}^\mathrm{biased,ML}\,[10^{14}\mathrm{M}_\odot]$} & $\hat{b}_{500\mathrm{c,WL}}$ & $\sigma(\mathrm{ln}\, b_\mathrm{\mathrm{500c,WL}})$ \\ \hline SPT-CL{\thinspace}$J$0156$-$5541 & $4.5_{-2.9}^{+3.5}\pm 1.0\pm 0.5 $& $0.88\pm0.02$ & $0.35\pm0.03$ & $3.1_{-2.1}^{+2.5} \pm 0.7\pm 0.3$ & $0.92\pm0.03$ & $0.28\pm0.05$\\ SPT-CL{\thinspace}$J$0205$-$5829 & $0.1_{-2.4}^{+2.8}\pm 0.5\pm 0.0 $& $0.76\pm0.03$ & $0.41\pm0.05$ & $0.1_{-1.6}^{+1.9} \pm 0.3\pm 0.0$ & $0.79\pm0.03$ & $0.41\pm0.04$\\ SPT-CL{\thinspace}$J$0313$-$5334 & $2.8_{-2.4}^{+3.3}\pm 1.1\pm 0.3 $& $0.86\pm0.03$ & $0.44\pm0.04$ & $1.9_{-1.7}^{+2.4} \pm 0.8\pm 0.2$ & $0.83\pm0.03$ & $0.37\pm0.05$\\ SPT-CL{\thinspace}$J$0459$-$4947 & $4.4_{-4.4}^{+6.8}\pm 1.5\pm 0.5 $& $0.85\pm0.05$ & $0.51\pm0.08$ & $3.0_{-3.0}^{+5.0} \pm 1.1\pm 0.4$ & $0.79\pm0.05$ & $0.43\pm0.10$\\ SPT-CL{\thinspace}$J$0607$-$4448 & $0.6_{-2.2}^{+3.4}\pm 0.7\pm 0.1 $& $0.86\pm0.03$ & $0.46\pm0.04$ & $0.4_{-1.5}^{+2.4} \pm 0.4\pm 0.0$ & $0.82\pm0.04$ & $0.45\pm0.06$\\ SPT-CL{\thinspace}$J$0640$-$5113 & $6.6_{-4.5}^{+5.1}\pm 1.1\pm 0.7 $& $0.93\pm0.03$ & $0.27\pm0.08$ & $4.6_{-3.2}^{+3.8} \pm 0.8\pm 0.5$ & $0.85\pm0.04$ & $0.37\pm0.05$\\ SPT-CL{\thinspace}$J$2040$-$4451 & $16.4_{-5.7}^{+5.8}\pm 1.6\pm 1.9 $& $0.89\pm0.04$ & $0.44\pm0.06$ & $12.0_{-4.4}^{+4.5} \pm 1.3\pm 1.4$ & $0.74\pm0.04$ & $0.48\pm0.06$\\ SPT-CL{\thinspace}$J$2341$-$5724 & $5.7_{-3.5}^{+3.9}\pm 1.1\pm 0.6 $& $0.88\pm0.03$ & $0.35\pm0.04$ & $4.0_{-2.5}^{+2.9} \pm 0.8\pm 0.4$ & $0.87\pm0.03$ & $0.25\pm0.05$\\ \hline \end{tabular} \end{center} \textbf{Notes.} The maximum likelihood mass estimates $M_{\Delta}^\mathrm{biased,ML}$ are given in $10^{14}\mathrm{M}_\odot$, where errors correspond to statistical 68 
per cent uncertainties from shape noise (asymmetric errors), followed by uncorrelated large-scale structure projections, the calibration of the $U_\mathrm{HIGH}$ band, and variations in the redshift distribution between different lines of sight (for systematic uncertainties see Table \ref{Tab:Errorbudget of photometry,beta}). Statistical corrections for mass modelling biases have not yet been applied for $M_{\Delta}^\mathrm{biased,ML}$. They are characterised by \mbox{$\hat{b}_\mathrm{\Delta,WL}=\text{exp}\left[\langle \text{ln}\,b_{\Delta,\text{WL}}\rangle\right]$} and $\sigma(\mathrm{ln}\,b_\mathrm{\Delta,WL})$, which relate to the mean and the width of the estimated mass bias distribution \mbox{(see Sect. \ref{Sec:corr_for_mass_modelling_bias}).} \label{tab:mass-Xray} \end{table*} \begin{table*} \caption{As Table \ref{tab:mass-Xray}, but for the analysis centring the shear profiles around the SZ centres. \label{tab:mass-SZ}} \begin{center} \begin{tabular}{crccrcc} \hline \hline Cluster& \multicolumn{1}{c}{$M_{200\mathrm{c}}^\mathrm{biased,ML}\,[10^{14}\mathrm{M}_\odot]$} & $\hat{b}_{200\mathrm{c,WL}}$& $\sigma(\mathrm{ln}\, b_\mathrm{\mathrm{200c,WL}})$ & \multicolumn{1}{c}{$M_{500\mathrm{c}}^\mathrm{biased,ML}\,[10^{14}\mathrm{M}_\odot]$} & $\hat{b}_{500\mathrm{c,WL}}$ & $\sigma(\mathrm{ln}\, b_\mathrm{\mathrm{500c,WL}})$ \\ \hline SPT-CL{\thinspace}$J$0156$-$5541 & $3.9_{-2.8}^{+3.4}\pm 1.1\pm 0.4 $& $0.74\pm0.02$ & $0.41\pm0.04$ & $2.7_{-1.9}^{+2.5} \pm 0.8\pm 0.3$ & $0.73\pm0.02$ & $0.36\pm0.04$\\ SPT-CL{\thinspace}$J$0205$-$5829 & $0.3_{-2.3}^{+3.1}\pm 0.5\pm 0.0 $& $0.76\pm0.03$ & $0.38\pm0.05$ & $0.2_{-1.6}^{+2.2} \pm 0.4\pm 0.0$ & $0.72\pm0.03$ & $0.40\pm0.05$\\ SPT-CL{\thinspace}$J$0313$-$5334 & $4.3_{-3.1}^{+3.8}\pm 1.2\pm 0.4 $& $0.80\pm0.03$ & $0.33\pm0.06$ & $3.0_{-2.2}^{+2.8} \pm 0.8\pm 0.3$ & $0.76\pm0.03$ & $0.34\pm0.05$\\ SPT-CL{\thinspace}$J$0459$-$4947 & $6.9_{-5.7}^{+7.0}\pm 1.7\pm 0.8 $& $0.83\pm0.07$ & $0.49\pm0.12$ & $4.9_{-4.1}^{+5.3} \pm 
1.2\pm 0.6$ & $0.67\pm0.06$ & $0.65\pm0.09$\\ SPT-CL{\thinspace}$J$0607$-$4448 & $2.4_{-2.5}^{+4.0}\pm 1.0\pm 0.3 $& $0.76\pm0.04$ & $0.23\pm0.11$ & $1.7_{-1.7}^{+2.9} \pm 0.7\pm 0.2$ & $0.72\pm0.03$ & $0.34\pm0.07$\\ SPT-CL{\thinspace}$J$0640$-$5113 & $3.4_{-3.4}^{+5.1}\pm 1.0\pm 0.4 $& $0.66\pm0.03$ & $0.56\pm0.05$ & $2.3_{-2.3}^{+3.7} \pm 0.7\pm 0.3$ & $0.70\pm0.03$ & $0.36\pm0.07$\\ SPT-CL{\thinspace}$J$0646$-$6236 & $12.1_{-3.3}^{+3.3}\pm 1.3\pm 1.1 $& $0.78\pm0.02$ & $0.41\pm0.03$ & $8.6_{-2.5}^{+2.4} \pm 0.9\pm 0.8$ & $0.78\pm0.02$ & $0.39\pm0.03$\\ SPT-CL{\thinspace}$J$2040$-$4451 & $15.7_{-5.8}^{+5.8}\pm 1.5\pm 1.8 $& $0.77\pm0.04$ & $0.40\pm0.07$ & $11.5_{-4.4}^{+4.5} \pm 1.2\pm 1.3$ & $0.71\pm0.04$ & $0.47\pm0.07$\\ SPT-CL{\thinspace}$J$2341$-$5724 & $3.8_{-3.0}^{+3.8}\pm 1.0\pm 0.4 $& $0.71\pm0.03$ & $0.46\pm0.04$ & $2.6_{-2.1}^{+2.7} \pm 0.7\pm 0.3$ & $0.70\pm0.03$ & $0.41\pm0.05$\\ \hline \end{tabular} \end{center} \end{table*} \subsection{Estimation of the weak lensing mass modelling bias} \label{Sec:corr_for_mass_modelling_bias} Weak lensing mass estimates can suffer from systematic biases caused by deviations of the cluster from an NFW profile, triaxial or complex mass distributions (e.g. due to mergers), both correlated and uncorrelated large-scale structure, and miscentring of the fitted shear profile. The measured weak lensing mass $M_{\Delta,\mathrm{WL}}$ at an overdensity $\Delta$ is typically smaller than the true mass of the halo $M_{\Delta,\mathrm{halo}}$ by a factor \begin{equation} b_{\Delta,\mathrm{WL}} = \frac{M_{\Delta,\mathrm{WL}}}{M_{\Delta,\mathrm{halo}}} \,. \label{Eq:WLmass+bias+halomass} \end{equation} This bias also depends on the specific properties of the sample such as mass and redshift and the measurement setup regarding the employed concentration--mass relation and radial fitting range. In this study, we obtained an estimate for the weak lensing mass bias distribution following the method described by \citet{Sommer2021}. 
They showed that the traditional, simplifying assumption of a log-normal bias distribution according to \begin{equation} \ln \left( \frac{M_{\Delta,\mathrm{WL}}}{M_{\Delta,\mathrm{halo}}} \right) \sim \mathcal{N}(\mu, \sigma^2) \end{equation} is a suitable choice in the absence of miscentring. Here, $\mathcal{N}(\mu, \sigma^2)$ denotes the normal distribution with expectation \mbox{value $\mu = \langle \ln b_{\Delta,\mathrm{WL}} \rangle$} and variance $\sigma^2$, so that the bias itself is log-normally distributed. The expectation value $\mu$ in log-space translates to a measure of the bias in linear space via the estimator \begin{equation} \hat{b}_{\Delta,\mathrm{WL}} = \exp [\langle \ln b_{\Delta,\mathrm{WL}} \rangle] \,. \end{equation} Following \citet{Sommer2021}, we used snapshots of the Millennium XXL simulations \citep[MXXL,][]{Angulo2012} at redshift $z = 1$ to estimate the weak lensing mass bias distribution. We obtained an estimate for each cluster individually by incorporating the given SZ mass and uncertainties of the measured radial tangential shear profile as input information. First, we used all haloes in the MXXL simulations with a halo mass within $2\sigma$ of the SZ mass of the respective cluster (see Table \ref{tab:Cluster sample properties}). Their mass distributions were projected along three mutually orthogonal axes, increasing the effective sample size. We note that we used a line of sight integration length of \mbox{$200\,h^{-1}$\,Mpc} rather than the full line of sight. Consequently, this method takes into account only correlated but not uncorrelated large-scale structure. However, integration along a line of sight twice as long changes the mean results only marginally \citep[][]{Becker2011}. The projected mass distributions of the massive haloes served to calculate the shear and convergence fields on a grid with four arcsecond resolution. We converted the shear to the reduced shear using the same average lensing efficiency as in the respective cluster observations.
This reduced shear field was azimuthally averaged in the same range and bins as in the cluster analysis to obtain a reduced shear profile. As the centre, we used either the 3D halo centre (most bound particle) or an offset centre drawn from an empirical miscentring distribution. We added noise to the reduced shear profile in each radial bin matching the corresponding uncertainties of the actual cluster tangential reduced shear estimates. We then obtained a weak lensing mass estimate by fitting the tangential reduced shear profile with an NFW profile, analogous to the analysis of our actual cluster observations. Subsequently, the comparison of the obtained weak lensing mass with the true halo mass provided the estimate for the weak lensing mass bias distribution for our specific setup. The full probability distribution $P(M_{\Delta,\mathrm{WL}}|M_{\Delta,\mathrm{halo}})$ was modelled with the help of Bayesian statistics as described in \citet{Sommer2021}, where the SZ-derived mass estimates ($M_{200\mathrm{c},\mathrm{SZ}}$ and $M_{500c,\mathrm{SZ}}$) from \citetalias{Bocquet2019} served as a prior for the mass estimation. Thus, we did not take into account any mass dependence of the bias other than using the SPT-SZ masses as a prior. We incorporated miscentring into the estimation of the weak lensing mass bias distribution by applying an offset in a random direction before obtaining the reduced shear profile and subsequently fitting the masses. The offset was drawn from a miscentring distribution derived from the Magneticum Pathfinder Simulation \citep{Dolag2016} by measuring the offset between the X-ray (or SZ) peaks from the simulation, as a proxy for the observed centre, and the position of the most bound particle \citepalias[see ][for a detailed description]{Schrabback2021}. We note that the log-normal assumption no longer holds for the weak lensing mass bias distribution in the case of miscentring.
However, the deviation is at the 3--5 per cent level relative to the true mass. Therefore, we could still obtain meaningful estimates of the mean bias and scatter from a log-normal fit. We found that the weak lensing mass bias distribution is nearly independent of mass within the $2\sigma$ bounds of the given SZ-derived mass of the respective clusters. Thus, we averaged the bias and scatter over this mass range and report the results in Tables \ref{tab:mass-Xray} and \ref{tab:mass-SZ}. We found that the clusters exhibit a weak lensing mass bias $\hat{b}_{\Delta,\mathrm{WL}}$ between 0.74 and 0.92 in the presence of miscentring (using X-ray centres), with a scatter $\sigma$ between 0.25 and 0.48 for the weak lensing masses $M_{500c}$. On average, the masses computed with the X-ray centres are slightly less biased, with a slightly smaller scatter, than the masses computed with the SZ centre (see Tables \ref{tab:mass-Xray} and \ref{tab:mass-SZ}). This is a result of the on average smaller offsets of the X-ray miscentring distribution compared to the offsets of the SZ miscentring distribution \citepalias{Schrabback2021}. We note that we derived these estimates from the MXXL snapshot at $z = 1$. \citetalias{Schrabback2021} report weak lensing mass bias estimates that are interpolated between results at $z = 0.25$ and $z = 1$ according to the given cluster redshift. We found that the results using the $z = 0.25$ snapshot are very similar to those at $z = 1$. This suggests that there is no strong redshift evolution, and we therefore report the results from the $z = 1$ snapshot, which is closest to the redshift range of our sample. 
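The log-normal fit and the linear-space estimator $\hat{b}_{\Delta,\mathrm{WL}} = \exp [\langle \ln b_{\Delta,\mathrm{WL}} \rangle]$ can be illustrated with a toy Monte Carlo; this is a hypothetical sketch with assumed bias parameters, not the actual MXXL-based pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed (illustrative) bias parameters: mean bias 0.85 with scatter 0.35.
mu_true, sigma_true = np.log(0.85), 0.35
m_halo = rng.uniform(2e14, 6e14, size=20000)       # mock halo masses [M_sun]
ln_b = rng.normal(mu_true, sigma_true, size=m_halo.size)
m_wl = m_halo * np.exp(ln_b)                       # biased weak lensing masses

# Log-normal fit: mean and standard deviation of ln(M_WL / M_halo)
ln_ratio = np.log(m_wl / m_halo)
b_hat = np.exp(ln_ratio.mean())                    # linear-space bias estimator
sigma_hat = ln_ratio.std()

print(round(b_hat, 2), round(sigma_hat, 2))
```

With a sample of this size the estimator recovers the input mean bias and scatter to within a few per mille, mirroring how the bias and scatter are read off from the projected simulated haloes.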
\section{Constraints on the SPT observable-mass scaling relation} \label{Sec:ScalingRelAnalysis} In this section, we present how we combined the weak lensing mass measurements of our nine high-redshift SPT clusters with results for clusters at lower redshifts, namely weak lensing mass measurements of 19 SPT clusters with redshifts $0.29 \leq z\leq 0.61$ based on Magellan/Megacam observations \citepalias[][sample Megacam-19]{Dietrich2019} and of 30 SPT clusters with redshifts $0.58 \leq z \leq 1.13$ based on \textit{HST} observations \citepalias[][sample HST-30]{Schrabback2021}. We used this combined sample of 58 SPT clusters (which we refer to as HST-39 + Megacam-19) with weak lensing mass measurements to constrain the SPT observable-mass scaling relation. Thereby, we extended the previous studies \citepalias{Schrabback2018,Dietrich2019,Bocquet2019,Schrabback2021} out to redshifts of up to $z = 1.7$. \subsection{Likelihood formalism for the observable-mass scaling relation} \label{Sec:Likelihood formalism} In this section, we briefly summarise our likelihood formalism. It follows the definitions in \citetalias{Dietrich2019}, \citetalias{Bocquet2019}, and \citetalias{Schrabback2021}, to which we refer the reader for further details. The SPT observable-mass scaling relation is based on the measured detection significance $\xi$ as a mass proxy. Its relation to the unbiased detection significance $\zeta$ can be quantified from simulations \citep{Vanderlinde2010} or analytically \citep{Zubeldia2021} and exhibits a scatter given by a Gaussian of unit width \begin{equation} P(\xi | \zeta ) = \mathcal{N} \left(\sqrt{\zeta^2 + 3},1\right) \,. 
\end{equation} Further following \citetalias{Bocquet2019} and \citetalias{Schrabback2021}, we define the scaling relation between the unbiased detection significance $\zeta$ and the mass $M_{500c}$ as a power law in mass and in the dimensionless Hubble parameter $E(z) \equiv H(z)/H_0$: \begin{equation} \langle \ln \zeta \rangle = \ln \left[ \gamma_\mathrm{field} A_\mathrm{SZ} \left(\frac{M_{500c}}{3\times10^{14}\mathrm{M}_\odot /h}\right)^{B_\mathrm{SZ}} \left(\frac{E(z)}{E(0.6)}\right)^{C_\mathrm{SZ}} \right] \,, \end{equation} where $A_\mathrm{SZ}$, $B_\mathrm{SZ}$, and $C_\mathrm{SZ}$ parametrise the normalisation, mass slope, and redshift evolution, respectively, and $\gamma_\mathrm{field}$ characterises the effective depth of the individual SPT fields. Since we want to constrain this relation with the help of weak lensing mass measurements, we additionally need to consider the relation between lensing mass and true mass (see Eq. \ref{Eq:WLmass+bias+halomass}). We set $\Delta = 500c$ and omit this notation in this section for readability, so that the relation reads \begin{equation} \ln \langle M_\mathrm{WL}\rangle = \ln b_\mathrm{WL} + \ln M \,. \end{equation} Combining both relations, we therefore obtain the joint relation \begin{equation} P\left(\left[ \myvec{\ln \zeta \\ \ln M_\mathrm{WL}}\right] | M,z \right) = \mathcal{N} \left( \left[\myvec{\langle \ln \zeta \rangle (M,z) \\ \langle \ln M_\mathrm{WL}\rangle (M,z) } \right] , \Sigma_{\zeta - M_\mathrm{WL}} \right) \,, \label{Eq:joint-scaling-rel} \end{equation} where the covariance matrix $\Sigma_{\zeta - M_\mathrm{WL}}$ summarises how the logarithms of the observables $\zeta$ and $M_\mathrm{WL}$ scatter. It is given by \begin{equation} \Sigma_{\zeta - M_\mathrm{WL}} = \left( \myvec{ \sigma^2_{\ln \zeta} & \rho_{\mathrm{SZ} - \mathrm{WL}}\sigma_{\ln \zeta} \sigma_{\ln M_\mathrm{WL}}\\ \rho_{\mathrm{SZ} - \mathrm{WL}}\sigma_{\ln \zeta} \sigma_{\ln M_\mathrm{WL}} & \sigma^2_{\ln M_\mathrm{WL}} } \right) \,. 
\end{equation} The quantities $\sigma_{\ln \zeta}$ and $\sigma_{\ln M_\mathrm{WL}}$ denote the widths of the normal distributions, which characterise the intrinsic scatter in $\ln \zeta$ and $\ln M_\mathrm{WL}$, respectively. They are assumed to be independent of redshift and mass. Correlated scatter between the SZ and the weak lensing observable is described by the correlation coefficient $\rho_{\mathrm{SZ} - \mathrm{WL}}$. We note that the weak lensing observable is not the mass $M_\mathrm{WL}$, but rather the tangential reduced shear $g_\mathrm{t}$. Therefore, the likelihood for each cluster reads \begin{equation} \begin{split} P(g_\mathrm{t} | \xi, z, \boldsymbol{p}) &= \iiint \mathrm{d}M\, \mathrm{d}\zeta\, \mathrm{d}M_\mathrm{WL} \\ & \times [ P(\xi | \zeta ) P(g_\mathrm{t}| M_\mathrm{WL}, N_\mathrm{source}(z), \boldsymbol{p}) \\ & \times P(\zeta, M_\mathrm{WL} | M, z, \boldsymbol{p}) P(M | z, \boldsymbol{p}) ]\,. \end{split} \end{equation} Here, $P(\zeta, M_\mathrm{WL} | M, z, \boldsymbol{p})$ is the joint scaling relation introduced in Eq. (\ref{Eq:joint-scaling-rel}) and $P(M | z, \boldsymbol{p})$ denotes the halo mass function by \cite{Tinker2008}. It represents a weighting required to account for Eddington bias. The vector $\boldsymbol{p}$ summarises the astrophysical and cosmological modelling parameters. Furthermore, the source redshift distribution is given by $N_\mathrm{source}(z)$ and the terms $P(\xi | \zeta )$ and $ P(g_\mathrm{t}| M_\mathrm{WL}, N_\mathrm{source}(z), \boldsymbol{p})$ contain information about the intrinsic scatter and observational uncertainties in the observables\footnote{We note that we already included the shape noise of the tangential reduced shear profiles when we quantified the mass modelling bias in Sect. \ref{Sec:corr_for_mass_modelling_bias}. 
However, the scatter $\sigma(\mathrm{ln}\, b_\mathrm{\mathrm{500c,WL}})$ of the weak lensing mass modelling bias changes only marginally for a noiseless estimation of the bias, so that our scaling relation results are not affected.}. Finally, the total log-likelihood corresponds to the sum of logarithms of the individual cluster likelihoods \begin{equation} \ln \mathcal{L} = \sum_{i=1}^{N_\mathrm{cl}} \sum_{j=1}^{N_\mathrm{bin}} \ln P(g_{\mathrm{t},ij} | \xi_{ij}, z_{ij}, \boldsymbol{p}) \,, \end{equation} where $N_\mathrm{cl} = 58$ is the total number of clusters considered to obtain constraints on the SPT observable-mass scaling relation and $N_\mathrm{bin}$ is the number of radial bins for the reduced shear profiles. We note that we naturally accounted for the selection function of the sample because we applied the established likelihood formalism only to the clusters from the SPT-SZ survey. Furthermore, the subsamples of clusters with weak lensing measurements were assembled randomly, independent of their lensing signal, so that the likelihood function is complete and does not suffer from biases due to weak lensing selections \citepalias{Dietrich2019,Bocquet2019}. In particular, this means that we also included the clusters that were not detected with a peak in the mass maps (see Sect. \ref{Sec:Mass-maps}), because we would otherwise have introduced unwanted selection effects. We cannot constrain all parameters in this relation equally well with the current weak lensing mass measurements. In particular, our data set does not allow for meaningful constraints for $B_\mathrm{SZ}$ and $\sigma_{\ln \zeta}$ \citepalias{Schrabback2021}. Thus, we introduced the following priors. Regarding the slope parameter, we used a Gaussian prior $B_\mathrm{SZ}\sim \mathcal{N}(1.53,0.1^2)$, which is motivated by the cosmological study in \citetalias{Bocquet2019}. 
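The deterministic ingredients of this formalism (the $\zeta$--mass power law, the $E(z)$ evolution factor, and the mean $\xi$--$\zeta$ mapping) can be sketched numerically; the parameter values below are illustrative placeholders, not the fitted results:

```python
import numpy as np

def E(z, om=0.3, ol=0.7):
    """Dimensionless Hubble parameter E(z) = H(z)/H0 for flat LCDM."""
    return np.sqrt(om * (1.0 + z) ** 3 + ol)

def mean_ln_zeta(m500c_h, z, a_sz=5.0, b_sz=1.53, c_sz=1.0, gamma_field=1.0):
    """<ln zeta>: power law in mass and in E(z), pivoting at z = 0.6.

    m500c_h is the mass in M_sun/h; all relation parameters are assumed
    illustrative values, not the constraints reported in the text.
    """
    return np.log(gamma_field * a_sz
                  * (m500c_h / 3e14) ** b_sz
                  * (E(z) / E(0.6)) ** c_sz)

def mean_xi(zeta):
    """Expectation of the observed significance xi given unbiased zeta."""
    return np.sqrt(zeta ** 2 + 3.0)

# At the mass and redshift pivots, zeta reduces to gamma_field * A_SZ.
zeta = np.exp(mean_ln_zeta(3e14, 0.6))
print(zeta, mean_xi(zeta))
```

At the pivots ($M_{500c} = 3\times10^{14}\,\mathrm{M}_\odot/h$, $z = 0.6$) both power-law factors equal unity, so the sketch returns the assumed normalisation directly, which is a convenient sanity check of the relation.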
We assumed $\sigma_{\ln \zeta} \sim \mathcal{N}(0.13,0.13^2)$ as used by \citet{deHaan2016} and derived based on mock observations of hydrodynamic simulations from \citet{LeBrun2014}. Additionally, we implemented the weak lensing mass modelling bias and corresponding scatter obtained in Sect. \ref{Sec:corr_for_mass_modelling_bias} and adopted a flat prior for the correlation coefficient, that is $\rho_{\mathrm{SZ} - \mathrm{WL}} \in [-1,1]$. We conducted the likelihood analysis with an updated version of the pipeline used in \citetalias{Bocquet2019} and \citetalias{Schrabback2021}, which is embedded in the \texttt{COSMOSIS} framework \citep{Zuntz2015} and where the likelihood is explored with the MULTINEST sampler \citep{Feroz2009}. The full, updated pipeline will be made available along with a future publication by Bocquet et al. (in prep.). We tested the likelihood machinery with mock cluster data. We simulated an SPT cluster catalogue with SZ detection significances and redshifts. To simulate weak lensing cluster observations, we chose a number density and shape noise resembling the optical observations and implemented an average source redshift distribution. These served as a basis to generate mock shear profiles, which we used as input for the likelihood analysis. Running the analysis on these mock data, we found that the resulting constraints on the scaling relation meet the expectations, thereby providing a valuable consistency check of our pipeline. \begin{table*} \centering \caption{Fit results for the parameters of the $\zeta$--mass relation, analogously to table 12 in \citetalias{Schrabback2021}, now including the weak lensing measurements for the nine high-$z$ SPT clusters from this work. 
} \label{tab:SZ mass-scaling-rel_results} \begin{threeparttable} \begin{tabular}{l cc ccc} % \hline \hline Parameter & Prior & \multicolumn{2}{c}{HST-39 + Megacam-19} & SPTcl ($\nu\Lambda$CDM) & \textit{Planck} + SPTcl ($\nu\Lambda$CDM) \\ & & fiducial & binned & \citepalias{Bocquet2019} & (no WL mass calibration) \\ \hline $\ln A_\mathrm{SZ}$ & flat & $1.71\pm0.19$ & -- & $1.67\pm0.16$ & $1.27^{+0.08}_{-0.15}$\\ $\ln A_\mathrm{SZ}(0.25<z<0.5)$ & flat & -- & $1.74\pm0.23$ & -- & -- \\ $\ln A_\mathrm{SZ}(0.5<z<0.88)$ & flat & -- & $1.58\pm0.31$ & -- & -- \\ $\ln A_\mathrm{SZ}(0.88<z<1.2)$ & flat & -- & $1.85\pm0.43$ & -- & -- \\ $\ln A_\mathrm{SZ}(1.2<z<1.7)$ & flat & -- & $1.89\pm0.81$ & -- & -- \\ $C_\mathrm{SZ}$ & flat/fixed & $1.34\pm1.00$ & $1.34$ & $0.63^{+0.48}_{-0.30}$ & $0.73^{+0.17}_{-0.19}$\\ \hline \multicolumn{3}{l}{Prior-dominated parameters in our analysis:} & & & \\ $B_\mathrm{SZ}$ & $\mathcal{N}(1.53, 0.1^2)$ & $1.56\pm0.09$ & $1.57\pm0.10$ & $1.53\pm0.09$ & $1.68\pm0.08$\\ $\sigma_{\ln\zeta}$ & $\mathcal{N}(0.13, 0.13^2)$ & $0.16^{+0.06}_{-0.13}$ & $0.15^{+0.04}_{-0.13}$ & $0.17\pm0.08$ & $0.16^{+0.07}_{-0.12}$\\ \hline \end{tabular} \textbf{Notes.} SPTcl ($\nu\Lambda$CDM) denotes the results from the \citetalias{Bocquet2019} study, which combined SPT cluster counts with weak lensing and X-ray mass measurements. The results from the analysis denoted as \textit{Planck} + SPTcl ($\nu\Lambda$CDM) are based on a combination of measurements from the \textit{Planck} CMB anisotropies \citep[TT,TE,EE+low-E, ][]{Planck2020CMB} and SPT cluster counts. \end{threeparttable} \end{table*} \subsection{Redshift evolution of the $\zeta$--mass relation} We applied the likelihood setup to our full cluster sample of 58 clusters with weak lensing mass measurements to constrain the $\zeta$--mass relation. We present our results in Table \ref{tab:SZ mass-scaling-rel_results}. 
With our analysis, we constrained the scaling relation parameters \mbox{$A_\mathrm{SZ} = 1.71\pm0.19$} and \mbox{$C_\mathrm{SZ} = 1.34\pm1.00$}, while the parameter $B_\mathrm{SZ}$ is dominated by the prior. Fig. \ref{Fig:Redshift_evol_of_SZ mass-scaling-rel} displays the redshift evolution of the scaling relation, now extending for the first time out to redshifts of up to $z\sim1.7$ (red band, result of the fiducial analysis). For comparison, we show the constraints from \citetalias{Schrabback2021} based on the HST-30 + Megacam-19 samples in blue, demonstrating that our findings in this study are fully consistent with these previous results. This was expected because we added only nine clusters to the previously used sample. In addition, our clusters are at the high-redshift end, and therefore the statistical uncertainties are larger than for clusters at lower and intermediate redshifts. Furthermore, the diagonally hatched region represents the scaling relation constraints from \citetalias{Bocquet2019}, who analysed weak lensing measurements from the Megacam-19 sample and 13 clusters from \citetalias{Schrabback2018} in combination with X-ray measurements and cluster abundance information. They marginalised over cosmological parameters for a flat $\nu\Lambda$CDM cosmology. For comparison, we also show results computed for a joint analysis of \textit{Planck} primary CMB anisotropies \citep[TT,TE,EE+low-E, ][]{Planck2020CMB} and the SPT cluster abundance as the vertically hatched region. Again, this includes a marginalisation over cosmological parameters assuming a flat $\nu\Lambda$CDM cosmology. This analysis does not incorporate any weak lensing mass measurements. 
As also found in \citetalias{Schrabback2021}, we observe an offset between the red and vertically hatched regions, implying that the mass scale preferred from our analysis with the weak lensing data sets is lower than the mass scale that would be consistent with the \textit{Planck} $\nu\Lambda$CDM cosmology by a factor of $0.72^{+0.09}_{-0.14}$ (at our pivot redshift of $z=0.6$). Analogously to \citetalias{Schrabback2021}, we checked whether the simple scaling relation model is applicable over the full, wide redshift range investigated here by performing a binned analysis, in which the amplitude $A_\mathrm{SZ}$ is allowed to vary individually for each bin. Therefore, we added a bin of $1.2 < z < 1.7$ to the bins that were already used in \citetalias{Schrabback2021} (namely $0.25 < z < 0.5$, $0.5 < z < 0.88$, and $0.88 < z < 1.2$). We kept the redshift evolution parameter fixed to the value from the fiducial analysis at $C_\mathrm{SZ} = 1.34$. From \mbox{Fig. \ref{Fig:Redshift_evol_of_SZ mass-scaling-rel}}, we can see that the results in our new high-redshift bin are consistent with the scaling relation results from the full unbinned analysis. Additionally, we found that our results in the lower redshift bins are very similar to the results from the binned analysis in \citetalias{Schrabback2021}. This is also expected because the bins contain the same clusters except for SPT-CL{\thinspace}$J$0646$-$6236, which was added to the third redshift bin and causes a small shift towards a higher cluster mass scale due to its large cluster mass. \section{Discussion} \label{Sec:Discussion} Weak lensing studies of galaxy clusters with ever higher redshifts face the increasingly difficult challenge of identifying background galaxies carrying the lensing signal \citep[e.g. ][]{Mo2016,Jee2017,Finner2020}. 
In a simplified consideration, the signal-to-noise ratio of a lensing measurement scales with the product of the average geometric lensing efficiency $\langle \beta \rangle$ and the square root of the source number density $\sqrt{n}$. For comparison purposes, we define the weak lensing sensitivity factor $\tau_\mathrm{WL}$ as the product of these two quantities: $\tau_\mathrm{WL} = \langle \beta \rangle \sqrt{n}$\footnote{In principle, the signal-to-noise ratio of a lensing measurement also depends on other parameters such as cluster mass and fit range. However, the signal-to-noise ratio still scales with the weak lensing sensitivity factor $\tau_\mathrm{WL}$. We use it to represent how the source selection affects the lensing signal-to-noise ratio and compare this quantity for different studies.}. The average geometric lensing efficiency is tied to the purity of the source sample, that is, the fraction of true background source galaxies. A higher purity is desirable as it also increases the average geometric lensing efficiency. At the same time, cuts to identify true background source galaxies should not be too rigorous, as this can reduce the overall source density by also excluding true background galaxies. Additionally, a lower source density is more subject to shot noise, consequently reducing the lensing signal-to-noise ratio. Some previous weak lensing studies were conducted with \textit{HST}/WFC3 in infrared bands to measure masses of clusters at redshifts $z \gtrsim 1.5$. They introduced varying techniques to select source galaxies for the lensing measurements. For their weak lensing analysis of cluster SpARCS1049$+$56 at redshift $z = 1.71$, \citet{Finner2020} selected sources via a magnitude cut of $H_\mathrm{F160W} > 25.0$\,mag and specific shape cuts aiming to remove galaxies with high uncertainty in the ellipticity measurement and objects that are too small or too elongated to be galaxies. 
Applying this method to their observations, they achieved a source density of 105\,arcmin$^{-2}$ and estimated an average geometric lensing efficiency of $\langle \beta \rangle = 0.107$. This translates into a weak lensing sensitivity factor of $\tau_\mathrm{WL} \sim 1.10$. In comparison, \citet{Jee2017} performed a weak lensing study of clusters SPT-CL{\thinspace}$J$2040$-$4451 and IDCS{\thinspace}J1426$+$3508 at redshifts $z = 1.48$ and $z=1.75$, respectively. They selected source galaxies by requiring that they are bluer than the cluster red sequence, combined with a bright-magnitude cut and a shape measurement uncertainty cut. They obtained a source density of $\sim 240$\,arcmin$^{-2}$ with an average lensing efficiency of $\langle \beta \rangle = 0.086$ and $\langle \beta \rangle = 0.120$ for IDCS{\thinspace}J1426$+$3508 and SPT-CL{\thinspace}$J$2040$-$4451, respectively. This corresponds to $\tau_\mathrm{WL} \sim 1.33$ and $\tau_\mathrm{WL} \sim 1.86$, respectively. \citet{Mo2016} conducted a weak lensing study of IDCS{\thinspace}J1426$+$3508 prior to \citet{Jee2017} using \textit{HST}/ACS and \textit{HST}/WFC3 data from the bands F606W, F814W, and F160W. They measured galaxy shapes with the F606W imaging, selecting source galaxies with $24.0 < V_\mathrm{F606W} < 28.0$ (the latter is roughly the $10\sigma$ depth limit of their observations), $0\farcs27 < $ FWHM\footnote{measured with \texttt{Source Extractor}} $ < 0\farcs9$ (to exclude too large/small galaxies either because they are likely foreground galaxies or to avoid PSF problems, respectively), and $I_\mathrm{F814W} - H_\mathrm{F160W} < 3.0$ (to exclude cluster red-sequence galaxies). They achieved an average lensing efficiency of $\langle \beta \rangle = 0.086$ at a source density of 89\,arcmin$^{-2}$, resulting in $\tau_\mathrm{WL} \sim 0.81$. 
In conclusion, both NIR studies \citep{Jee2017,Finner2020} achieved higher source densities, but lower average geometric lensing efficiencies than our study, which has an average source density of 13.1\,arcmin$^{-2}$ and an average geometric lensing efficiency of \mbox{$\langle \beta \rangle = 0.244$}, and thus \mbox{$\tau_\mathrm{WL} \sim 0.88$}. The studies by \citet{Jee2017} and \citet{Finner2020} owe their high signal-to-noise ratios mainly to very deep observations enabling high source densities. In contrast, our study focuses on a high purity, as visible in Figs. \ref{Fig:Low-z redshift distribution} and \ref{Fig:Low-z CANDELS redshift distribution}, which show that we selected almost exclusively high-$z$ sources at $z\gtrsim 2$ with high lensing efficiency, while keeping the contamination by foreground, cluster, and near-background galaxies low. This strategy resulted in an average lensing efficiency more than twice as high, and it helps to keep systematic uncertainties low for several reasons. First, excluding galaxies at the cluster redshift minimises uncertainties related to the correction for cluster member contamination. Second, galaxies in the near background are located in a regime where $\beta(z)$ is a steep function of $z$. Thus, systematic redshift uncertainties lead to larger systematic uncertainties in $\langle \beta\rangle$ than for the distant background galaxies selected in our approach. Finally, the efficient removal of foreground galaxies minimises the impact that catastrophic redshift outliers scattering between low and high redshifts have on the computation of $\langle\beta\rangle$ \citepalias[see ][]{Schrabback2018,Raihan2020}. 
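The weak lensing sensitivity factors quoted above follow directly from the listed source densities and lensing efficiencies; a minimal sketch reproducing the quoted $\tau_\mathrm{WL}$ values:

```python
import math

# tau_WL = <beta> * sqrt(n), with n in arcmin^-2 and <beta> dimensionless,
# using the values quoted in the text for each study.
def tau_wl(beta, n):
    return beta * math.sqrt(n)

studies = {
    "Finner+20 (SpARCS1049+56)":   (0.107, 105.0),
    "Jee+17 (IDCS J1426+3508)":    (0.086, 240.0),
    "Jee+17 (SPT-CL J2040-4451)":  (0.120, 240.0),
    "Mo+16 (IDCS J1426+3508)":     (0.086, 89.0),
    "this work (sample average)":  (0.244, 13.1),
}
for name, (beta, n) in studies.items():
    print(f"{name}: tau_WL = {tau_wl(beta, n):.2f}")
```

Running this recovers the comparison made in the text: the deep NIR studies reach higher $\tau_\mathrm{WL}$ through sheer source density, while our selection trades density for a much higher $\langle \beta \rangle$.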
While we found that the uncertainties in the redshift distribution (\citetalias{Raihan2020} versus R15\_fix comparison and variations between CANDELS/3D-HST fields) dominate the systematic error budget (see Table \ref{Tab:Errorbudget of photometry,beta}), our comparatively low number density introduced high statistical uncertainties, which (together with other statistical uncertainties) outweigh the systematic ones in our current analysis. However, we stress that our approach, which aims to limit systematic uncertainties by using data of moderate depth and applying a stringent background selection, could directly be applied to similar data sets obtained for larger cluster samples. Given the considerable measurement uncertainties and the substantial expected intrinsic scatter (see Sect. \ref{Sec:corr_for_mass_modelling_bias}), the best-fitting cluster mass estimates in our study are expected to scatter significantly. This likely explains the relatively low mass estimate of SPT-CL{\thinspace}$J$0205$-$5829, which remained undetected in the weak lensing data despite its high SZ-inferred mass, and the comparably high best-fitting mass estimate for SPT-CL{\thinspace}$J$2040$-$4451. Still, we emphasise that our study aims to provide mass constraints that are accurate on average for our sample of nine galaxy clusters. Indeed, the median ratio of lensing mass to SZ mass from SPT is close to unity. We found a median ratio of bias corrected weak lensing mass to SZ mass $M_\mathrm{500c,WL,corr}/M_\mathrm{500c,SZ}$ of $1.048\pm0.372$ or $1.064\pm0.462$ using the weak lensing masses with X-ray centres (8 clusters) or SZ centres (9 clusters), respectively. We estimated the uncertainties via bootstrapping of the cluster sample. 
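The bootstrap estimate of the uncertainty on the median mass ratio can be sketched as follows; the ratio values below are hypothetical stand-ins, not our measured per-cluster ratios:

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up stand-in ratios M_WL,corr / M_SZ for a nine-cluster sample.
ratios = np.array([0.55, 0.80, 0.95, 1.00, 1.05, 1.10, 1.25, 1.60, 2.10])

def bootstrap_median_err(x, n_boot=10000):
    """Median of the sample and the scatter of the median over
    resampled (with replacement) cluster samples."""
    idx = rng.integers(0, x.size, size=(n_boot, x.size))
    medians = np.median(x[idx], axis=1)
    return np.median(x), medians.std()

med, err = bootstrap_median_err(ratios)
print(med, err)
```

Resampling the cluster sample with replacement propagates both the small sample size and the large per-cluster scatter into the quoted uncertainty on the median ratio.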
Deviations between the X-ray or SZ mass and the lensing mass for individual clusters can, for instance, be caused by their different sensitivities to large-scale structure projections, triaxiality, and variations in density profiles. For example, we measured the highest weak lensing mass for the cluster SPT-CL{\thinspace}$J$2040$-$4451, which is notably higher than the expectation from the SZ or X-ray mass estimates. However, taking the statistical uncertainties of the weak lensing, SZ and X-ray mass estimates into account, as well as the mass modelling bias and scatter, we found that the bias-corrected weak lensing mass agrees with its SZ (X-ray) mass estimate at the $1.2\sigma$ ($1.2\sigma$) level. We used the SZ mass listed in Table \ref{tab:Cluster sample properties} and the X-ray mass \mbox{$M_\mathrm{500c,X-ray} = 3.10^{+0.79}_{-0.47}\times 10^{14}\,\mathrm{M}_\odot$} from \citet{McDonald2017} as reference. We quantified the expected discrepancy between the SZ or X-ray mass and the weak lensing mass further in Appendix \ref{Appendix:Consistency of WL with SZ + X-ray}. For this particular cluster, \citet{Jee2017} found a weak lensing mass of $M_{200\mathrm{c}} = 8.6^{+ 1.7}_{-1.4}\,\times 10^{14}\,\mathrm{M}_\odot$ (not corrected for mass modelling bias), which is also higher than the X-ray and SZ mass estimates of the cluster. Our weak lensing mass constraint of $M_{200\mathrm{c}}^\mathrm{biased,ML} = 16.4_{-5.7}^{+5.8}\pm 1.6\pm 1.9 \,\times 10^{14}\,\mathrm{M}_\odot$ (for comparability with \citealt{Jee2017} not corrected for mass modelling bias) deviates only by $1.2\sigma$ from the result by \citet{Jee2017}, so that our results confirm the generally higher lensing mass for SPT-CL{\thinspace}$J$2040$-$4451 (albeit with larger statistical uncertainties), suggesting potential line of sight effects. This conclusion is additionally supported by a high dynamical mass measurement (albeit with large uncertainties) by \citet{Bayliss2014}. 
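One plausible way to reproduce the quoted $1.2\sigma$ consistency with the \citet{Jee2017} mass is to combine, in quadrature, the statistical errors on the sides facing each other together with our systematic terms; this is a sketch of such a calculation, not necessarily the exact procedure used:

```python
import math

# M200c estimates for SPT-CL J2040-4451 in units of 1e14 Msun (values from
# the text): ours 16.4 (-5.7 statistical, +/-1.6 and +/-1.9 systematic),
# Jee+17: 8.6 (+1.7 statistical on the upper side).
m_ours, stat_lo_ours, sys1, sys2 = 16.4, 5.7, 1.6, 1.9
m_jee, stat_hi_jee = 8.6, 1.7

diff = m_ours - m_jee
err = math.sqrt(stat_lo_ours**2 + sys1**2 + sys2**2 + stat_hi_jee**2)
print(f"{diff / err:.1f} sigma")
```

With these inputs the deviation evaluates to roughly $1.2\sigma$, consistent with the level quoted in the text.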
Several differences in the analyses, especially regarding the source selection strategies and fit ranges, may explain the difference between the lensing masses from \citet{Jee2017} and our study. \citet{Jee2017} obtained their weak lensing mass constraint from \textit{HST}/WFC3 imaging in F105W, F140W, and F160W. They fitted a spherical NFW profile assuming the concentration--mass relation of \citet{Dutton2014} and centred at their measured X-ray peak position (from \textit{Chandra} data), including weak lensing sources outside of a minimum radius \mbox{$r_\mathrm{min} = 26$\,arcsec}, corresponding to 218\,kpc at the cluster redshift. The WFC3/IR observations by \citet{Jee2017} provide a full azimuthal coverage out to $r \lesssim 60$\,arcsec, while we have $r\lesssim 90$\,arcsec ($r\lesssim 72$\,arcsec) around the SZ (X-ray) centre in our observations. We note that our inner fit limit ($r_\mathrm{min} = 500$\,kpc) corresponds to an angular radius of 59\,arcsec. Accordingly, our analysis primarily employs reduced shear measurements at larger scales compared to the analysis of \citet{Jee2017}. Additionally, we measured the weak lensing mass assuming the concentration--mass relation by \citet{Diemer2015} with updated parameters from \citet{Diemer2019}, we centred the fit around the X-ray centroid from \citet{McDonald2017}, which has a distance of 8.1\,arcsec to the X-ray peak employed by \citet{Jee2017}, and we used galaxies outside a minimum radius of $r_\mathrm{min} = 500$\,kpc. We excluded any scales smaller than this to minimise systematic mass modelling uncertainties and the impact of a potential residual cluster member contamination (below the detection limit). Since the X-ray peak and centroid positions are relatively close to each other, it is reasonable to compare the weak lensing mass results without applying the statistical mass modelling correction. The largest difference between the \citet{Jee2017} study and ours is the source selection strategy. 
\citet{Jee2017} based their work on imaging that is significantly deeper (with a limiting magnitude of F140W $\sim 28$\,mag) than ours but limited to a smaller field of view. Their selection of background galaxies focussed on the exclusion of red-sequence galaxies (selecting galaxies with $\mathrm{F105W} - \mathrm{F140W} < 0.5$) and resulted in a source number density of $\sim 240\,\mathrm{arcmin}^{-2}$ with a fraction of non-background sources (with $z \leq z_\mathrm{cluster}$) of approximately 45 per cent. Additionally, the inclusion of scales at \mbox{$218\,\mathrm{kpc}<r<500$\,kpc} likely reduces statistical uncertainties since the lensing signal is high in the inner regions of the cluster. This allowed them, in turn, to achieve small statistical uncertainties of their weak lensing mass constraints. However, the inclusion of such core regions usually increases the intrinsic scatter and mass modelling uncertainties \citep[][see also Sect. \ref{Sec:corr_for_mass_modelling_bias}]{Sommer2021}. Our stricter selection strategy for the background galaxies, based on magnitudes and colours from four bands, yields a contamination by non-background galaxies of only 17 to 20 per cent. The shallower data finally resulted in a source number density of 11.2\,arcmin$^{-2}$ for SPT-CL{\thinspace}$J$2040$-$4451, so that our analysis exhibits substantially larger statistical uncertainties in the weak lensing mass constraints. \citet{Jee2017} reported the detection of the cluster in their weak lensing mass map at the location \mbox{$\alpha = 20^\mathrm{h}40^\mathrm{m}57\fs85$} and \mbox{$\delta = -44^\circ51^\prime42\farcs4$} with $6\sigma$ significance. In our mass map, we detected a peak at $3.4\sigma$, with a separation of 6.6\,arcsec from the location in \citet{Jee2017}. 
While this offset is slightly larger than our estimate of the positional uncertainty derived using bootstrapping (see Table \ref{tab:masspeaklocations}), we note that \citet{Sommer2021} found that bootstrapping substantially underestimates the true uncertainty. The peaks from both studies are close to the X-ray centroid position from \cite{McDonald2017} so that they are overall in agreement. We also note that the peak in our weak lensing mass reconstruction for SPT-CL{\thinspace}$J$2040$-$4451 closely coincides with the X-ray centroid. Accordingly, the shear profile is approximately centred on the position that maximises the lensing signal. This likely scatters the mass result high, especially if the statistical correction for mass modelling bias is applied. While several studies undoubtedly confirmed SPT-CL{\thinspace}$J$2040$-$4451 as one of the most massive high-redshift clusters known, our study shows that based on our weak lensing measurements, the SPT cluster population is less massive than what one would expect in a {\it Planck} $\Lambda$CDM cosmology, also at very high redshifts (see Sect. \ref{Sec:ScalingRelAnalysis}). With our cluster sample and analysis, we enabled constraints on the SZ--mass scaling relation and its redshift evolution for the first time out to the redshift regime of $z>1.2$. While lensing studies at lower redshifts can be calibrated more precisely and systematics are generally smaller, high-redshift clusters are particularly sensitive probes of, for example, models with massive neutrinos \citep{Ichiki2012}, or of deviations from standard $\Lambda$CDM expectations, such as early dark energy \citep{Klypin2021}. Therefore, exploring the high-redshift regime is worthwhile to understand the cosmological $\Lambda$CDM model and its possible extensions. Our study provides a first step towards constraints from clusters at redshifts $z>1.2$. 
\section{Summary and conclusions} \label{Sec:Summary+Conclusions} In this work, we studied the gravitational lensing signal of a sample of nine clusters with high redshifts $z \gtrsim 1.0$ in the SPT-SZ survey. They all exhibit a strong SZ signal with a high SZ detection significance $\xi>6.0$. We obtained weak lensing mass constraints from shape measurements of galaxies with high-resolution \textit{HST}/ACS imaging in the F606W and F814W bands. With the help of additional \textit{HST} imaging using WFC3/IR in F110W and VLT/FORS2 imaging in $U_\mathrm{HIGH}$, we applied a strategy to photometrically select background galaxies, even for clusters at such challenging high redshifts. Using updated photometric redshift catalogues computed by \citetalias{Raihan2020} for the CANDELS/3D-HST fields as a reference, we estimated the source redshift distribution and calculated the average geometric lensing efficiency, applying the same selection criteria in the reference photometric redshift catalogues as in the cluster observations. We also added Gaussian noise to the reference catalogues if they were deeper than our cluster observations. We carefully investigated sources of systematic and statistical uncertainties for estimates of the average geometric lensing efficiency. We found consistent results in the HUDF field comparing our photometric measurements employing the algorithm \textsc{LAMBDAR} for adaptive aperture photometry and the \citetalias{Skelton2014} photometric measurements based on fixed aperture photometry. A comparison based on photometric and spectroscopic redshifts revealed a $\sim 3$ per cent difference in calculating the average geometric lensing efficiency, which we accounted for in the weak lensing analysis. We reconstructed the projected cluster mass distributions based on the shear measurements of the selected galaxies. 
In the resulting mass maps, we detected two of the clusters with a peak at $S/N > 3$, four clusters with $S/N > 2$, and three clusters were not detected. We obtained weak lensing mass constraints by fitting the tangential reduced shear profiles with spherical NFW models, employing a fixed concentration--mass relation by \citet{Diemer2015} with updated parameters from \citet{Diemer2019}. We reported statistical uncertainties from shape noise, uncorrelated large-scale structure projections, line of sight variations in the source redshift distribution, and uncertainties in the calibration of the $U_\mathrm{HIGH}$ band. We also estimated mass modelling biases using simulated clusters from the Millennium XXL simulations accounting for miscentring. Masses based on the X-ray centre were less biased ($\hat{b}_{\Delta\mathrm{c,WL}}$) and exhibited a slightly smaller scatter of the mass bias ($\sigma(\mathrm{ln}\, b_{\Delta\mathrm{c,WL}})$) than masses obtained using SZ centres. This is consistent with findings in previous studies \citep[e.g. ][\citetalias{Schrabback2021}]{Sommer2021}. We carefully investigated the sources of systematic uncertainties in our study. The total systematic uncertainty of our weak lensing mass estimates amounts to 14.4 per cent (16.7 per cent) for the analyses centring the reduced shear profiles around the X-ray (SZ) centres. Here, the largest contribution (12.9 per cent) comes from uncertainties related to the source selection and calibration of the source redshift distribution (see Table \ref{Tab:Errorbudget of photometry,beta}). Our weak lensing mass constraints for SPT-CL{\thinspace}$J$2040$-$4451 are higher, but still consistent with the earlier results obtained by \citet{Jee2017}. Given the limited depth of our data and the high redshifts of the targeted clusters, our weak lensing mass estimates are relatively noisy. 
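The structure of such a radial profile fit can be illustrated with a deliberately simplified sketch. Instead of the NFW model with a fixed concentration--mass relation used above, the toy code below fits a singular isothermal sphere (a swapped-in, simpler profile) to noiseless synthetic data via a brute-force $\chi^2$ grid; all numbers are placeholders:

```python
import math

def g_t_sis(theta, theta_e):
    """Tangential reduced shear of a singular isothermal sphere:
    gamma = kappa = theta_E / (2 theta), so g = gamma / (1 - kappa)."""
    kappa = theta_e / (2.0 * theta)
    return kappa / (1.0 - kappa)

def fit_theta_e(radii, g_obs, sigma, grid=None):
    """Brute-force chi-square minimisation over the Einstein radius."""
    if grid is None:
        grid = [0.001 * i for i in range(1, 200)]  # trial theta_E in arcmin
    best, best_chi2 = None, float("inf")
    for theta_e in grid:
        chi2 = sum((g_t_sis(r, theta_e) - g) ** 2 / s ** 2
                   for r, g, s in zip(radii, g_obs, sigma))
        if chi2 < best_chi2:
            best, best_chi2 = theta_e, chi2
    return best, best_chi2

# Toy data: noiseless SIS signal with theta_E = 0.05 arcmin
radii = [0.5, 1.0, 2.0, 4.0]              # radial bins in arcmin
g_obs = [g_t_sis(r, 0.05) for r in radii]
theta_e_fit, chi2 = fit_theta_e(radii, g_obs, sigma=[0.1] * 4)
```

The real analysis replaces the SIS with a spherical NFW reduced-shear model and propagates the statistical error sources listed above, but the fit has the same shape: a one-parameter model compared to binned tangential shear measurements.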
However, on average they are consistent with the SZ-inferred mass estimates from \citetalias{Bocquet2019}, which employ a weak lensing mass calibration based on data from \citet{Dietrich2019} and \citet{Schrabback2018}. We found a median ratio of $1.048\pm0.372$ or $1.064\pm0.462$ using the weak lensing masses with X-ray centres (8 clusters) or SZ centres (9 clusters), respectively. Finally, we used the obtained weak lensing mass measurements in a joint analysis with measurements for clusters at lower \citepalias{Dietrich2019} and intermediate \citepalias{Schrabback2021} redshifts to constrain the scaling relation between the debiased SPT cluster detection significance $\zeta $ and cluster mass, thereby expanding the previous studies by \citetalias{Bocquet2019} and \citetalias{Schrabback2021} to higher redshifts $z > 1.2$. Our binned analysis of the redshift evolution of the $\zeta $--mass scaling relation revealed that the new highest redshift bin at $1.2 < z < 1.7$ is consistent with the scaling relation behaviour predicted from lower redshifts, albeit with large statistical uncertainties. Even with these large uncertainties at the high redshift end, our results for the full, unbinned analysis support previous findings where the mass scale preferred in an analysis including the weak lensing measurements is lower than the mass scale required for consistency with the \textit{Planck} $\nu\Lambda$CDM cosmology presented in \citet{Planck2020CMB}. In our pilot study, we developed an approach for weak lensing mass measurements of high-$z$ clusters with well-controlled systematics, thereby obtaining such measurements for a first significant sample of SZ-selected clusters at \mbox{$z\gtrsim 1.2$}. However, the small sample size and limited depth of the data imply large statistical uncertainties, which can be addressed by applying the approach to new weak lensing data of additional high-redshift clusters. 
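As a minimal numerical illustration of such a median ratio, the sketch below computes the median of a set of placeholder mass ratios and attaches a bootstrap uncertainty; bootstrapping is one common choice here and is not necessarily the exact procedure behind the quoted $\pm$ values:

```python
import random
import statistics

def median_with_bootstrap_error(ratios, n_boot=10000, seed=3):
    """Median of the mass ratios, with a bootstrap estimate of its
    uncertainty from resampling the ratios with replacement."""
    rng = random.Random(seed)
    med = statistics.median(ratios)
    boot_medians = []
    for _ in range(n_boot):
        resample = [rng.choice(ratios) for _ in ratios]
        boot_medians.append(statistics.median(resample))
    return med, statistics.stdev(boot_medians)

# Placeholder weak-lensing-to-SZ mass ratios (not the measured values)
ratios = [0.6, 0.8, 0.9, 1.0, 1.1, 1.2, 1.5, 2.1]
med, err = median_with_bootstrap_error(ratios)
```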
While statistical uncertainties dominate in our study, there also remain notable systematic uncertainties, which need to be reduced in the future. Our study shows that the largest systematic uncertainty for lensing studies of high-redshift galaxy clusters arises from the calibration of the source redshift distribution. Here, surveys such as the planned \textit{James Webb Space Telescope} Advanced Deep Extragalactic Survey\footnote{\url{https://pweb.cfa.harvard.edu/research/james-webb-space-telescope-advanced-deep-extragalactic-survey-jades}} (JADES) will help to calibrate the redshift distributions, especially for high-redshift clusters observed with deep imaging data. This survey will provide imaging and spectroscopy to unprecedented depth, yielding photometric and spectroscopic redshifts over an area of 236\,arcmin$^2$ in the GOODS-South and GOODS-North fields. Additionally, direct calibration methods and those utilising the stacked redshift probability distribution functions of galaxies already show promising results and need to be further explored to help reduce systematic uncertainties in the redshift calibration \citep[e.g. ][]{Ilbert2021}. Furthermore, in-depth analyses of hydrodynamical simulations will help to better understand and reduce systematics due to the concentration--mass relation, the weak lensing mass modelling, and miscentring distribution uncertainties. \begin{acknowledgements} This research is based on observations made with the NASA/ESA Hubble Space Telescope obtained from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with GO programmes 12477, 13412, 14252, and 14677 (observations of the nine clusters targeted for this study), as well as 14043, 9425, 12062, and 13872 (archival data in the GOODS-South region). 
This work is based on observations taken by the 3D-HST Treasury Program (HST-GO-12177 and HST-GO-12328) with the NASA/ESA Hubble Space Telescope. This work made use of HDUV Data Release 1.0 data products \citep{Oesch2018}. The team members involved in the HDUV survey are: P. Oesch, M. Montes, N. Reddy, R. J. Bouwens, G. D. Illingworth, D. Magee, H. Atek, C. M. Carollo, A. Cibinel, M. Franx, B. Holden, I. Labbe, E. J. Nelson, C. C. Steidel, P. G. van Dokkum, L. Morselli, R. P. Naidu, S. Wilkins. This work is based on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere under ESO programme 0100.A-0204(A). This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular, the institutions participating in the {\it Gaia} Multilateral Agreement. The Bonn group acknowledges support from the German Federal Ministry for Economic Affairs and Climate Action (BMWK) provided through DLR under projects 50OR1803, 50OR2002, 50OR2106, and 50QE2002, as well as support provided by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under grant 415537506. HZ, FR, and DS are members of and received financial support from the International Max Planck Research School (IMPRS) for Astronomy and Astrophysics at the Universities of Bonn and Cologne. DS acknowledges support from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 776247. HHo acknowledges support from Vici grant 639.043.512 financed by the Netherlands Organization for Scientific Research. AHW is supported by a European Research Council Consolidator Grant (No. 770935). Argonne National Laboratory's work was supported by the U.S. 
Department of Energy, Office of High Energy Physics, under contract DE-AC02-06CH11357. This work was performed in the context of the South Pole Telescope scientific programme. SPT is supported by the National Science Foundation through grant OPP-1852617. Partial support is also provided by the Kavli Institute of Cosmological Physics at the University of Chicago. The authors would like to thank Peter Schneider and the anonymous referee for useful comments, which helped to improve this manuscript.\\ \\ Data availability: The full, updated pipeline, which we used for the likelihood analysis in this work, will be made available together with an upcoming publication by Bocquet et al. (in prep.). The data underlying this article will be shared upon reasonable request to the corresponding author. \end{acknowledgements} \bibliographystyle{aa} \bibliography{HST-WL-SPT-high-z_A+A} \begin{appendix} \section{Comparison of S14 and LAMBDAR photometry} \label{Appendix:Comparison of S14 and LAMBDAR photometry} While we measured fluxes in our observations with the LAMBDAR software, we only had the \citetalias{Skelton2014} photometry available when we estimated the redshift distribution from the CANDELS/3D-HST fields. Therefore, we checked how consistent we expect our measurements to be with the \citetalias{Skelton2014} photometry. We can perform this check in the central region of the GOODS-South field covering the HUDF, which we observed in the VLT FORS2 $U_\mathrm{HIGH}$ band. In addition to our stack in the $U_\mathrm{HIGH}$ band, we downloaded the stacks\footnote{\url{https://archive.stsci.edu/prepds/3d-hst/}} the \citetalias{Skelton2014} team used in the bands F606W, F814W, F850LP, and F125W (F606W + F850LP: GO programme 9425 with PI M. Giavalisco, F814W: GO programme 12062 with PI S. Faber, F125W: GO programme 13872 with PI G. Illingworth) and measured the photometry on these stacks with LAMBDAR. We used the PSF models provided on the 3D-HST website. 
We then matched the galaxies in our catalogue with the galaxies in the \citetalias{Skelton2014} photometric catalogue with the \texttt{associate} function from the \texttt{LDAC} tools, requiring a distance of not more than $0\farcs3$ for a match. We interpolated the magnitude $J_{110}$ from our measurements in the filters F850LP and F125W. In this appendix, we define all offsets of the magnitudes or colours in terms of \citetalias{Skelton2014} photometry minus LAMBDAR photometry. In \mbox{Fig. \ref{Fig:S14+LAMBDAR photometry comparison - magnitudes}}, we show how our magnitude measurements with LAMBDAR compare to the \citetalias{Skelton2014} photometry. We found a negative shift with a median offset of up to $\sim -0.1$\,mag between \citetalias{Skelton2014} and LAMBDAR in all of the \textit{HST} bands with a scatter of $\sim 0.3$\,mag. In part, this negative shift is caused by sources with a \texttt{Source Extractor} detection flag of \mbox{$\mathrm{FLAG} > 0$} (based on our detection in the F606W band). For these sources, \texttt{Source Extractor} recognises, for instance, contamination by nearby sources or blending. We found that the magnitude differences of these sources are predominantly negative in the direct comparison of \citetalias{Skelton2014} and LAMBDAR, meaning that \citetalias{Skelton2014} measurements are systematically brighter than LAMBDAR measurements. This is consistent with the expectation given the measurement techniques. \citetalias{Skelton2014} utilise aperture photometry, where fluxes are measured within apertures of fixed size with a diameter of $0\farcs7$ for \textit{HST} images. In contrast to that, LAMBDAR actively deblends photometry and thus measures fainter magnitudes for blended sources. But also for sources with $\mathrm{FLAG} = 0$, we found a slight asymmetry skewed towards more negative magnitude differences between the \citetalias{Skelton2014} and LAMBDAR photometry. 
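The positional cross-match can be sketched as a brute-force nearest-neighbour search with a $0\farcs3$ tolerance (toy coordinates; the actual matching used the \texttt{associate} function from the \texttt{LDAC} tools):

```python
import math

def match_catalogues(cat_a, cat_b, max_sep_arcsec=0.3):
    """Match each source in cat_a to its nearest neighbour in cat_b
    (lists of (ra_deg, dec_deg)), keeping pairs within max_sep_arcsec."""
    matches = []
    for i, (ra1, dec1) in enumerate(cat_a):
        best_j, best_sep = None, float("inf")
        for j, (ra2, dec2) in enumerate(cat_b):
            # Small-angle separation with cos(dec) correction, in arcsec
            dra = (ra1 - ra2) * math.cos(math.radians(dec1))
            ddec = dec1 - dec2
            sep = math.hypot(dra, ddec) * 3600.0
            if sep < best_sep:
                best_j, best_sep = j, sep
        if best_sep <= max_sep_arcsec:
            matches.append((i, best_j, best_sep))
    return matches

# Toy catalogues: first pair separated by 0.18", second by several arcmin
cat_a = [(53.1000, -27.80000), (53.2000, -27.9000)]
cat_b = [(53.1000, -27.80005), (53.5000, -27.9000)]
pairs = match_catalogues(cat_a, cat_b)
```

Only the first source pair survives the $0\farcs3$ cut in this example; a production match would use a spatial index rather than the $O(N^2)$ loop shown here.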
For the $U_\mathrm{HIGH}$ band, we found a median offset of $-0.062$\,mag with a scatter of 0.703\,mag, which is a considerably larger scatter than for the \textit{HST} bands. This is likely connected to the difference in depth between the $U_\mathrm{VIMOS}$ stack from \citetalias{Skelton2014} ($5\sigma$ depth 27.4\,mag) and our $U_\mathrm{HIGH}$ stack ($5\sigma$ depth 26.6\,mag) and the difference of the seeing ($0\farcs8$ for $U_\mathrm{VIMOS}$ versus $1\farcs0$ for $U_\mathrm{HIGH}$). We found that including a conversion from the $U_\mathrm{VIMOS}$ band to the $U_\mathrm{HIGH}$ band based on the respective filter curves does not reduce this scatter. However, Fig. \ref{Fig:S14+LAMBDAR photometry comparison - magnitudes} reveals that the scatter is a strong function of magnitude, suggesting that it is indeed related to the shallower depth of the $U_\mathrm{HIGH}$ data. When limited to bright $V_{606}<25$ galaxies, it reduces to 0.426\,mag. Regarding the comparisons of colour measurements (see \mbox{Fig. \ref{Fig:S14+LAMBDAR photometry comparison - colours}}), we found slightly positive shifts for all colours based on \textit{HST} bands. In particular, these colours typically exhibited small shifts of up to $\sim 0.04$\,mag with a scatter of up to \mbox{$\sim 0.11$\,mag}. The shift for \mbox{$U_\mathrm{HIGH} - V_{606}$} is $-0.005$\,mag with a scatter of $0.712$\,mag. Systematic shifts of this order will only mildly impact the estimates of the average lensing efficiency $\langle \beta \rangle$, as we show in Appendix \ref{Appendix:Impact of syst. shifts in photom}. We additionally reduced a data set in the filter F110W (GO programme 14043, PI: F. Bauer) located within the GOODS-South field and compared our F110W photometry with the results from the \citetalias{Skelton2014} photometric catalogues. 
We found only mild offsets of $-0.010$\,mag and $-0.022$\,mag between the \citetalias{Skelton2014} and our photometry for the colours \mbox{$V_{606}-J_{110}$} and \mbox{$I_{814}-J_{110}$}, respectively. When we calculated the average lensing efficiency for the cluster fields, we could, in principle, apply the scatter that we measured when comparing the \citetalias{Skelton2014} and LAMBDAR photometry to all CANDELS/3D-HST catalogues to account for the different measurement techniques. However, we have to keep in mind that the comparison, which we presented here, is limited in some respects: the $U$ bands we compared here have different depths so that we cannot clearly distinguish between effects due to depth and due to the different filter curves of $U_\mathrm{HIGH}$ and $U_\mathrm{VIMOS}$. Additionally, the CANDELS/3D-HST fields employed different $U$ bands, and also each field has different depths in different filters. Therefore, we decided to account for differences in depth in a consistent way for all five CANDELS/3D-HST fields by adding Gaussian noise based on the difference to the depths in our cluster fields (see Table \ref{tab:Cluster sample properties}). However, we did investigate how shifts in the photometry as presented in this section can affect the average lensing efficiency and added the related uncertainties to our error budget (see Table \ref{Tab:Errorbudget of photometry,beta} and Appendix \ref{Appendix:Impact of syst. shifts in photom}). \section{Robustness of the photometric zeropoint estimation via the galaxy locus method} \label{Appendix:ZP robustness with gal locus} For our $U$ band calibration purposes, we defined the galaxy locus to comprise all galaxies in the magnitude range \mbox{$24.2 < V_{606} < 27.0$}, but excluding galaxies approximately at the cluster redshift \mbox{($1.2 \lesssim z \lesssim 1.7$)} through a cut in the $VIJ$ colour plane (see \mbox{Fig. \ref{Fig:Cuts to remove only cluster gals}}). As described in Sect. 
\ref{Sec:common photometric system}, we corrected for small shifts in the $U$ band photometry among the five CANDELS/3D-HST fields based on the peak position of highest density in the $UVI$ colour plane. These shifts are listed in \mbox{Table \ref{Tab:Galloc_comparisons}}. In order to estimate how well the zeropoint calibration of the $U_\mathrm{HIGH}$ band works for the observations of our cluster fields, we tested the zeropoint estimation in the CANDELS/3D-HST fields using only subsets of galaxies that approximately match the number of galaxies available in the cluster fields. Our cluster field observations roughly cover a field of view of 11\,arcmin$^2$. We, therefore, only used galaxies from a region of this size from a random position in the respective CANDELS/3D-HST fields. Around 400 to 600 galaxies per subsample belong to our galaxy locus (as defined by the magnitude and colour cuts in Sect. \ref{Sec: Photometric zeropoints}), which approximately equals the expected number of locus galaxies in our cluster fields. Since we had already applied a shift to the $U$ bands in the CANDELS/3D-HST fields as explained above, this means that we measured the residual zeropoint offset for 100 different (possibly overlapping) subsamples and report the average residual zeropoint offset and scatter in Table \ref{Tab:Galloc_comparisons}. Overall, we found that the residual offsets did not exceed $\sim 0.04$\,mag in absolute value, with a scatter of up to 0.08\,mag. The impact of such offsets is studied in Appendix \ref{Appendix:Impact of syst. shifts in photom}. \begin{table} \caption{Overview of absolute and residual zeropoint offsets between CANDELS/3D-HST fields. 
} \begin{center} \begin{threeparttable} \begin{tabular}{ l c c} \hline\hline &\multicolumn{2}{c}{Zeropoint offsets}\\ \cmidrule(lr){2-3} Field & full & 100 samples \\ & [mag] & [mag] \\ \hline AEGIS & $0.121$ & $-0.013\,,\sigma = 0.053$ \\ COSMOS & $0.121$ & $-0.021\,,\sigma = 0.062$ \\ UDS & $0.121$ & $-0.037\,,\sigma = 0.076$ \\ GOODS-North & $-0.040$ & $-0.020\,,\sigma = 0.080$ \\ GOODS-South & $0.0$ & $-0.027\,,\sigma = 0.055$ \\ \hline \end{tabular} \end{threeparttable} \end{center} \textbf{Notes.} \textit{First column}: Names of the CANDELS/3D-HST fields. \textit{Second column}: Measured zeropoint offsets in the $U$ band between the galaxy loci from the five CANDELS/3D-HST catalogues from \citetalias{Skelton2014} with respect to the locus in the GOODS-South field, which serves as an anchor. \textit{Third column}: Average residual offset computed from 100 subsamples in the CANDELS/3D-HST fields (drawn from areas with a similar field of view as \textit{HST}/ACS) after applying the `full' correction (second column). The values correspond to the average and scatter. \label{Tab:Galloc_comparisons} \vspace{-4mm} \end{table} \section{Effect of systematic offsets in the photometry on $\langle \beta \rangle$} \label{Appendix:Impact of syst. shifts in photom} \begin{table} \caption{ Impact of expected photometric uncertainties of relevant colours on the average lensing efficiency. } \begin{center} \begin{threeparttable} \begin{tabular}{ l c c c} \hline\hline Colour & expected uncert. 
& $\left( \frac{\Delta \langle \beta \rangle}{\langle \beta \rangle} \right) _\mathrm{HUDF,R20}$ & $\left( \frac{\Delta \langle \beta \rangle}{\langle \beta \rangle} \right) _\mathrm{CAND}$ \\ \hline $U-V_{606}$ & $\pm 0.08$\,mag & 2.7\,\% & 4.1\,\% \\ $V_{606}-I_{814}$ & $\pm 0.02$\,mag & 2.9\,\% & 2.2\,\% \\ $V_{606}-J_{110}$ & $\pm 0.05$\,mag & 2.7\,\% & 2.2\,\% \\ $I_{814}-J_{110}$ & $\pm 0.05$\,mag & 0.3\,\% & 0.1\,\% \\ \hline \end{tabular} \end{threeparttable} \end{center} \textbf{Notes.} We quantified this by calculating the difference $\Delta \langle \beta \rangle$ between the results for $\langle \beta \rangle$ (at reference redshift $z_\mathrm{l} = 1.4$) based on the \citetalias{Skelton2014} photometry shifted by the expected uncertainty in a positive and negative direction. We divided this by the average lensing efficiency $\langle \beta \rangle$ without a shift of the photometry. \textit{First column:} Colour. \textit{Second column:} Expected uncertainty of the colour. \textit{Third column:} Impact on the average lensing efficiency for matched galaxies in the HUDF region. We report the value based on the \citetalias{Raihan2020} photometric redshifts. \textit{Fourth column:} Average impact on the average lensing efficiency for galaxies in the five CANDELS/3D-HST fields using the \citetalias{Raihan2020} photometric redshifts. \label{Tab:Syst offset in photometry, effect on beta} \vspace{-4mm} \end{table} In order to estimate how systematic shifts in the photometry affect the average lensing efficiency, we applied different systematic shifts to the colours \mbox{$U-V_{606}$}, \mbox{$V_{606} - I_{814}$}, \mbox{$V_{606} - J_{110}$}, and \mbox{$I_{814} - J_{110}$} from the \citetalias{Skelton2014} photometry. We then calculated $\langle \beta \rangle$ based on the photometric redshifts for the colour-selected galaxies. Since we added Gaussian noise to the $U$ band from the GOODS-South field, we evaluated five noise realisations. 
A summary of the uncertainty level of the photometric shifts (based on our results presented in Appendix \ref{Appendix:Comparison of S14 and LAMBDAR photometry} and \ref{Appendix:ZP robustness with gal locus}) and the consequential uncertainties of the average lensing efficiency are presented in Table \ref{Tab:Syst offset in photometry, effect on beta}. \section{Alternative colour selection strategies for clusters at $z \sim 1.2$} \label{Appendix:Coloursel_alternatives} As mentioned before, galaxies at redshift $1.3 < z < 1.7$ could, in principle, be used for a lensing analysis for a cluster at redshift $z \sim 1.2$, but have to be removed for a cluster at redshift $z \sim 1.7$. We explored two alternative colour selection strategies for background galaxies for a cluster at redshift $z\sim1.2$, aiming to add the galaxies at $1.3 < z < 1.7$ to the selection, which would increase the signal-to-noise ratio of the lensing measurement. In our first alternative, we left the first step of the selection in the $VIJ$ colour plane unchanged, because it removes the same foreground galaxies as in the default selection strategy. However, we noticed that the galaxies at the cluster redshift in the $UVJ$ colour plane occupy a smaller space in the upper left corner. Therefore, we modified the cuts slightly so that fewer background galaxies are cut from this corner (see Fig. \ref{Fig:1.2selection conservative}). At a lens redshift of $z=1.2$ and using the matched sources from the HUDF region as in Sect. \ref{Sec:Colour_selection, defining mag and colour cuts}, the default selection strategy achieved an average lensing efficiency of $\langle \beta \rangle = 0.324$ with a number density of $n=13.4$\,arcmin$^{-2}$, resulting in a weak lensing sensitivity factor of $\tau_\mathrm{WL} = 1.19$. 
In comparison, the alternative strategy achieved $\langle \beta \rangle = 0.317$ with a number density of $n=15.3$\,arcmin$^{-2}$, resulting in a weak lensing sensitivity factor of $\tau_\mathrm{WL} = 1.23$. In conclusion, this alternative provides only a negligible improvement of the weak lensing sensitivity factor, which would be even smaller for clusters at higher redshifts $1.2 < z \lesssim 1.6$. As a second alternative selection strategy, we made use of the fact that the galaxies at the cluster redshift for a cluster at $z=1.2$ are concentrated more towards the lower right of the $VIJ$ colour plane than for a cluster, for instance, at $z=1.7$. In this strategy, we used the $VIJ$ plane to cut not only the foreground but also the galaxies at the cluster redshift (see \mbox{Fig. \ref{Fig:1.2selection alternative}}). To cut all galaxies at the cluster redshift this way, the cuts need to be extended further towards bluer $V-I$ colour (to the left in the $VIJ$ plane). Consequently, cutting the galaxies at the cluster redshift in the upper left corner of the $UVJ$ colour plane is no longer necessary, which allows us to keep more background galaxies (mainly the close background galaxies indicated by cyan symbols in \mbox{Fig. \ref{Fig:1.2selection alternative}}). With this strategy, we found an average lensing efficiency of $\langle \beta \rangle = 0.276$ with a number density of $n=16.9$\,arcmin$^{-2}$, resulting in a weak lensing sensitivity factor of $\tau_\mathrm{WL} = 1.13$. Thus, we cannot increase the weak lensing sensitivity factor with this strategy. While the number density did increase, mainly in the regime of near background galaxies, we also lost a notable fraction of the far background galaxies at high redshift due to the more extended cut in the $VIJ$ plane. As a result, the average geometric lensing efficiency decreased substantially, and this could not be compensated for by the higher source number density. 
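The quoted sensitivity factors are numerically consistent with the scaling $\tau_\mathrm{WL} = \langle \beta \rangle \sqrt{n}$; the quick check below reproduces the three values for the $z_\mathrm{l}=1.2$ selections to rounding precision (the scaling itself is inferred here from the quoted numbers, not stated explicitly in this section):

```python
import math

def tau_wl(avg_beta, n_density):
    """Weak lensing sensitivity factor, assuming tau_WL = <beta> * sqrt(n);
    this scaling reproduces the values quoted in the text."""
    return avg_beta * math.sqrt(n_density)

default_tau = tau_wl(0.324, 13.4)  # default selection at z_l = 1.2 -> ~1.19
alt1_tau = tau_wl(0.317, 15.3)     # first alternative            -> ~1.24
alt2_tau = tau_wl(0.276, 16.9)     # second alternative           -> ~1.13
```

The check makes the trade-off explicit: the second alternative gains number density but loses so much lensing efficiency that $\tau_\mathrm{WL}$ drops below the default.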
From exploring these two alternative background source selection strategies, we concluded that it is not beneficial to introduce a selection strategy that is optimised based on the cluster redshift for clusters with redshifts $1.2 \lesssim z \lesssim 1.7$. We, therefore, applied the selection strategy presented in Sect. \ref{Sec:Colour_selection, defining mag and colour cuts} for all clusters in our sample with $1.2 \lesssim z \lesssim 1.7$. However, for the cluster SPT-CL{\thinspace}$J$0646$-$6236, which is located at a lower redshift of $z=0.995$, an alternative selection strategy did increase the weak lensing sensitivity factor noticeably, as presented in Appendix \ref{Appendix:Coloursel_alternatives0995}. \section{Colour selection strategy for the cluster SPT-CL{\thinspace}$J$0646$-$6236 at $z=0.995$} \label{Appendix:Coloursel_alternatives0995} The cluster SPT-CL{\thinspace}$J$0646$-$6236 has the lowest redshift in our sample with $z=0.995$. With the default background source selection strategy presented in Sect. \ref{Sec:Colour_selection, defining mag and colour cuts}, we miss the galaxies in the redshift regime $1.1 \lesssim z \lesssim 1.7$, which could otherwise be incorporated in the lensing analysis of this cluster. In contrast to the alternative background source selection strategies presented in Appendix \ref{Appendix:Coloursel_alternatives}, we found that it is possible to achieve a significantly higher weak lensing sensitivity factor with a modification of the default selection strategy for this cluster. The original cut in the $VIJ$ plane already removed the majority of the galaxies at the cluster redshift $z\sim1$, so that we could omit the cut of sources in the upper left corner of the $UVJ$ plane (see Fig. \ref{Fig:0995selection}). 
As a result, we achieved a number density of selected background source galaxies, which was two times higher (27.4\,arcmin$^{-2}$) than for the default selection while the average geometric lensing efficiency only mildly decreased. At a lens redshift of $z=0.995$, we found $\langle \beta \rangle = 0.392$ for the default selection and $\langle \beta \rangle = 0.336$ for the optimised selection. As a consequence, the weak lensing sensitivity factor increased by about 23 per cent from $\tau_\mathrm{WL} = 1.43$ for the default selection strategy to $\tau_\mathrm{WL} = 1.76$ for the optimised strategy. Therefore, we used this optimised strategy in the lensing analysis of the cluster SPT-CL{\thinspace}$J$0646$-$6236 at $z=0.995$. \section{Consistency of weak lensing mass results with SZ or X-ray masses} \label{Appendix:Consistency of WL with SZ + X-ray} Some of the clusters in our sample have a lensing mass that has scattered high or low with respect to the reference mass measured from SZ or X-ray data (see Table \ref{tab:Cluster sample properties} for SZ masses, \citet{McDonald2017} for X-ray masses). This concerns in particular the clusters SPT-CL{\thinspace}$J$2040$-$4451 and SPT-CL{\thinspace}$J$0205$-$5829. To quantify the tension between weak lensing mass and SZ or X-ray mass for individual targets, we employed a simple model to test for the level at which the mass ratios are consistent with unity. To this end, we randomly drew 10,000 weak lensing masses $M_{\mathrm{WL,rand,}i}$ from a Normal distribution $\mathcal{N}(M_{500\mathrm{c}}^\mathrm{biased,ML}, \sigma_\mathrm{stat}(M_{500\mathrm{c}}^\mathrm{biased,ML}))$ given the best-fit weak lensing mass estimates and statistical uncertainties (see Sect. \ref{Sec: fits_to_tangential_reduced_shear_profiles} and Table \ref{tab:mass-Xray}). We divided these by correction factors randomly drawn from the corresponding log-normal mass bias distributions (described in Sect. \ref{Sec:corr_for_mass_modelling_bias}). 
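The Monte Carlo comparison described in this appendix can be sketched as follows, using symmetric Gaussian draws with placeholder masses; the two-piece Normal for asymmetric uncertainties and the log-normal bias correction are omitted for brevity:

```python
import math
import random

def shortest_interval(sorted_x, frac):
    """Shortest interval containing a fraction frac of the sorted samples."""
    n = len(sorted_x)
    k = max(2, math.ceil(frac * n))
    i_best = min(range(n - k + 1),
                 key=lambda i: sorted_x[i + k - 1] - sorted_x[i])
    return sorted_x[i_best], sorted_x[i_best + k - 1]

def min_confidence_containing_unity(ratios, step=0.01):
    """Lowest confidence level at which the shortest interval of the
    mass-ratio distribution includes a ratio of one."""
    xs = sorted(ratios)
    level = step
    while level <= 1.0:
        lo, hi = shortest_interval(xs, level)
        if lo <= 1.0 <= hi:
            return level
        level += step
    return 1.0

rng = random.Random(42)
# Placeholder mass posteriors in units of 1e14 Msun (not measured values)
m_wl = [rng.gauss(14.0, 5.0) for _ in range(10000)]  # weak lensing draws
m_sz = [rng.gauss(8.0, 1.5) for _ in range(10000)]   # SZ draws
ratios = [a / b for a, b in zip(m_wl, m_sz)]
conf = min_confidence_containing_unity(ratios)
```

The returned level plays the role of the quoted 70 or 75 per cent: the smaller it is, the more consistent the two mass estimates are with a ratio of unity.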
Similarly, we drew 10,000 SZ (or X-ray) masses $M_{\mathrm{SZ,rand,}i}$ (or $M_{\mathrm{X,rand,}i}$) from the best-fit values in conjunction with their uncertainties (Table \ref{tab:Cluster sample properties} for SZ masses, \citet{McDonald2017} for X-ray masses), using a Normal distribution. In the case of asymmetric uncertainties, a two-piece Normal distribution \citep[e.g. ][]{John1982} was employed. We proceeded to take ratios of the weak lensing and SZ (or X-ray) mass distributions $M_{\mathrm{WL,rand,}i}/M_{\mathrm{SZ,rand,}i}$ (or $M_{\mathrm{WL,rand,}i}/M_{\mathrm{X,rand,}i}$). For a given target, the resulting ratio distribution was analysed for its consistency with unity. In particular, we constructed confidence intervals based on the shortest possible interval containing a given fraction (the confidence level) of the distribution. In this way, we found the lowest level of confidence at which the mass ratio is consistent with one. For SPT-CL{\thinspace}$J$2040$-$4451, which has a best-fit weak lensing mass noticeably higher than the SZ mass (X-ray mass), we found this confidence level to be 70 per cent (75 per cent), corresponding to a probability of 0.3 (0.25) of seeing an outlier with this degree of discrepancy or more (for an individual cluster). Similarly, for SPT-CL{\thinspace}$J$0205$-$5829, the probability of an outlier matching or exceeding the observed degree of discrepancy is 0.09 for the SZ mass (0.21 for the X-ray mass). We conclude that the observed scatter between lensing masses and SZ or X-ray masses is well within the expectation given the large statistical uncertainties of our study, and given that these two clusters are the most extreme outliers within our sample of nine clusters. \section{Weak lensing results: mass maps and tangential reduced shear profiles} \label{Appendix:WeakLensingResults} We show the weak lensing results, including the mass maps and tangential reduced shear profiles for the studied cluster sample in Figs. 
\ref{fi:wl_results_2} to \ref{fi:wl_results_4}. In addition, we display the stacked profile of the cluster sample in Fig. \ref{Fig:Stacked-profile}. Following \citetalias{Schrabback2018} (their sect. 7.3), we stacked the lensing signal of the clusters in terms of the differential surface mass density $\Delta \Sigma(r)$, where we computed $\Sigma_\mathrm{crit}$ based on the average lensing efficiency $\langle \beta \rangle$ of the individual clusters. Since the clusters vary in mass, we rescaled them to approximately the same signal amplitude with the help of the SZ masses listed in Table \ref{tab:Cluster sample properties}. Based on this mass and assuming the concentration--mass relation by \citet{Diemer2015} with updated parameters from \citet{Diemer2019}, we computed a theoretical NFW model for the differential surface mass density $\Delta \Sigma_\mathrm{model}$. We then rescaled the cluster lensing signal by a factor $s$ according to \begin{equation} \Delta \Sigma^{\ast} (r) = s\Delta \Sigma(r) \equiv \frac{\langle \Delta \Sigma_\mathrm{model}(800\,\mathrm{kpc})\rangle}{\Delta \Sigma_\mathrm{model}(800\,\mathrm{kpc})} \Delta\Sigma(r) \,, \end{equation} where we used $r=800\,\mathrm{kpc}$ as the reference scale to evaluate the theoretical model. The weighted average then reads \begin{equation} \langle \Delta \Sigma^{\ast} \rangle (r_j) = \sum_{i \in \mathrm{clusters}} \Delta \Sigma^{\ast}_i (r_j) \hat{W}_{ij} / \sum_{i \in \mathrm{clusters}} \hat{W}_{ij} \,, \end{equation} with $\hat{W}_{ij} = \left[ s\sigma(\Delta \Sigma(r_j))\right]^{-2}$ and $\sigma(\Delta \Sigma(r_j))$ as the $1\sigma$ uncertainty of $\Delta \Sigma(r_j)$. \end{appendix}
Title: The late afterglow of GW170817/GRB170817A: a large viewing angle and the shift of the Hubble constant to a value more consistent with the local measurements
Abstract: The multi-messenger data of neutron star merger events are promising for constraining the Hubble constant. So far, GW170817 is still the unique gravitational wave event with multi-wavelength electromagnetic counterparts. In particular, its radio and X-ray emission have been measured over the past $3-5$ years. In this work, we fit the X-ray, optical, and radio afterglow emission of GW170817/GRB 170817A and find that a relatively large viewing angle $\sim 0.5\, \rm rad$ is needed; otherwise, the late-time afterglow data cannot be well reproduced. Such a viewing angle has been taken as a prior in the gravitational wave data analysis, and the degeneracy between the viewing angle and the luminosity distance is broken. Finally, we obtain a Hubble constant $H_0=72.00^{+4.05}_{-4.13}\, \rm km\, s^{-1}\, Mpc^{-1}$, which is more consistent with that obtained by other local measurements. If similar values are inferred from the multi-messenger data of future neutron star merger events, this will provide critical support to the existence of the Hubble tension.
https://export.arxiv.org/pdf/2208.09121
\title{The late afterglow of GW170817/GRB170817A: a large viewing angle and the shift of the Hubble constant to a value more consistent with the local measurements} \correspondingauthor{Yi-Zhong Fan} \email{yzfan@pmo.ac.cn} \correspondingauthor{Zhi-Ping Jin} \email{jin@pmo.ac.cn} \author[0000-0003-1215-6443]{Yi-Ying Wang} \affiliation{Key Laboratory of Dark Matter and Space Astronomy, Purple Mountain Observatory, Chinese Academy of Sciences, Nanjing 210033, People's Republic of China} \affiliation{School of Astronomy and Space Science, University of Science and Technology of China, Hefei, Anhui 230026, People's Republic of China} \author[0000-0001-9120-7733]{Shao-Peng Tang} \affiliation{Key Laboratory of Dark Matter and Space Astronomy, Purple Mountain Observatory, Chinese Academy of Sciences, Nanjing 210033, People's Republic of China} \author[0000-0003-4977-9724]{Zhi-Ping Jin} \affiliation{Key Laboratory of Dark Matter and Space Astronomy, Purple Mountain Observatory, Chinese Academy of Sciences, Nanjing 210033, People's Republic of China} \affiliation{School of Astronomy and Space Science, University of Science and Technology of China, Hefei, Anhui 230026, People's Republic of China} \author[0000-0002-8966-6911]{Yi-Zhong Fan} \affiliation{Key Laboratory of Dark Matter and Space Astronomy, Purple Mountain Observatory, Chinese Academy of Sciences, Nanjing 210033, People's Republic of China} \affiliation{School of Astronomy and Space Science, University of Science and Technology of China, Hefei, Anhui 230026, People's Republic of China} \newcommand{\ud}{\mathrm{d}} \section{Introduction}\label{sec:1} The Hubble constant ($H_0$) is a fundamental parameter of cosmology. 
However, its measurements obtained with different methods are clustered into two groups \citep{2019NatAs...3..891V}: one is represented by the cosmic microwave background (CMB) measurement from the Planck Collaboration for the early universe ($67.66 \pm 0.42 {\rm ~km\,s^{-1}\,Mpc^{-1}}$; \citealt{2020A&A...641A...6P}) and the other is represented by the Type Ia supernova and Cepheid measurements from the SH0ES (Supernova, $H_0$, for the Equation of state of Dark Energy) team in the local universe ($73.30 \pm 1.04 {\rm ~km\,s^{-1}\,Mpc^{-1}}$; \citealt{2021arXiv211204510R}). So far, it is still unclear whether such a severe tension is due to the presence of new physics (i.e., the modification of the standard cosmology model) or alternatively some unknown systematic bias introduced in the local measurements \citep{2021ApJ...912..150D,2022Galax..10...24D}. Independent precise local $H_0$ measurements without using the distance ladders are thus necessary to check the second possibility. Gravitational wave (GW) events are expected to play an important role in this respect, since GWs, serving as standard sirens, can be used to measure $H_0$ \citep{1986Natur.323..310S}. This is particularly the case for the neutron star merger events with detected multi-wavelength electromagnetic counterparts. For such events, the redshifts can be reliably measured, and the inclination angles (i.e., the viewing angles) of the mergers may be robustly inferred. Therefore, the degeneracy between the luminosity distance and the inclination angle can be effectively broken. The accurate redshift and luminosity distance thus lead to a direct measurement of $H_0$. The multi-messenger data of GW170817 \citep{2017ApJ...848L..12A, 2019PhRvX...9a1001A} have been extensively used to estimate the Hubble constant. \citet{2017Natur.551...85A} took the strain data and the redshift measurement of GW170817 to measure $H_0$, which was determined to be ${70}^{+12.0}_{-8.0} ~\rm km\,s^{-1}\,Mpc^{-1}$.
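As a quick numerical check, the early-/late-universe discrepancy quoted above can be expressed in units of the combined standard deviation. This is only a back-of-the-envelope Gaussian estimate (the uncertainties are treated as symmetric and independent), using the Planck and SH0ES values cited in the text:

```python
import math

# H0 values quoted above (km/s/Mpc): Planck (early universe) vs SH0ES (local).
h0_planck, sig_planck = 67.66, 0.42
h0_shoes, sig_shoes = 73.30, 1.04

# Gaussian tension: difference divided by the quadrature-summed uncertainties.
tension = (h0_shoes - h0_planck) / math.hypot(sig_planck, sig_shoes)
print(f"Planck vs SH0ES discrepancy: {tension:.1f} sigma")  # ~5 sigma
```

This simple estimate reproduces the commonly quoted $\sim$$5\sigma$ level of the tension.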
Later, the inclusion of electromagnetic radiation information yielded more accurate measurements of $H_0$, as reported in the literature. For instance, \citet{2019NatAs...3..940H} took both the superluminal motion and the multi-wavelength radiation of the relativistic jet into account and obtained a viewing angle of $0.29^{+0.03}_{-0.02}$ rad for the Power-law jet model, with which $H_0=68.1^{+4.5}_{-4.3}\, \rm km\,s^{-1}\, Mpc^{-1}$ was reported. \citet{2021ApJ...908..200W} found that $H_0=69.48^{+4.3}_{-4.2}\, \rm km\,s^{-1}\,Mpc^{-1}$ when further including the direct measurement of the luminosity distance. There are also some constraints reported in other literature, e.g., $H_0=64.8^{+7.3}_{-7.2}\, \rm km\,s^{-1}\,Mpc^{-1}$ \citep{2020MNRAS.492.3803H}, $H_0=66.2^{+4.4}_{-4.2}\, \rm km\,s^{-1}\,Mpc^{-1}$ \citep{2020Sci...370.1450D}, and $H_0=68.3^{+4.6}_{-4.5}\, \rm km\,s^{-1}\,Mpc^{-1}$ \citep{2021A&A...646A..65M}. Though the uncertainties are relatively large, the median values are close to those found in the CMB, baryon acoustic oscillation, and Big Bang nucleosynthesis experiments \citep{2018MNRAS.480.3879A}. However, in the previous afterglow-involved $H_0$ measurements, only the first-year afterglow data of GRB 170817A were included in the modeling (for instance, \citet{2021ApJ...908..200W} only fitted the afterglow data collected in the first 200 days to infer the viewing angle of the GRB ejecta, due to the lack of a reliable treatment of the sideways expansion in the code). Currently, the afterglow observations of GRB 170817A span almost 4.8 years \citep{2022GCN.32065....1O}. These late-time afterglow data are expected to constrain the physical parameters tightly.
Interestingly, it is found that the viewing angle $\theta_{\rm v}$ of the GRB ejecta (which is widely assumed to be the same as the inclination angle $\iota$ of the merger event) increases when the late-time data are included in the fit \citep{2019ApJ...886L..17H, 2021MNRAS.502.1843N, 2022ApJ...927L..17H}. Therefore, it is necessary to fit all the available afterglow data to yield a more reliable $\theta_{\rm v}$ (i.e., $\iota$). Besides, the superluminal motion of the relativistic jet of GW170817 \citep{2018Natur.561..355M, 2019Sci...363..968G, 2019NatAs...3..940H}, which provides an extra constraint on $\theta_{\rm v}$, will also be incorporated in this work. By performing a Bayesian analysis on both the multi-wavelength light curves and the GW data, we find that the Hubble constant is $H_0=72.57^{+4.09}_{-4.17}\, \rm km\, s^{-1}\, Mpc^{-1}$, which is more consistent with that obtained by other local measurements. \section{The Afterglow Data}\label{sec:2} Recently, the synchrotron afterglow emission of GRB 170817A in the X-ray band was detected 1674 days after the merger of GW170817 \citep{2022GCN.32065....1O}. Therefore, we incorporate this new observation together with all of the available data in the radio, optical, and X-ray bands into the afterglow modeling (which will be described in Section~\ref{sec:3}). In the radio band, the data cover the period from 16 to 1243 days after the BNS merger and were obtained by the Karl G.
Jansky Very Large Array \citep[VLA;][]{2017Sci...358.1579H, 2018ApJ...863L..18A, 2018ApJ...856L..18M, 2018Natur.561..355M}, the Australia Telescope Compact Array \citep[ATCA;][]{2017Sci...358.1579H, 2018ApJ...858L..15D, 2018ApJ...868L..11M, 2018Natur.554..207M, 2021ApJ...922..154M}, the upgraded Giant Metrewave Radio Telescope \citep[uGMRT;][]{2018ApJ...867...57R, 2018ApJ...868L..11M}, the enhanced Multi Element Remotely Linked Interferometer Network \citep[eMERLIN;][]{2021ApJ...922..154M}, the Very Long Baseline Array \citep[VLBA;][]{2019Sci...363..968G}, and the MeerKAT telescope \citep{2018ApJ...868L..11M, 2021ApJ...922..154M}. In the optical band, the data, distributed around the light-curve peak from 109 to 362 days post-merger, were obtained by the Hubble Space Telescope \citep[HST;][]{2018NatAs...2..751L, 2019ApJ...883L...1F, 2019ApJ...870L..15L}. The observations in the X-ray band, from early (9 days) to very late (1674 days) times after the merger, were obtained by XMM-Newton \citep{2018A&A...613L...1D, 2019MNRAS.483.1912P} and Chandra \citep{2017Natur.551...71T, 2018MNRAS.478L..18T, 2019ApJ...886L..17H, 2020MNRAS.498.5643T, 2022GCN.32065....1O, 2022MNRAS.510.1902T}. \section{Method}\label{sec:3} As the only ``standard siren'' so far, the source of GW170817 is confirmed to be located in NGC 4993 \citep{2017ApJ...848L..12A}. In the local universe, we have Hubble's law, \begin{equation}\label{eq:hubble_law} v_{\rm H}=v_{\rm r}-v_{\rm p}=H_0d_{\rm L}, \end{equation} where $v_{\rm H}$ is the local Hubble flow velocity of the galaxy, $v_{\rm r}$ is the recession velocity of the galaxy relative to the CMB frame, and $v_{\rm p}$ is the peculiar velocity of the galaxy.
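Hubble's law above can be turned into a minimal Monte Carlo sketch of the $H_0$ inference: draw the Hubble-flow velocity from the Gaussian $v_{\rm H}=2954\pm148.6 \rm~km\,s^{-1}$ used in this work, draw luminosity distances from a stand-in for the GW posterior, and take their ratio. The Gaussian form of the $d_{\rm L}$ distribution below is purely illustrative (the actual analysis uses the full Bilby posterior samples), with a width symmetrized from the best-fit values reported later:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hubble-flow velocity of NGC 4993 used in the text: 2954 +/- 148.6 km/s.
v_h = rng.normal(2954.0, 148.6, n)
# Illustrative stand-in for the GW luminosity-distance posterior (Gaussian
# here only for the sketch; not the actual posterior of the analysis).
d_l = rng.normal(40.67, 1.07, n)                      # Mpc

h0 = v_h / d_l                                        # Hubble's law, Eq. (1)
lo, med, hi = np.percentile(h0, [15.85, 50.0, 84.15])
print(f"H0 ~ {med:.1f} (+{hi - med:.1f} / -{med - lo:.1f}) km/s/Mpc")
```

Even this crude ratio-of-Gaussians sketch shows that the peculiar-velocity uncertainty contributes a large share of the final $H_0$ error budget.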
Thus, the posterior probability of $H_0$ is \begin{equation}\label{eq:standard_llh} p(H_0|x_{\rm GW},\langle v_{\rm H}\rangle)\propto p(H_0)\int \ud d_{\rm L} \ud v_{\rm H} p(v_{\rm H})p(\langle v_{\rm H}\rangle|v_{\rm H})p(d_{\rm L}|x_{\rm GW}), \end{equation} where $p(d_{\rm L}|x_{\rm GW})$ is the posterior distribution of $d_{\rm L}$ given by the Bayesian analysis on $x_{\rm GW}$, and $p(\langle v_{\rm H}\rangle|v_{\rm H})$ is the likelihood of the Hubble flow velocity measurement. Here, we use the result from \citet{2021A&A...646A..65M}, i.e., the velocity of the Hubble flow is $v_{\rm H}=2954\pm148.6 \rm~km \, s^{-1}$, following a Gaussian distribution. To obtain a high-precision luminosity distance from the GW data analysis, we need to break the degeneracy between $\iota$ and $d_{\rm L}$. Therefore, an independent, reliable measurement of $\iota$ is crucial. Assuming that the viewing angle $\theta_{\rm v}$ of the GRB afterglow is equal to the inclination angle $\iota$, a well-established method is to infer this angle by fitting the multi-band light curves with afterglow models. In this work, we adopt two approaches ({\tt Afterglowpy} and {\tt JetFit}), developed by \citet{2020ApJ...896..166R} and \citet{2018ApJ...869...55W}, respectively, to constrain $\theta_{\rm v}$. In {\tt Afterglowpy}, multiple numerical/analytic structured jet models are implemented for calculating GRB afterglow light curves and spectra. For {\tt JetFit}, since there are $\sim$2,000,000 synchrotron spectra computed from hydrodynamic simulations, the full parameter space for fitting the light curves can be well explored by Markov chain Monte Carlo analysis \citep{2018ApJ...869...55W}. In this work, we take the model from \citet{2018ApJ...869...55W} as the fiducial one because the {\tt Afterglowpy} code makes an approximation\footnote{To simplify the calculation, {\tt Afterglowpy} assumes a sideways expansion rate equal to the local sound speed.
However, numerical hydrodynamical simulations \citep{2003ApJ...591.1075K} revealed that the sideways expansion rate is usually lower than the speed of sound, which predicts shallower-decaying light curves at late times.} on the sideways expansion; moreover, {\tt JetFit} gives stronger Bayes evidence, as we will show below. In addition to the GRB afterglow, the kilonova afterglow might become a dominant component at late times \citep{2022ApJ...927L..17H}. The kilonova afterglow is produced by the shock that the kilonova blast wave drives into the external medium; it can therefore be approximated by a spherical cocoon model in the sub-relativistic regime. Recently, \citet{2022MNRAS.516.4949S} released an open-source package, {\tt Redback}, for fitting electromagnetic transients. They approximated the kilonova afterglow with the spherical cocoon model extended in {\tt afterglowpy}, but with some constraints. As shown in Table~\ref{tb:Ag_par}, we adopt the prior settings of the kilonova afterglow in {\tt Redback} but import {\tt afterglowpy} directly. More complicated models of the kilonova afterglow and further discussions can be found in \citet{2019MNRAS.487.3914K} and \citet{2021MNRAS.506.5908N}. Such a component, calculated with the {\tt Afterglowpy} code, is therefore also incorporated in our numerical fits. We update the analysis with the multi-band light-curve data of GRB 170817A, including the data from the radio and optical bands to the X-ray band, from 9.2 to 1674 days. Following \citet{2018ApJ...869...55W}, the {\tt JetFit} model in our work has eight free parameters. Their prior distributions are summarized in Table~\ref{tb:Ag_par}\footnote{The same as those in Table~2 of \citet{2018ApJ...869...55W}, except that the fraction of electrons accelerated by the shock is fixed to $1$ and the luminosity distance $d_{\rm L}$ follows a Gaussian distribution with $\mu=40.7$ and $\sigma=2.36$ Mpc \citep{2021A&A...646A..65M}}.
For other structured jet models in {\tt Afterglowpy}, the priors are similar to those in Table~\ref{tb:Ag_par}. Additionally, the superluminal motion of the jet observed with Very Long Baseline Interferometry (VLBI) gives a constraint of $0.25<\theta_{\rm v} \bigl(\frac{d_{\rm L}}{41 \rm Mpc} \bigr)<0.5$ rad \citep{2018Natur.561..355M}. The likelihood of fitting the observational data (assuming the measurement errors follow Gaussian distributions) with the afterglow model in the Bayesian statistical framework can be written as \begin{equation} {\rm Likelihood}=\prod^{N}_{i} \frac{1}{\sqrt{2\pi}\sigma_i} {\rm exp} \biggl[ -\frac{1}{2} \biggl(\frac{f(x_i)-y_i}{\sigma_i} \biggr)^2 \biggr], \end{equation} where $(x_i,y_i)$ and $\sigma_i$ are the observed light-curve data and their uncertainties, respectively, and $f(x_i)$ is the value predicted by the afterglow model at $x_i$. Balancing accuracy and efficiency, the Bayesian analysis of the afterglow parameters adopts {\tt Pymultinest} as the sampler. After obtaining the posterior distribution of the inclination angle through the afterglow light-curve fitting, we can take this posterior distribution as a prior and input it into the Bayesian analysis of the GW data. The priors of the other GW parameters are shown in Table~\ref{tb:GW_par}, where the prior of $d_{\rm L}$ follows the distribution obtained by \citet{2021A&A...646A..65M}. We assume that the BNS has aligned spins and that the precession effects can be neglected. As for the waveform template, we use the {\tt IMRPhenomD\_NRTidal} model \citep{PhysRevD.99.024029}. The calibration uncertainties may slightly impact Hubble constant measurements \citep{2022arXiv220403614H}. For GW170817, however, we find that the difference between the $H_0$ results obtained with and without considering calibration is negligible, since the uncertainty of the peculiar velocity still dominates the uncertainty of $H_0$.
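In log form, the Gaussian likelihood above is straightforward to implement; a minimal sketch (the array names are placeholders, and `model_flux` stands for $f(x_i)$ as evaluated by the afterglow model) is:

```python
import numpy as np

def log_likelihood(model_flux, obs_flux, obs_err):
    """Gaussian log-likelihood of the equation above: the sum over all
    light-curve points of ln Normal(y_i; f(x_i), sigma_i)."""
    resid = (model_flux - obs_flux) / obs_err
    return -0.5 * np.sum(resid**2 + np.log(2.0 * np.pi * obs_err**2))
```

This is the quantity the nested sampler maximizes over the afterglow parameter space; a perfect fit leaves only the normalization term $-\frac{1}{2}\sum_i \ln(2\pi\sigma_i^2)$.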
We calculate the marginalized posterior distribution of the luminosity distance using the GW parameter inference code {\tt Bilby} \citep{2019ApJS..241...27A} and adopt {\tt dynesty} \citep{2020MNRAS.493.3132S} as the nested sampler. Given a uniform prior distribution from 20 to 140 $\rm km\, s^{-1}\, Mpc^{-1}$, the Hubble constant can then be well estimated using Equation~(\ref{eq:standard_llh}) based on the tight constraint on the luminosity distance. \begin{table*}[ht!] \begin{ruledtabular} \centering \caption{Prior distributions of the parameters for GRB 170817A and kilonova afterglow} \label{tb:Ag_par} \begin{tabular}{lcc} Names &Parameters &Priors of parameter inference \\ \hline Explosion energy &$\log_{10} E_{0,50}$\textsuperscript{a} &Uniform(-6, 7)\\ Circumburst density\textsuperscript{b} &$\log_{10} n_{0,0}$\textsuperscript{a} &Uniform(-6, 0)\\ Asymptotic Lorentz factor &$\eta_0$ &Uniform(2, 10)\\ Boost Lorentz factor &$\gamma_B$ &Uniform(1, 12)\\ Spectral index\textsuperscript{b} &$p$ &Uniform(2, 2.5)\\ Electron energy fraction\textsuperscript{b} &$\log_{10} \epsilon_e$ &Uniform(-6, 0)\\ Magnetic energy fraction\textsuperscript{b} &$\log_{10} \epsilon_B$ &Uniform(-6, 0) \\ Viewing angle\textsuperscript{b} &$\theta_{\rm v}/\rm rad$ &Sine(0, $\pi$) \\ Isotropic-equivalent energy\textsuperscript{b} &$\log_{10} E_{\rm iso}/\rm erg$ &Uniform(-44, 57)\\ Half opening angle\textsuperscript{b} &$\theta_c$ &Uniform(0, $\pi/2$)\\ Outer truncation angle\textsuperscript{b} &$\theta_w$ &Uniform(0, $\pi/2$)\\ Luminosity distance\textsuperscript{b} &$d_{\rm L}/\rm Mpc$ &Gaussian($\mu=40.7, \sigma=2.36$) \\ \hline Maximum 4-velocity of outflow &$U_{\rm Max}$ &Uniform(0.15, 0.7)\\ Minimum 4-velocity of outflow &$U_{\rm Min}$ &Uniform(0.1, 0.15)\\ Normalization of outflow's energy distribution &$E_{\rm r}/\rm erg$ &Uniform(45,50)\\ Power-law index of outflow's injection &$k$ &Uniform(0.5,4)\\ Mass of material at $U_{\rm Max}$ &$\log_{10}(\rm Ejecta \ mass)/\rm M_{\odot}$ &Uniform(45,50)\\ Spectral index &$p$ &Uniform(2.0, 2.5)\\ Electron energy fraction &$\log_{10} \epsilon_e$ &Uniform(-5, 0)\\ Magnetic energy fraction &$\log_{10} \epsilon_B$ &Uniform(-5, 0) \\ Fraction of electrons that get accelerated &$\xi_{N}$ &Uniform(0,1)\\ Initial Lorentz factor &$\Gamma_0$ &Uniform(1,0) \end{tabular} \begin{tablenotes} \item[a] \textsuperscript{a} Note that $E_{0,50} \equiv E_0/10^{50} \, \rm erg$ and $n_{0,0} \equiv n_0/1 \, \rm proton \, cm^{-3}$. \item[b] \textsuperscript{b} These parameters are used in the Gaussian structured jet in {\tt Afterglowpy}. \end{tablenotes} \end{ruledtabular} \end{table*} \section{Result}\label{sec:4} We have systematically investigated the factors that might influence the determination of the viewing angle, including the approaches ({\tt Afterglowpy} and {\tt JetFit}), the structured jet models (Gaussian, Power-Law, and Top-Hat structures), and the data sets (200-day/entire afterglow data and the VLBI constraint). Figure~\ref{fig:LC} presents our best fits with the two different models. The posterior results shown in Figure~\ref{fig:posterior} indicate that the superluminal motion of the jet gives a joint constraint on the viewing angle and the luminosity distance. In general, the inclusion of the late-time (i.e., $\geq 200$ days) afterglow data yields a larger viewing angle with a lower uncertainty, as anticipated (one exception is that the superluminal motion constrains the viewing angle obtained in the {\tt JetFit} model with all the afterglow data to be consistent with that of the first 200 days). Without the lateral expansion, the logarithm of the Bayes evidence (${\rm ln}Z$) is $10$ lower than that of the Gaussian structured jet with expansion, suggesting a much poorer fit.
For the {\tt JetFit} model using the entire data set, $\theta_{\rm v}$ is constrained to $0.53^{+0.01}_{-0.01}$ rad at the $68.3\%$ credible level (the other parameters of the afterglow model are presented in Table~\ref{tb:Ag_par} and Figure~\ref{fig:posterior}), which is consistent with the results obtained by \citet{2021MNRAS.502.1843N}. Using the same boosted fireball model and a similar data set, \citet{2022ApJ...927L..17H} instead found $\theta_{\rm v}=0.44^{+0.01}_{-0.01}$ rad. Such a difference is mainly due to the fixed $n_{0,0}$, $\gamma_B$, and $\epsilon_e$ in the afterglow modeling of \citet{2022ApJ...927L..17H}. With {\tt Afterglowpy}, the Gaussian structured jet model (the posterior results are shown in Figure~\ref{fig:posterior}) gives $\theta_{\rm v}=0.51^{+0.01}_{-0.02}$ rad. For the {\tt JetFit} and the {\tt Afterglowpy} Gaussian structured jet models, the ${\rm ln}Z$ values are 594 and 562, respectively. In comparison to the {\tt Afterglowpy} Gaussian jet model, the Power-Law and Top-Hat models cannot fit the observations well, with ${\rm ln}Z = 546$ ($\theta_{\rm v}=0.46^{+0.01}_{-0.01}$ rad) and 426 ($\theta_{\rm v}=0.50^{+0.03}_{-0.04}$ rad), respectively. All of the posterior distributions of these scenarios are presented in Appendix~\ref{sec:6}. Therefore, we only display the best-fit light curves for the {\tt JetFit} and Gaussian structured jet models in Figure~\ref{fig:LC}. We would also like to comment on the role of the kilonova afterglow component. Though the inclusion of this component can improve the goodness of fit, its existence cannot be convincingly established in the {\tt JetFit} model, for which the enhancement of $\ln Z$ is just $\approx 4$. In the {\tt Afterglowpy} Gaussian jet model, the kilonova afterglow is more prominent and dominates the emission at $t\geq 700$ days.
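The model comparison above can be condensed into ln-Bayes-factors relative to the best fit; a small sketch using the $\ln Z$ values quoted in this section (the dictionary keys are shorthand labels):

```python
# ln-evidence values quoted above for the four fits.
lnz = {
    "JetFit":        594.0,
    "Gaussian jet":  562.0,
    "Power-Law jet": 546.0,
    "Top-Hat jet":   426.0,
}
best = max(lnz, key=lnz.get)
for name, z in sorted(lnz.items(), key=lambda kv: -kv[1]):
    # ln Bayes factor of the best model over this one; values of a few
    # or more are conventionally considered decisive.
    print(f"{name:13s}: Delta lnZ = {lnz[best] - z:5.1f}")
```

The gaps of $\gtrsim 30$ in $\ln Z$ correspond to overwhelming odds in favor of the {\tt JetFit} model, which motivates adopting it as the fiducial approach.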
The corresponding posterior distribution is presented in Appendix~\ref{sec:6} (the estimated parameters of the kilonova afterglow do not converge well if we use the {\tt JetFit} model for the GRB afterglow). Motivated by its highest $\ln Z$ value, in this work we adopt the {\tt JetFit} model as our fiducial approach. Nevertheless, the rather similar $\theta_{\rm v}$ inferred in the above diverse approaches/models consistently favors a large viewing angle of $\approx 0.5$ rad. Another point worth mentioning is that the explosion energy we obtained is larger than that estimated in several early works \citep[e.g.,][]{2019ApJ...870L..15L,2019MNRAS.485.2155L,2019MNRAS.489.1919T,2020ApJ...896..166R}, but is close to some recent evaluations \citep[e.g.,][]{2021MNRAS.502.1843N,2022ApJ...927L..17H}. Such a high energy supports the claim that GRB 170817A would be one of the brightest short events detected so far \citep{DuanKK2019}. The inferred large $\theta_{\rm v}$ points towards a high GRB/GW association rate and hence a promising multi-messenger detection prospect for double neutron star mergers in the future \citep{2018ApJ...857..128J}. Besides, we do not find evidence for a sizeable deviation of the model prediction from the data, which suggests that there is no sign of continual energy injection from the central engine. Hence, this favors a black hole rather than a neutron star central engine for GRB 170817A \cite[see also][for an independent argument]{2022arXiv220713613H}. Then, we use the posterior distribution of the viewing angle, $\theta_{\rm v}=0.53^{+0.01}_{-0.01} \, \rm rad$, obtained in the {\tt JetFit} model ($\theta_{\rm v}=0.51^{+0.01}_{-0.02}\, \rm rad$ for the Gaussian jet model) as the prior of the inclination angle in the GW data analysis. The full Bayesian inference on GW170817 limits $d_{\rm L}$ to $40.67^{+1.11}_{-1.03}\,\rm Mpc$ ($d_{\rm L}=41.09^{+1.19}_{-1.05}\,\rm Mpc$ when using the Gaussian structured jet) at the $68.3\%$ confidence level.
The estimations of the other GW parameters are shown in Table~\ref{tb:GW_par}. Because of the breaking of the degeneracy between $\iota$ and $d_{\rm L}$, the GW analysis gives about $5.6\%$ uncertainty on the Hubble constant and places $H_0$ close to the SH0ES result, i.e., $H_0=72.57^{+4.09}_{-4.17}\, \rm km\, s^{-1}\, Mpc^{-1}$ ($H_0=71.80^{+4.15}_{-4.07}\, \rm km\, s^{-1}\, Mpc^{-1}$ when using the Gaussian jet model) at the $68.3\%$ credible level (see Figure~\ref{fig:H0}). If we inherit both the estimated $d_{\rm L}$ and $\theta_{\rm v}$ from the afterglow fitting as the prior of the GW analysis, we would have $H_0=75.52^{+4.09}_{-4.04}\, \rm km\, s^{-1}\, Mpc^{-1}$ ($H_0=73.53^{+4.00}_{-3.96}\, \rm km\, s^{-1}\, Mpc^{-1}$) for the {\tt JetFit} (Gaussian) model, with which the difference from the Planck $H_0$ measurement is more distinct. One thing that should be mentioned is that if we do not take into account the viewing-angle constraint, i.e., $0.25<\theta_{\rm v} \bigl(\frac{d_{\rm L}}{41 \rm Mpc} \bigr)<0.5$ rad \citep{2018Natur.554..207M, 2019NatAs...3..940H}, in the afterglow modeling, $\theta_{\rm v}$ can still be well constrained but prefers a larger value ($\sim 0.6 \rm \, rad$ in the {\tt JetFit} model) and hence a larger $d_{\rm L}$. Correspondingly, in comparison to the results considering the VLBI constraint, we would obtain a larger value, $H_0=74.36^{+4.44}_{-4.32}\, \rm km\, s^{-1}\, Mpc^{-1}$, and the difference from the Planck $H_0$ measurement would again be more distinct. \begin{table*}[ht!] 
\begin{ruledtabular} \centering \caption{Prior distributions and posterior results of the parameters for GW170817} \label{tb:GW_par} \begin{tabular}{lccc} Names &Parameters &Priors of parameter inference &Posterior results\textsuperscript{b}\\ \hline Chirp mass &$\mathcal{M}/M_{\odot}$ &Uniform(0.4, 4.4) &$1.1976^{+0.0001}_{-0.0001}$\\ Mass ratio &$q$ &Uniform(0.125, 1.0) &$0.76^{+0.16}_{-0.17}$\\ Aligned spin &$\chi_{1,2}$ &AlignedSpin\textsuperscript{a} &$0.01^{+0.12}_{-0.09}~\&~0.02^{+0.16}_{-0.14}$\\ Polarization of GW &$\psi$ &Uniform(0, 2$\pi$) &$1.60^{+1.04}_{-1.16}$\\ Coalescence time &$t_{\rm c}/\rm s$ &1187008882.42 &-\\ Coalescence phase &$\phi_{\rm c}$ &Uniform(0, 2$\pi$) &Marginalized \\ Right ascension &$\alpha$ &3.44616 &-\\ Declination &$\delta$ &-0.408084 &- \\ Tidal deformability &$\Lambda_{1,2}$ &Uniform(0,5000) &$204^{+342}_{-147}~\&~590^{+752}_{-344}$\\ Inclination angle &$\iota/\rm rad$ &Constrained by afterglow model analysis &$2.61^{+0.01}_{-0.01}$\\ Luminosity distance &$d_{\rm L}/\rm Mpc$ &Gaussian($\mu=40.7, \sigma=2.36$) &$40.67^{+1.11}_{-1.03}$\\ \end{tabular} \begin{tablenotes} \item[a] \textsuperscript{a} The spin component projected onto the orbital angular momentum follows the distribution described in Equation (A7) of \citet{2018arXiv180510457L} with $\chi_{\rm max}$ = 0.89, and the spin tilt angle is taken to be aligned. \item[b] \textsuperscript{b} The posterior results are at the $68.3\%$ credible level. \end{tablenotes} \end{ruledtabular} \end{table*} \section{Conclusion}\label{sec:5} For a GW event accompanied by an EM counterpart, $H_0$ can be estimated by the standard siren method. This method utilizes the redshift information from a confirmed host galaxy and the posterior distribution of the luminosity distance from the GW data analysis to obtain the $H_0$ value following Hubble's law. The bright BNS merger event GW170817 has been identified in NGC 4993, so the Hubble flow velocity can be measured.
After fitting the multi-wavelength light curves with the model developed by \citet{2018ApJ...869...55W}, the viewing angle $\theta_{\rm v}$ is constrained to $0.53^{+0.01}_{-0.01} \, \rm rad$ ($\theta_{\rm v}=0.51^{+0.01}_{-0.02}\, \rm rad$ for the Gaussian jet model), which partially breaks the degeneracy between $\iota$ and $d_{\rm L}$ and improves the accuracy of the $d_{\rm L}$ estimation in the GW data analysis. Therefore, we finally obtain an estimate of the Hubble constant of $H_0=72.57^{+4.09}_{-4.17}\, \rm km\, s^{-1}\, Mpc^{-1}$ ($H_0=71.80^{+4.15}_{-4.07}\, \rm km\, s^{-1}\, Mpc^{-1}$) from GW170817/GRB 170817A, which is more consistent with the SH0ES result than with the CMB result. However, the uncertainty is still too large to confirm the Hubble tension. In our modeling, the possible kilonova afterglow component has been taken into account. In the {\tt JetFit} model, the contribution of such a new component is not significant. In the {\tt Afterglowpy} Gaussian structured jet model, the kilonova afterglow is more prominent and dominates the observed flux at $t\geq 700$ days. This distinction most likely arises from the different treatments of the sideways expansion. {\tt Afterglowpy} assumes the jet spreads sideways at the sound speed \citep{2020ApJ...896..166R}, while the {\tt JetFit} model adopts a slower speed based on relativistic hydrodynamical jet simulations. Therefore, in the {\tt JetFit} scenario, the deceleration of the GRB ejecta is less prominent than in the case of {\tt Afterglowpy} \citep{2003ApJ...591.1075K,2012ApJ...749...44V,2018ApJ...869...55W}. Consequently, the decline of the GRB afterglow emission is shallower for {\tt JetFit} and the contribution of the kilonova afterglow component is suppressed. Therefore, more data are needed to convincingly establish the presence of a kilonova afterglow. With the improvement of multi-band telescopes and the upgrade of GW interferometers, more neutron star mergers are expected to be detected with multi-messenger observations.
\citet{2018Natur.562..545C} predicted that the $H_0$ measurement will reach two percent precision within five years based on the standard siren method. Moreover, the combination of the posterior distributions of $N$ bright GW events can reduce the error of $H_0$ to $\sigma_{H_0}/\sqrt{N}$, where $\sigma_{H_0}$ is the typical width of a single $H_0$ measurement. As shown in \citet{2020MNRAS.493.1633S} and \citet{2022MNRAS.513.4159P}, the number of multi-messenger detections of BNS mergers will be in the single digits during O4/O5, because a large $d_{\rm L}$ restricts the detection of the GW signal while a large $\theta_{\rm v}$ restricts the GRB detection. Since GW170817 is very close to us, its off-axis afterglow emission remains detectable for quite a few years; however, this is not expected to be the case for more distant events. Fortunately, the afterglow emission would be much brighter for on-axis events. More importantly, the jet opening angle, as well as its uncertainty, can be reliably estimated, with which both $d_{\rm L}$ and $H_0$ can be well constrained ($\sim 3\%$ accuracy) even with a single BNS merger \citep{2022PhRvD.106b3011W}. In view of the above facts, we conclude that a more precise $H_0$ measurement is expected with GW standard sirens in the near future, and the Hubble tension will then be credibly clarified. \begin{acknowledgements} We thank the anonymous referee for very helpful comments and suggestions. Y.Y. Wang thanks S.J. Gao for the help in improving the computational efficiency of the codes and for the valuable suggestions. This work was supported in part by NSFC under Grants No. 11921003, No. 12225305 and No. 12233011. This research has made use of data and software obtained from the Gravitational Wave Open Science Center \url{https://www.gw-openscience.org}, a service of LIGO Laboratory, the LIGO Scientific Collaboration, and the Virgo Collaboration. LIGO is funded by the U.S. National Science Foundation. 
Virgo is funded by the French Centre National de Recherche Scientifique (CNRS), the Italian Istituto Nazionale della Fisica Nucleare (INFN), and the Dutch Nikhef, with contributions by Polish and Hungarian institutes. $Software:$ {\tt Afterglowpy} (\citet{2020ApJ...896..166R}, \url{https://pypi.org/project/afterglowpy/}), {\tt JetFit} (\citet{2018ApJ...869...55W}, \url{https://github.com/NYU-CAL/JetFit}), {\tt Bilby} (\citet{2019ApJS..241...27A}, version 1.0.4, \url{https://git.ligo.org/lscsoft/bilby/}), {\tt Pymultinest} (\citet{2016ascl.soft06005B}, version 2.11, \url{https://pypi.org/project/pymultinest/}), {\tt Dynesty} (\citet{2020MNRAS.493.3132S}, version 1.1, \url{https://dynesty.readthedocs.io/en/latest/}). \end{acknowledgements} \appendix \section{The posterior distributions of the GRB and kilonova afterglow parameters}\label{sec:6} Here, we present the posterior distributions of the GRB and kilonova afterglow parameters mentioned in Section~\ref{sec:4}. As a supplement to the previous discussions, Figure~\ref{fig:app} shows four scenarios: the {\tt JetFit} model without the constraint of the superluminal motion, the Gaussian structured jet model without lateral spreading, the Gaussian structured jet model fitted to the data of the first 200 days only, and the Power-Law structured jet. The first scenario yields a high $\ln Z$ (the same as the {\tt JetFit} model with the constraint of the superluminal motion) but would predict an even higher $H_0$ because of the suggested $\theta_{\rm v}\sim 0.6$ rad. The last three scenarios have significantly lower $\ln Z$ (reported in Figure~\ref{fig:app}) because of their poorer fits to the data. In Figure~\ref{fig:app2}, we plot the posterior distributions of the parameters of the kilonova afterglow modeling displayed in Figure~\ref{fig:LC}. 
The spherical cocoon model, which approximately describes the evolution of the kilonova afterglow, specifies that the energy-velocity distribution follows a power law, $E(u)=E_0 ({u}/{u_{\rm max}})^{-k}$, where $u$ is the dimensionless four-velocity, ranging within $(u_{\rm min}, u_{\rm max})$. In this framework, the shock driven by the kilonova blast wave is refreshed by the slower, still-coasting material as the shock decelerates. \clearpage \bibliography{ref} \bibliographystyle{aasjournal}
Title: Discovery of Peculiar Radio Morphologies with ASKAP using Unsupervised Machine Learning
Abstract: We present a set of peculiar radio sources detected using an unsupervised machine learning method. We use data from the Australian Square Kilometre Array Pathfinder (ASKAP) telescope to train a self-organizing map (SOM). The radio maps from three ASKAP surveys, the Evolutionary Map of the Universe pilot survey (EMU-PS), the Deep Investigation of Neutral Gas Origins pilot survey (DINGO) and the Survey With ASKAP of GAMA-09 + X-ray (SWAG-X), are used to search for the rarest or unknown radio morphologies. We use an extension of the SOM algorithm that implements rotation and flipping invariance on astronomical sources. The SOM is trained using the images of all "complex" radio sources in the EMU-PS, which we define as all sources catalogued as "multi-component". The trained SOM is then used to estimate a similarity score for complex sources in all surveys. We select the 0.5\% of sources that are most complex according to the similarity metric, and visually examine them to find the rarest radio morphologies. Among these, we find two new odd radio circle (ORC) candidates and five other peculiar morphologies. We discuss multiwavelength properties and the optical/infrared counterparts of selected peculiar sources. In addition, we present examples of conventional radio morphologies including diffuse emission from galaxy clusters, and resolved, bent-tailed, and FR-I and FR-II type radio galaxies. We discuss the overdense environment that may be the reason behind the circular shape of the ORC candidates.
https://export.arxiv.org/pdf/2208.13997
\sloppy\sloppypar\raggedbottom\frenchspacing \section{Introduction} \label{SEC:Intro} The next generation of large and deep continuum radio surveys will produce catalogues containing millions of radio sources. This will have both a huge impact on our understanding of the evolution of galaxies and a large potential for new discoveries. The majority of these surveys will use advanced radio interferometers, including the Australian Square Kilometre Array Pathfinder \citep[ASKAP:][]{johnston07ASKAP,DeBoer09,hotan21}, the Murchison Widefield Array \citep[MWA:][]{tingay13,wayth18}, MeerKAT \citep{jonas16}, the Low Frequency Array \citep[LOFAR:][]{vanharleem13} and the Karl G. Jansky Very Large Array \citep[JVLA:][]{perley11}. These instruments have already shown their capability to survey hundreds of square degrees of radio sky at unprecedented depths. Capturing the full potential of these surveys requires transforming the data analysis and interpretation techniques. Historically, the greatest scientific discoveries with major telescopes have been serendipitous, lying beyond the original goals of the experiment \citep[][]{norris15}. \cite{ekers09} finds that in the last 60 years, only seven out of 18 major astronomical discoveries were planned. Currently, existing methods to make unexpected discoveries are primarily powered by human inspection, which is not expected to scale up to the massive data volumes of this decade. Without redesigning the search efforts, several unknown radio phenomena may take years to be found, or may never be found. In recent years, machine learning has emerged as a powerful tool to model highly non-linear data. Depending on the availability of data, machine learning can be performed in a supervised or unsupervised manner. In supervised learning, the model is trained on several examples of input-output pairs. Such a model trained with truth labels is then used to estimate the output from a given input.
Recently, these machine learning models have shown encouraging results when used to classify radio source morphologies \citep[e.g.][]{lukic18, alger18,wu19,viera21}. However, these models cannot be applied in their current form without training labels. With multi-million radio detections expected in future surveys, labelling a large training dataset is both expensive and time consuming, making it more pertinent to invest in unsupervised learning techniques. In the present work, we use a self-organizing map \citep[SOM;][]{kohonen82} that does not require truth labels and focuses on the recognition of structure in a dataset. SOMs have previously been used to classify radio morphologies \citep[e.g.][]{ralph19,galvin19,galvin20} and very recently to find some of the rarest radio morphologies \citep[][]{mostert21}. Following these previous studies, we use an implementation of the SOM that is invariant to affine transformations, e.g. rotation, flipping and scaling, of a radio galaxy. We train a SOM using a catalogue of ``complex'' (defined here as all multi-component) sources in ASKAP's Evolutionary Map of the Universe pilot survey \citep[EMU-PS;][]{norris11, norris21}. The trained SOM is then used to find the most unusual radio sources. We derive a similarity metric for complex sources in EMU-PS as well as the pilot phase of the Deep Investigation of Neutral Gas Origins survey (DINGO\footnote{https://dingo-survey.org/}) and the Survey With ASKAP of GAMA-09 + X-ray \citep[SWAG-X;][]{moss22prep}. Based on this similarity metric score, we visually inspect the sources with the top 0.5\% most complex radio morphologies, and we present the rarest radio morphologies among them. Among these are peculiar morphologies with unusual radio structures and no corresponding diffuse emission at optical wavelengths.
We briefly discuss some of these peculiar sources in the present paper and note that future work should study them in more detail to understand the unconventional physical mechanisms behind their formation. In addition, the rest of the top 0.5\% complex sources have conventional radio morphologies with known formation mechanisms. We present a few examples of these sources as well. The paper is structured as follows. In Section~\ref{SEC:Observations}, we describe the ASKAP observations and other multiwavelength datasets we used. Section~\ref{SEC:method} is dedicated to the methods, including data pre-processing, a description of SOMs, details about the network training, and the procedure to select peculiar sources. In Section~\ref{SEC:results}, we present a multiwavelength view of peculiar radio sources and examples of conventional sources. In Section~\ref{SEC:Discussion}, we discuss the overdensity of galaxies near the circular radio sources. We summarise our findings in Section~\ref{SEC:Summary} and provide directions for future work. Throughout this paper, we assume a flat $\Lambda$CDM cosmology based on \cite[][]{planck18-1} with $H_0=67.5~\mathrm{km\,s^{-1}\,Mpc^{-1}}$ and $\Omega_{m}=0.315$. \section{Observations} \label{SEC:Observations} In this section we describe the radio, infrared and optical observations we used. \subsection{ASKAP Observations} \label{SEC:ASKAP} ASKAP is a radio telescope located at the Murchison Radio-astronomy Observatory (MRO). The telescope is equipped with phased array feed \citep[PAF:][]{hay06} technology that enables a high survey speed by virtue of a wide instantaneous field of view. ASKAP has 36 antennas with a range of baselines. Most of these are located within a region of 2.3 km diameter, with the outer six extending the baselines up to 6.4 km \citep{hotan21}.
ASKAP has recently completed the first all-sky Rapid ASKAP Continuum Survey \citep[RACS:][]{McConnell20} covering the entire sky south of Declination $+41^{\circ}$ to a median RMS of about 250 $\mu$Jy/beam. This has paved the way for subsequent deeper surveys using ASKAP. One such survey is the Evolutionary Map of the Universe \citep[EMU;][]{norris11}, which is planned to observe the entire Southern Sky and is expected to produce a catalogue of as many as 40 million sources\footnote{Forecast based on the allocated time for the EMU 5-year survey program (see https://www.atnf.csiro.au/projects/askap/commissioning\_update.html).}. Proceeding in this direction, the EMU Pilot Survey \citep[EMU-PS:][]{norris21} was completed in late 2019. The EMU-PS covers 270 deg$^2$ of sky with $301^{\circ}< {\rm RA} < 336^{\circ}$ and $-63^{\circ}< {\rm Dec} < -48^{\circ}$. It consists of 10 tiles with a total integration time of $\sim 10$ hours each, reaching an RMS sensitivity of $25-35~\mu$Jy/beam and a beamwidth of $13^{\prime\prime} \times 11^{\prime\prime}$ FWHM. The operating frequency of EMU-PS is between 800 and 1088 MHz, centred at 944 MHz. The raw data were processed using the ASKAPsoft pipeline \citep[][]{whiting17,norris21}. As the survey data consist of ten overlapping tiles, value-added processing was performed to produce a unified image and source catalogue. This includes merging of tiles by performing the weighted average of the data in overlapping regions and convolving the unified image to a common restoring beam size of $18^{\prime\prime}$ FWHM to overcome the variations in point spread function (PSF) from beam to beam \citep[][]{norris21}. A catalogue of islands and components is then constructed by running the ``Selavy'' source finder \citep[e.g.][]{whiting12} on the convolved image. This catalogue contains 220,102 components with 81.3\% simple (or single-component) and 18.7\% complex (multi-component) sources.
As the main goal of the present work is to find a way to streamline the search for new peculiar radio sources, we have limited our analysis to the 41,181 components of complex sources in the catalogue. The second survey used here is the Deep Investigation of Neutral Gas Origins pilot survey (DINGO\footnote{https://dingo-survey.org/}). DINGO aims to provide a legacy of deep HI observations out to redshift $z\sim0.4$. The key science goals of DINGO are to study the evolution of the cosmic HI density and the evolution of galaxies \citep[][]{meyer09}. The central frequency of the survey is 1367 MHz. In the present work, we use 11 DINGO tiles publicly available from the CSIRO ASKAP Science Data Archive (CASDA\footnote{https://research.csiro.au/casda/}). Each tile has a total integration time of $\sim 8$ hours except for two tiles with $\sim 6$ hours of integration. The average beamwidth of the survey is $10^{\prime\prime} \times 6^{\prime\prime}$ FWHM. Each tile was processed using ASKAPsoft with standard continuum settings. Seven tiles with Scheduling Block IDs (SBIDs) 10991, 10994, 11000, 11003, 11006, 11010 and 11026 cover the same sky region with $338^{\circ}< {\rm RA} < 346^{\circ}$ and $-36^{\circ}< {\rm Dec} < -29^{\circ}$. These tiles have RMS sensitivities between 49 and $64~\mu$Jy/beam. Weighting the individual tiles in proportion to $1/\rm RMS^2$, we generate an averaged map from these tiles with a final RMS sensitivity near $21~\mu$Jy/beam. In the same way, tiles with SBIDs 14109 and 14136, covering the area of $217^{\circ}< {\rm RA} < 223^{\circ}$ and $-3^{\circ}< {\rm Dec} < +4^{\circ}$, are combined to obtain a second averaged map with a final RMS noise of $40~\mu$Jy/beam. A third averaged map is generated by combining SBIDs 14055 and 14082, covering the area of $211^{\circ}< {\rm RA} < 218^{\circ}$ and $-3^{\circ}< {\rm Dec} < +4^{\circ}$, with a resultant RMS noise of $37~\mu$Jy/beam. Source catalogues are publicly available at CASDA for each of the 11 tiles.
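The inverse-variance tile averaging described above can be sketched in a few lines of Python (a minimal illustration, not the ASKAPsoft pipeline; function and variable names are ours, and the per-tile RMS values are only indicative):

```python
import numpy as np

def combine_tiles(tiles, rms):
    """Inverse-variance (1/RMS^2) weighted average of co-aligned tiles."""
    w = 1.0 / np.asarray(rms, dtype=float) ** 2
    averaged = np.average(tiles, axis=0, weights=w)
    combined_rms = 1.0 / np.sqrt(w.sum())   # noise of the weighted mean
    return averaged, combined_rms

# Seven DINGO-like tiles with per-tile RMS of 49-64 uJy/beam (illustrative values)
rms = [49.0, 52.0, 55.0, 56.0, 58.0, 60.0, 64.0]
tiles = [np.random.normal(0.0, s, (64, 64)) for s in rms]
averaged, sigma = combine_tiles(tiles, rms)
print(round(sigma, 1))  # → 21.1, consistent with the ~21 uJy/beam quoted above
```

The combined noise follows $1/\sqrt{\sum_i \sigma_i^{-2}}$, so seven tiles of roughly $56~\mu$Jy/beam each average down to about $21~\mu$Jy/beam.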
In this analysis we use the three catalogues that correspond to the three tiles with the lowest RMS noise in that sky area. We then combine these three catalogues by removing duplicate sources in the overlapping regions. The final catalogue has a total of 34,705 components, of which 3,841 are complex source components. We use the source positions given in the catalogues to make cutouts from the averaged maps. Another ASKAP survey used in the present work is the Survey With ASKAP of GAMA-09 + X-ray (SWAG-X) which, as the name suggests, is designed to cover the GAMA\footnote{http://www.gama-survey.org/} and eROSITA\footnote{https://www.mpe.mpg.de/eROSITA} Final Equatorial-Depth Survey \citep[eFEDS;][]{brunner21} fields. This survey comprises 13 ASKAP tiles (publicly available at CASDA) for complete coverage of the eFEDS region, with $\sim 8$ hours of integration per tile. Similar to EMU-PS and DINGO, each tile is processed using ASKAPsoft. The average beamwidth of the survey is $14^{\prime\prime} \times 12^{\prime\prime}$ FWHM. The frequency band of the survey is centred at 888 MHz. The RMS noise of these 13 tiles ranges from 49 to $64~\mu$Jy/beam. These tiles cover the sky area with $126^{\circ}< {\rm RA} < 146^{\circ}$ and $-5^{\circ}< {\rm Dec} < +8^{\circ}$. We generated six averaged maps by combining 2-3 tiles for each map, again using weights proportional to $1/\rm RMS^2$. The tile SBIDs used for making the averaged maps are: 10132 and 20875; 10108 and 20931; 10123 and 10475; 10135 and 20132; 10126 and 21021; 10137, 10129 and 10486. The resultant RMS noise of these six averaged maps is between 32 and $36~\mu$Jy/beam. We use the six catalogues corresponding to the tiles with the lowest RMS noise in the same sky region. We then combine these six catalogues by removing duplicate sources in the overlapping regions. The combined catalogue has 145,011 components with 21,324 complex source components.
As mentioned before, our analysis in this paper is limited to the complex sources from all three ASKAP surveys. We use EMU-PS for training the ML model, and the other two surveys are used for inference with the trained model. Note that the source catalogues used to obtain the positions of radio sources in the SWAG-X and DINGO surveys are from the individual tiles. However, we use the averaged maps instead of the individual tiles to make image cutouts at the positions of these radio sources. These cutouts are then used to find the peculiar sources using the trained ML model and for the figures in the present work. Due to the lower noise in the averaged maps, it is possible that the complex radio sources detected in the individual tiles have a higher signal-to-noise ratio in the averaged maps. Here we assume that these catalogues contain all the top peculiar complex sources that are detectable by our ML method. Future work should verify this by creating source catalogues from the averaged images, which is beyond the scope of the current work, which focuses on developing the ML method from the available catalogues. \subsection{Infrared and Optical data} \label{SEC:DES_SDSS} We use the photometric data available for the ASKAP survey regions to identify the infrared and optical sources in the region of the circular and peculiar radio objects presented here. The Wide-field Infrared Survey Explorer \citep[WISE;][]{wright10} is an all-sky infrared survey observed in the W1, W2, W3 and W4 bands that correspond to 3.4, 4.6, 12 and 22 $\mu$m wavelengths. In this study, we use only the W1 band from AllWISE \citep[AllWISE;][]{cutri13}, which has a 5$\sigma$ point source sensitivity of 28 $\mu$Jy.
The optical data were taken from the publicly available 9th data release of the Dark Energy Spectroscopic Instrument's Legacy Imaging Surveys \citep[DESI LS DR9\footnote{https://www.legacysurvey.org/dr9/};][]{schlegel21}, the Science Archive Server of the Sloan Digital Sky Survey \citep[SDSS;][]{alam15} and the Dark Energy Survey \citep[DES;][]{abbott18}. Unless specified otherwise, we report photometric redshifts from the counterparts in DESI LS DR9 throughout this paper. \section{Method} \label{SEC:method} The first crucial step when fitting a machine learning model is to pre-process the data and make it suitable for the machine. In this section, we describe the pre-processing procedure as well as the machine learning technique used here. \subsection{Data Pre-processing} \label{SEC:preprocessing} The most important aspect of machine learning is the quality of the data used to train models. The high sensitivity of ASKAP surveys poses additional challenges for data pre-processing due to the large source density in survey images. We design the following pre-processing scheme to enhance useful features in the radio images: \begin{itemize} \item We create cutouts from the survey images at the positions of all components of complex radio sources. We chose a cutout size of $5^{\prime} \times 5^{\prime}$ as only 11 sources in EMU-PS (i.e. 1 in $\sim 20,000$) have a size greater than $5^{\prime}$ \citep[][]{yew22prep}. This gives us a $150\times 150$ pixel image with a pixel size of $2^{\prime \prime}$. One such cutout is shown in the left panel of Figure~\ref{FIG:Preprocessing}. This map has a faint double-lobed radio source in the centre with a low signal-to-noise ratio. \item We estimate the noise in each cutout. This is done by first measuring the Median Absolute Deviation (MAD) of the pixel values. Two rounds of data clipping are then applied to remove outlying pixels. The outlier threshold is chosen at $3\times$MAD.
The noise is then estimated as the standard deviation of the clipped distribution. The second panel of Figure~\ref{FIG:Preprocessing} shows the full and clipped distributions of image pixels in blue (filled) and orange (dashed) colours. \item We perform an island segmentation for each cutout by generating masks of island sources with pixel values greater than 3$\sigma$. Here $\sigma$ is defined as the standard deviation of the clipped distribution. At the positions of these masks, we convert the pixel values to a logarithmic scale and perform Min-Max normalisation, which enhances the signal on the scales of the islands. The pixel values of the rest of the image are set to zero, and the Min-Max normalisation of the segmented regions rescales the image to the range 0 to 1. In the resultant image, shown in the third panel of Figure~\ref{FIG:Preprocessing}, the source density is moderately high, and some of the islands may just be noise fluctuations or artefacts. \item To overcome this issue we impose a threshold on the number of pixels that constitute an island in the image. This means that we keep only those islands for which the signal is distributed over a large number of pixels. After some tests and visual inspections we set the minimum size for an island to 60 pixels. This threshold removes most of the noise fluctuations from the maps. Note that this limit may also remove some point sources. However, this does not affect our analysis, as the purpose of this study is to discover the most peculiar complex sources. The final pre-processed radio image is shown in the right panel of Figure~\ref{FIG:Preprocessing}. \end{itemize} \subsection{Self Organizing Map} \label{SEC:SOM} A self-organizing map \citep[SOM;][]{kohonen82} is a neural network that provides an efficient way to understand high-dimensional data. The neural network constructs a representative feature map of the training dataset.
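The MAD-based noise estimate with iterative clipping from the pre-processing list above can be sketched as follows (a simplified stand-alone version; the function name is ours):

```python
import numpy as np

def estimate_noise(cutout, k=3.0, rounds=2):
    """Estimate image noise: clip outliers beyond k*MAD, twice, then take
    the standard deviation of the clipped pixel distribution."""
    pix = np.asarray(cutout, dtype=float).ravel()
    for _ in range(rounds):
        med = np.median(pix)
        mad = np.median(np.abs(pix - med))
        pix = pix[np.abs(pix - med) <= k * mad]  # drop bright source/artefact pixels
    return pix.std()
```

Bright source pixels lie far beyond $3\times$MAD of the background, so they are removed in the first clipping round and the surviving pixels trace the background noise.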
This can be used for the tasks of dimensionality reduction and to display similarities among data sets. A SOM learns in an unsupervised manner and does not require a target vector for the dataset. This is important for our task, as the radio sources that we aim to find are unknown objects. An advantage of using a SOM over other unsupervised architectures is the topologically preserved mapping from the input to the output space. This is important to retain the spatial information of astronomical images. The basic unit of the SOM is a neuron $n$. A total of $N$ neurons are organized in an input layer and are connected to an output feature map. These connections have associated weights $w$ that are randomly initialised. During training, data are provided to the input layer and the extracted features are propagated to the output map. The output map has the form of a lattice or grid where each neuron is placed at a position $p$. Each neuron in the lattice competes with the others to win every subject in the dataset. For a training iteration $i$, a subject $d$ from the dataset $D$ is selected to compute a similarity measure $S(d,w_p)$ with respect to a neuron with prototype weights $w_p$. The winning neuron for $d$ is its Best Matching Unit (BMU), whose position is denoted $k$. Following this, the prototype weights of the BMU and neighbouring neurons are updated as \BE w_p^{\prime} = w_p + (\phi(d)-w_p) \times G(p,k) \times L(i), \label{EQ:weights} \EE where $w_p^{\prime}$ is the updated weight. The term $(\phi(d)-w_p)$ is required to spatially align $d$ onto $w_p$. $G(p,k)$ is the neighbourhood function, parametrised as a Gaussian, that controls the propagation of weight updates to neighbouring neurons. In principle, the neighbouring neurons of a BMU get smaller updates, and the amount depends upon the separation between $k$ and $p$ as well as the chosen width $\sigma_G$ of the Gaussian. $L(i)$ is the learning rate that further controls the weight updates for each iteration.
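As a concrete illustration of the update rule in Equation~(\ref{EQ:weights}), the following sketch updates a lattice of prototype weights for one training sample (our own minimal NumPy version, not the PINK code; the aligned input $\phi(d)$ is assumed to be pre-computed):

```python
import numpy as np

def som_update(weights, phi_d, bmu, sigma_g, lr):
    """One update w' = w + (phi(d) - w) * G(p, k) * L(i) over the whole lattice.

    weights: (H, W, n_features) prototypes; phi_d: aligned input, (n_features,);
    bmu: (row, col) of the best matching unit k; sigma_g: Gaussian width; lr: L(i).
    """
    H, W, _ = weights.shape
    rows, cols = np.mgrid[0:H, 0:W]
    dist2 = (rows - bmu[0]) ** 2 + (cols - bmu[1]) ** 2
    g = np.exp(-dist2 / (2.0 * sigma_g ** 2))   # Gaussian neighbourhood G(p, k)
    return weights + (phi_d - weights) * g[..., None] * lr
```

The BMU itself receives the full update $L(i)\,(\phi(d)-w)$, while neurons farther away in the lattice receive exponentially smaller corrections.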
SOMs have been used previously in astronomy for the classification of light curves and the clustering of gigahertz-peaked sources \citep[e.g.][]{brett04,torniainen08}. More recently, SOMs have been used for the estimation of photometric redshifts in large surveys \citep[e.g.][]{geach12,wright20}. These datasets are in the form of catalogues of sources. The application of neural networks to astronomical image datasets requires that the method is invariant to affine transformations. Examples of such transformations include translation, scaling, flipping and rotation of images. This means that for sources in an image, e.g. double-lobed Active Galactic Nuclei (AGN), the algorithm should not be sensitive to their orientation in the sky. To approach a solution, \cite{ralph19} used a convolutional auto-encoder to reduce the impact of affine transformations for the classification of radio galaxies. Similarly, \cite{segal22} used auto-encoders to measure the complexity of radio galaxies. However, training a SOM using the compressed latent vector space of auto-encoders results in the loss of topological information. In a different approach, \cite{polsterer15} developed Parallelized rotation and flipping INvariant Kohonen maps (PINK) to incorporate transformational invariance into SOMs. \cite{galvin19} showed that PINK can be an ideal solution to break the degeneracy arising due to affine transformations without losing topological information. \cite{galvin20} further exploited PINK to classify different morphologies of radio sources using the Faint Images of the Radio-Sky at Twenty centimetres \citep[FIRST;][]{becker95}. Following this, we use PINK in this analysis to find rare and unusual radio morphologies.
PINK implements a modified Euclidean distance metric for similarity measure that can be written as \BE S(d, w_k) = \underset{\forall \phi \in \Phi}{{\rm minimize}(\phi)} \sqrt{\sum_{c=0}^C \sum_{x=0}^X \sum_{y=0}^Y \left(w_{k(c,x,y)} - \phi(d_{c,x,y}) \right)^2}, \label{EQ:similarity} \EE where $\phi$ is an affine transformation drawn from a set of $\Phi$ and is optimized to align an image to features in the BMU. This is propagated to update the neighbouring units. $C$ is the number of channels of an image. Here we use only one channel. $X$ and $Y$ define the pixel size of the image. This optimizes the search for transformation parameters to align $d$ to prototype weights $w_k$ of a SOM. \begin{table}[!ht] \centering \begin{center} \begin{tabular}{lccccc} \hline \hline \multicolumn{1}{c}{Stage} & \multicolumn{1}{c}{Iterations} & \multicolumn{1}{c}{Rotations} & \multicolumn{1}{c}{Increments} & \multicolumn{1}{c}{$\sigma_G$} & \multicolumn{1}{c}{$L$} \\ \hline 1 & 5 & 90 &$4^{\circ}$& 1.5 & 0.1 \\ 2 & 5 & 180 &$2^{\circ}$& 1.0 & 0.05 \\ 3 & 5 & 360 &$1^{\circ}$& 0.7 & 0.05 \\ 4 & 10 & 360 &$1^{\circ}$& 0.5 & 0.005\\ \hline \end{tabular} \end{center} \caption{Parameters for different stages of training. From left to right are the number of iterations, number of rotations, increment with each rotation, width of $G(p,k)$ and learning rate.} \label{TAB:TRAINING} \end{table} \subsection{Training} \label{SEC:Training} We construct a SOM in a Cartesian lattice space with $10\times 10$ neurons. Each neuron has a circular shape initialised with uniform random noise between 0 and 1. The circular shape preserves the entire region of the image against the affine transformations. This is an improvement over the previous versions of PINK with square shaped neurons which resulted in the loss of information in the outer regions due to image transformations \citep[e.g.][]{galvin19, galvin20}. 
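A simplified version of the transformation-minimised distance in Equation~(\ref{EQ:similarity}), for a single-channel image over a discrete set of rotations and a flip, could look like the following (a brute-force sketch only; PINK itself is a GPU-parallelised implementation):

```python
import numpy as np
from scipy.ndimage import rotate

def pink_similarity(d, w, n_rot=90):
    """Minimum Euclidean distance between prototype w and image d over
    n_rot rotations and a left-right flip (single channel, C = 1)."""
    best = np.inf
    for img in (d, np.fliplr(d)):                         # flipping invariance
        for ang in np.linspace(0.0, 360.0, n_rot, endpoint=False):
            phi_d = rotate(img, ang, reshape=False, order=1)  # rotation phi(d)
            best = min(best, np.sqrt(((w - phi_d) ** 2).sum()))
    return best
```

Since the identity transformation is in the search set, an image has (near-)zero distance to a prototype identical to itself under this metric.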
The SOM is trained in four stages with user-defined parameters outlined in Table~\ref{TAB:TRAINING}. In each stage, every subject in the dataset is passed through the network to update the prototype weights. Each full passage of the dataset through the network is called an iteration. The first three stages comprise five iterations of the dataset each, and the final stage has 10 iterations. Across all stages, a normalised Gaussian neighbouring function is used to update the weights of neighbouring neurons. The $1\sigma$ width of the Gaussian is reduced at every stage, with $\sigma_G = 1.5$ and 0.5 for the first and final stages, respectively. This helps in establishing a broad set of morphologies across the lattice in the first stage, and fine-tuning of the small-scale structure in the later stages. For the same reason, the first stage requires only a minimal set of rotations. Thus our first training stage has 90 rotations for each subject in the dataset, with increments of $4^{\circ}$. This is increased to 360 rotations in the final stage, with increments of $1^{\circ}$. The large learning rate and the large neighbouring function in the first stage allow the modification of many prototypes with each update. These are subsequently reduced to shrink the region of influence of each prototype weight in the later stages. Note that there are no formal convergence criteria for training a SOM, as the algorithm works in an unsupervised way. This makes the manual estimation of the training parameters an important aspect of our analysis. With a small learning rate, the SOM will take a long computational time to train. On the other hand, larger values result in unstable prototype updates. Similarly, a small neighbouring function decouples the neurons from each other, whereas a larger width results in the modification of more prototype weights.
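The four-stage schedule of Table~\ref{TAB:TRAINING} can be written down as a simple configuration (our own encoding; `som.update` stands in for a PINK-style alignment-and-update step and is hypothetical):

```python
# Four training stages: iterations, rotations, neighbourhood width, learning rate
STAGES = [
    dict(iters=5,  n_rot=90,  sigma_g=1.5, lr=0.1),    # broad morphologies
    dict(iters=5,  n_rot=180, sigma_g=1.0, lr=0.05),
    dict(iters=5,  n_rot=360, sigma_g=0.7, lr=0.05),
    dict(iters=10, n_rot=360, sigma_g=0.5, lr=0.005),  # fine-tune small scales
]

def train(som, dataset):
    """Pass the full dataset through the network once per iteration, per stage."""
    for stage in STAGES:
        for _ in range(stage["iters"]):
            for d in dataset:
                som.update(d, n_rot=stage["n_rot"],
                           sigma_g=stage["sigma_g"], lr=stage["lr"])
```

Shrinking $\sigma_G$ and the learning rate from stage to stage moves the lattice from coarse global organisation towards localised refinement.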
We converge on the training parameters for the four stages by experimenting with several possibilities and qualitative examination of the meaningful morphologies across the SOM lattice. We also train a SOM with $25\times 25$ neurons and find no difference in the detection of rare radio morphologies when compared with the SOM of $10\times 10$ neurons. In this analysis, the SOM is trained using the 41,181 components of complex radio sources from the EMU-PS catalogue. Each image is centred at the component position and has a cutout size of $150\times150$ pixels, amounting to a $5^{\prime}\times 5^{\prime}$ field of view. The training of the SOM is carried out on a cluster with 8 GPUs and 64 GB of memory for a total of $\sim 18$ hours. \begin{table*}[!ht] \centering \begin{center} \begin{tabular}{cccccc} \hline \hline Name & Integrated radio & RA (deg) & Dec (deg) & Survey & Reference \\ & flux density (mJy) \\ \hline \\ ORC J2102--6200 & 6.26 & 315.7429 & $-62.0044$ & ASKAP & \citet{norris21b} \\ & & & & (EMU-PS) & \\ ORC J2058--5736 & 6.97 & 314.6783 & $-57.6161$ & ASKAP & \citet{norris21b} \\ & & & & (EMU-PS) & \\ ORC J2058--5736 & 1.86 & 314.7346 & $-57.6153$ & ASKAP & \citet{norris21b} \\ & & & & (EMU-PS) & \\ ORC J1555+2726 & -- & 238.8527 & $+27.4427$ & GMRT & \citet{norris21b} \\ ORC J0102--2450 & 3.9 & 015.6016 & $-24.8442$ & ASKAP & \citet{koribalski21} \\ \hline \\ J084927.5--045721 & 228.5 & 132.3645 & $-4.956 $ & ASKAP & Present work \\ & & & & (SWAG-X) & \\ J222339.5--483449 & 17.2 & 335.9145 & $-48.5803$ & ASKAP & Present work \\ & & & & (EMU-PS) & \\ \hline \hline \end{tabular} \end{center} \caption{Previously known ORCs (top 5 rows) and ORC candidates from present work (bottom 2 rows).
From left to right we show: names based on the approximate centre of the diffuse emission, integrated radio flux densities, approximate geometrical centres of these systems, their parent surveys, and references.} \label{TAB:all-orcs} \end{table*} \subsection{Final SOM \& Selection of Rare Radio Morphologies} \label{SEC:selection} The final trained SOM is shown in Figure~\ref{FIG:SOM}. After four stages of training, the SOM shows meaningful radio morphologies. These morphologies include resolved radio lobes, extended structures bridged by diffuse emission, and more compact sources. The information attached to a neuron can be used to identify all subjects that share this neuron as their BMU. A properly trained SOM contains a representative neuron for each subject in the training dataset. Using this information, we map the image dataset onto the trained SOM to evaluate the similarity statistics. Figure~\ref{FIG:BMUcounts} shows the number counts of EMU-PS components for each of the neurons in the SOM lattice. The lowest number of subjects in the lattice is attached to the neuron (6,7). The largest number is associated with the neuron (8,5) with resolved double-lobed sources. Note that the SOM BMUs are representative of the majority of sources in a sample (the typical radio galaxies). Rare and unusual sources will be much more poorly characterised by the BMUs, leading to a much larger Euclidean distance than for the bulk of sources. For an adequately trained SOM, all sources in the dataset have a BMU. As can be noted from the prototypes in the trained SOM lattice in Figure~\ref{FIG:SOM}, all structures in the neurons can be identified as known morphologies of radio sources. These prototypes could be used to classify these radio sources, which is beyond the scope of the present work, as here we are focused only on finding the rare radio morphologies. The rare and unusual sources are not expected to be clustered in a single neuron.
Therefore, we use a similarity measure to identify the most peculiar sources in the dataset. We use the modified Euclidean distance metric to identify these objects. Note that the SOM is trained with EMU-PS complex sources only, but we map the complex sources from all three surveys onto the trained lattice. We examine the distributions of Euclidean distances. Figure~\ref{FIG:Eucl_histogram} shows the Euclidean distance histograms for EMU-PS (solid green), SWAG-X (dashed blue) and DINGO (dot-dashed red) complex sources. The medians (and standard deviations) of these distributions are 2.1 (2.3), 3.1 (2.4) and 3.2 (2.1) for EMU-PS, SWAG-X and DINGO, respectively. We notice that the SWAG-X and DINGO distributions have higher median Euclidean distances compared to EMU-PS. This is possibly due to the differences in observing frequencies, map resolutions and RMS sensitivities of these surveys described in Section~\ref{SEC:ASKAP}, and/or the lower number of complex sources in the DINGO and SWAG-X surveys. For each of these distributions, we choose a lower limit on the Euclidean distance and visually examine the top 0.5\% of complex sources for peculiarity. We note that this is a simplistic approach to reduce the number of visual inspections. The choice of the rarest 0.5\% leaves us with approximately 200, 100 and 20 sources in the EMU-PS, SWAG-X and DINGO surveys, respectively. In the following sections, we discuss some of these rare radio source morphologies.
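Selecting the rarest sources from the Euclidean-distance distribution then amounts to a simple percentile cut (a sketch with mock distances; the gamma distribution below is illustrative only, not the measured histogram):

```python
import numpy as np

def select_rarest(distances, frac=0.005):
    """Return indices of the top `frac` sources by Euclidean distance to the BMU."""
    cut = np.quantile(distances, 1.0 - frac)   # lower limit on the distance
    return np.nonzero(distances >= cut)[0]

rng = np.random.default_rng(0)
distances = rng.gamma(2.0, 1.5, size=41181)    # mock distribution for EMU-PS
rarest = select_rarest(distances)
print(len(rarest))  # ~206 sources, i.e. 0.5% of the 41,181 EMU-PS components
```

The cut is applied to each survey's distance distribution separately, mirroring the per-survey 0.5\% selection described above.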
\floatsetup[table]{font=tiny} \begin{table*}[!ht] \centering \begin{center} \begin{tabular}{ccccccccccccc} \hline \hline \\ \multicolumn{1}{c}{Name} &\multicolumn{1}{c}{RA (deg)} & \multicolumn{1}{c}{Dec (deg)} & \multicolumn{1}{c}{Flux (mJy)} & \multicolumn{1}{c}{Counterparts} & \multicolumn{1}{c}{$g$} & \multicolumn{1}{c}{$r$} & \multicolumn{1}{c}{$i$} & \multicolumn{1}{c}{W1} & \multicolumn{1}{c}{W2} & \multicolumn{1}{c}{W1-W2} & \multicolumn{1}{c}{$z_{\rm ph}$} & \multicolumn{1}{c}{$z_{\rm spec}$} \\ \hline \\ SWAG-X \\ J084927.5--045721\\ \\ A & 132.3638 & -4.9588 & 3 & WISEA J084927.33-045732.3 & 16.17 & 15.56 & 15.32 & 13.24 & 13.32 & -0.08 & $0.02\pm0.05$ & --\\% & -- \\ & & & & 2MASS J08492733-0457315 & \\ B & 132.3659 & -4.9614 & 9 & WISEA J084927.80-045741.1 & 17.78 & 16.94 & 16.39 & 12.48 & 12.45 & -0.03 & $0.08\pm0.01$ & --\\% & Galaxy\\ & & & & 2MASX J08492779-0457412 & \\ C & 132.3692 & -4.9542 & 6 & WISEA J084928.60-045715.0 & 18.07 & 17.23 & 16.81 & 12.54 & 12.49 & -0.05 & $0.08\pm0.02$ & 0.07697\\% & Galaxy\\ & & & & 2MASX J08492860-0457152 & \\ D & 132.3684 & -4.9505 & 18 & WISEA J084928.42-045702.1 & 18.36 & 17.48 & 17.01 & 12.69 & 12.69 & 0.00 & $0.08\pm0.01$ & --\\% & -- \\ & & & & 2MASS J08492840-0457017 & \\ E & 132.3607 & -4.9544 & 2 & WISEA J084926.56-045715.9 & 18.85 & 18.1 & 17.68 & 14.34 & 14.31 & 0.03 & $0.09\pm0.01$ & --\\% & -- \\ \\ \hline \\ EMU-PS \\ J222339.5--483449\\ \\ A & 335.9158 & -48.5827 &0.06& WISEA J222339.73-483457.9 & 20.78 & 19.35 & 18.87 & 15.71 & 15.45 & 0.26 & $0.34\pm0.04$ & --\\% & -- \\ B & 335.9148 & -48.5903 &0.10& WISEA J222339.53-483524.8 & 18.76 & 17.61 & 17.19 & 15.05 & 14.77 & 0.28 & $0.22\pm0.02$ & --\\% & Galaxy\\ & & & & 2MASS J22233951-4835247 & \\ C & 335.9145 & -48.5803 &0.06& WISEA J222343.07-483440.6 & 19.51 & 18.32 & 17.93 & 14.52 & 14.17 & 0.35 & $0.23\pm0.01$ & --\\% & Galaxy\\ & & & & 2MASS J22234313-4834406 & \\ D & 335.9075 & -48.5785 &0.07& WISEA J222337.80-483442.4 & 21.27 & 19.94 & 19.41 
& 15.78 & 15.52 & 0.26 & $0.33\pm0.04$ & --\\% & -- \\ \\ \hline \hline \end{tabular} \end{center} \caption{Properties of optical and infrared sources near the two new ORC candidates presented in this work. From left to right, we list: the ORC name and its prominent optical sources; the Right Ascension (RA) and Declination (Dec) of these sources; the integrated radio flux density estimated at their positions from the ASKAP images; the optical ($gri$) and infrared (W1, W2) photometry for each of the nearby sources; and the photometric redshifts from DESI LS DR9, with spectroscopic redshifts where available. The $gri$ information for SWAG-X J084927.5--045721 is taken from Pan-STARRS \citep{flewelling20} and for EMU-PS J222339.5--483449 from the DES survey. The W1 and W2 band photometry is from the WISE survey. } \label{TAB:ORC-counterparts} \end{table*} \section{Results} \label{SEC:results} In this section, we present the peculiar radio source morphologies found among the top 0.5\% complex sources, along with their observations in optical and infrared bands. These peculiar radio sources have unconventional shapes with no corresponding diffuse emission at optical wavelengths. Note that the purpose of this study is to streamline the detection of rare radio morphologies using machine learning; future work should study each of these sources in more detail to uncover the mechanisms of their formation. In addition to the peculiar sources, we discuss examples of other, more conventional radio morphologies among the top 0.5\% complex sources. \subsection{Peculiar Radio Morphologies} \label{SEC:ORCs} Among the peculiar radio morphologies, we find sources with nearly circular diffuse radio emission. Such circular shapes are well known in radio images: they either arise from imaging artefacts or are real physical structures. Known circular structures include supernova remnants, planetary nebulae, circumstellar shells, face-on spiral galaxies and protoplanetary discs.
In a recent study, \cite{norris21b} reported the discovery of a new class of circular features in radio images, naming them Odd Radio Circles (ORCs). They report the discovery of three ORCs in EMU-PS and one in archival data from the Giant Metrewave Radio Telescope \citep[GMRT;][]{ananthakrishnan01}. Another ORC was discovered by \cite{koribalski21} using a different ASKAP survey. All of these were identified serendipitously by visual inspection of the radio images (see Table~\ref{TAB:all-orcs} for the complete list). Three of these five previously discovered ORCs have a central galaxy. Figure~\ref{FIG:ORCCandidates0} shows two of the previously discovered ORCs in EMU-PS \citep[ORC J2102--6200 and ORC J2058--5736;][]{norris21b}; our method places both among the top 0.5\% complex sources. Each row in the figure has three panels. The left panels show radio images of $12^{\prime} \times 12^{\prime}$ size. Throughout the present work, we show pre-processed radio images with no threshold on the number of pixels for an island. Central pixel sky positions, ID numbers for visual inspection and Euclidean distances are noted on these images. The ID increases with decreasing Euclidean distance and thus sets the order of visual inspection, from most to least complex: the source with ID $=0$ has the highest Euclidean distance, is deemed the most peculiar, and has the highest priority for visual inspection. The maximum value of the ID equals the number of top 0.5\% complex sources. The middle panels show same-sized infrared images from the WISE W1 band with radio contours overlaid. The larger images show that there are no prominent structures near the ORCs with which these objects might be associated (see Section~\ref{SEC:familiar} for other examples). The right panels show smaller cutouts of $5^{\prime} \times 5^{\prime}$, the size used to train the SOM, with radio contours overlaid on the infrared image.
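The inspection ordering described above (sort by decreasing Euclidean distance, assign ID $=0$ to the most peculiar source, keep the top fraction) can be sketched as follows. This is an illustrative sketch, not the authors' code; the field names \texttt{name} and \texttt{dist} are assumptions.

```python
# Sketch (not the authors' code): rank complex sources for visual
# inspection by their Euclidean distance to the best-matching SOM neuron.
# ID 0 is assigned to the source with the highest distance (most peculiar).

def rank_for_inspection(sources, top_fraction=0.005):
    """Return the top `top_fraction` of sources, ordered by decreasing
    Euclidean distance, with inspection IDs attached (0 = most peculiar)."""
    ranked = sorted(sources, key=lambda s: s["dist"], reverse=True)
    n_top = max(1, round(len(ranked) * top_fraction))
    return [dict(s, id=i) for i, s in enumerate(ranked[:n_top])]

# Hypothetical catalogue with per-source Euclidean distances:
sources = [{"name": "src_a", "dist": 3.1},
           {"name": "src_b", "dist": 9.7},
           {"name": "src_c", "dist": 1.2},
           {"name": "src_d", "dist": 5.4}]
top = rank_for_inspection(sources, top_fraction=0.5)
# src_b (highest distance) receives ID 0, src_d receives ID 1.
```

In the paper the fraction is 0.5\%, which yields 200 sources in the EMU-PS field.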
In this paper, we present two more ORC candidates that are also among the top 0.5\% sources and are similar to the previously known ORCs. Table~\ref{TAB:all-orcs} presents the positions and integrated flux densities of the previously known ORCs and of the two ORC candidates from this analysis. These positions correspond to their approximate geometrical centres. Table~\ref{TAB:ORC-counterparts} shows the properties of infrared and optical sources within the extent of the continuum emission of these ORC candidates. We present positions, ASKAP fluxes, counterparts in different surveys, and redshifts where available. The $gri$ colors and WISE (W1, W2) photometry are also shown. \begin{table*}[!ht] \centering \begin{center} \begin{tabular}{ccccccccccccc} \hline \hline \\ \multicolumn{1}{c}{Name} &\multicolumn{1}{c}{RA (deg)} & \multicolumn{1}{c}{Dec (deg)} & \multicolumn{1}{c}{Flux (mJy)} & \multicolumn{1}{c}{Counterparts} & \multicolumn{1}{c}{$g$} & \multicolumn{1}{c}{$r$} & \multicolumn{1}{c}{$i$} & \multicolumn{1}{c}{W1} & \multicolumn{1}{c}{W2} & \multicolumn{1}{c}{W1-W2} & \multicolumn{1}{c}{$z_{\rm ph}$} & \multicolumn{1}{c}{$z_{\rm spec}$} \\ % \hline \\ EMU-PS \\ J213409.5--533631 \\ \\ A & 323.5738 & -53.6363 &18 & WISEA J213417.69-533811.1 & 15.24 & 14.29 & 13.90 & 11.49 & 11.48 & 0.01 & $0.07\pm0.03$ & 0.0763\\% & Lenticular\\ & & & & 2MASX J21341775-5338101 & \\ B & 323.5367 & -53.5811 &5.4 & WISEA J213408.81-533451.8 & 16.39 & 15.44 & 15.07 & 12.75 & 12.73 & 0.02 & $0.11\pm0.06$ & --\\% & Galaxy\\ & & & &2MASX J21340880-5334516 & \\ C & 323.5278 & -53.5719 &0.4 & WISEA J213406.70-533418.7 & 15.29 & 14.35 & 13.97 & 11.74 & 11.71 & 0.03 & $0.08\pm0.01$ & 0.07836\\% & Lenticular\\ & & & & 2MASX J21340666-5334186 & \\ \\ \hline \\ EMU-PS \\ J220026.3--561030 \\ \\ A & 330.1004 & -56.1782 &110 & WISEA J220024.11-561041.7 & 14.93 & 13.99 & 13.59 & 11.71 & 11.75 & -0.04 & $0.05\pm0.01$ & 0.0757\\% & Elliptical\\ & & & & 2MASX J22002408-5610413 & \\ B &
330.1346 & -56.1742 &1 & WISEA J220032.19-561026.0 & 17.20 & 16.25 & 15.86 & 13.59 & 13.58 & 0.01 & $0.08\pm0.01$ & --\\% & Galaxy \\ & & & & 2MASX J22003234-5610273 & \\ \\ \hline \\ EMU-PS \\ J215026.5--621006 \\ \\ A & 327.6138 & -62.1703 & 36 & WISEA J215027.29-621013.3 & 15.79 & 14.85 & 14.47 & 12.10 & 12.08 & 0.02 & $0.07\pm0.01$ & -- \\% & Galaxy\\ & & & & 2MASX J21502732-6210129 & \\ B & 327.5745 & -62.1852 & 4 & WISEA J215017.86-621106.4 & 15.66 & 14.77 & 14.39 & 12.28 & 12.27 & 0.01 & $0.06\pm0.01$ & --\\% & Lenticular \\ & & & & 2MASX J21501790-6211070 & \\ C & 327.6038 & -62.1485 & 3 & WISEA J215024.94-620854.5 & 17.22 & 16.28 & 15.90 & 13.37 & 13.33 & 0.04 & $0.08\pm0.01$ & --\\% & Galaxy \\ & & & & 2MASX J21502489-6208550 & \\ \\ \hline \\ SWAG-X\\ J093803.4--015247\\ \\ A & 144.5139 & -1.88 & 2 & WISEA J093803.35-015247.9 & 18.44 & 17.06 & 16.56 & 13.94 & 13.65 & 0.29 & $0.22\pm0.01$ & --\\% & -- \\ & & & & 2MASS J09380334-0152480 & \\ \\ \hline \\ SWAG-X \\ J085234.4+062801 \\ \\ A & 133.149 & 6.4725 & 2.1& WISEA J085235.74+062821.1 & 18.54 & 17.35 & 13.59 & 13.99 & 13.51 & 0.48 & $0.19\pm0.02$ & 0.15958\\% & Galaxy\\ & & & & 2MASX J08523573+0628209 & \\ B & 133.1357 & 6.4605 & 1.2& WISEA J085232.90+062731.9 & 21.84 & 20.8 & 20.49 & 15.55 & 15.8 & -0.25 & $0.18\pm0.06$ & --\\% & Galaxy \\ & & & & SDSS J085232.91+062731.7 & \\ C & 133.1442 & 6.455 & 0.3& WISEA J085235.31+062720.2 & 22.1 & 20.93 & 20.32 & 13.92 & 13.78 & 0.14 & $0.26\pm0.05$ & --\\% & Galaxy \\ & & & & SDSS J085235.32+062720.5 & \\ \\ \hline \hline \end{tabular} \end{center} \caption{Properties of optical and infrared sources near the peculiar radio sources other than the ORC candidates. The columns are the same as described in Table~\ref{TAB:ORC-counterparts}. 
The $gri$ information here for EMU-PS J213409.5--533631, EMU-PS J220026.3--561030 and EMU-PS J215026.5--621006 is taken from DES, and for SWAG-X J093803.4--015247 and SWAG-X J085234.4+062801 from SDSS.} \label{TAB:PEC-counterparts} \end{table*} \subsubsection{SWAG-X J084927.5--045721} \label{SEC:ORC-1} This ORC candidate is found in the 888~MHz SWAG-X survey. The left panels of Figure~\ref{FIG:ORCCandidates1} show radio images of $12^{\prime} \times 12^{\prime}$ size, showing no sign of association with other surrounding sources. In the middle panel, radio contours are overlaid on the WISE W1 infrared image. The right panel shows a smaller cutout of the size used to train the SOM ($5^{\prime} \times 5^{\prime}$). The source has a nearly circular shape with a diameter of $\sim 50^{\prime \prime}$. The integrated 888~MHz flux density is 228 mJy. This source is also known as PMN J0849-0457 \citep[Parkes-MIT-NRAO Surveys;][]{wright94}. We identify five optical/infrared sources near the ORC candidate. The left panel of Figure~\ref{FIG:ORCs2} shows the radio contours overlaid on the DESI LS DR9 composite image using $gri$ optical bands. Near the geometrical centre of the ORC candidate, we find a bright optical/infrared source labelled ``A", which is WISEA~J084927.33-045732.3 \citep[also 2MASS J08492733-0457315;][]{skrutskie06}. DESI LS DR9 gives a highly uncertain photometric redshift of $z=0.02\pm0.05$. The Gaia parallax ($1.9\pm0.3$ mas) and proper motion ($13.31\pm0.36$ mas/year) measurements suggest that it is a nearby Galactic star \citep[][]{brott05}. A galaxy labelled ``B" is located towards the south-east of ``A". This galaxy is WISEA~J084927.80-045741.1 \citep[also 2MASX J08492779-0457412;][]{jarrett00}, with a photometric redshift $z_{\rm ph} = 0.08\pm0.01$. Two more galaxies, labelled ``C" and ``D", are located at the north-east edge at photometric redshifts of $0.08\pm0.02$ and $0.08\pm0.01$, respectively.
Galaxy ``C" is WISEA~J084928.60-045715.0, also 2MASX~J08492860-0457152, with $z_{\rm spec} = 0.07697$ \citep[][]{jones09}. Galaxy ``D" is WISEA~J084928.42-045702.1, also 2MASS~J08492840-0457017. One more galaxy, labelled ``E" (WISEA J084926.56-045715.9), is located at the north-west edge with $z_{\rm ph}=0.09\pm0.01$. The redshifts of these four galaxies are consistent with 0.08, which may also be the redshift of the ORC candidate. Note that the detected radio emission of this source resembles that of previously known ORCs. However, two collimated jets from galaxy ``B", seen in the Very Large Array Sky Survey (VLASS) 2-4 GHz images\footnote{http://cutouts.cirada.ca/} \citep[][]{lacy20}, suggest that it may instead be a bent-tail radio galaxy whose distant outer tails form a rare ring-like shape. A dedicated study of this radio source may help us understand the physics of the previously known ORC J2058--5736, which also has ring-shaped radio lobes \citep[][]{norris21b}. We find four galaxy clusters within a $10^{\prime}$ radius of this radio source (the closest at a separation of $\sim 3^{\prime}$) in the Canada France Hawaii Telescope Legacy Survey (CFHTLS) galaxy cluster catalogue \citep[][]{durret11}. However, they are all located at much higher redshifts, between 0.75 and 1. We also look for possible associations with galaxy clusters in the DESI survey \citep[][]{zou21} and do not find any below $z=0.5$. However, the cluster catalogued as WHY~J084927.8--045741 at $z=0.0935$ \citep[][]{wen18} lies within the ASKAP-detected emission and includes the group of galaxies seen in the left panel of Fig.~\ref{FIG:ORCs2}. In Section~\ref{SEC:Discussion}, we discuss a galaxy overdensity around this radio source. \subsubsection{EMU-PS J222339.5--483449} \label{SEC:ORC-2} This ORC candidate is in the EMU-PS survey field and was also discovered serendipitously \citep[][]{norris22prep}. We independently rediscover this source using our machine learning technique.
It has a nearly circular morphology with a diameter of $\sim 80^{\prime \prime}$. From left to right, the top panels of Figure~\ref{FIG:ORCCandidates1} show the radio continuum image, radio contours overlaid on the WISE W1 infrared image, and a smaller cutout of the size used to train the SOM. The $12^{\prime} \times 12^{\prime}$ radio continuum image shows that it has no association with any of the extended radio structures in its vicinity. We identify four optical/infrared sources near this ORC candidate. The right panel of Figure~\ref{FIG:ORCs2} shows radio continuum contours overlaid on the DES $gri$-color composite image. Near its geometrical centre, we find an optical/infrared source labelled ``A". It is WISEA J222339.73-483457.9, for which DESI LS DR9 gives $z_{\rm ph}=0.34\pm0.04$. Its morphological type is not known, but the colors indicate that it is a passive galaxy. Towards the north-east edge, we find a galaxy (labelled ``B"), WISEA J222339.53-483524.8, also 2MASS J22233951-4835247, at $z_{\rm ph}=0.22\pm0.02$. Near the southern edge, we identify another galaxy (labelled ``C"), WISEA J222343.07-483440.6, also 2MASS J22234313-4834406, at $z_{\rm ph}=0.23\pm0.01$. Another optical/infrared source, labelled ``D" (WISEA J222337.80-483442.4), is seen due west of the radio source centre, with $z_{\rm ph}=0.33\pm0.04$. We find one galaxy cluster at a separation of $\sim 8^{\prime}$ using the galaxy cluster catalogue from the South Pole Telescope \citep[SPT;][]{bleem15}. This cluster is both far from the ORC candidate and located at a much higher redshift of 0.65. We also look for possible associations with galaxy clusters in the DESI survey \citep[][]{zou21} and find one galaxy cluster at a separation of $\sim 4^{\prime}$ and $z=0.51\pm0.02$. As the maximum redshift among all the optical/infrared sources is much smaller, this galaxy cluster is unlikely to be associated with the ORC candidate.
In Section~\ref{SEC:Discussion} we discuss other possible associations. Other than the ORC candidates, we also find several other peculiar radio morphologies among the top 0.5\% of sources with the highest Euclidean distances. Table~\ref{TAB:PEC-counterparts} shows the properties of infrared and optical sources near them. We briefly describe these radio sources in the following sections. \subsubsection{EMU-PS J213409.5--533631} \label{SEC:PEC-1} This peculiar radio source, found in the EMU-PS, consists of a group of distorted radio components, collectively known as PKS~2130--538 \citep[][]{otrupcek91} and nicknamed ``the dancing ghosts" \citep[see Figure~21 in][]{norris21}. This radio source has the highest Euclidean distance, which means that our algorithm classifies it as the most peculiar source in EMU-PS. The top panels of Figure~\ref{FIG:unusual_radio_shapes-1} show radio and infrared images. The top left panel of Figure~\ref{FIG:unusual_radio_shapes-1-3big} shows radio continuum contours overlaid on the DES 3-color ($gri$) composite image ($12^{\prime} \times 7^{\prime}$). These ``dancing ghosts" are in the galaxy cluster ABELL 3785 \citep[][]{abell89}. The twisted shape of this structure is possibly due to the interaction of an intergalactic wind with radio jets from two supermassive black holes in the lenticular galaxies ``A" and ``C" \citep[][]{norris21}. The two galaxies ``A" and ``C" shown in Figure~\ref{FIG:unusual_radio_shapes-1-3big} have reported $z_{\rm spec} =0.0763$ and $0.07836$, respectively \citep[][]{lauer14}. The galaxy ``B" has $z_{\rm ph} = 0.07444$ \citep[][]{bilicki14}. \subsubsection{EMU-PS J220026.3--561030} \label{SEC:PEC-2} This peculiar radio source also has a high Euclidean distance in the EMU-PS. The middle panels of Figure~\ref{FIG:unusual_radio_shapes-1} show radio and infrared images.
These images show a circular morphology in which radio jets emitted from the galaxy nucleus appear to have bent into nearly half circles (analogous to a rotating garden sprinkler). We identify two galaxies near this radio source. Near the geometrical centre of the structure, we find a bright elliptical galaxy, 2MASX~J22002408-5610413 (WISEA J220024.11-561041.7), labelled ``A", with $z_{\rm spec}=0.0757$ \citep[][]{jones09}. Another galaxy located towards the east, labelled ``B", is 2MASX J22003234--5610273 (WISEA J220032.19-561026.0), with $z_{\rm ph} = 0.08\pm 0.01$. The rich galaxy cluster \citep[ABELL 3826;][]{abell89} at $z=0.075$ is centred $4.2^{\prime}$ (or 0.36 Mpc) north-west of the elliptical galaxy ``A". This suggests that the shape of this radio source is induced by the cluster environment. Future work should study the environmental effects leading to this shape in more detail. \subsubsection{EMU-PS J215026.5--621006} \label{SEC:PEC-3} This peculiar radio source has a radio core and extended emission towards the west and north-east. The bottom panels of Figure~\ref{FIG:unusual_radio_shapes-1} show radio and infrared images. The middle left panel of Figure~\ref{FIG:unusual_radio_shapes-1-3big} shows radio continuum contours overlaid on the DES 3-color ($gri$) composite image ($12^{\prime} \times 12^{\prime}$). We identify three galaxies near the radio source. The galaxy labelled ``A" at the centre of the circular structure is 2MASX J21502732-6210129 (WISEA J215027.29-621013.3) at $z_{\rm ph}=0.07\pm0.01$. There is a lenticular galaxy (``B") to the south-west, where the extended emission towards the west begins; its jet passes over the circular emission towards the north-east. This galaxy is 2MASX J21501790-6211070 (WISEA J215017.86-621106.4) with $z_{\rm ph}=0.06\pm0.01$.
One more galaxy, labelled ``C" (2MASX J21502489-6208550, WISEA J215024.94-620854.5), has $z_{\rm ph}=0.08\pm0.01$ and is located towards the northern edge of the radio source. However, given its position, this galaxy is unlikely to host any part of the radio emission. We find a previously identified galaxy group $\sim 2^{\prime}$ north-east of ``A" \citep[DZ2015 028;][]{diaz15}. This suggests that the diffuse emission around the central galaxy is possibly associated with this group of galaxies. Future work should study the group environmental effects leading to this radio shape in more detail. \subsubsection{SWAG-X J093803.4--015247} \label{SEC:PEC-4} This peculiar radio source is in the SWAG-X field. The top panels of Figure~\ref{FIG:unusual_radio_shapes-2} show the radio continuum image (left), radio contours overlaid on the WISE W1 infrared image (middle), and a smaller cutout of the size used to train the SOM (right). The $12^{\prime} \times 12^{\prime}$ radio continuum image shows that it has no association with any of the nearby extended radio sources. The middle right panel of Figure~\ref{FIG:unusual_radio_shapes-1-3big} shows radio continuum contours overlaid on an SDSS 3-color ($gri$) composite image. We find an optical/infrared object labelled ``A" (2MASS J09380334-0152480, WISEA J093803.35-015247.9) with $z_{\rm ph}=0.22\pm0.01$ near the geometrical centre of the source. This radio structure, with a bright source at its centre, is possibly an end-on remnant radio galaxy, though it shows indications of a partial outer ring in radio emission similar to ORCs. Future work should study this morphology in more detail. \subsubsection{SWAG-X J085234.4+062801} \label{SEC:PEC-5} This peculiar radio morphology is also in the SWAG-X field.
From left to right, the bottom panels of Figure~\ref{FIG:unusual_radio_shapes-2} show the radio continuum image, radio contours overlaid on the WISE W1 infrared image, and a smaller cutout of the size used to train the SOM. We identify three galaxies near the edges of this structure. The bottom panel of Figure~\ref{FIG:unusual_radio_shapes-1-3big} shows radio continuum contours overlaid on an SDSS 3-color ($gri$) composite image. Towards the northern edge, we find a galaxy ``A" (2MASX J08523573+0628209, WISEA J085235.74+062821.1) at $z_{\rm spec} = 0.15958$ (from SDSS). Near the south-west edge, we identify a galaxy ``B" (SDSS J085232.56+062737.6, WISEA J085232.90+062731.9) at $z_{\rm ph}=0.18\pm0.06$. Another optical/infrared object, ``C", lies due south-east (2MASS J08523531+0627206, WISEA J085235.31+062720.2) and has $z_{\rm ph}=0.26\pm0.05$. The Gaia parallax ($3.5\pm 0.1$ mas) and proper motion ($42.9\pm 0.1$ mas/year) measurements suggest that it is a star. The radio emission appears to be dominated by the two overlapping bright galaxies ``A" and ``B". In fact, galaxy ``A", with mostly compact radio emission, appears to host a bent-tail jet that points toward ``B", making a half circle. The circular diffuse emission is possibly associated with ``A" and/or ``B". Two arcminutes north of galaxy ``A", there is an extended radio source which appears to be unrelated to the diffuse emission from this source. \subsection{Conventional Radio Morphologies} \label{SEC:familiar} The ORC candidates and other peculiar radio sources discussed in the previous section are the most unusual radio morphologies in the three ASKAP pilot surveys. The rest of the top 0.5\% radio sources have standard morphologies with known mechanisms of formation. These conventional sources include diffuse emission from galaxy clusters, resolved star-forming galaxies, bent-tailed galaxies and Fanaroff-Riley sources.
These sources generally have more complex shapes and larger extents than typical radio galaxies, and therefore have higher Euclidean distances than the rest of the data. A discussion of all of these sources is beyond the scope of the present work; however, we present some representative examples in this section. \subsubsection{Diffuse emission from galaxy clusters} \label{SEC:galaxyclusters} Galaxy clusters are usually detected at microwave \citep[e.g.][]{planck13-29, bleem19, hilton21}, X-ray \citep[e.g.][]{piffaretti11, liu21} and optical \citep[e.g.][]{rykoff16} wavelengths. Galaxy clusters are known to have an overdensity of radio sources compared to the field \citep[e.g.][]{coble07, gupta17a, gupta20b}. Recently, a growing number of galaxy clusters have been found to host sources with diffuse radio emission. These sources are classified as radio halos, radio shocks (relics), and revived AGN fossil plasma sources \citep[e.g.][]{weeren19, giovannini20}. With the higher sensitivity of the new generation of radio telescopes like ASKAP, we expect to detect diffuse emission from more galaxy clusters. In Figure~\ref{FIG:Gcl}, we show two such systems at very high Euclidean distances from the SWAG-X and DINGO surveys. The top panels show diffuse emission from the galaxy cluster MaxBCG J145.82575+05.91142, identified in the SDSS survey using the maxBCG red-sequence method \citep[][]{koester07a}. The sky-blue square in the right panel of the figure shows its brightest cluster galaxy (BCG). This cluster is located at $z=0.094$ \citep[][]{rozo15}. Less than $2^{\prime}$ north-east of this system, there is another known galaxy cluster located at $z=0.334$ \citep[WHL J094322.3+055537;][]{wen12, wen15}. The sky-blue circle in the right panel of the figure shows its BCG. The bottom panels show rare diffuse radio emission possibly arising from two galaxy clusters at different redshifts.
The radio emission has the highest Euclidean distance score, which means that it is the most peculiar source in the DINGO survey. We find the galaxy clusters HSCS J143936+003231 \citep[$z=0.108$;][]{oguri18} and WHL J143934.3+003153 \citep[$z=0.15$;][]{wen15} towards the north-west and west edges of the diffuse emission. The sky-blue cross and circle in the right panel of the figure show the BCGs of HSCS J143936+003231 and WHL J143934.3+003153, respectively. It is not clear whether both or only one of these clusters has diffuse emission towards the east of its central BCG position. Future dedicated work should study the radio emission from these galaxy clusters in more detail. \subsubsection{Resolved star forming galaxies} \label{SEC:resolvedGl} Nearby edge-on and face-on star forming galaxies are usually detected in radio continuum images and H$\alpha$ emission lines \citep[e.g.][]{pogge93, colbert96}. In all cases, the infrared and radio continuum emission are known to be correlated \citep[e.g.][]{murphy06, vlahakis07, garn09, lacki10}. The radio emission associated with these resolved galaxies has two well-known components that correlate with the star formation rate: synchrotron emission from relativistic electrons accelerated by supernova remnants, and free–free emission emerging directly from H-II regions containing massive ionizing stars \citep[e.g.][]{condon92, murphy11, kennicutt12}. Among the top 0.5\% sources at high Euclidean distances, we find many edge-on and face-on star forming galaxies. Figure~\ref{FIG:Resolved_galaxies} shows two such resolved star-forming galaxies in the EMU-PS survey. The top panels show NGC~7125, a spiral galaxy located at $z_{\rm sp}=0.0105$ \citep[][]{wong06}. This galaxy is also part of the galaxy group PGC1 0067418 NED002 \citep[][]{kourkchi17}. The bottom panels show NGC~2967, a face-on star forming spiral galaxy at $z_{\rm sp}=0.0063$ \citep[][]{couto06}.
The star formation properties of the inner ring are known to be independent of the ring shape of this source \citep[][]{grouchy10}. \subsubsection{Bent-tailed sources} \label{SEC:bent-tail} Bent-Tailed (BT) radio sources are those in which the radio lobes and jets are not aligned linearly with the host galaxy. These sources are broadly classified into Wide-Angle Tail (WAT) and Narrow-Angle Tail (NAT) radio galaxies. WATs are usually associated with central cluster galaxies and possess a pair of well-collimated jets with small opening angles ($\leq 60^{\circ}$). NATs have plumes of emission bent to such a degree that their whole radio structure lies on one side of the optical host galaxy. BT radio galaxies are found exclusively in the densest environments, such as galaxy clusters or groups \citep[e.g.][]{mao09}. The peculiar morphology of BT radio galaxies is typically a result of ram pressure stripping due to the relative motion of the host galaxy through an intra-cluster or intra-group medium \citep[e.g.][]{gunn72, miley72, eilek84, sakelliou2000}. Several BT galaxies appear at high Euclidean distances among the top 0.5\% sources. Figure~\ref{FIG:WATs} shows two such galaxies in the EMU-PS survey. The top panels show a BT radio galaxy near the ABELL 3771 cluster at $z=0.075$ \citep[][]{martinez14}. The bottom panels show another BT galaxy at $z=0.081$. \subsubsection{FR-I and FR-II sources} \label{SEC:FRI-II} The morphologies of the extended radio emission of radio galaxies are typically classified into two broad categories: Fanaroff-Riley Class I (FR-I) and Class II (FR-II) sources \citep[][]{fanaroff74}. FR-I radio galaxies generally show decreasing radio brightness with increasing distance from the host galaxy. FR-II radio galaxies often have linear jets that terminate in hotspots of large radio lobes. Thus, FR-I and FR-II radio galaxies are typically described as edge-darkened and edge-brightened Active Galactic Nuclei (AGN), respectively.
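As a toy illustration (not code from the paper, and a deliberate caricature of the criterion of \citealt{fanaroff74}, which uses the full 2-D brightness distribution), the edge-darkened versus edge-brightened distinction can be sketched on a one-dimensional brightness profile:

```python
# Toy sketch: classify a 1-D brightness profile as edge-darkened (FR-I)
# or edge-brightened (FR-II) by asking whether the brightest point lies
# in the outer half of the source. Real classification uses 2-D maps.

def fr_class(profile):
    """Return 'FR-I' (centre-brightened) or 'FR-II' (edge-brightened)."""
    n = len(profile)
    peak = max(range(n), key=lambda i: profile[i])  # index of brightest pixel
    # Fractional offset of the peak from the source centre (0 = centre, 1 = edge).
    frac = abs(peak - (n - 1) / 2) / ((n - 1) / 2)
    return "FR-II" if frac > 0.5 else "FR-I"

assert fr_class([1, 2, 5, 2, 1]) == "FR-I"   # brightness falls off outwards
assert fr_class([5, 2, 1, 2, 5]) == "FR-II"  # hotspots at the lobe ends
```

The 0.5 threshold mirrors the spirit of the original half-light criterion but is an assumption of this sketch.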
We find several large-scale FR-I sources among the top 0.5\% sources with the largest Euclidean distances, and in Figure~\ref{FIG:FRI} we show the two highest-ranked such FR-I sources, both found in the EMU-PS survey. The top panels show a bright FR-I source with a total projected angular size of $\sim 12^{\prime}$. The host galaxy, 2MASX J21512991-5520124 at $z_{\rm sp}=0.0388$ \citep[][]{hernan95}, is located in the galaxy cluster MCXC J2151.3-5521 \citep[][]{piffaretti11}. The bottom panels show another FR-I radio source, with host galaxy 2MASX J20455226-5106267 located at $z_{\rm sp}=0.0485$ \citep[][]{jones09} and radio emission extending over $\sim 12^{\prime}$. Note that the cutouts used to train the SOM are shown in the right panels and are too small to cover the full continuum emission of the FR-I sources. Although the radio emission largely fills these cutouts, we still find these sources at the highest Euclidean distances. Several FR-II galaxies are also found among the top 0.5\% sources. Figure~\ref{FIG:FRII} shows three giant radio galaxies (GRGs) in the DINGO, SWAG-X, and EMU surveys. All these sources appear at high Euclidean distances, although the information that makes them peculiar to the machine learning algorithm comes from the edge-brightened hotspots shown in the right panels. The top panels show an FR-II source with a largest angular size (LAS) of $4.9^{\prime}$ and a projected largest linear size (LLS) of 1~Mpc. The host galaxy 2MASS J22533602-3455305 is located at $z_{\rm sp}=0.2115$ \citep[][]{colless03}. This GRG in Abell 3936 has been studied in detail by \cite{seymour20}. It shows continuous emission towards the east and a detached lobe towards the west. The middle panels show linear radio jets from another FR-II source with host SDSS J090229.15+033204.3 (2MASS J09022915+0332041) at $z_{\rm ph}=0.25$, LAS~$= 6.8^{\prime}$ and LLS~$=1.6$~Mpc.
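The quoted LLS values follow from the LAS and redshift via the angular diameter distance, ${\rm LLS} = \theta \, D_{\rm A}(z)$. A sketch of the conversion is below; the flat $\Lambda$CDM parameters ($H_0 = 70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_{\rm m} = 0.3$) are an assumption of this sketch, not values restated from the paper.

```python
import math

# Sketch (hedged): convert a largest angular size (LAS) to a projected
# largest linear size (LLS) using the angular diameter distance in a
# flat Lambda-CDM cosmology (assumed H0 = 70 km/s/Mpc, Omega_m = 0.3).

C_KM_S = 299792.458  # speed of light in km/s

def angular_diameter_distance_mpc(z, h0=70.0, om=0.3, steps=10000):
    """D_A = D_C / (1 + z) for a flat universe, via trapezoidal integration."""
    def inv_e(zz):
        return 1.0 / math.sqrt(om * (1.0 + zz) ** 3 + (1.0 - om))
    dz = z / steps
    d_c = (C_KM_S / h0) * sum(
        (inv_e(i * dz) + inv_e((i + 1) * dz)) / 2 * dz for i in range(steps))
    return d_c / (1.0 + z)

def lls_mpc(las_arcmin, z):
    """Projected linear size (Mpc) for a given angular size and redshift."""
    theta_rad = math.radians(las_arcmin / 60.0)
    return theta_rad * angular_diameter_distance_mpc(z)

# LAS = 4.9' at z = 0.2115 gives roughly 1 Mpc, consistent with the text.
size = lls_mpc(4.9, 0.2115)
```

With these assumed parameters the other quoted pairs (e.g. $6.8^{\prime}$ at $z=0.25$ giving $\sim$1.6 Mpc) are also reproduced.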
This source is a restarted radio galaxy that exhibits, in addition to the outer lobes, more recent double-lobed emission near the central galaxy. The bottom panels show another FR-II source with potential host 2MASX J21365159-6125128 at $z_{\rm sp}=0.1249$ \citep[][]{colless03}, LAS~$= 11.1^{\prime}$ and LLS~$=1.49$~Mpc. The other potential host, 2MASS J21370099-6119472 at $z_{\rm ph}=0.277\pm0.054$ (DESI DR9), close to the north-east lobe, would lead to LLS~$=2.56$~Mpc. \section{Environment of ORC Candidates} \label{SEC:Discussion} Three of the previously known ORCs (ORC J2102--6200, ORC J2058--5736 and ORC J0102--2450; see Table~\ref{TAB:all-orcs}) either lie in a significant overdensity or have a close companion \citep[][]{norris21c, norris22}. This suggests that the environment may be important in their formation. For the two ORC candidates from the present work, we look for possible associations with low-redshift galaxy clusters in the Planck \citep[][]{planck13-29}, Dark Energy Spectroscopic Instrument \citep[DESI;][]{zou21} and Meta-catalogue of X-Ray Detected Clusters of Galaxies \citep[MCXC;][]{piffaretti11} catalogues. We do not find any galaxy cluster candidate within $10^{\prime}$ of the centres of the ORCs in the redshift range of their optical sources (see Table~\ref{TAB:ORC-counterparts}). Galaxy cluster catalogues do not necessarily include group-scale systems with fewer galaxies. We therefore explore the overdensities of galaxies at the positions of the two ORCs using the photometric redshift catalogue from DESI DR8 \citep[][]{zou20}. We estimate the number of galaxies within a circle of $5^{\prime}$ radius around the ``A" and ``B" galaxies near both ORC candidates (see Table~\ref{TAB:ORC-counterparts} and Fig~\ref{FIG:ORCs2}). We chose these two galaxies because their redshifts are inconsistent with each other, while the redshifts of the other galaxies are consistent with one or the other of these sources.
We use their photometric redshift uncertainties as the redshift range for estimating overdensities. We restrict the DESI photo-z catalogue to $z<0.07$ and $0.07<z<0.09$ for the ``A" and ``B" galaxies of SWAG-X J084927.5--045721, respectively. Although source ``A" here is likely a Galactic star (see Section~\ref{SEC:ORC-1}), we nevertheless estimate overdensities near it because of its location at the centre of the ORC candidate. For sources ``A" and ``B" in EMU-PS J222339.5--483449, subsets of the DESI photo-z catalogue with $0.3<z<0.38$ and $0.2<z<0.24$, respectively, were used. We also estimate the number of galaxies in circles of $5^{\prime}$ radius sliding along RA (in $5^{\prime}$ increments at fixed Dec) to compare the field number density with the density near the ORC candidates. The top panel of Figure~\ref{FIG:ORC1-2-overdensity} shows the number of galaxies within a circle of $5^{\prime}$ radius in the specified $z$ range. The red circle shows that there are only 3 galaxies within $5^{\prime}$ of ``A", while the green dot-dashed line indicates an average galaxy count of 2.2 for the field. Thus, if the true redshift of the ORC candidate is that of ``A", the intergalactic environment may not be the reason for its circular morphology. The second panel from the top shows the galaxy counts for source ``B". The red circle shows that there are 13 galaxies within a $5^{\prime}$ radius, while the green dot-dashed line indicates an average of 1.3 for the field. This implies that the ORC candidate, if located at this redshift, could have formed its circular morphology through as-yet unknown intergalactic processes. The bottom panels of Figure~\ref{FIG:ORC1-2-overdensity} show the number of galaxies within a circle of $5^{\prime}$ radius of the EMU-PS J222339.5--483449 ``A" and ``B" sources. The red circles show that there are 42 and 21 galaxies within $5^{\prime}$ of ``A" and ``B", respectively.
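The counting step behind these numbers — galaxies within a $5^{\prime}$ radius of a position, restricted to a photometric-redshift slice — can be sketched as follows. This is an assumed implementation, not the authors' code, and the catalogue entries below are hypothetical.

```python
import math

# Sketch: count catalogue galaxies inside an angular radius and a
# photo-z slice, as in the overdensity estimates described above.

def ang_sep_deg(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees (haversine formula)."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    a = (math.sin((dec2 - dec1) / 2) ** 2
         + math.cos(dec1) * math.cos(dec2) * math.sin((ra2 - ra1) / 2) ** 2)
    return math.degrees(2 * math.asin(math.sqrt(a)))

def count_in_circle(ra0, dec0, galaxies, radius_arcmin=5.0,
                    zmin=0.0, zmax=10.0):
    """Number of (ra, dec, z) galaxies inside the circle and z slice."""
    r_deg = radius_arcmin / 60.0
    return sum(1 for ra, dec, z in galaxies
               if zmin < z < zmax and ang_sep_deg(ra0, dec0, ra, dec) <= r_deg)

# Hypothetical catalogue entries (ra, dec, z_ph):
cat = [(132.364, -4.959, 0.08), (132.370, -4.955, 0.08),
       (132.30, -4.90, 0.08), (132.365, -4.960, 0.45)]
n = count_in_circle(132.3659, -4.9614, cat, radius_arcmin=5.0,
                    zmin=0.07, zmax=0.09)
# n == 2: one galaxy falls outside the 5' circle, one outside the z slice.
```

Sliding the same circle along RA in $5^{\prime}$ increments (at fixed Dec) gives the field counts used for comparison.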
The average numbers of galaxies around ``A" and ``B" are 26.1 and 9.7, respectively. Although the two sources have very different redshifts, the high number density around both implies that the ambient galaxy density may be an important factor in the circular morphology of EMU-PS J222339.5--483449. We note that understanding the physics of the origin of ORCs is beyond the scope of the present work, which focuses on the detection of peculiar systems. Future work should study each of these rare circular and other peculiar sources to understand the physical mechanism behind their origin. \section{Summary} \label{SEC:Summary} We present a machine learning method to search for the rarest and most interesting sources in ASKAP continuum radio surveys. We use PINK, an implementation of self-organising maps that accounts for the affine transformations of astronomical images in an efficient manner. We train the machine learning algorithm using $\sim 42,000$ cutouts ($5^{\prime} \times 5^{\prime}$) at the positions of complex radio sources (sources with more than one component) in the EMU pilot survey. The trained model is then used to map these sources onto a $10\times 10$ lattice of neurons according to the relative similarity between the sources. We use a Euclidean distance metric to compute the similarity in an unsupervised manner. The Euclidean distance metric is then used to identify the rarest and most interesting sources in the survey. We select a small fraction of radio sources with the highest Euclidean distances for visual inspection. We chose the top 0.5\% of complex sources, which amounts to 200 sources at high Euclidean distances in the EMU-PS field. Radio sources within this cut include previously discovered circular radio sources in the EMU-PS survey. These circular sources are also known as Odd Radio Circles (ORCs) and were previously discovered using a dedicated visual inspection of the EMU-PS field. 
In addition to EMU-PS, we also search for interesting radio sources in two other ASKAP surveys, and to this end we map complex sources in the SWAG-X and DINGO pilot surveys to the trained lattice. Note that the training is done only once, using EMU-PS, and the trained model is used to map sources from EMU-PS as well as the other two surveys. For the SWAG-X and DINGO pilot surveys, we inspect 100 and 20 (top 0.5\%) complex radio sources, respectively. Among these top 0.5\% of complex sources at high Euclidean distances we find two new ORC candidates, namely SWAG-X J084927.5--045721 and EMU-PS J222339.5--483449 (see Section~\ref{SEC:ORCs}). We identify host galaxies at the positions of these ORC candidates from the literature and by using multiwavelength data from AllWISE, DES, SDSS and other serendipitous surveys. Using the DESI DR8 galaxy catalogue, we find that both ORC candidates are possibly located at local overdensities. Other than these ORC candidates, we present five more peculiar radio sources with rare morphologies. Future work should study each of these peculiar sources to understand the physical mechanism behind their origin. The rest of the top 0.5\% of complex radio sources have conventional morphologies. In the present work, we show some representative examples of these sources, which include diffuse emission from galaxy clusters, resolved star-forming galaxies, and bent-tailed, FR-I and FR-II radio galaxies (see Sections~\ref{SEC:galaxyclusters}, \ref{SEC:resolvedGl}, \ref{SEC:bent-tail} and \ref{SEC:FRI-II}). A useful list of all sources requires additional work to cluster multiple-component radio sources and identify them with optical/infrared host galaxies. The development of such a clustering method is currently in progress and we plan to discuss it in detail in future work. 
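The selection step summarised above, ranking sources by their Euclidean distance to the best-matching neuron and keeping the top 0.5\% for visual inspection, can be sketched as follows. This is a minimal illustration: the function names are hypothetical, and PINK additionally minimises the distance over rotations and flips of each image, which is omitted here.

```python
import numpy as np

def best_matching_distance(image, neurons):
    """Euclidean distance from one flattened image to its best-matching
    SOM neuron.  `neurons` has shape (n_neurons, n_pixels).  PINK also
    minimises over affine transformations; that step is omitted here."""
    d = np.linalg.norm(neurons - image, axis=1)
    return float(d.min())

def select_rarest(distances, fraction=0.005):
    """Indices of the `fraction` of sources with the largest distances,
    i.e. the sources least well represented by any neuron."""
    n = max(1, int(round(len(distances) * fraction)))
    return np.argsort(distances)[::-1][:n]
```

With `fraction=0.005`, a catalogue of 40,000 complex sources yields 200 candidates for inspection, matching the EMU-PS numbers quoted above.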
The scope and intent of the present work is to develop a method to discover unusual objects in the next generation of radio surveys, which are expected to detect many millions of radio sources. Our machine learning method detects previously known ORCs and new ORC candidates among the top 0.5\% of complex sources, which amounts to only 200 systems for EMU-PS. This number is quite small for the pilot ASKAP surveys investigated here. The full surveys will offer many more opportunities to find rare sources. For instance, the EMU survey is expected to produce a catalogue of 40 million radio components \citep[][]{norris11}, with 18\% of components associated with complex sources. A fraction of 0.5 per cent of complex sources would lead to $\sim 36,000$ sources for visual inspection. As the latter will be highly time-consuming, future work should further improve the machine learning method by implementing new techniques to automate the discovery of rare objects in big surveys. Future work should also investigate the means to include small-scale (few arcseconds) and large-scale (several arcminutes, e.g. some FR-I and FR-II) radio source images in the training sample. As mentioned in Section~\ref{SEC:method}, only 1 in 20,000 sources has a larger extent than the $5^{\prime}$ image size used for training the machine. However, we find these sources at high Euclidean distances, where only a small part of the continuum emission is contained in the $5^{\prime}\times 5^{\prime}$ cutout. In addition, several images at high Euclidean distances contain multiple radio sources. These images are filled with several point-like as well as small-scale double-lobed galaxies. Future work should study ways to reduce the number of such images. Finally, while here we rely on the source catalogues to find peculiar radio sources, future work should develop ML models based only on the full survey images to localise and detect rare morphologies. 
\section{Acknowledgements} The Australian SKA Pathfinder is part of the Australia Telescope National Facility (https://ror.org/05qajvd42) which is managed by CSIRO. Operation of ASKAP is funded by the Australian Government with support from the National Collaborative Research Infrastructure Strategy. ASKAP uses the resources of the Pawsey Supercomputing Centre. Establishment of ASKAP, the Murchison Radio-astronomy Observatory and the Pawsey Supercomputing Centre are initiatives of the Australian Government, with support from the Government of Western Australia and the Science and Industry Endowment Fund. We acknowledge the Wajarri Yamatji people as the traditional owners of the Observatory site. The photometric redshifts for the Legacy Surveys (PRLS) catalogue used in this paper was produced thanks to funding from the U.S. Department of Energy Office of Science, Office of High Energy Physics via grant DE-SC0007914. This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. NG acknowledges support from CSIRO’s Machine Learning and Artificial Intelligence Future Science Platform. HA has benefited from grant CIIC 138/2022 of Universidad de Guanajuato, Mexico. \bibliography{ASKAP_PASA} \label{lastpage}
Title: VERITAS highlights of observations and results
Abstract: Located in southern Arizona, VERITAS is amongst the most sensitive detectors for astrophysical very high energy (VHE; E>100 GeV) gamma rays and has been operational since April 2007. We highlight some recent results from VERITAS observations. These include the long-term observations of the gamma-ray binaries HESS J0632+057 and LS I +61{\deg} 303, the observations of the Galactic Center region, and of the supernova remnant Cas~A. We discuss the results from a decade of multi-wavelength observations of the blazar 1ES 1215+303, the EHT 2017 campaign on the M87 galaxy, the discovery of 3C 264 in VHE, and the observation of three flaring quasars. Brief highlights of the indirect dark matter searches and targets-of-opportunity (ToO) observations are also discussed. The ToO observations allow for rapid follow-up of multi-messenger alerts and astrophysical transients.
https://export.arxiv.org/pdf/2208.11597
\begin{center}{\Large \textbf{ VERITAS highlights of observations and results\\ }}\end{center} \begin{center} Patel, S. R.\textsuperscript{1*} For the VERITAS Collaboration\textsuperscript{2} \end{center} \begin{center} {\textsuperscript{\bf 1}} Deutsches Elektronen-Synchrotron DESY, Platanenallee 6, D-15738 Zeuthen, Germany\\ {\textsuperscript{\bf 2}} http://veritas.sao.arizona.edu \\ * sonal.patel@desy.de \end{center} \begin{center} \today \end{center} \definecolor{palegray}{gray}{0.95} \begin{center} \colorbox{palegray}{ \begin{tabular}{rr} \begin{minipage}{0.1\textwidth} \includegraphics[width=30mm]{TIFR.png} \end{minipage} & \begin{minipage}{0.85\textwidth} \begin{center} {\it 21st International Symposium on Very High Energy Cosmic Ray Interactions (ISVHECRI 2022)}\\ {\it Online, 23-27 May 2022} \\ \doi{10.21468/SciPostPhysProc.?}\\ \end{center} \end{minipage} \end{tabular} } \end{center} \section*{Abstract} {\bf Located in southern Arizona, VERITAS is amongst the most sensitive detectors for astrophysical very high energy (VHE; E>100 GeV) gamma rays and has been operational since April 2007. We highlight some recent results from VERITAS observations. These include the long-term observations of the gamma-ray binaries HESS J0632+057 and LS I +61$^{\circ}$ 303, the observations of the Galactic Center region, and of the supernova remnant Cas~A. We discuss the results from a decade of multi-wavelength observations of the blazar 1ES 1215+303, the EHT 2017 campaign on the M87 galaxy, the discovery of 3C 264 in VHE, and the observation of three flaring quasars. Brief highlights of the indirect dark matter searches and targets-of-opportunity (ToO) observations are also discussed. 
The ToO observations allow for rapid follow-up of multi-messenger alerts and astrophysical transients.} \vspace{10pt} \noindent\rule{\textwidth}{1pt} \tableofcontents \thispagestyle{fancy} \noindent\rule{\textwidth}{1pt} \vspace{10pt} \section{Introduction} \label{sec:intro} VERITAS is one of the most sensitive ground-based $\gamma$-ray observatories. It is located at the Fred Lawrence Whipple Observatory in southern Arizona ($31^{\circ}40'$N, $110^{\circ}57'$W, 1.3 km a.s.l.). The VERITAS array has four 12-m diameter, 12-m focal length imaging atmospheric Cherenkov telescopes. Each telescope has a Davies-Cotton-design segmented mirror dish of 345 facets and focuses the Cherenkov light from particle showers onto a pixelated camera with 499 PMTs and a total field of view of 3.5$^\circ$. VERITAS began full operation in April 2007. Since then, it has undergone two major upgrades. In the first upgrade, during summer 2009, one of the four telescopes was relocated to a different position \cite{Perkins2009}. In the second upgrade, in summer 2012, new PMTs (with higher quantum efficiency and a shorter time profile) and a new topological trigger system were installed \cite{Ziter2013}. The current configuration of the array can detect an object with 1$\%$ of the Crab Nebula flux in $\sim$25 hours, with a gamma-ray-photon energy resolution of 15-25$\%$. For a 1 TeV photon, the 68$\%$ containment radius is $\le$0.1$^\circ$, with a pointing accuracy of <50". The details of the evolution of the performance of the VERITAS instrument with time are discussed in \cite{Park2015}. In order to account for the changing optical throughput and detector performance over time, signal calibration methods have recently been implemented to produce fine-tuned instrument response functions \cite{Adam2022}. \section{VERITAS Observing program} The VERITAS observing season spans from September to July each year. 
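The sensitivity figure quoted in the Introduction (1\% of the Crab Nebula flux in $\sim$25 h) can be scaled to other source strengths with the usual background-limited rule of thumb, in which significance grows as flux $\times \sqrt{t}$, so the required time scales as the inverse square of the flux. This is a rule-of-thumb sketch anchored to the quoted point, not an official VERITAS sensitivity curve:

```python
def detection_time_hours(flux_crab_units, ref_flux=0.01, ref_time=25.0):
    """Rough observing time for a detection of a steady point source,
    scaling the quoted VERITAS benchmark (1% Crab in ~25 h) with the
    background-limited rule t ~ 1/flux^2.  Illustrative only; real
    sensitivity depends on spectrum, zenith angle and background."""
    return ref_time * (ref_flux / flux_crab_units) ** 2
```

For example, a 2\% Crab source would need roughly a quarter of the benchmark time under this scaling.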
Each year, the array collects $\sim$950 h of good-weather data during dark time and $\sim$250 h of data during bright moon (illumination of 30-65$\%$). The typical annual VERITAS observation plan breakdown is shown in Figure~\ref{fig:observing}. \section{Galactic Science highlights} \paragraph{$\gamma$-ray binaries - HESS J0632+057 and LS I +61$^{\circ}$ 303:} Based on the composition and energy output, gamma-ray binaries can be defined as systems consisting of a compact object orbiting a star (O or Be), with periodic release of large amounts of non-thermal emission at energies >1 MeV \cite{Dubus2013}. The VERITAS archive includes one of the largest data sets for sources of this class, comprising $\sim$260 h and $\sim$174 h of good quality data (after applying weather-based time cuts) for HESS J0632+057 and LS I +61$^{\circ}$ 303, respectively. Located at a distance of 1.1-1.7 kpc, HESS J0632+057 consists of an unknown compact object orbiting a Be star (MWC 148) with a period of 321 $\pm$ 5 days \cite{Bongiorno2011}. The data from three major atmospheric Cherenkov telescopes, H.E.S.S. \cite{Hinton2004}, MAGIC \cite{MAGIC2016}, and VERITAS \cite{Park2015}, were used to study the very high energy (VHE) $\gamma$-ray emission from this source, resulting in a total of 450 h over 15 years, between 2004 and 2019. The VHE $\gamma$-ray fluxes were found to be modulated with the orbital period of 316.7 $\pm$ 4.4(stat) $\pm$ 2.5(sys) days, consistent with the value obtained at X-ray energies \cite{Adams2021}. This large data set includes dense observational coverage for several orbits, revealing short-timescale and orbit-to-orbit variability. The $\gamma$-ray binary LS I +61$^{\circ}$ 303 consists of a rapidly rotating Be star located at 2.65 $\pm$ 0.09 kpc \cite{Lindegren2021} and a compact object orbiting with a period of 26.5 days \cite{Casares2005}. 
The recent detection of radio pulsations from the direction of this source by the Five-hundred-meter Aperture Spherical radio Telescope suggests that the compact object is a rotating neutron star \cite{Weng2022}. The orbital phase light curve of nightly-binned flux points from the analysis of $\sim$174 h of VERITAS data above 300 GeV is shown in Figure~\ref{fig:lsi}. The box shown in red includes the highest state ($\sim$30$\%$ Crab Nebula flux above 300 GeV) of LS I +61$^{\circ}$ 303, which occurred in October 2014 and during which flux variability on nightly timescales was observed \cite{Archambault2016}. Other than this state, a nearly factor-of-two flux difference in the orbital phase bin (0.55-0.65) suggests orbit-to-orbit variability. LS I +61$^{\circ}$ 303 is detected above 5$\sigma$ in most of the bins, except phase bins (0.1-0.2), (0.2-0.3), and (0.9-1.0), where the significance is about 4$\sigma$ \cite{Kieda2022}. \paragraph{Galactic Center Region:} The Galactic Center (GC) region hosts numerous powerful sources and potential sites of particle acceleration, including the supermassive black hole Sagittarius A* (Sgr A*), supernova remnants (SNRs), and pulsar wind nebulae. VERITAS observes the GC at large zenith angles ($\geqslant$ 60$^\circ$), resulting in an effective area about four times greater than that at a zenith angle of 20$^\circ$ for energies above 10 TeV, at the cost of increased systematic uncertainties. This also raises the energy threshold of the GC analysis to about 2 TeV. 125 h of VERITAS data resulted in a detection of Sgr A* at a significance of 38$\sigma$. The differential spectrum is best fitted with a power law with an exponential cutoff at 10.0$^{+4.0}_{-2.0}$ TeV and a spectral index of 2.12$^{+0.22}_{-0.17}$. The analysis of the diffuse GC ridge shows a power law spectrum extending to the highest observed energy of $\sim$40 TeV and a hard spectral index of 2.19 $\pm$ 0.20. 
This supports the evidence for a PeVatron in the GC region. A more detailed discussion of this analysis can be found in \cite{Adams2021b}. \paragraph{Supernova Remnant - Cassiopeia A:} SNRs are thought to be the most favorable sites for the acceleration of Galactic cosmic rays up to PeV energies. Among the SNRs, young SNRs are considered the best candidates to be detected at VHE, since the highest energy cosmic rays, which are believed to be produced early on, will not yet have escaped from the production site. The young ($\sim$350 years old) core-collapse SNR Cassiopeia A is a potential PeVatron candidate. VERITAS has studied this source with a deep exposure of 65 h. A joint spectrum was produced with VERITAS data covering 200~GeV-10~TeV and 10.8 years of $\textit{Fermi}$-Large Area Telescope data covering 0.1-500 GeV. The best fit was obtained with a power law spectral index of 2.17 $\pm$ 0.02(stat) and cutoff energy of 2.3 $\pm$ 0.5(stat) TeV \cite{Abeysekara2020}. Considering a one-zone model, proton acceleration up to at least 6 TeV is required to reproduce the observed $\gamma$-ray spectrum. \section{Extra-galactic Science Highlights} \label{sec:egal} The VERITAS AGN observations are broadly conducted under four major programs, namely: discovery program, multi-wavelength (MWL) campaigns, ToO, and cosmology. The highlights of the first three programs are presented in this section. \paragraph{Blazar - 1ES 1215+303:} The high-synchrotron-peaked BL Lac (HBL) 1ES 1215+303 ($z$=0.13, \cite{Truebenbach2017}) was extensively studied in a MWL context using long-term data from radio to $\gamma$-ray energies. A VERITAS exposure of 175.8 h was used in this study, which includes regular monitoring observations since December 2008. The synchrotron peak frequency was observed to shift from infrared to soft X-ray between a low state and the 2017 flaring state \cite{Valverde2020}. 
\paragraph{Radio Galaxies - 3C 264 and M87:} The misaligned geometry of radio galaxies provides a unique view of the AGN's jet and supermassive black hole. Among TeV-detected radio galaxies, 3C 264 ($z=0.0217$, \cite{Smith2000}) is the most distant, and is only the fourth known radio galaxy at these energies. VERITAS collected $\sim$57 h of good quality data between February 2017 and May 2019, which yielded a detection with a statistical significance of 7.8$\sigma$. The VHE $\gamma$-ray spectrum was well described by a power law with an index of 2.20 $\pm$ 0.27 and a flux of $\sim$0.7$\%$ of the Crab Nebula flux above 315 GeV \cite{Archer2020}. During the 2017 Event Horizon Telescope MWL campaign on M87, VERITAS captured the source in its historically low state, but still dominating over the nearest knot, HST-1 \cite{EHT2021}. The most complete simultaneous MWL spectrum was reported in that study, which included 15 h of quality-selected VERITAS data. The analysis resulted in an overall statistical significance of 3.8$\sigma$. The legacy data set and analysis scripts were made available to the community through the CyVerse repository \cite{dataset}. \paragraph{Quasars - 3C 279, PKS 1222+216, and Ton 599:} Known flat-spectrum radio quasars (FSRQs) are generally at larger distances than BL Lacs; hence, their detection with the current generation of VHE $\gamma$-ray telescopes is difficult, both because of the attenuation of VHE $\gamma$-rays from these distances due to absorption by the extra-galactic background light and because of possible intrinsic spectral cutoffs above GeV energies \cite{Stern2014}. Three FSRQs, 3C 279, PKS 1222+216, and Ton 599, were studied to explore the $\gamma$-ray variability and spectral characteristics using almost 100 h of VERITAS data spanning over 10 years. 
The location of the $\gamma$-ray emission region and the jet Doppler factor were constrained during the VHE-detected flares in 2014 and 2017 for PKS 1222+216 and Ton 599, respectively \cite{Adams2022b}. Also, theoretical constraints on the potential production of PeV-scale neutrinos were placed during these VHE flares. \section{Indirect dark matter search} Dwarf spheroidal galaxies (dSphs) are a favorable target class for indirect dark matter (DM) searches. The analysis of four dSphs (Boötes I, Draco, Segue 1, and Ursa Minor) with VERITAS data taken from 2007 to 2013 was reported in \cite{Giuri2022}. This included a total quality-selected observation time of 476 h to search for a DM signal and was sensitive to potential signals in the $\tau^+\tau^-$ and $b\bar{b}$ annihilation channels. No DM signal was detected. \section{Multi-messenger and astrophysical transient observations} VERITAS devotes a part of its observing time to multi-messenger (MM) and astrophysical transient observations to search for electromagnetic counterparts to high energy neutrinos and gravitational waves. These observations are proposal driven and account for a significant fraction of the total VERITAS observing time. Figure~\ref{fig:mm} shows some MM triggers observed by VERITAS. The IceCube observatory reported a well-reconstructed high energy neutrino event, IceCube-201114A (GCN 28887), on November 14, 2020, with an estimated energy of $\sim$214 TeV. It is spatially coincident with the high-energy-peaked object NVSS J065844+063711 \cite{Menezes2022}. During November 15-19, 2020, VERITAS observed NVSS J065844+063711, and a differential upper limit was reported in \cite{Menezes2022} using 7 h of quality-selected data. \section{Summary and Conclusions} VERITAS has a strong and multi-faceted science program in the $\gamma$-ray band. Selected science results are highlighted, covering results from Galactic, extra-galactic, fundamental physics, and multi-messenger observations. 
VERITAS is operating extremely well and continues to provide high quality VHE $\gamma$-ray data and scientific results to the community. VERITAS has been recommended for the next cycle of NSF operations funding through 2025. \section*{Acknowledgements} This research is supported by grants from the U.S. Department of Energy Office of Science, the U.S. National Science Foundation and the Smithsonian Institution, by NSERC in Canada, and by the Helmholtz Association in Germany. This research used resources provided by the Open Science Grid, which is supported by the National Science Foundation and the U.S. Department of Energy's Office of Science, and resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility operated under Contract No. DE-AC02-05CH11231. We acknowledge the excellent work of the technical support staff at the Fred Lawrence Whipple Observatory and at the collaborating institutions in the construction and operation of the instrument. \bibliographystyle{SciPost_bibstyle} \bibliography{ISVHECRI.bib} \nolinenumbers
Title: Uncertainty in mean $X_{\rm max}$ from diffractive dissociation estimated using measurements of accelerator experiments
Abstract: Mass composition is important for understanding the origin of ultra-high-energy cosmic rays. However, interpretation of mass composition from air shower experiments is challenging, owing to significant uncertainty in hadronic interaction models adopted in air shower simulation. A particular source of uncertainty is diffractive dissociation, as its measurements in accelerator experiments demonstrated significant systematic uncertainty. In this research, we estimate the uncertainty in $\langle X_{\rm max}\rangle$ from the uncertainty of the measurement of diffractive dissociation by the ALICE experiment. The maximum uncertainty size of the entire air shower was estimated to be $^{+4.0}_{-5.6} \mathrm{g/cm^2}$ for air showers induced by $10^{17}$ eV proton, which is not negligible in the uncertainty of $\langle X_{\rm max}\rangle$ predictions.
https://export.arxiv.org/pdf/2208.04645
\begin{center}{\Large \textbf{ Uncertainty in mean $X_{\rm max}$ from diffractive dissociation estimated using measurements of accelerator experiments }}\end{center} \begin{center} Ken Ohashi\textsuperscript{1$\star$}, Hiroaki Menjo\textsuperscript{1}, Takashi Sako\textsuperscript{2}, and Yoshitaka Itow\textsuperscript{1, 3} \end{center} \begin{center} {\bf 1} Institute for Space-Earth Environmental Research, Nagoya University \\ {\bf 2} Institute for Cosmic Ray Research, the University of Tokyo \\ {\bf 3} Kobayashi-Maskawa Institute for the Origin of Particles and the Universe, Nagoya University \\ * ohashi.ken@isee.nagoya-u.ac.jp \end{center} \begin{center} \today \end{center} \definecolor{palegray}{gray}{0.95} \begin{center} \colorbox{palegray}{ \begin{tabular}{rr} \begin{minipage}{0.1\textwidth} \includegraphics[width=30mm]{TIFR.png} \end{minipage} & \begin{minipage}{0.85\textwidth} \begin{center} {\it 21st International Symposium on Very High Energy Cosmic Ray Interactions (ISVHECRI 2022)}\\ {\it Online, 23-27 May 2022} \\ \doi{10.21468/SciPostPhysProc.?}\\ \end{center} \end{minipage} \end{tabular} } \end{center} \section*{Abstract} {\bf Mass composition is important for understanding the origin of ultra-high-energy cosmic rays. However, interpretation of mass composition from air shower experiments is challenging, owing to significant uncertainty in hadronic interaction models adopted in air shower simulation. A particular source of uncertainty is diffractive dissociation, as its measurements in accelerator experiments demonstrated significant systematic uncertainty. In this research, we estimate the uncertainty in $\langle X_{\rm max}\rangle$ from the uncertainty of the measurement of diffractive dissociation by the ALICE experiment. 
The maximum uncertainty size of the entire air shower was estimated to be $^{+4.0}_{-5.6}~\mathrm{g/cm^2}$ for air showers induced by $10^{17}$~eV protons, which is not negligible in the uncertainty of $\langle X_{\rm max}\rangle$ predictions. } \section{Introduction} \label{sec:intro} The mass composition of ultra-high energy cosmic rays is important for understanding the origin of these cosmic rays, as acceleration at the source and interactions during propagation depend on composition: acceleration of such cosmic rays depends on their charge if we assume acceleration by magnetic fields, and interactions with cosmic-microwave-background photons during propagation vary between nuclei. Measurements of these cosmic rays have been performed using observations of air showers induced by cosmic rays, for example, by the Telescope Array experiment~\cite{Abbasi:2021JW} and the Pierre Auger Observatory~\cite{AugerDetector}. The depth of the maximum of air shower development, $X_{\rm max}$, is widely measured as an estimator of mass composition. Mass composition is interpreted by comparing measurements of $X_{\rm max}$ with predictions from simulations. However, simulation predictions vary depending on the adopted hadronic interaction model. A precise understanding of hadronic interactions is therefore crucial for the interpretation of mass composition. Hadronic interaction models have been updated using accelerator data. For example, EPOS-LHC~\cite{EPOS, EPOSLHC} was tuned using measurements of inelastic cross sections, charged-particle distributions, and particle production by experiments at the Large Hadron Collider (LHC). Meanwhile, measurements of diffractive dissociation by experiments at the LHC have significant uncertainty~\cite{ALICE_diff_7tev,CMS_diff_7tev}. 
The effects of diffractive dissociation were discussed in our previous study~\cite{Ohashi2021}, where differences in the cross sections of diffractive dissociation among hadronic interaction models affected $\langle X_{\rm max}\rangle$ by 8.9~$\mathrm{g/cm^2}$ for air showers induced by $10^{19}~\mathrm{eV}$ protons. Uncertainty in diffractive dissociation measurements can therefore affect $\langle X_{\rm max}\rangle$. In this research, we estimate the effects of the measurement uncertainty on $\langle X_{\rm max}\rangle$ for air showers induced by $10^{17}~{\rm eV}$ protons. After categorizing the simulated events using the definitions of the ALICE experiment, we weighted the fractions of each category by the ratio of the experimental data to the predictions. \section{Diffractive dissociation} Diffractive dissociation is one type of hadronic interaction, caused by the exchange of a pomeron and characterized by low momentum transfer. In the collision, a colliding particle is scattered, becomes a diffractively excited state, and subsequently dissociates into particles. The other colliding particle can either remain intact or dissociate. If the other colliding particle remains intact, the collision is called single diffraction (SD). If both colliding particles dissociate, the collision is called double diffraction (DD). From a cosmic-ray point of view, four types of diffractive dissociation exist: single diffraction with projectile cosmic-ray dissociation (projectile SD), single diffraction with target air-nucleus dissociation (target SD), double diffraction (DD), and central diffraction (CD), in which both colliding particles remain intact but particles are produced through the exchange of two or more pomerons. Hereafter, collision types other than these in the hadronic interaction are considered non-diffractive collisions (ND). 
Notably, CD is not considered separately in this study and is included in the non-diffractive collisions, as some models predict extremely small cross sections for CD. \section{Air shower simulation} In this study, air showers were simulated using the air shower simulation package CONEX v6.40~\cite{Bergmann2007}. EPOS-LHC~\cite{EPOS, EPOSLHC} and SIBYLL~2.3c~\cite{SIBYLL21, Riehn2017} were adopted as hadronic interaction models for collisions induced by particles above 80~GeV. UrQMD~\cite{URQMD1, URQMD2} was adopted as the hadronic interaction model for low-energy collisions. Two types of samples were simulated. In sample a), 40000 showers were simulated with each of EPOS-LHC and SIBYLL~2.3c. Additionally, air showers with projectile SD, target SD, and DD at the first interaction were simulated by modifying the simulation codes; hereafter this sample is referred to as sample b), with 1000 showers simulated for each case. These simulated samples were categorized by collision type, diffractive mass, and rapidity gap. The collision type was defined using the type information in each hadronic interaction model. For EPOS-LHC, the collision type information in the model was used. For SIBYLL 2.3c, the type information was provided for each interaction between two partons; if an interaction consists of a single parton-parton interaction classified as diffractive dissociation, the collision was considered diffractive dissociation. The dissociation system was separated using the largest rapidity gap considering all particles for DD and target SD, and using a threshold to separate the dissociation system for projectile SD. The threshold rapidity gap was set at 1.5 in the laboratory system. If only one particle happened to fall in the dissociation system during this identification, the second-largest rapidity gap was adopted to separate the dissociation system. 
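The largest-gap separation used above reduces to a simple operation: sort the particles in (pseudo)rapidity, take consecutive differences, and split at the largest difference. A minimal sketch, with a hypothetical function name and no handling of the single-particle fallback described in the text:

```python
import numpy as np

def largest_gap_split(eta):
    """Split a particle system at the largest rapidity gap.

    Returns (gap_size, split_value): particles with eta below the split
    value form one system, particles above it form the other.
    """
    eta_sorted = np.sort(np.asarray(eta, dtype=float))
    gaps = np.diff(eta_sorted)          # gaps between neighbours in eta
    i = int(np.argmax(gaps))            # index of the largest gap
    split = 0.5 * (eta_sorted[i] + eta_sorted[i + 1])
    return float(gaps[i]), float(split)
```

For DD and target SD the two systems on either side of the split are taken as the dissociation systems; for projectile SD the fixed threshold of 1.5 in the laboratory frame is used instead, as stated above.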
The diffractive mass was subsequently calculated from the momenta of particles in the dissociation system. Gaps in pseudorapidity between charged particles were calculated from the distribution of produced charged particles sorted by pseudorapidity, and the largest gap was considered the rapidity gap $\Delta \eta$. Collision types were added to the outputs for both samples~a) and b). Rapidity gaps and the diffractive mass were only calculated in sample b), to match the definitions of the experimental result. Notably, sample~a) was identical to the samples used in \cite{Ohashi2021}. \section{Analysis method\label{sec:method}} In this work, we focus on the effects of the first interaction of air showers. We categorize simulated air showers by collision type at the first interaction and calculate the mean value of $X_{\rm max}$ for each category. The overall mean, $\langle X^{\rm all}_{\rm max}\rangle$, was calculated as follows: \begin{equation} \langle X^{\rm all}_{\rm max}\rangle = \sum_{i} f^i \langle X^{i}_{\rm max}\rangle, \label{eq:mean_Xmax} \end{equation} where $i$ runs over all categorized samples and $f^i$ is the fraction of each category in the total sample. By changing the fractions $f^i$ in Eq.~\ref{eq:mean_Xmax}, we estimated the effect of each fraction. We modified the fractions based on the LHC experimental result. Using the cross sections of SD and DD from the MC simulation, $\sigma^i_{\rm MC}$, where $i$ runs over SD and DD, and the experimental cross sections, $\sigma^i_{\rm Data}$, the ratio of experimental data to the simulation predictions, $R^i_{\rm Data/MC} = \sigma^i_{\rm Data}/\sigma^i_{\rm MC}$, was calculated for each category. These ratios were applied to modify the fractions. The modified $\langle X_{\rm max}\rangle$ was then calculated using the modified fractions and Eq.~\ref{eq:mean_Xmax}. 
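The reweighting procedure above, evaluating Eq.~\ref{eq:mean_Xmax} with fractions scaled by $R_{\rm Data/MC}$, can be sketched numerically. The fractions, means and bookkeeping below are illustrative: in particular, absorbing the change into the ND fraction so the total stays normalised is one plausible reading of "the inelastic cross sections remain unchanged", not a step spelled out in the text.

```python
def mean_xmax(fractions, xmax_means):
    """Eq. (1): <X_max> as the fraction-weighted mean over categories."""
    return sum(f * x for f, x in zip(fractions, xmax_means))

def reweight_fractions(fractions, ratios):
    """Scale each diffractive fraction by its R_Data/MC and absorb the
    change into the non-diffractive fraction (the last entry) so that
    the fractions still sum to one.  Illustrative bookkeeping only."""
    scaled = [f * r for f, r in zip(fractions[:-1], ratios)]
    return scaled + [1.0 - sum(scaled)]
```

For example, with hypothetical fractions (SD, DD, ND) = (0.10, 0.05, 0.85) and the EPOS-LHC ratios quoted below Eq.~\ref{eq:mean_Xmax} in this paper (1.95 for SD, 0.54 for DD), the modified fractions feed straight back into `mean_xmax`.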
Using the uncertainty of the experimental data, the uncertainty of $R_{\rm Data/MC}$ and finally that of $\langle X_{\rm max}\rangle$ can be calculated. We note that the inelastic cross sections remain unchanged. The effects of differences in the particle production of diffractive dissociation were not considered, although they were shown to have minor effects on $\langle X_{\rm max}\rangle$~\cite{Ohashi2021}. We consider a measurement of single and double diffraction by the ALICE experiment for proton-proton collisions with $\sqrt{s} = 7~{\rm TeV}$~\cite{ALICE_diff_7tev}. The cross sections of single diffraction $\sigma^{\rm SD}$ and double diffraction $\sigma^{\rm DD}$ measured by the ALICE experiment were $14.9^{+3.4}_{-5.9}~{\rm mb}$ and $9.0\pm2.6~{\rm mb}$, respectively. $\sigma^{\rm SD}$ was measured for $M_{X} < 200~{\rm GeV/c^2}$, where $M_{X}$ is the diffractive mass of the dissociation system. $\sigma^{\rm DD}$ was measured for $\Delta \eta > 3$. We note that $\Delta \eta$ is the pseudorapidity gap for charged particles and that non-diffractive collisions were not subtracted in the measurement. $R_{\rm Data/MC}$ was calculated from the experimental result and simulations of EPOS-LHC and SIBYLL 2.3 by CRMC v1.6~\cite{CRMC}. $R_{\rm Data/MC}$ for EPOS-LHC was $1.95^{+0.45}_{-0.78}$ for single diffraction and $0.54^{+0.16}_{-0.16}$ for double diffraction. $R_{\rm Data/MC}$ for SIBYLL 2.3 was $1.85^{+0.43}_{-0.73}$ for single diffraction and $0.38^{+0.11}_{-0.11}$ for double diffraction. Then, to calculate the modified $\langle X_{\rm max}\rangle$ and its uncertainty, these $R_{\rm Data/MC}$ values calculated from proton-proton collisions were applied to the first proton-air nucleus interaction in an air shower under two assumptions. The first was that the $R_{\rm Data/MC}$ calculated at $\sqrt{s} = 7~{\rm TeV}$ can be applied to collisions induced by a $10^{17}$~eV proton, although the center-of-mass energies differ slightly.
The other was that $R_{\rm Data/MC}$ calculated from \emph{proton-proton collisions} can be applied to predictions for \emph{proton-air nucleus collisions}. Under the second assumption, we rely on the proton-nucleus collision modeling in each hadronic interaction model; therefore, differences in the modified $\langle X_{\rm max}\rangle$ between the hadronic interaction models were expected. We note that differences between SIBYLL 2.3 and SIBYLL 2.3c were ignored in this study, as these differences are relevant to particle production in fragmentation and beam remnants, not to diffractive dissociation~\cite{Riehn2017}. \section{Results and discussion} \begin{table}[] \footnotesize \centering \begin{tabular}{cc|ccc|cc} & &\multicolumn{5}{c}{categorized by the ALICE definitions}\\ interaction & collision type & \multicolumn{3}{c}{the number of events} & \multicolumn{2}{c}{$\langle X_{\mathrm{max}} \rangle$ [$\mathrm{g/cm^2}$]} \\ model & in the model & total & diffraction & non-diffraction & diffraction & non-diffraction \\ \hline EPOS-LHC & projectile SD & 1000 & 502 & 498 & 732.33 $\pm$ 0.14 & 722.10 $\pm$ 0.13\\ & target SD & 1000 & 609 & 391 & 735.51 $\pm$ 0.12 & 720.59 $\pm$ 0.18 \\ & DD & 1000 & 647 & 353 & 731.56 $\pm$ 0.10 & 711.56 $\pm$ 0.17\\ & ND & 10000 & 973 & 9027 & 714.91 $\pm$ 0.07 & 684.12 $\pm$ 0.01 \\ \hline SIBYLL~2.3c & projectile SD & 1000 & 643 & 357 & 729.30 $\pm$ 0.11 & 729.94 $\pm$ 0.20\\ & target SD & 1000 & 638 & 362 & 755.72 $\pm$ 0.13 & 749.42 $\pm$ 0.23 \\ & DD & 1000 & 746 & 254 & 725.38 $\pm$ 0.09 & 722.68 $\pm$ 0.25 \\ & ND & 10000 & 2557 & 7443 & 723.41 $\pm$ 0.03 & 693.50 $\pm$ 0.01\\ \end{tabular} \caption{The number of events and $\langle X_{\rm max}\rangle$ of sample b) with categorization using the definitions in the ALICE experiment~\cite{ALICE_diff_7tev}.
1000 or 10000 showers were simulated for each collision type at the first interaction in each model.} \label{tab:simulation_ALICEdef} \end{table} \subsection{Simulation results of fractions and $\langle X_{\rm max}\rangle$ with categorization using the ALICE experiment definitions} Table~\ref{tab:simulation_ALICEdef} shows the fractions and $\langle X_{\rm max}\rangle$ considering the definitions in the ALICE experiment result~\cite{ALICE_diff_7tev} for sample b). 1000 or 10000 air showers were simulated for each collision type at the first interaction based on the definition in each hadronic interaction model. These were subsequently classified into diffraction and non-diffraction based on the experimental definitions~\cite{ALICE_diff_7tev}. We note that the definitions used for the SD result in the ALICE experiment were applied to projectile SD and target SD, and those for DD were applied to DD and ND. Finally, the fractions and $\langle X_{\rm max}\rangle$ of air showers categorized by the definitions in \cite{ALICE_diff_7tev} were calculated using the results of sample a), which are summarized in~\cite{Ohashi2021}, and the results in Table~\ref{tab:simulation_ALICEdef}. The results are shown in Tables~\ref{tab:fraction_eposlhc} and \ref{tab:fraction_sibyll}. \begin{table}[] \centering \begin{tabular}{c|cccc} & Projectile SD & Target SD & DD (including ND) & others\\ \hline fraction [\%] & 2.0 & 2.7 & 13.1 & 82.2 \\ $\langle X_{\rm max}\rangle$ [${\rm g/cm^2}$]& 732.3 & 735.5 & 721.5 & 688.0 \end{tabular} \caption{ Fractions and $\langle X_{\rm max}\rangle$ categorized at the first proton-air interaction of air showers following the definitions of the ALICE experiment. EPOS-LHC was adopted as the hadronic interaction model for high energy.
} \label{tab:fraction_eposlhc} \end{table} \begin{table}[] \centering \begin{tabular}{c|cccc} & Projectile SD & Target SD & DD (including ND) & others\\ \hline fraction [\%] & 4.3 & 1.9 & 23.5 & 70.3 \\ \end{tabular} \caption{Fractions of air showers categorized at the first proton-air interaction following the definitions of the ALICE experiment. SIBYLL 2.3c was adopted as the hadronic interaction model for high energy.} \label{tab:fraction_sibyll} \end{table} \subsection{Results of the modified $\langle X_{\rm max}\rangle$ and its uncertainty} The modified $\langle X_{\rm max}\rangle$ and its uncertainty were calculated using the method described in Section \ref{sec:method} and the fractions and $\langle X_{\rm max}\rangle$ in Tables~\ref{tab:fraction_eposlhc} and \ref{tab:fraction_sibyll}. The results were $694.6^{+1.2}_{-1.8}~\mathrm{g/cm^2}$ using fractions predicted by EPOS-LHC and $696.2^{+1.5}_{-2.2}~\mathrm{g/cm^2}$ using fractions predicted by SIBYLL 2.3c. For both results, $\langle X_{\rm max}\rangle$ simulated with EPOS-LHC was adopted. The difference between the two modified $\langle X_{\rm max}\rangle$ values, $1.6~\mathrm{g/cm^2}$, stemmed from the treatment of proton-nucleus collisions in each hadronic interaction model. The total uncertainty for the first interaction considering the result of the ALICE experiment was $^{+1.7}_{-2.3}~\mathrm{g/cm^2}$, calculated from the $1.6~\mathrm{g/cm^2}$ difference between the two models and the larger of the uncertainties of the two modified results, $^{+1.5}_{-2.2}~\mathrm{g/cm^2}$. This estimation only considers the effects of diffractive dissociation at the first interaction. In our previous study~\cite{Ohashi2021}, we estimated the ratio of the effect on the entire air shower to the effect at the first interaction, which was a maximum of 2.4.
Thus, the maximum size of the uncertainty from the result of the ALICE experiment for the entire air shower is estimated to be $^{+4.0}_{-5.6}~\mathrm{g/cm^2}$ by multiplying $^{+1.7}_{-2.3}~\mathrm{g/cm^2}$ by a factor of 2.4~\cite{Ohashi2021}. This size of uncertainty corresponds to approximately half of the difference in $\langle X_{\rm max}\rangle$ predictions among hadronic interaction models. Although that difference is caused by several sources~\cite{Ostapchenko2016b}, half of the difference is not negligible in the uncertainty of $\langle X_{\rm max}\rangle$ predictions. \section{Conclusion} In this research, the effects of uncertainties in accelerator measurements on $\langle X_{\rm max}\rangle$ were estimated. Concentrating on the first interaction of air showers, the uncertainty of $\langle X_{\rm max}\rangle$ owing to the uncertainty in the diffractive dissociation measurements by the ALICE experiment~\cite{ALICE_diff_7tev} was estimated to be $^{+1.7}_{-2.3}~\mathrm{g/cm^2}$. The maximum size of the uncertainty for the entire air shower was estimated to be $^{+4.0}_{-5.6}~\mathrm{g/cm^2}$, which is not negligible in the uncertainty of $\langle X_{\rm max}\rangle$ predictions. \section*{Acknowledgements} \paragraph{Funding information} K.O. was supported by Grant-in-Aid for JSPS Fellows (JP21J11122). \bibliography{reference.bib} \nolinenumbers
Title: Susceptibility study of TES micro-calorimeters for X-ray spectroscopy under FDM readout
Abstract: We present a characterization of the sensitivity of TES X-ray micro-calorimeters to environmental conditions under frequency-domain multiplexing (FDM) readout. In the FDM scheme, each TES in a readout chain is in series with a LC band-pass filter and AC biased with an independent carrier at MHz range. Using TES arrays, cold readout circuitry and warm electronics fabricated at SRON and SQUIDs produced at VTT Finland, we characterize the sensitivity of the detectors to bias voltage, bath temperature and magnetic field. We compare our results with the requirements for the Athena X-IFU instrument, showing the compliance of the measured sensitivities. We find in particular that FDM is intrinsically insensitive to the magnetic field because of TES design and AC readout.
https://export.arxiv.org/pdf/2208.10875
\title{\textbf{Susceptibility study of TES micro-calorimeters\\for X-ray spectroscopy under FDM readout}} \author[1]{D.~Vaccaro\thanks{d.vaccaro@sron.nl}} \author[1]{H.~Akamatsu} \author[1]{L.~Gottardi} \author[2]{J.~van~der~Kuur} \author[1]{E.~Taralli} \author[1]{M.~de~Wit} \author[1]{M.P.~Bruijn} \author[1]{R.~den~Hartog} \author[3]{M.~Kiviranta} \author[1]{A.J.~van~der~Linden} \author[1]{K.~Nagayoshi} \author[1]{K.Ravensberg} \author[1]{M.L.~Ridder} \author[1]{S.~Visser} \author[1]{B.D.~Jackson} \author[1,4]{J.R.~Gao} \author[1]{R.W.M.~Hoogeveen} \author[1,5]{J.W.A.~den~Herder} \affil[1]{NWO-I/SRON Netherlands Institute for Space Research, Niels Bohrweg 4, 2333CA Leiden, Netherlands} \affil[2]{NWO-I/SRON Netherlands Institute for Space Research, Landleven 12, 9747 AD Groningen, Netherlands} \affil[3]{VTT Technical Research Centre of Finland, Tietotie 3, 02150 Espoo, Finland} \affil[4]{Optics Group, Department of Imaging Physics, Delft University of Technology, Delft, 2628 CJ, Netherlands} \affil[5]{Universiteit van Amsterdam, Science Park 904, 1090GE Amsterdam, The Netherlands} \date{} \twocolumn[ \begin{@twocolumnfalse} \begin{quotation} \textbf{This paper has been accepted for publication in \textit{Journal of Low Temperature Physics}.} \end{quotation} \end{@twocolumnfalse} ] \section{Introduction}\label{intro} Transition-edge sensors\cite{tes} (TES) are the baseline detector technology in future X-ray space-borne telescopes, such as Athena X-IFU\cite{athena}, Lynx\cite{lynx} and HUBS\cite{hubs}. To meet the scientific goals, thousands of TESs will be hosted on the focal plane of these instruments, with high requirements on energy resolution, spatial resolution and count-rate capability. 
Given the stringent limitations of space-borne missions in terms of available cooling power at cryogenic temperatures, electrical power and mass, the readout of TESs is usually performed under a multiplexing scheme, the most common ones being Time-Division Multiplexing\cite{tdm} (TDM) and Frequency-Domain Multiplexing\cite{hiroki2021} (FDM) (a description of the TES architecture, as well as both readout designs, can be found in \textit{Gottardi, Nagayoshi 2021}\cite{tes}). Requirements on the detector spectral performance dictate a stringent energy resolution budget for the various contributors to the total instrumental energy resolution, such as detector and readout noise, sensitivity to environmental conditions and instrumental drifts. In particular, the latter factors affect the total energy resolution via undesired variations of the TES responsivity. The responsivity, or gain, of a TES to a photon of a certain energy depends on the setpoint along the superconducting transition, which is defined by the bias voltage $V$, the bath temperature $T$ and the magnetic field $B$. A small change in these parameters can affect in some measure the TES gain, which implies that photons of identical energy $E$ could generate pulses of different height and/or shape. This effect can in principle degrade the instrumental energy resolution. For this reason, gain sensitivities are important parameters to characterize, since they contribute to defining the instrumental design. For example, the $B$-field sensitivity is related to the magnetic shielding: a lower sensitivity to magnetic fields allows for a magnetic shield of lower mass, a very important factor to consider for a space-borne instrument. SRON has been developing, in the framework of Athena X-IFU, TES micro-calorimeters for X-ray spectroscopy\cite{ken} and a frequency-domain multiplexing (FDM) readout with base-band feedback (BBFB)\cite{bbfb,hiroki2021}.
In the FDM scheme, the readout of a TES array is performed by placing a tuned high-$Q$ LC band-pass filter in series with each detector and providing an ac bias with an independent carrier in the MHz range. The signals of all the detectors in the readout chain are summed at the input coil of a Superconducting QUantum Interference Device (SQUID), which provides a first amplification at cryogenic temperature. Further amplification and digitization are performed at room temperature by a control electronics board, which is also responsible for bias carrier generation and for demodulation of the output comb. In the BBFB scheme, the demodulated TES signals are remodulated using the same carrier frequencies, compensated with a phase delay and fed back to the SQUID feedback coil to null the current at the SQUID input: this allows a more efficient use of the SQUID dynamic range and effectively increases the number of pixels readable in a single readout chain. The FDM scheme is fundamentally different from TDM, where TESs are dc-biased. In this contribution, we present a characterization of the gain sensitivities of a TES array using a cryogenic FDM setup. \section{Experimental setup and measurement method}\label{method} For our experiments we use an $8\times 8$ uniform TES array, with 31 devices connected to the readout circuit. Each TES is an $80\times 13\ \upmu$m$^{2}$ Ti/Au bilayer, coupled to a $240\times 240\ \upmu$m$^{2}$, 2.3~$\upmu$m thick Au absorber (thermal capacitance $C \simeq 0.85$~pJ/K at 90~mK) via two central pillars and with four additional corner stems providing mechanical support. These devices have critical temperature $T_{C} \simeq 84$~mK, normal resistance $R_{N} \simeq 155$~m$\upOmega$ and thermal conductance $G \sim 65$~pW/K at $T_{C}$. For the FDM readout, the TESs are coupled to custom superconducting LC filters \cite{marcel} and to transformers for impedance matching.
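As a toy numerical illustration of the multiplexing principle described above (our own simplified sketch with made-up numbers, not the BBFB electronics): two slowly varying TES signals amplitude-modulate carriers at different MHz frequencies, the carriers are summed as at the SQUID input coil, and one channel is recovered by mixing with its own carrier and low-pass filtering:

```python
import numpy as np

fs = 50e6                                  # sample rate [Hz], illustrative
t = np.arange(0, 2e-3, 1 / fs)             # 2 ms of data
f1, f2 = 2.0e6, 3.0e6                      # carrier frequencies [Hz]

# Slowly varying "TES" signals (stand-ins for pulse envelopes)
s1 = 1.0 + 0.3 * np.exp(-t / 3e-4)
s2 = 1.0 + 0.1 * np.sin(2 * np.pi * 1e3 * t)

# AC bias and summation of the carrier comb (as at the SQUID input coil)
comb = s1 * np.sin(2 * np.pi * f1 * t) + s2 * np.sin(2 * np.pi * f2 * t)

# Demodulate channel 1: mix with its own carrier, then low-pass
# filter by averaging over 10 us blocks (integer carrier periods).
mixed = 2 * comb * np.sin(2 * np.pi * f1 * t)
block = int(fs * 10e-6)                    # 500 samples per block
n = len(mixed) // block * block
recovered = mixed[:n].reshape(-1, block).mean(axis=1)
target = s1[:n].reshape(-1, block).mean(axis=1)
print(np.max(np.abs(recovered - target)))  # small residual: s1 is recovered
```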
A stiff voltage bias is provided through an effective shunt resistance of $\sim 1$~m$\upOmega$. The summed TES signals are pre-amplified at cryogenic temperature via two SQUIDs (Front-End + Amplifier). \iffalse (a 6-series array SQUID as Front End and a 100-series array SQUID as Amplifier). then further amplified by a Low Noise Amplifier and digitized in a custom digital electronics board. \fi These ``cold'' components are hosted on a custom oxygen-free high-conductivity (OFHC) copper holder and enclosed in a niobium shield. Superconducting Helmholtz coils are employed to control the magnetic field applied to the detectors. A $^{55}$Fe source hosted on the Nb magnetic shield is used to illuminate the detectors with 5.9~keV X-rays, typically at a count rate of $\sim1$~count per second per pixel. The setup is housed in a Leiden Cryogenics dilution unit with a cooling power of 400~$\upmu$W at 120~mK. The setup is hung via Kevlar wires from the mixing chamber to damp mechanical oscillations\cite{gotkevlar}. OFHC copper braids connecting the setup to the mixing chamber ensure the thermal anchoring. The setup temperature is controlled via a Ge thermistor anchored to the copper holder and kept stable at 55~mK. To characterize the gain sensitivities, we bias the pixels at a reference setpoint along the superconducting transition, defined by the reference values $V_0, T_0, B_0$ (where $B_0$ is chosen to minimize the residual magnetic field and $V_0$ corresponds to $R\approx 0.1 R_N$, where the best single-pixel spectral performance is observed, likely due to the higher loop gain than at larger $R/R_N$), and at different setpoints obtained by individually changing each parameter by a quantity $\Delta V, \Delta T, \Delta B$, respectively. For each setpoint we acquire $\approx 500$ X-ray events per pixel in multiplexing mode. The X-ray energy of each event is assessed by using the X-ray pulse and noise information with the optimal filtering technique.
To do so, an optimal filter template is generated for each pixel at the reference setpoint. To extract the impact on the estimated X-ray energy, each dataset is analysed using the optimal filter template of the reference setpoint. The energy scale is calibrated using the known K$\upalpha_1$ line of the $^{55}$Fe source spectrum, with energy $E_0 = 5898.75$~eV. To assess the energy for the sensitivity estimation, the acquired photons in the K$\upalpha$ energy range are collected into a histogram and a Gaussian fit is performed to extract the position $E$ of the K$\upalpha_1$ line. In this way, for each setpoint we can measure the shift in energy $\Delta E = E - E_0$ of the K$\upalpha_1$ line caused by the variation of the TES gain. Repeating this for several setpoints, we can fit $\Delta E$ as a function of $\Delta V, \Delta T, \Delta B$, respectively, to deduce a dependency and estimate the gain sensitivity of our TES arrays. \section{Results and discussion} In Table~\ref{tabres} we summarize the results of our gain sensitivity measurements, together with a comparison with the requirements for Athena X-IFU. Since for X-IFU the requirements are defined at an energy of 7~keV, we also linearly scale up our values, which were measured with 5.9~keV photons. From the IV curves we calibrate the reference voltage $V_0$ to bias each pixel at a resistance $R\approx 0.1 R_N$. We also performed measurements at higher bias points, up to $\approx 0.3 R_N$; the sensitivity values measured at such bias points are compatible with those described in the following. To characterize the voltage sensitivity, we change the bias voltage of each pixel in a range $\Delta V = \pm 5\% = \pm 5\cdot10^4$~ppm. For each pixel we plot the shift in energy $\Delta E$ as a function of $\Delta V$ and use a linear fit to extract the dependency. The measured voltage sensitivity curves for each pixel are reported in Fig.~\ref{results}a.
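The per-setpoint analysis just described can be sketched as follows. This is a simplified illustration under our own assumptions, with `scipy.optimize.curve_fit` and `numpy.polyfit` standing in for whatever fitting tools were actually used, and the event energies assumed to be already optimally filtered and calibrated:

```python
import numpy as np
from scipy.optimize import curve_fit

E0 = 5898.75  # eV, Mn K-alpha1 line energy of the 55Fe source

def gaussian(E, A, mu, sigma):
    return A * np.exp(-0.5 * ((E - mu) / sigma) ** 2)

def line_shift(energies_eV, window=30.0, nbins=60):
    """Histogram events around K-alpha1, fit a Gaussian, return E - E0 in eV."""
    sel = np.abs(energies_eV - E0) < window
    counts, edges = np.histogram(energies_eV[sel], bins=nbins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p0 = [counts.max(), centers[np.argmax(counts)], 2.0]
    (A, mu, sigma), _ = curve_fit(gaussian, centers, counts, p0=p0)
    return mu - E0

def gain_sensitivity(setpoint_offsets, shifts_eV):
    """Linear fit of line shift vs. setpoint offset: slope = sensitivity."""
    slope, _ = np.polyfit(setpoint_offsets, shifts_eV, 1)
    return slope
```

For the voltage sensitivity, for example, the offsets would span $\pm 5\cdot10^4$~ppm and the slope would come out in eV/ppm (meV/ppm after rescaling).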
The results of the fit range from $6.4 \pm 0.8$~meV/ppm to $7.7 \pm 1.0$~meV/ppm, with a mean sensitivity of $7.2 \pm 0.3$~meV/ppm. We repeat the same process for the temperature sensitivity, varying the bath temperature in a range $\Delta T = \pm 160\ \upmu$K from a base temperature $T_0 = 55$~mK. The measured temperature sensitivity curves for each pixel are reported in Fig.~\ref{results}b. The results of the fit range from $67 \pm 3$~meV/$\upmu$K to $121 \pm 4$~meV/$\upmu$K, with a mean sensitivity of $94 \pm 14$~meV/$\upmu$K. The $T$-sensitivity values show a larger spread than the $V$-sensitivity values: we interpret this as $\alpha$ and $\beta$ not being exactly the same for pixels biased at different frequencies, as a consequence of the weak-link effect. For this geometry, at $R \approx 0.1R_N$, typical values of $\alpha$ and $\beta$ are 500 and 5, respectively. To characterize the magnetic field susceptibility, we change the external magnetic field over a range $\Delta B = 6\ \upmu$T, after calibrating the starting value $B_0$ as the external magnetic field that on average minimizes the residual magnetic field for all the pixels. We only scan positive $\Delta B$ values, since for our setup the sensitivity curve is symmetric around the zero-residual-field value $B_0$. In principle, this magnetic field dependence comes from the weak-link behaviour of the TES, acting as an SNS Josephson junction (S being the superconducting leads and N the TES bilayer itself)\cite{smithbfield} with a gauge-invariant phase $\varphi \propto \sqrt{PR}/\omega_0$ depending on the device power, resistance and the frequency of the magnetic field, either external or self-induced. Fig.~\ref{results}c shows the measured magnetic field sensitivity curves, along with a fit using a second-order polynomial. The different parabolic trends observed could be interpreted as a consequence of the frequency dependence of the TES weak-link behaviour.
Given the non-linear behaviour, a single value for $\Delta E / \Delta B$ cannot be extracted as was done for the voltage and temperature sensitivities. Therefore, we perform a quadratic fit and calculate the sensitivity as the derivative of the fit function at a given $\Delta B$. The sensitivities we obtain from the fit at $\Delta B = 100$~nT, where the X-IFU requirement is defined, are of the order of 1~meV/nT or less, but with uncertainties of comparable size. To be conservative, we therefore estimate an upper limit by performing a linear interpolation of the data points and directly calculating the differentials $dB$ and $dE$ as a function of the applied magnetic field, taking the sensitivity as the derivative $dE/dB$. As shown in Fig.~\ref{results}d, across the measured range the sensitivity values are less than 10~meV/nT for all the pixels. At $\Delta B = 1\ \upmu$T, $i.e.$ the expected drift during one cool-down cycle on the X-IFU Focal Plane Assembly (FPA), the magnetic field sensitivity is $\lesssim 2$~meV/nT. \begin{table*}[!h] \begin{center} \begin{tabular}{ cccc } \textbf{Sensitivity} & \textbf{X-IFU req. @ 7 keV} & \textbf{Measured @ 5.9 keV} & \textbf{Scaled up to 7 keV} \\ \hline\hline $\Delta E/\Delta V$ & 15 meV/ppm & 7.2 meV/ppm $\pm$ 0.3 meV/ppm & $\approx$ 9 meV/ppm \\ $\Delta E/\Delta T$ & 0.15 eV/$\upmu$K & 0.09 eV/$\upmu$K $\pm$ 0.01 eV/$\upmu$K & $\approx$ 0.1 eV/$\upmu$K \\ $\Delta E/\Delta B$ & 8 eV/nT @ $0.1\ \upmu$T & $\lesssim 2\cdot10^{-3}$~eV/nT @ $1\ \upmu$T & $\lesssim 2\cdot10^{-3}$~eV/nT @ $1\ \upmu$T \\ \hline \end{tabular} \end{center} \caption{Summary of the measured TES gain sensitivities.
The requirements for X-IFU are derived from simulations with the $xifusim$\cite{xifusim} software, using an older TDM-optimized pixel design.}\label{tabres} \end{table*} The voltage and temperature sensitivities are compliant with the X-IFU requirements within a reasonable margin, while the measured values for the magnetic field sensitivity are orders of magnitude lower. Recent tests under dc readout, performed at NASA Goddard with a different pixel design, showed a magnetic field sensitivity at a level of 200~meV/nT\cite{smith2021}. Note that we are here considering the influence of dc magnetic fields and dc gradients on the array, since the main concern is magnetic fields at low frequencies, such as stray fields from the cryo-cooling system on board the satellite (compressor, ADR, etc.). The large difference from the magnetic field sensitivity measured with our TES arrays and FDM system can be understood as due to two concurrent factors. The first is the readout. Under ac readout, the TES current is continuously sweeping between positive and negative values, and hence is intrinsically less sensitive to dc magnetic fields than when read out using a dc bias. The second is the TES design. The geometry of our TES bilayers for FDM readout has evolved significantly over the last years, moving from the classical large square, low-$R_N$ designs (more suitable for dc readout) towards smaller geometries with high aspect ratios\cite{martinhar} (width $W$ much smaller than length $L$), resulting in higher normal resistances $R_N$, a feature more desirable for ac readout. In fact, as shown in Gottardi \emph{et al.}\cite{lgjosephson}, this optimization makes it possible to minimize the weak-link effect, which is detrimental to the TES spectral performance under FDM readout. Overall, the sensitivity thus depends on a combination of the TES design and the readout scheme. In principle, though optimal for ac readout, higher-$R_N$ devices could also be used under dc readout to mitigate the $B$-susceptibility.
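The conservative upper-limit estimate described in the previous section amounts to numerically differentiating the interpolated $\Delta E(\Delta B)$ curve. A minimal sketch with hypothetical, roughly parabolic numbers (not the measured data):

```python
import numpy as np

def b_sensitivity(dB_nT, dE_meV):
    """Upper-limit B-field sensitivity |dE/dB| in meV/nT at each point,
    from finite differences of the measured curve (np.gradient)."""
    return np.abs(np.gradient(np.asarray(dE_meV, float),
                              np.asarray(dB_nT, float)))

# Illustrative parabolic response over the 0-6 uT scan (made-up numbers):
dB = np.linspace(0.0, 6000.0, 13)       # applied field offsets [nT]
dE = 8.0e-4 * dB**2                     # line shifts [meV]
sens = b_sensitivity(dB, dE)
print(sens.max())                       # stays below 10 meV/nT across the scan
```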
To our knowledge, however, the X-ray performance and $B$-sensitivity of such higher-$R_N$ devices under dc readout have not yet been reported. In Fig.~\ref{PHvsB}a we show the measured TES current and the pulse height for 6~keV photons as a function of the external magnetic field. As can be seen, there is no shift between the two curves, which follow the same dependence on $B$, indicating that the impact of the TES self-induced magnetic field is also negligible\cite{smithbfield}. This means that the TES current can be used as a figure of merit for choosing the optimal setpoint to both minimize the $B$-sensitivity and maximize the pulse height, $i.e.$ without sacrificing energy resolution. Fig.~\ref{PHvsB}b shows that the pulse shape is also very well conserved at the different $\Delta B$, at a level better than 1\%. The measured $B$-field sensitivity should thus have a negligible impact on the energy resolution of the detectors for small changes in magnetic field. To verify this, we performed three consecutive X-ray measurements with all 31 pixels active and simultaneously read out, for $\Delta B = 0$~$\upmu$T and $\pm 1$ $\upmu$T. These values were chosen because 1 $\upmu$T is approximately the expected magnetic field gradient on the focal plane for X-IFU. The measured spectra, reported in Fig.~\ref{bres}, show consistent energy resolutions. Such a negligible sensitivity to external dc magnetic fields of TES devices under FDM readout has important implications at the instrumental level: in particular, the design of the FPA of future space-borne instruments employing TES arrays could be greatly simplified if FDM readout is used, $e.g.$ with a reduction of the mass of the magnetic shielding. \section{Summary} We presented a characterization of TES gain sensitivities under FDM readout using 6~keV photons. The measured values are compliant, with large margin, with the requirements for the Athena X-IFU instrument.
In particular, we found that TES geometries optimized for ac readout have a sensitivity to dc magnetic fields orders of magnitude lower than the current dc-bias baseline for X-IFU. For future TES-based space-borne missions, this would allow a simpler, lighter FPA design with much less stringent needs for magnetic shielding. In the near future we plan to perform further measurements using a Modulated X-ray Source to probe energies other than 6~keV, and with larger, kilo-pixel arrays that are more representative of the arrays that would be used on a real instrument, to further validate these results and to verify the magnetic susceptibility of the energy scale calibration. \section*{Acknowledgements} SRON is financially supported by the Nederlandse Organisatie voor Wetenschappelijk Onderzoek. This work is part of the research programme Athena with project number 184.034.002, which is (partially) financed by the Dutch Research Council (NWO). The SRON TES arrays used for the measurements reported in this paper were developed in the framework of the ESA/CTP grant ITT AO/1-7947/14/NL/BW. \section*{Data availability} The corresponding author makes available the data presented in this paper upon reasonable request.
Title: A Submillimeter Survey of Faint Galaxies Behind Ten Strong Lensing Clusters
Abstract: We present deep SCUBA-2 450 micron and 850 micron imaging of ten strong lensing clusters. We provide a >4-sigma SCUBA-2 850 micron catalog of the 404 sources lying within a radius of 4.5' from the cluster centers. We also provide catalogs of the >4.5-sigma ALMA 870 micron detections in the clusters A370, MACSJ1149.5+2223, and MACSJ0717.5+3745 from our targeted ALMA observations, along with catalogs of all other >4.5-sigma ALMA (mostly 1.2 mm) detections in any of our cluster fields from archival ALMA observations. For the ALMA detections, we give spectroscopic or photometric redshifts, where available, from our own Keck observations or from the literature. We confirm the use of the 450 micron to 850 micron flux ratio for estimating redshifts. We use lens models to determine magnifications, most of which are in the 1.5-4 range. After supplementing the ALMA cluster sample with Chandra Deep Field (CDF) ALMA and SMA samples, we find no evidence for evolution in the redshift distribution of submillimeter galaxies down to demagnified 850 micron fluxes of 0.5 mJy. Given this result, we conclude that our observed trend of increasing F160W to 850 micron flux ratio from brighter to fainter demagnified 850 micron flux results from the fainter submillimeter galaxies having less extinction. However, there is wide spread in this relation, including the presence of some optical/NIR dark galaxies down to fluxes below 1 mJy. Finally, with insights from our ALMA analysis, we analyze our SCUBA-2 sample and present 55 850 micron-bright z>4 candidates.
https://export.arxiv.org/pdf/2208.03328
\newcommand{\afluxa}{450~$\mu$m\ } \newcommand{\afluxb}{850~$\mu$m\ } \newcommand{\afluxar}{450~$\mu$m} \newcommand{\afluxbr}{850~$\mu$m} \title{A Submillimeter Survey of Faint Galaxies Behind Ten Strong Lensing Clusters} \author[0000-0002-6319-1575]{L.~L.~Cowie} \affiliation{Institute for Astronomy, University of Hawaii, 2680 Woodlawn Drive, Honolulu, HI 96822, USA} \author[0000-0002-3306-1606]{A.~J.~Barger} \affiliation{Institute for Astronomy, University of Hawaii, 2680 Woodlawn Drive, Honolulu, HI 96822, USA} \affiliation{Department of Astronomy, University of Wisconsin-Madison, 475 N. Charter Street, Madison, WI 53706, USA} \affiliation{Department of Physics and Astronomy, University of Hawaii, 2505 Correa Road, Honolulu, HI 96822, USA} \author{F.~E.~Bauer} \affiliation{Instituto de Astrof\'isica and Centro de Astroingenier\'ia, Facultad de F\'isica, Pontificia Universidad Cat\'olica de Chile, Casilla 306, Santiago 22, Chile} \affiliation{Millennium Institute of Astrophysics (MAS), Nuncio Monse{\~{n}}or S{\'{o}}tero Sanz 100, Providencia, Santiago, Chile} \affiliation{Space Science Institute, 4750 Walnut Street, Suite 205, Boulder, Colorado 80301, USA} \author[0000-0002-3805-0789]{C.-C.~Chen} \affiliation{Academia Sinica Institute of Astronomy and Astrophysics, P.O. Box 23-141, Taipei 10617, Taiwan} \author[0000-0002-1706-7370]{L.~H.~Jones} \affiliation{Department of Astronomy, University of Wisconsin-Madison, 475 N. Charter Street, Madison, WI 53706, USA} \affiliation{Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218} \author{C.~Orquera} \affiliation{Instituto de Astrof\'isica and Centro de Astroingenier\'ia, Facultad de F\'isica, Pontificia Universidad Cat\'olica de Chile, Casilla 306, Santiago 22, Chile} \affiliation{Millennium Institute of Astrophysics (MAS), Nuncio Monse{\~{n}}or S{\'{o}}tero Sanz 100, Providencia, Santiago, Chile} \author[0000-0003-3910-6446]{M.~J. 
Rosenthal} \affiliation{Department of Astronomy, University of Wisconsin-Madison, 475 N. Charter Street, Madison, WI 53706, USA} \author[0000-0003-1282-7454]{A.~J.~Taylor} \affiliation{Department of Astronomy, University of Wisconsin-Madison, 475 N. Charter Street, Madison, WI 53706, USA} \keywords{cosmology: observations --- galaxies: distances and redshifts --- galaxies: evolution --- galaxies: starburst} \section{Introduction} The discovery of the far-infrared (FIR) Extragalactic Background Light (EBL) demonstrated that about half of the universe's starlight at UV/optical wavelengths is absorbed by dust and re-radiated into the FIR (Puget et al.\ 1996; Fixsen et al.\ 1998; Dole et al.\ 2006). At high redshifts, observations with single-dish submillimeter telescopes, such as the 15~m James Clerk Maxwell Telescope (JCMT), can provide direct detections of the most luminous, dusty, star-forming galaxies (Smail, Ivison, \& Blain 1997; Barger et al.\ 1998; Hughes et al.\ 1998; Eales et al.\ 1999). However, at \afluxbr, such surveys become confusion limited at $\sim1.6$~mJy (4$\sigma$; Cowie et al.\ 2018), preventing the detection of fainter submillimeter galaxies (SMGs) with infrared luminosities $\lesssim10^{12}~L_\odot$, or star formation rates (SFRs) $\lesssim200~M_\odot$~yr$^{-1}$ for a Kroupa (2001) initial mass function (IMF). SMGs selected from other ground-based single-dish surveys (e.g., LABOCA, the South Pole Telescope) are brighter and hence more extreme starbursts (see, e.g., Hodge et al.\ 2013), even after taking into account gravitational lensing when present (see, e.g., Spilker et al.\ 2016). SMGs selected from blank field, confusion-limited surveys with SCUBA-2 (Holland et al.\ 2013) on the JCMT are found to be substantially distinct from the extinction-corrected UV-selected population (Barger et al.\ 2014; Cowie et al.\ 2017). However, these SMGs only contain about 20--30\% of the submillimeter EBL. 
Fainter SMGs are more common and contribute the majority of the EBL, but detecting them requires either interferometric observations or single-dish observations of massive lensing cluster fields. Since many of these fainter SMGs may also be selected in UV samples, we need to obtain a census of faint SMGs and determine their redshifts and properties to avoid double-counting and biasing the star formation history. To this end, we have been observing massive lensing cluster fields with SCUBA-2 to study the population of faint SMGs with SFRs comparable to those of the brighter UV-selected population. Gravitational lensing by foreground massive galaxy clusters is an excellent way to detect intrinsically faint SMGs, and it has the additional advantages that lensed sources are magnified at all wavelengths and lensed images benefit from enhanced spatial resolution. Number counts in lensed fields are now well established at both \afluxa and \afluxb (e.g., Chen et al.\ 2013; Hsu et al.\ 2016), allowing us to determine the flux levels above which we see 50\% of the submillimeter light. These flux levels are $\sim3$~mJy at \afluxa and $\sim1$~mJy at \afluxb for the Fixsen et al.\ (1998) EBL measurements, with the primary uncertainty being the submillimeter EBL. Our goal is to generate a uniformly selected sample of many hundreds of galaxies in cluster lensing fields that have been intensively studied at other wavelengths. We can then optimize the extremely valuable interferometric time on the Atacama Large Millimeter/submillimeter Array (ALMA), the Northern Extended Millimeter Array (NOEMA), and the Submillimeter Array (SMA) to measure accurate positions and redshifts for the detected sample. In combination with the high-quality lensing models, the interferometry also allows us to determine the amplifications at the galaxy positions and to measure the de-lensed fluxes.
We note that fainter SMGs can also be directly detected with ALMA, but the field-of-view is so small---even at millimeter wavelengths---as to make developing large samples through ALMA mosaicking inefficient (see, e.g., Zavala et al.\ 2021, who found 13 sources ($>5\sigma$) at 2~mm over 184~arcmin$^2$). Our SCUBA-2 program targets 10 massive lensing cluster fields (see Table~\ref{table1}). We chose these fields to have good lensing models and a wealth of optical, near-infrared (NIR), mid-infrared (MIR), FIR, radio, and X-ray data from the Hubble Space Telescope (HST), the Spitzer Space Telescope, the Herschel Space Observatory, the Karl G. Jansky Very Large Array, the Chandra X-ray Observatory, and ground-based observatories. Where possible, we chose clusters from the HST Frontier Fields (HFF; Lotz et al.\ 2017) program with its extraordinarily deep data and well-defined lensing models from 10 independent teams. Five of the six HFFs are in our sample, while the sixth, Abell S1063, is too far south to be observed with the JCMT. The HFF images have now been enlarged under the Beyond Ultra-deep Frontier Fields And Legacy Observations (BUFFALO) HST Treasury program (Steinhardt et al.\ 2020), providing a better match in area to the submillimeter observations, though with shallower images than the HFF regions. We take the BUFFALO images from the Mikulski Archive for Space Telescopes (MAST). We mark the clusters with HFF/BUFFALO data in Table~\ref{table1}. Three other clusters in our survey come from the Cluster Lensing And Supernova Survey with Hubble (CLASH) HST Treasury program (Postman et al.\ 2012). The Herschel data listed in Table~\ref{table1} come from Oliver et al.\ (2012; Herschel Multi-tiered Extragalactic Survey, or HerMES), Smith et al.\ (2010; Local Cluster Substructure Survey or LoCuSS), Egami et al.\ (2010; Herschel Lensing Survey or HLS), and Eales et al.\ (2010, Herschel ATLAS or HATLAS). 
We will use the Herschel data in a subsequent paper when constructing spectral energy distributions for the sources. Here we only show some of the Herschel/PACS 100~$\mu$m images for illustrative purposes. In Section~2, we provide a description of our SCUBA-2 observations, data reduction, and sample construction, together with the final catalog of 404 \afluxb ($>4\sigma$) sources. In Section~3, we discuss our follow-up with ALMA for the A370, MACSJ0717, and MACSJ1149 clusters, and we catalog the ALMA-detected ($>4.5\sigma$) sources from our work and from the ALMA archive for all the clusters. In Section~4, we interpret the ALMA cluster sample, supplemented with a CDF-S ALMA sample and a CDF-N SMA sample. In Section~5, we interpret our SCUBA-2 cluster sample with insights from our ALMA analysis. Finally, in Section~6, we summarize our results. \section{The SCUBA-2 Survey} For our SCUBA-2 observations, we use two scan patterns: CV DAISY, which has a field size of $5\farcm5$ radius, and PONG-900, which has a field size of $10\farcm5$ radius. Detailed information about the SCUBA-2 scan patterns can be found in Holland et al.\ (2013). In our reductions, we use all available \afluxa and \afluxb data from SCUBA-2 in the CADC archive, together with more recent observations from our own programs. In Table~\ref{table1} and Figure~\ref{450_rad}, we summarize the current status of the SCUBA-2 data. Chen et al.\ (2013) and Cowie et al.\ (2017) provide a detailed description of the data reduction, which uses the Dynamic Iterative Map Maker (DIMM) in the {\sc SMURF} package from the STARLINK software developed by the Joint Astronomy Centre (Jenness et al.\ 2011; Chapin et al.\ 2013). We expect nearly all of the galaxies to appear as unresolved sources. Thus, we apply a matched filter to our maps, which provides a maximum-likelihood estimate of the source strength for unresolved sources (e.g., Serjeant et al.\ 2003).
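In the white-noise limit, matched filtering amounts to correlating the map with the PSF and normalizing, so that each pixel holds the maximum-likelihood point-source amplitude centered there. A minimal numpy/scipy sketch, assuming a simple unit-peak Gaussian PSF and uniform noise (the actual {\sc SMURF} treatment is more involved):

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(size, fwhm):
    """Unit-peak circular Gaussian PSF on a (size x size) grid."""
    sigma = fwhm / 2.3548  # FWHM -> sigma
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    return np.exp(-(x**2 + y**2) / (2 * sigma**2))

def matched_filter(image, psf):
    """ML point-source amplitude at every pixel for white noise:
    A(x) = sum(d * p) / sum(p^2), i.e., a normalized correlation."""
    kernel = psf[::-1, ::-1]  # correlation via convolution (flip kernel)
    return fftconvolve(image, kernel, mode="same") / np.sum(psf**2)

# toy demonstration: a 3 mJy point source in 0.1 mJy white noise
rng = np.random.default_rng(1)
psf = gaussian_psf(31, fwhm=7.5)  # ~JCMT 450 um beam, in pixels
sky = 0.1 * rng.standard_normal((101, 101))
sky[50 - 15:50 + 16, 50 - 15:50 + 16] += 3.0 * psf
filtered = matched_filter(sky, psf)
# the filtered map peaks near the input amplitude at the source position
```

The normalization by $\sum p^2$ is what makes the peak of the filtered map read directly in flux units for an unresolved source.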
Each matched-filter image has a PSF with a Mexican hat shape and a FWHM corresponding to the telescope resolution. The FWHM values for the JCMT are $\sim7\farcs5$ at \afluxa and $\sim14''$ at \afluxbr. \begin{deluxetable*}{ccccccc} \tablecaption{Massive Cluster Lensing Fields Observed with SCUBA-2 \label{table1}} \tablehead{ Field & R.A. & Decl. & Central RMS & HFF/ & Herschel & 870~$\mu$m \\ & & & [450, 850\,$\mu$m] & BUFFALO & & Interferometric \\ & & & (mJy) & & & \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) } \startdata A370 & 02 39 53.1 & -01 34 35.0 & [2.31, 0.29] & y & HerMES & ALMA \\ A1689 & 13 11 29.0 & -01 20 17.0 & [2.27, 0.32] & n & LoCuSS &\\ A2390 & 21 53 36.8 & \,\,17 41 44.2 & [2.38, 0.28] & n & LoCuSS &\\ A2744 & 00 14 21.2 & -30 23 50.1 & [3.03, 0.30] & y & HLS &\\ MACSJ0416.1-2403 & 04 16 08.9 & -24 04 28.7 & [2.16, 0.31] & y & HLS &\\ MACSJ0717.5+3745 & 07 17 34.0 & \,\,37 44 49.0 & [1.55, 0.22] & y & HLS & ALMA \\ MACSJ1149.5+2223 & 11 49 36.3 & \,\,22 23 58.1 & [1.34, 0.23] & y & HLS & ALMA\\ MACSJ1423.8+2404 & 14 23 48.3 & \,\,24 04 47.0 & [1.61, 0.23] & n & HLS &\\ MACSJ2129.4-0741 & 21 29 26.2 & -07 41 26.0 & [3.21, 0.40] & n & HLS &\\ RXJ1347 & 13 47 31.5 & -11 44 19.0 & [1.90, 0.32] & n & HATLAS & \\ \enddata \tablecomments{ The central rms noise quoted in Column~(4) is white noise and does not include confusion noise. We list ALMA in Column~(7) for the fields where we have obtained near-complete ALMA 870~$\mu$m targeted follow-up of the SCUBA-2 sources. } \end{deluxetable*} We next generate the \afluxb source catalog by identifying the peak signal-to-noise (S/N) pixel, subtracting this peak pixel and its surrounding areas using the PSF scaled to the source flux and centered on the value and position of that pixel, and then searching for the next S/N peak. We iterate this process until we reach a S/N of 3.5.
The reason for this iterative process is to remove contamination by brighter sources before we identify fainter sources and measure their fluxes. We then limit the catalog to the sources with a S/N above 4, where we include a confusion noise of 0.33~mJy (Cowie et al.\ 2017) added in quadrature. This choice of S/N provides a robust sample. For example, after analyzing the negatives of the images in exactly the same way as the actual images, we find a 4\% false positive rate, which is consistent with historical estimates of $\le5$\% (e.g., see Figure~7 of Casey et al.\ 2013 for the field environment and Section~5.1 of Chen et al.\ 2013a for the cluster environment). Within a radius of $4\farcm5$ from the cluster centers, we have an \afluxb catalog of 404 sources. This radius corresponds to the position where the \afluxa noise is roughly twice the central noise (see Figure~\ref{450_rad}). At larger radii, the noise rises rapidly. We measure the \afluxa fluxes (whether positive or negative) and statistical uncertainties by searching for the brightest pixel in a $4''$ radius around each \afluxb source position in the \afluxa map. We chose this search radius based on the positional uncertainties in the \afluxb sample (see, e.g., Cowie et al.\ 2017). We detect 341 of the 850~$\mu$m sources above the $2\sigma$ level at \afluxar, and 261 above the $3\sigma$ level. Because we take the brightest pixel within the search radius, this procedure biases the measured \afluxa fluxes slightly upward. Based on randomized samples, we estimate a false positive rate of 14\% at $2\sigma$ and 6\% at $3\sigma$. In Figure~\ref{macs1149_sample}, we show representative SCUBA-2 images (of the cluster MACSJ1149) based on just under 60~hours of exposure in weather band~1 conditions ($\tau_{225~{\rm GHz}}<0.05$). We indicate where strongly magnified sources are expected to be by overplotting green contours on the \afluxa image that show the Zitrin (2021) lens model for $z=2$ at magnifications of 1.4, 2, and 4.
In general, higher redshifts will have similar but somewhat more compact contours. We show the rest of the \afluxb and \afluxa SCUBA-2 images for the cluster fields in the Appendix. In Table~\ref{tab2}, we summarize our \afluxb source catalog. We do not apply any corrections from peak fluxes to total fluxes or any de-boosting correction. For the \afluxb noise, we use the measured white noise and a confusion noise of 0.33~mJy (Cowie et al.\ 2017) added in quadrature. We retain the brightest cluster galaxy (BCG) sources in Table~\ref{tab2} [labeled as ``BCG'' in Column~(10)], even though they are associated with cluster cooling flows and are not produced by star formation (e.g., Edge et al.\ 2010). In the next section, we describe the ALMA observations of the various cluster fields. Table~\ref{tab2} provides the target list for this follow-up. \section{ALMA Imaging} Our SCUBA-2 sample provides a large number of sources for interferometric follow-up. The primary goals of the interferometric work are to obtain accurate positions and to determine the optical/NIR counterparts (if any) of the SMGs, from which we can then derive photometric redshifts (hereafter, photzs). The follow-up interferometric continuum imaging is most efficiently carried out with ALMA. In Table~\ref{table1}, we mark the fields where we have done targeted ALMA observations. This is a nearly complete 870~$\mu$m interferometric follow-up of the SCUBA-2 sources in the central areas of these fields. With ALMA's subarcsecond resolution, we can obtain accurate positions for the SCUBA-2 sources and detect any multiple counterparts that are blended into a single source at the single-dish resolution. Based on our follow-up observations of SCUBA-2 sources with SMA and ALMA (e.g., Chen et al.\ 2013a; Cowie et al.\ 2017, 2018), we expect $\sim10$\% of our sample to be blended multiples, but there are still few studies of the SCUBA-2 multiplicity fraction for our flux range.
In order to separate multiples, we aim for an rms sensitivity of $\sim0.2$~mJy at \afluxbr, which allows for the detection of the brighter component of any $\sim2:1$ ratio blends that combine to form a single 2~mJy source, which is the typical flux of our targets. (Note that this is the observed flux; the corresponding de-lensed fluxes probe the desired 1~mJy range.) \subsection{A370, MACSJ1149, and MACSJ0717} In our program ``An ALMA Survey of Lensed SMGs in the Hubble Frontier Fields'' (ALMA programs \#2017.1.00341.S and \#2018.1.00003.S; PI: F.~Bauer), we observed fields in A370, MACSJ1149, and MACSJ0717. For each cluster, we observed all objects with SCUBA-2 fluxes above 1.6~mJy lying within a $2'$ radius from the cluster center, together with a small number of fainter objects. The observations were made in band~7 (870~$\mu$m) using the C43-3 array configuration. We centered the observations on the targeted source positions and used a spectral set-up configured with four 1.875~GHz spectral windows (using time division mode) placed around a central frequency of 343.5~GHz in order to match the SCUBA-2 \afluxb observations. This returned a representative spectral resolution of $\sim28$~km~s$^{-1}$. Nominal natural-weighted beams of $\approx0\farcs9\times0\farcs5$, $0\farcs6\times0\farcs5$, and $0\farcs9\times0\farcs4$ were achieved for our observations of A370, MACSJ1149, and MACSJ0717, respectively. We made images of the targets using the task {\sc clean}. We produced dirty images using natural weighting and a mild {\em uv}-taper to achieve a synthesized beam of $1\farcs0$ for the major axis; the minor axis was still $\sim0\farcs6$--$0\farcs7$. Based on the size estimates from Simpson et al.\ (2015), this offers the best trade-off between retaining a relatively low rms and increasing the sensitivity to extended sources in our data; tapering the data to larger beams means weighting toward shorter baselines, and hence fewer antennas and lower sensitivity.
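The blend-sensitivity requirement quoted above follows from simple arithmetic: splitting a typical 2~mJy target in a 2:1 ratio leaves a 1.3~mJy brighter component, which a 0.2~mJy rms detects comfortably above the $4.5\sigma$ selection threshold. A minimal check:

```python
# arithmetic check of the blend-sensitivity requirement in the text
rms = 0.2      # target ALMA 870 um rms (mJy)
total = 2.0    # typical observed SCUBA-2 flux of a target (mJy)

bright = total * 2 / 3   # brighter component of a 2:1 blend (mJy)
faint = total * 1 / 3    # fainter component (mJy)

snr_bright = bright / rms  # well above a 4.5 sigma cut
snr_faint = faint / rms    # the fainter component can fall below the cut
```

Note that at this depth the fainter component of the blend is not guaranteed a detection; the requirement is framed around recovering the brighter one.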
We performed an initial source search on the dirty images prior to primary beam correction, with the dual purpose of assessing the rms sensitivity and finding all secure detections. We produced cleaned images by placing $2''\times2''$ clean boxes around all secure sources detected with S/N$\geq5$ in the dirty images. We stopped the final cleaning process after 1000 iterations, such that most of the emission associated with the sources was recovered. We note that this choice does not strongly affect the resulting fluxes or rms; for example, opting for only 100 iterations and/or setting the clean threshold to $1\sigma$ results in drops of 1--4\% in peak flux and 5--8\% in rms. We searched each of the $1\farcs0$ cleaned images for significant sources. We restricted our search to the area contained within an $8\farcs75$ radius, which corresponds to the half-power radius of the ALMA primary beam. This is also well matched to the SCUBA-2 FWHM and should contain all sources contributing to the corresponding SCUBA-2 flux. We selected all sources with a peak flux S/N $>4.5$ (see below). We determined the peak flux noise using the dispersion in 100 independent beam positions surrounding the source. We measured a median central rms noise of 0.24~mJy for the sources in A370, 0.21~mJy in MACSJ1149, and 0.4~mJy in MACSJ0717. The total searched area corresponds to 15,000 independent beams. For a Gaussian distribution in the noise, we expect 0.5 false sources above a S/N cut of 4, and 0.05 false sources above a S/N cut of 4.5. We restrict our final sample to sources with S/N $>4.5$, which we expect to be highly robust. We have tested this selection by searching for sources in the negative of the images. At S/N = 4.5, we find no negative image detections, which confirms the robustness of the detected sample. We find that the ALMA peak fluxes, even in the $1\farcs0$ tapered images, slightly underestimate the total fluxes, because the sources are resolved.
However, $2''$ aperture fluxes well approximate the total ALMA fluxes. We list these aperture fluxes in Tables~\ref{a370_band7} and \ref{macsj1149_band7} for A370 and MACSJ1149, respectively. For the blended sources in A370 (numbers 1 and 2), we give the peak flux times 1.3 to approximate the total ALMA flux. We note that the $2''$ aperture fluxes closely match the SCUBA-2 fluxes, with a median ratio of SCUBA-2/ALMA of 1.12. This shows that ALMA is recovering most of the SCUBA-2 flux. We now describe the ALMA observations for each cluster individually: \vskip 0.5cm {\bf A370:} In band~7 (870~$\mu$m), we observed 15 ALMA fields within a $2'$ radius from the cluster center. These covered all 13 SCUBA-2 sources with measured \afluxb fluxes in excess of 1.6~mJy in the region and two additional fields. We show the ALMA images in the upper left panel of Figure~\ref{alma_images}. We detect 10 sources ($>4.5\sigma$) in the ALMA images. We summarize the properties of these sources in Table~\ref{a370_band7}, and we mark their positions with white circles in all panels of Figure~\ref{alma_images}. This includes one double source in the ALMA observations (a well-known pair at $z=2.8$ studied by Ivison et al.\ 1998), which blends into a single SCUBA-2 source. Eight of the ALMA detections are single sources (one of which is at $z=1.056$ and was studied by Barger et al.\ 1999). In total, 9 of the 13 SCUBA-2 sources are detected with ALMA. One of the four SCUBA-2 sources that was not detected by ALMA corresponds to the giant arc at $z=0.724$ (Soucail et al.\ 1988). This source is extended even in the SCUBA-2 image (Figure~\ref{a370_arc}) and is seen as an elongated source in the 100~$\mu$m Herschel/PACS image (Rawle et al.\ 2016). Although the source is over-resolved and undetected in the ALMA image, the SCUBA-2 identification is clear. The three undetected sources are the faintest in the SCUBA-2 sample with \afluxb fluxes between 1.6~mJy and 1.8~mJy. 
These sources may be missed if they are extended or multiple, but they are also the most likely to be spurious in the \afluxb sample. We also searched the ALMA archive for additional data. Mosaics of the field in band~6 (1.2~mm) have been obtained by Gonz\'alez-L\'opez et al.\ (2017) and by ALMA program \#2018.1.00035.L (PI:~K.~Kohno). We show these in the upper center and upper right panels of Figure~\ref{alma_images}, respectively, using images taken from the Japanese Virtual Observatory (JVO) archive. We summarize the three band~6 detected sources (two from Gonz\'alez-L\'opez et al.\ and one from ALMA program \#2018.1.00035.L) in Table~\ref{a370_band6}, but they all overlap with our band~7 sample. The giant arc is not detected in either of the mosaicked images. In the lower panels of Figure~\ref{alma_images}, we show the SCUBA-2 \afluxar, HerMES 100~$\mu$m, and BUFFALO F160W images, respectively. In Figure~\ref{contour_images}, we show BUFFALO/HFF three-color images with the ALMA emission overlaid. \vskip 0.5cm {\bf MACS\,J1149.5+2223:} In band~7 (870~$\mu$m), we observed 22 ALMA fields within a $2'$ radius from the cluster center. These included 19 of the 20 SCUBA-2 sources with \afluxb fluxes above 1.6~mJy, together with 3 fainter sources. One SCUBA-2 source with a flux of 1.7~mJy, which was marginally below our S/N selection threshold at the time of setting up the observations, was omitted. We show these targeted ALMA observations in the upper left panel of Figure~\ref{alma_images2}. We detect 12 sources ($>4.5\sigma$) in the ALMA images. We summarize the properties of these sources in Table~\ref{macsj1149_band7}, and we mark their positions with white circles in all panels of Figure~\ref{alma_images2}. This includes one double source, where the separation of the two ALMA sources is $6\farcs7$. This is small enough to produce a blended SCUBA-2 source. 
ALMA mosaics of the field in band~6 (1.2~mm) were obtained by Gonz\'alez-L\'opez et al.\ (2017) and by ALMA program \#2018.1.00035.L (PI:~K.~Kohno). We show these images, which we took from the JVO archive, in the upper center and upper right panels of Figure~\ref{alma_images2}, respectively. There is one unique $>4.5\sigma$ detected source in each image, and they both overlap with our band~7 sample. Targeted band~6 observations based on an AzTEC image were also made as part of ALMA program \#2016.1.00293.S (PI:~A.~Pope), and we summarize these eight detections in Table~\ref{macsj1149_band6}. Only two lie within our $2'$ selection radius, both of which we detected in band~7. In the lower panels of Figure~\ref{alma_images2}, we show the SCUBA-2 \afluxar, HLS 100~$\mu$m, and BUFFALO F160W images, respectively. \vskip 0.5cm {\bf MACS\,J0717.5+3745:} The SCUBA-2 data show that the central region of this cluster contains a surprisingly low number of sources (three) with \afluxb fluxes greater than 2~mJy within a $2'$ radius from the cluster center. Two of these sources are a close pair and correspond to lensed source~5 from Zitrin et al.\ (2009). Only half of the ALMA exposure time for our band~7 (870~$\mu$m) observations in this cluster was completed, and hence our sensitivity is considerably poorer than for the other two clusters, with a typical central rms of 0.4~mJy. Only the southern source of the close pair (corresponding to component 5.2 of the lensed source) is detected when our data are combined with deeper archival band~7 data from ALMA program \#2017.1.00091.S (PI:~A.~Pope). Pope et al.\ (2017) also detected the source at 1.1~mm with the AzTEC camera on the Large Millimeter Telescope. We summarize the source's properties in Table~\ref{macsj0717_band7}, and we show the system in more detail in Figure~\ref{alma_lens1}. This field was only partially observed in the ALMA band~6 program of Gonz\'alez-L\'opez et al.\ (2017).
These observations are less sensitive than the other mosaics on the HFFs and yielded no detections. Targeted band~6 observations based on an AzTEC image were made as part of ALMA program \#2016.1.00293 (PI:~A.~Pope), and we summarize these four bright detections, which lie outside our $2'$ selection radius, in Table~\ref{macsj0717_band6}. In the upper left panel of Figure~\ref{alma_images3}, we show both the band~6 (square fields) and band~7 (circular fields) targeted ALMA observations. In the upper right panel, we show the Gonz\'alez-L\'opez et al.\ (2017) band~6 mosaic. In the lower panels, we show the SCUBA-2 \afluxar, HLS 100~$\mu$m, and BUFFALO F160W images, respectively. In all panels, we mark with white circles the positions of either a band~6 or a band~7 detection from the targeted observations. \subsection{Remaining Clusters} We searched the ALMA archives for detected sources lying within a radius of $5'$ from the cluster centers. These observations are primarily at millimeter wavelengths, but having accurate positions for SMGs is useful for our analysis in Section~\ref{interpretALMA}. We summarize the ALMA positions of the detected sources in Table~\ref{archivetable}. In some cases, the ALMA source lies below the SCUBA-2 detection threshold, but we give the measured SCUBA-2 \afluxb and \afluxa fluxes at the ALMA position for all the ALMA sources. For the SCUBA-2 errors, we list the white noise, since we are pre-selecting from ALMA. \vskip 0.5cm Breakdown by cluster: \vskip 0.5cm \begin{itemize} \item[$\bullet$] A2390 and MACSJ1423: Only the BCGs have ALMA archival observations. \item[$\bullet$] A1689: There are 4 detected sources in the ALMA archive. One is the complex, heavily studied $z=7.13$ source A1689-zd1 (Watson et al.\ 2015; Wong et al.\ 2022). Two others appear to be cluster members based on the photzs. Only the final source is bright enough to be detected in the SCUBA-2 sample. 
\item[$\bullet$] A2744: There are 9 detected sources in the ALMA archive, seven of which come from Gonz\'alez-L\'opez et al.\ (2017) and the remaining two from ALMA program \#2017.1.01219.S (PI:~F.~Bauer). Seven of the nine sources are detected in the SCUBA-2 images. \item[$\bullet$] MACSJ0416: There are 5 detected sources in the ALMA archive, four of which come from Gonz\'alez-L\'opez et al.\ (2017); two of these four have SCUBA-2 detections. One source is nearly coincident with the strong lens component 12.3 in Hoag et al.\ (2016), but we cannot obtain a consistent solution for the submillimeter fluxes at all three positions. Thus, we do not believe this SMG is associated with the lensed system. \hskip 0.35cm The fifth source is MACS0416\_Y1, which lies at a redshift of $z=8.311$ based on ALMA fine structure line measurements (Tamura et al.\ 2019; Bakx et al.\ 2020). Consistent with the ALMA-measured 870~$\mu$m flux of 0.14~mJy (Bakx et al.\ 2020), this source is not detected in the SCUBA-2 imaging. \item[$\bullet$] RXJ1347: Apart from the BCG, the only other ALMA detections from ALMA program \#2018.1.00035.L (PI:~K.~Kohno) are two components of a lensed system (5.1, 5.2, 5.3 of Zitrin et al.\ 2015). We give their positions in Table~\ref{archivetable}. Zitrin et al.\ measured a photz of 1.28 for the lensed system, which we give in the table, but note that Brada\v{c} et al.\ (2008) placed it at a redshift of $z=4$ based on their gravitational lens modeling. We show the system in more detail in Figure~\ref{alma_lens2}. \end{itemize} \section{Interpreting the ALMA sample} \label{interpretALMA} In this section, we analyze the redshift distributions, demagnified flux distributions, and flux ratios of the ALMA cluster sample (we exclude the BCGs) with either speczs or photzs.
This includes nearly all our band~7 A370, MACSJ1149, and MACSJ0717 sources (Tables~\ref{a370_band7}, \ref{macsj1149_band7}, and \ref{macsj0717_band7}) and nearly all the (mostly band~6) remaining cluster sources in our Table~\ref{archivetable}. \subsection{Redshifts} Measuring the redshifts of SMGs is key to determining their physical sizes, luminosities, and stellar masses, along with accurate magnifications for the lensed sources. Where possible, we use the optical/NIR spectroscopic redshifts (hereafter, speczs) that we either obtained with the DEIMOS, LRIS, and MOSFIRE instruments on the Keck telescopes or pulled from the literature. However, a number of the SMGs are faint at optical/NIR wavelengths and hence do not have speczs. For these objects, we use photzs. We give all the redshifts and their sources in the tables. We plot redshift versus observed HST F160W magnitude for the sample in Figure~\ref{m160_plot_z}. We also include unlensed sources from the CDF-N and CDF-S fields (Cowie et al.\ 2017, 2018) for comparison. We note that there is a strong correlation of the redshift with the observed F160W magnitude, with fainter sources being systematically at higher redshift. This means, in turn, that the spectroscopically identified sources are strongly biased to lower redshifts, and that it may be hard to obtain even photzs for the highest redshift sources. Different lensing cluster fields appear to have different redshift distributions. In Figure~\ref{histogram}, we show the redshift distributions for the five HFFs contained in the sample. Some fields, such as MACSJ1149, are dominated by low-redshift ($z<2$) sources, while other fields, such as A2744, have a significant number of high-redshift ($z>4$) sources. This variance, which is a consequence of the small field sizes, emphasizes the need for multiple fields to obtain properly averaged redshift distributions.
The \afluxa to \afluxb flux ratio may be used to make a rough estimate of the redshift, with lower ratios corresponding to higher redshift sources (e.g., Barger et al.\ 2022). We show the \afluxa to \afluxb flux ratio versus redshift in Figure~\ref{plot_f4f8_z} compared with the power law fit from Barger et al.\ for the CDF-N and CDF-S. High redshifts ($z>4$) generally correspond to sources where the flux ratio is less than 2. \subsection{Magnifications} In order to compute the magnifications and intrinsic flux densities of our faint SMGs, lens models of the clusters are required. Lens models are available for the HFFs from 10 teams. We use the online tool provided by Dan Coe (\url{https://archive.stsci.edu/prepds/frontier/lensmodels/webtool/magnif.html}) to obtain median magnifications and standard deviations for the ALMA sources with redshifts in A370, MACSJ1149, and MACSJ0717 (Tables~\ref{a370_band7}--\ref{macsj0717_band7}) to illustrate the range of magnifications. Most of the sources in the cluster samples have modest magnifications in the 1.5--4 range and errors of $\sim10$--20\%. However, for the very small number of high amplification sources lying close to the critical lines at $z =$1--6, the errors can be larger. Meneghetti et al.\ (2017) quote an uncertainty of 10\% at magnifications of 3, with a degradation to 30\% at magnifications of 10. Meanwhile, based on A2744, Priewe et al.\ (2017) quote a larger uncertainty of 30\% at magnifications of 2, with a degradation to 70\% at rare high magnifications of 40. For uniformity, we choose to use the Zitrin et al.\ (2009, 2013, 2014, 2015) and Zitrin (2021) models for the HFFs and the CLASH clusters MACSJ1423, MACS2129, and RXJ1347 (though note that we only have the one BCG in MACSJ1423, so we do not consider that cluster further). Thus, due to the lack of Zitrin models, we do not consider A2390 (for which we only have the one BCG, anyway) or A1689 further in our ALMA analysis. 
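Given a model magnification $\mu$, the de-lensed flux is simply $f_{\rm obs}/\mu$, and its uncertainty follows from first-order propagation, with the fractional flux and magnification errors added in quadrature. A minimal sketch with illustrative numbers (the 10\% magnification uncertainty reflects the typical model errors discussed above; the specific $\mu = 2.4$ is hypothetical):

```python
import math

def delensed_flux(f_obs, sigma_f, mu, sigma_mu):
    """De-lensed flux f_obs / mu with first-order error propagation:
    fractional uncertainties add in quadrature."""
    f = f_obs / mu
    frac = math.hypot(sigma_f / f_obs, sigma_mu / mu)
    return f, f * frac

# e.g., a 3.6 mJy source (0.3 mJy noise) magnified by mu = 2.4 +/- 10%
f, df = delensed_flux(3.6, 0.3, 2.4, 0.24)
# f = 1.5 mJy with a ~13% combined uncertainty
```

For the rare highly magnified sources near the critical lines, the magnification error dominates this budget and the linear propagation itself becomes approximate.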
We compute the source magnifications from the appropriate Zitrin models using the median in a $1''$ box surrounding the source positions; however, we note that these values are very similar to the values computed at the central positions of the sources. For the few sources that lie outside the Zitrin fields in the HFFs (no sources lie outside the Zitrin fields in the CLASH clusters), we adopt the median magnifications from the other models in the HFFs. In Figure~\ref{newhist}, we show the effects of demagnification. In the lower histogram, we show the distribution of observed \afluxb flux for the ALMA sources with SCUBA-2 \afluxb flux $>1.6$~mJy, while in the upper histogram, we show the distribution of demagnified \afluxb flux. The median observed flux is 3.6~mJy, and the median demagnified flux is 1.5~mJy. Only three sources have magnifications $>4$. The highest magnification of 88 (in MACSJ0416; coordinate R.A. $=4^h 16^m 10.80^s$, decl. $=-24^\circ 4' 47.6''$) produces the one very faint source in the demagnified histogram. \subsection{Optical/NIR Counterparts to the Faint SMGs} The detailed matching of individual faint SMGs with their UV/optical counterparts is still poorly understood and will only be resolved with large, well-studied samples, such as those presented here. Studies of low-redshift starburst galaxies (e.g., Chary \& Elbaz 2001; Le Floc'h et al.\ 2005; Reddy et al.\ 2010) have shown that fainter sources are generally less dusty. However, some recent work has suggested that fainter SMGs are, on average, at lower redshifts (e.g., Mobasher et al.\ 2009; Magliocchetti et al.\ 2011; Hsu et al.\ 2016; Aravena et al.\ 2016, 2020; Cowie et al.\ 2018). In this section, we will argue that the primary evolution is in the extinction rather than in the redshift distribution. 
In Figure~\ref{plot_zitrin_magnif}, we plot for the ALMA cluster sample with redshifts the F160W to \afluxb flux ratio versus demagnified SCUBA-2 \afluxb flux (left panel), and redshift versus demagnified SCUBA-2 \afluxb flux (right panel). We supplement this with blank-field observations of the CDF-N and CDF-S. There are 23 CDF-N SMGs in the footprint of the HST F160W image with $>5\sigma$ SMA detections from Cowie et al.\ (2017). All of these are also detected in the SCUBA-2 \afluxb image, with the lowest S/N being 7.5. Targeted ALMA imaging in the CDF-S by Cowie et al.\ (2018) yields 74 ALMA detected sources in the footprint of the HST F160W image. Contiguous millimeter mosaics have also been used to generate ALMA samples in the CDF-S, but the most recent analysis of these data (G\'omez-Guijarro et al.\ 2022) only adds 10 directly detected sources to those given in Cowie et al.\ (2018), reflecting the inefficiency of this procedure. 83 of the 84 sources in this combined ALMA CDF-S sample are detected above a $2\sigma$ threshold (79 above $3\sigma$, and 76 above $4\sigma$) in the SCUBA-2 \afluxb image. Seven of the 10 additional ALMA sources are detected above the $4\sigma$ level in the SCUBA-2 image. We add these two samples (84 CDF-S and 23 CDF-N SMGs) to Figure~\ref{plot_zitrin_magnif} using the measured SCUBA-2 fluxes. We measured the $1.6~\mu$m fluxes using corrected $1''$ diameter apertures on the Hubble Legacy Fields\footnote{\url{http://archive.stsci.edu/hlsps/hlf}} combined F160W images (G.~Illingworth et al., in preparation). In Figure~\ref{plot_zitrin_magnif}(a), we see a trend of increasing F160W to \afluxb flux ratio from brighter to fainter SMGs. However, there is a wide spread in this ratio. Hereafter, we will refer to the sources that are extremely faint in this ratio (i.e., $<10^{-4}$) as optical/NIR dark SMGs (horizontal line). For a source with a 2~mJy \afluxb flux, this would correspond to an F160W magnitude of 25.6. 
Based on the figure, such sources are more common at brighter \afluxb fluxes but continue to exist down to \afluxb fluxes fainter than 1~mJy. In Figure~\ref{plot_zitrin_magnif}(b), some of the lowest flux sources are at very high redshifts due to ALMA targeting of known high-redshift sources (the $z=8.311$ source in MACSJ0416 from Tamura et al.\ 2019 and Bakx et al.\ 2020) or to possibly uncertain photzs (the $z=5.56$ source in A2744 and the $z=4.52$ source in MACSJ0717). For sources around 1~mJy, redshifts range from cluster redshifts to redshifts just above $z=4$. The median redshifts for the combined sample are $z=2.29$, 2.00, 2.20, and 2.49 in the 0.5--1, 1--2, 2--4, and 4--8~mJy flux ranges. These median redshifts are slightly lower than the strongly-lensed galaxy sample \afluxb curve over this flux range given in B{\'e}thermin et al.\ (2015; Figure~3), which is based on their phenomenological model of galaxy evolution. Their curve shows a decline from about $z=2.9$ to $z=2.6$, with the lower fluxes being at lower redshifts. Meanwhile, their full galaxy sample \afluxb curve declines from about $z=2.7$ to $z=1.9$. For the present sample above 0.5~mJy, the field galaxies have a median redshift of $z=2.27$, while the lensed galaxies have a median redshift of $z=2.20$. For the sources that are not in our primary ALMA band~7 sample, there may be biases due to selection effects. However, in our highly complete band~7 sample in A370, MACSJ1149, and MACSJ0717 (only two of these sources do not have redshifts, one of which lies off the F160W field), above 0.5~mJy we find a median redshift of $z=2.00$, which is quite similar to our overall lensed galaxy median redshift of $z=2.20$, as quoted above. Thus, we see no strong evidence for evolution in the redshift distribution versus demagnified \afluxb flux over this flux range. 
This implies that the trend of increasing F160W to \afluxb flux ratio from brighter to fainter SMGs that we observe results from decreasing extinction as we move to fainter SMGs. However, we emphasize that there is a wide range of measured F160W to \afluxb flux ratios in the faint SMG samples, including sources that are extremely dark in the optical/NIR, as was first noted by Chen et al.\ (2014) and Hsu et al.\ (2017). In Figure~\ref{sample}, we show an example of an optical/NIR dark, faint SMG in the A2744 field with $z_{phot}=4.16$ (see Table~\ref{archivetable}). The source, which has a delensed \afluxb flux of 0.9~mJy, is extremely faint in the NIR with an F160W magnitude of 27.0. This suggests that there is a population of faint SMGs that are also extremely optical/NIR faint, either because they are very dusty and/or because they are at very high redshifts (see also, e.g., Wang et al.\ 2019). \subsection{Estimating Redshifts from the F160W Flux} Given the observed distribution for the combined sample in Figure~\ref{m160_plot_z}, where we plot redshift versus observed F160W magnitude, sources with F160W magnitudes $\gtrsim25$ tend to be at $z>4$. We can further refine the use of F160W magnitude for estimating redshifts by plotting the F160W to \afluxb flux ratio versus demagnified flux with the sources color-coded by redshift (see Figure~\ref{plot_color_smm_z}). We see that, as an ensemble, the sources in the various redshift ranges are reasonably well separated and can be fit by a power law of the form (gold curve in Figure~\ref{m160_plot_z}) \begin{equation} 1+z = 0.624 (f_{160})^{-0.258} \,, \label{eqn_final} \end{equation} where $f_{160}$ is in mJy. There does not appear to be a strong dependence on the \afluxb flux. In Figure~\ref{fitted_hist}, we show the distribution of redshifts determined from Equation~\ref{eqn_final}, though we caution that redshifts for individual sources may have substantial uncertainty. 
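The power-law redshift estimate can be applied directly. The sketch below, with a hypothetical helper converting AB magnitude to mJy, evaluates the fit for a source near the F160W $\approx 25.6$ limit discussed earlier:

```python
def mag_to_mjy(m_ab):
    # AB magnitude to flux density in mJy (zero point 3631 Jy = 3.631e6 mJy).
    return 3.631e6 * 10.0 ** (-0.4 * m_ab)

def z_from_f160(f160_mjy):
    # Power-law fit: 1 + z = 0.624 * f160^(-0.258), with f160 in mJy;
    # redshifts for individual sources carry substantial uncertainty.
    return 0.624 * f160_mjy ** (-0.258) - 1.0

print(z_from_f160(mag_to_mjy(25.6)))  # roughly z ~ 4.5
```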
We find that for sources above 0.5~mJy, $20\pm4$\% are at $z>4$. For sources in the range 0.5 to 2~mJy, 16 (10, 24)\% are at $z>4$, where the numbers in parentheses are the 68\% confidence range. \section{Interpreting the SCUBA-2 Sample} Now that we are informed by the ALMA data from the previous section, we turn to analyzing the SCUBA-2 sample. One of our main goals is to identify candidate high-redshift SMGs, which is important because the space density of SMGs at $z > 4$ remains relatively poorly constrained. \subsection{Positional Uncertainty} We first measured the accuracy of the \afluxa source positions using the SCUBA-2 cluster sample together with the CDF-N and CDF-S samples (Cowie et al.\ 2017, 2018). We measured the offset between the $>4\sigma$ \afluxa source position and the nearest ALMA/SMA counterpart, if one exists within $5''$. In the well-studied A370 and MACSJ1149 fields, there are 18 ALMA sources detected above $4\sigma$ in the \afluxa band. Of these, there are two pairs that correspond to a single \afluxa source, one in each field. The remainder are single matches. This corresponds to a multiplicity of $11^{26}_{4}\%$, where the upper and lower values give the $68\%$ confidence range. In Figure~\ref{450_offsets}, we show the distribution of offsets between the \afluxa SCUBA-2 and ALMA positions. For 80\% of the sources with a counterpart, the offsets are $<2\farcs5$. The mean and median offsets are both 1\farcs7. This is very similar to the offsets measured in Barger et al.\ (2022) between the $>4\sigma$ \afluxa source position and the nearest 20~cm counterpart. Since many sources in the SCUBA-2 cluster sample do not have $>4\sigma$ \afluxa detections, we also need the uncertainties for the \afluxb positions. These scale roughly with the PSF, and Cowie et al.\ (2017) found that 96\% of the $>4\sigma$ \afluxb source positions are within $4''$ of the nearest 20~cm counterpart in the CDF-N. 
We take this as the uncertainty in the \afluxb positions. \subsection{Central Sample} Only the central members of the SCUBA-2 cluster sample lie in regions where lensing magnifications are significant. In studying the fainter SMGs, we therefore restrict to sources within a $1\farcm75$ radius from the cluster center, where amplifications may be $>2$. Excluding \afluxb sources corresponding to BCGs and pairs (see Table~2), this provides a sample of 111 SMGs. In Figure~\ref{new_matching}, we plot the observed \afluxa flux versus the observed \afluxb flux for this sample. 16 (14\%) have measured ALMA counterparts (red squares), and 39 have $>4\sigma$ detections at \afluxar. The ALMA detected sources appear to be representative of this sample, so we can assume that information derived in Section~\ref{interpretALMA} can be applied here. We now again exclude A1689 and A2390 and refer to the remaining sample as {\em our central \afluxb sample.\/} Sources with modest magnifications (we take this to mean $<4.2$) are relatively insensitive to $4''$ positional changes, but sources with higher magnifications can be much more uncertain. In both cases, the redshift is the primary source of uncertainty, but this uncertainty is again less for the sources with modest magnifications. For the sources in these fields, we computed the magnification at the SCUBA-2 \afluxb position for $z=2$ and estimated the uncertainty by considering a range from $z=1$--4. To illustrate how well this works, we show in Figure~\ref{compare_magnif} for A370 and MACSJ1149 these magnifications versus magnifications computed at the ALMA positions and accurate redshifts. Within the magnification uncertainties corresponding to the adopted $z=1$--4 range, there is broad agreement. 
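The counterpart matching used in the positional-uncertainty analysis, taking the nearest ALMA/SMA source within $5''$ of a SCUBA-2 position, amounts to a small-angle offset computation. A sketch with made-up coordinates:

```python
import math

def offset_arcsec(ra1, dec1, ra2, dec2):
    # Small-angle separation in arcsec for positions in decimal degrees,
    # with the R.A. term scaled by cos(decl.).
    dra = (ra1 - ra2) * math.cos(math.radians(0.5 * (dec1 + dec2)))
    return 3600.0 * math.hypot(dra, dec1 - dec2)

def nearest_within(pos, candidates, radius_arcsec=5.0):
    # Return (index, offset) of the nearest candidate inside the search
    # radius, or None if no counterpart exists within it.
    offsets = [offset_arcsec(*pos, *c) for c in candidates]
    i = min(range(len(offsets)), key=offsets.__getitem__)
    return (i, offsets[i]) if offsets[i] < radius_arcsec else None

# Made-up SCUBA-2 position and two candidate ALMA positions (degrees).
match = nearest_within((39.9664, -1.5997),
                       [(39.9668, -1.5994), (39.9700, -1.6100)])
```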
\subsection{450 to 850 Micron Ranges} In the top two panels of Figure~\ref{f4_flux}, we show the \afluxa to \afluxb flux ratio for the central sample versus the demagnified \afluxb flux, with the magnification uncertainties corresponding to the adopted $z = 1$--4 range. We use two panels to split the SMGs into those with high magnifications (here $>4.2$ for $z=2$), where the uncertainties are large, and those with modest magnifications, where the uncertainties are smaller. In the bottom panel, we show the sample outside a $2\farcm5$ radius from the cluster center (again excluding BCGs and close pairs, but using all of the clusters), where we expect the magnification to be near one. Here we use the observed \afluxb flux. These three samples have very similar distributions of \afluxa to \afluxb flux ratios, as we show in Figure~\ref{f4f8_hist}. A Mann-Whitney test does not show a significant difference between the three samples. If, as we have argued, this flux ratio is a good redshift indicator, then the redshift distribution of this large \afluxb sample is invariant as a function of the \afluxb flux over the 0.5--10~mJy flux range. This is consistent with the smaller ALMA sample with direct redshift measurements (see Figure~\ref{plot_zitrin_magnif}). \subsection{Candidate High-Redshift SMGs} We restrict to the $>5\sigma$ \afluxb full cluster sample, after including confusion noise, to improve the fidelity of our candidate high-redshift SMG selection. We determine the sources in this sample that have \afluxa to \afluxb flux ratios $<2$, which Barger et al.\ (2022) give as a criterion to select candidate high-redshift SMGs with $z>4$. We list these candidates, which make up 21\% of the $>5\sigma$ \afluxb full cluster sample, in Table~\ref{faintsample}. At present, only one has a measured ALMA counterpart (source~2 in MACSJ1149; see Table~\ref{macsj1149_band7}). This source does not have a specz or a photz. 
The 55 SMGs in Table~\ref{faintsample} are quite bright, with \afluxb fluxes from 2.3~mJy to 12.1~mJy and a median flux of 4.3~mJy. They should be straightforward targets for future interferometric observations. Seven lie within the central high-magnification regions ($<1\farcm75$ radius) of clusters observable with ALMA, and these represent the most interesting targets. \section{Summary} In this paper, we presented deep SCUBA-2 \afluxa and \afluxb imaging of ten strong lensing clusters. We constructed a catalog of 404 sources with \afluxb fluxes detected above $4\sigma$ that lie within a radius of $4\farcm5$ from the cluster centers. We also presented catalogs of $>4.5\sigma$ ALMA band~6 (1.2~mm) and band~7 (870~$\mu$m) detections from our observations and from the archive, for which we gave, where available, spectroscopic or photometric redshifts from new Keck observations or from the literature. We supplemented our cluster lensed ALMA sample with a CDF-S ALMA sample and a CDF-N SMA sample from previous work to aid with interpretation. Our main results based on these samples are as follows: \vskip 0.25cm $\bullet$ We noted a correlation between redshift and observed F160W magnitude, with fainter sources being at higher redshifts. It may therefore be difficult to obtain even photometric redshifts for the highest redshift sources. $\bullet$ We found that different cluster fields have different redshift distributions, emphasizing the need for observations of multiple fields to obtain properly averaged redshift distributions. $\bullet$ We confirmed the use of the \afluxa to \afluxb flux ratio for estimating redshifts, with $z>4$ generally corresponding to a flux ratio $<2$. $\bullet$ We used publicly available lens models for the clusters to determine magnifications, most of which are in the 1.5--4 range with modest errors of 10--20\%, though there are a very small number of high amplification sources where the errors can be large. 
$\bullet$ Utilizing both our cluster lensed sample and the CDF samples, we found a trend of increasing F160W to \afluxb flux ratio from brighter to fainter SMGs. $\bullet$ We found no evidence that the fainter SMGs, which we probe primarily through our cluster lensed sample, have a different redshift distribution than the brighter SMGs, which we probe primarily through the CDF samples. $\bullet$ Since we did not find a change in the redshift distribution as a function of the demagnified \afluxb flux, we concluded that the observed trend in the F160W to \afluxb flux ratio as a function of the demagnified \afluxb flux results from decreasing extinction in the fainter SMGs. $\bullet$ We caution, however, that there is a wide spread in the F160W to \afluxb flux ratio relation. For example, although optical/NIR dark SMGs, which we defined as being extremely faint in the F160W to \afluxb flux ratio (i.e., $<10^{-4}$), are more common at brighter \afluxb fluxes, we found that they continue to exist down to \afluxb fluxes fainter than 1~mJy. $\bullet$ We argued that roughly 20\% of the SMGs are at $z>4$, independent of submillimeter flux. \vskip 0.25cm Finally, informed by the ALMA results, we separately analyzed the SCUBA-2 cluster sample and identified 55 $z>4$ candidates selected on the basis of the \afluxa to \afluxb flux ratio. These are bright at \afluxb and hence good targets for future interferometric observations, including seven that lie in the central high magnification regions of clusters that are observable with ALMA. \vskip 0.75cm We thank the anonymous referee for constructive comments that helped us to improve the manuscript. We gratefully acknowledge support for this research from NASA grants NNX17AF45G and 80NSSC22K0483 (L.~L.~C.), the William F. 
Vilas Estate (A.~J.~B.), a Kellett Mid-Career Award and a WARF Named Professorship from the University of Wisconsin-Madison Office of the Vice Chancellor for Research and Graduate Education with funding from the Wisconsin Alumni Research Foundation (A.~J.~B.), the Millennium Science Initiative Program -- ICN12\_009 (F.~E.~B), CATA-Basal -- FB210003 (F.~E.~B), and FONDECYT Regular -- 1190818 (F.~E.~B) and 1200495 (F.~E.~B, C.~O.), the Ministry of Science and Technology of Taiwan (MOST 1092112-M-001-016-MY3) (C.-C.~C.), WSGC Graduate and Professional Research Fellowships (L.~H.~J., A.~J.~T.), and a Sigma Xi Grant in Aid of Research (A.~J.~T.). This work utilizes gravitational lensing models produced by PIs Brada\v{c}, Natarajan \& Kneib (CATS), Merten \& Zitrin, Sharon, Williams, Keeton, Bernstein and Diego, and the GLAFIC group. This lens modeling was partially funded by the HST Frontier Fields program conducted by STScI. STScI is operated by the Association of Universities for Research in Astronomy, Inc. under NASA contract NAS 5-26555. The lens models were obtained from the Mikulski Archive for Space Telescopes (MAST). The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. This paper makes use of the following ALMA data: ADS/JAO.ALMA\,\#2013.1.00999.S, \\ % ADS/JAO.ALMA\,\#2015.1.01425.S, \\ % ADS/JAO.ALMA\,\#2016.1.00293.S, \\ % ADS/JAO.ALMA\,\#2017.1.00091.S, \\ % ADS/JAO.ALMA\,\#2017.1.00341.S, \\ % ADS/JAO.ALMA\,\#2017.1.01219.S, \\ % ADS/JAO.ALMA\,\#2018.1.00003.S, \\ % and ADS/JAO.ALMA\,\#2018.1.00035.L. % ALMA is a partnership of ESO (representing its member states), NSF (USA), and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO, and NAOJ. 
The James Clerk Maxwell Telescope is operated by the East Asian Observatory on behalf of The National Astronomical Observatory of Japan, Academia Sinica Institute of Astronomy and Astrophysics, the Korea Astronomy and Space Science Institute, the National Astronomical Observatories of China and the Chinese Academy of Sciences (grant No. XDB09000000), with additional funding support from the Science and Technology Facilities Council of the United Kingdom and participating universities in the United Kingdom and Canada. The W.~M.~Keck Observatory is operated as a scientific partnership among the California Institute of Technology, the University of California, and NASA, and was made possible by the generous financial support of the W.~M.~Keck Foundation. We wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. \facilities{ALMA, JCMT, KeckI, KeckII} \begin{deluxetable*}{lccrcrrcrlcc} \setcounter{table}{1} \renewcommand\baselinestretch{1.0} \tablewidth{0pt} \tablecaption{SCUBA-2 \afluxb Sample ($>4\sigma$) \label{tab2}} \scriptsize \tablehead{No. and Name & R.A. 
& Decl.& $f_{850}$ & Error & S/N & $f_{450}$ & Error & S/N & $f_{450}/f_{850}$ & Error & Offset \\ & J2000.0 & J2000.0 & \multicolumn{2}{c}{(mJy)} & & \multicolumn{2}{c}{(mJy)} & & & & (arcmin) \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11) & (12)} \startdata 1 SMM131126-11906 & 13 11 26.3 & -1 19 06.3 & 12.0 & 0.5 & 24.0 & 33.9 & 2.6 & 12.0 & 2.73 & 0.23 & 1.2\cr 2 SMM131121-11951 & 13 11 21.8 & -1 19 51.3 & 11.0 & 0.5 & 21.0 & 14.7 & 2.7 & 5.3 & 1.31 & 0.25 & 1.7\cr 3 SMM131131-11804 & 13 11 31.8 & -1 18 04.3 & 9.5 & 0.6 & 17.0 & 38.3 & 2.8 & 13.0 & 3.99 & 0.37 & 2.2\cr 4 SMM131117-12239 & 13 11 17.5 & -1 22 39.3 & 7.6 & 0.6 & 12.0 & 18.3 & 3.7 & 4.8 & 2.39 & 0.52 & 3.7\cr 5 SMM131121-12108 & 13 11 21.8 & -1 21 08.3 & 6.2 & 0.5 & 11.0 & 18.2 & 2.8 & 6.4 & 2.92 & 0.52 & 1.9\cr 6 SMM131118-12213 & 13 11 18.6 & -1 22 13.3 & 5.4 & 0.6 & 8.9 & 15.3 & 3.6 & 4.2 & 2.80 & 0.73 & 3.2\cr 7 SMM131119-12151 & 13 11 19.2 & -1 21 51.3 & 5.4 & 0.6 & 9.1 & 9.5 & 3.4 & 2.7 & 1.75 & 0.66 & 2.9\cr 8 SMM131129-12048 & 13 11 29.2 & -1 20 48.4 & 5.1 & 0.5 & 10.0 & 14.5 & 2.3 & 6.2 & 2.84 & 0.52 & 0.6\cr 9 SMM131138-11636 & 13 11 38.0 & -1 16 36.3 & 4.8 & 0.7 & 7.2 & 8.1 & 3.3 & 2.4 & 1.68 & 0.73 & 4.3\cr 10 SMM131128-12224 & 13 11 28.4 & -1 22 24.3 & 4.7 & 0.5 & 8.8 & 21.5 & 2.9 & 7.3 & 4.50 & 0.79 & 2.2\cr 11 SMM131113-11858 & 13 11 13.2 & -1 18 58.3 & 4.7 & 0.6 & 7.9 & 14.0 & 3.6 & 3.8 & 2.95 & 0.85 & 4.0\cr 12 SMM131134-12017 & 13 11 34.8 & -1 20 17.3 & 4.7 & 0.5 & 9.0 & 11.2 & 2.5 & 4.4 & 2.38 & 0.59 & 1.5\cr 13 SMM131114-12229 & 13 11 14.7 & -1 22 29.4 & 4.4 & 0.6 & 7.4 & 15.1 & 3.6 & 4.1 & 3.43 & 0.95 & 4.1\cr 14 SMM131126-12317 & 13 11 26.0 & -1 23 17.4 & 4.3 & 0.6 & 7.7 & 23.6 & 3.1 & 7.4 & 5.48 & 1.02 & 3.1\cr 15 SMM131127-11847 & 13 11 27.1 & -1 18 47.3 & 4.2 & 0.5 & 8.2 & 11.3 & 2.7 & 4.1 & 2.67 & 0.71 & 1.4\cr 16 SMM131119-12257 & 13 11 19.8 & -1 22 57.3 & 4.2 & 0.6 & 7.0 & 13.1 & 3.6 & 3.5 & 3.10 & 0.97 & 3.5\cr 17 SMM131123-12046 & 13 11 23.9 & -1 
20 46.3 & 4.1 & 0.5 & 8.1 & 9.8 & 2.5 & 3.8 & 2.37 & 0.67 & 1.3\cr 18 SMM131133-11712 & 13 11 33.3 & -1 17 12.3 & 3.8 & 0.6 & 6.3 & 18.0 & 3.0 & 5.9 & 4.70 & 1.08 & 3.1\cr 19 SMM131116-12307 & 13 11 16.2 & -1 23 07.3 & 3.8 & 0.6 & 6.3 & 11.6 & 3.7 & 3.1 & 3.06 & 1.09 & 4.2\cr 20 SMM131141-11824 & 13 11 41.1 & -1 18 24.3 & 3.7 & 0.7 & 5.3 & 9.3 & 3.8 & 2.4 & 2.50 & 1.12 & 3.6\cr 21 SMM131130-11911 & 13 11 30.8 & -1 19 11.3 & 3.6 & 0.5 & 7.1 & 14.6 & 2.4 & 5.9 & 4.04 & 0.88 & 1.1\cr 22 SMM131128-11816 & 13 11 28.1 & -1 18 16.4 & 3.5 & 0.5 & 6.6 & 13.6 & 2.9 & 4.6 & 3.79 & 0.99 & 1.9\cr 23 SMM131133-12023 & 13 11 33.8 & -1 20 23.3 & 3.5 & 0.5 & 6.9 & 9.6 & 2.4 & 3.9 & 2.71 & 0.78 & 1.3\cr 24 SMM131132-11952 & 13 11 32.2 & -1 19 52.3 & 3.5 & 0.5 & 6.9 & 3.6 & 2.3 & 1.5 & 1.02 & 0.69 & 1.0\cr 25 SMM131121-11946 & 13 11 21.0 & -1 19 46.3 & 3.3 & 0.5 & 6.3 & 15.0 & 2.7 & 5.3 & 4.50 PAIR & 1.09 & 1.9\cr 26 SMM131113-12115 & 13 11 13.2 & -1 21 15.3 & 3.3 & 0.8 & 5.7 & 6.8 & 3.4 & 1.9 & 2.02 & 1.09 & 4.0\cr 27 SMM131124-12212 & 13 11 24.1 & -1 22 12.4 & 3.2 & 0.6 & 5.7 & 8.0 & 3.0 & 2.6 & 2.50 & 1.04 & 2.3\cr 28 SMM131131-12409 & 13 11 31.9 & -1 24 09.4 & 3.1 & 0.6 & 5.3 & 15.3 & 3.4 & 4.4 & 4.83 & 1.41 & 4.0\cr 29 SMM131118-12326 & 13 11 18.8 & -1 23 26.3 & 3.0 & 0.6 & 5.1 & 24.6 & 3.6 & 6.6 & 8.06 & 1.98 & 4.0\cr 30 SMM131132-11821 & 13 11 32.7 & -1 18 21.4 & 2.8 & 0.6 & 5.0 & 5.4 & 2.7 & 1.9 & 1.88 & 1.03 & 2.0\cr 31 SMM131139-12034 & 13 11 39.0 & -1 20 34.3 & 2.8 & 0.6 & 4.9 & 9.5 & 2.8 & 3.3 & 3.38 & 1.23 & 2.5\cr 32 SMM131133-12424 & 13 11 33.5 & -1 24 24.3 & 2.7 & 0.6 & 4.4 & 11.9 & 3.7 & 3.2 & 4.29 & 1.64 & 4.3\cr 33 SMM131134-11804 & 13 11 34.4 & -1 18 04.3 & 2.7 & 0.6 & 4.5 & 10.0 & 2.9 & 3.3 & 3.60 & 1.34 & 2.6\cr 34 SMM131134-11904 & 13 11 34.0 & -1 19 04.3 & 2.7 & 0.5 & 5.0 & 11.3 & 2.6 & 4.2 & 4.10 & 1.27 & 1.7\cr 35 SMM131121-11634 & 13 11 21.8 & -1 16 34.4 & 2.7 & 0.7 & 4.0 & 0.4 & 4.2 & 0.1 & 0.13 & 1.57 & 4.0\cr 36 SMM131128-11907 & 13 11 28.0 & -1 19 07.4 
& 2.6 & 0.5 & 5.3 & 0.9 & 2.5 & 0.4 & 0.34 & 0.95 & 1.1\cr 37 SMM131129-11818 & 13 11 29.3 & -1 18 18.3 & 2.6 & 0.5 & 4.8 & 5.2 & 2.7 & 1.8 & 2.00 & 1.14 & 1.8\cr 38 SMM131134-11721 & 13 11 34.5 & -1 17 21.4 & 2.5 & 0.6 & 4.1 & 8.7 & 3.0 & 2.8 & 3.39 & 1.45 & 3.1\cr 39 SMM131122-11746 & 13 11 22.8 & -1 17 46.4 & 2.5 & 0.6 & 4.0 & 5.2 & 3.7 & 1.3 & 2.05 & 1.55 & 2.7\cr 40 SMM131117-12234 & 13 11 17.8 & -1 22 34.4 & 2.4 & 0.6 & 4.0 & 17.3 & 3.7 & 4.6 & 6.97 PAIR & 2.28 & 3.6\cr $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ &$\cdots$ \cr \enddata \tablecomments{ The columns are (1) number and name, (2) and (3) SCUBA-2 \afluxb R.A. and decl., (4), (5), and (6) SCUBA-2 \afluxb flux, error (we have included a confusion noise of 0.33~mJy added in quadrature to the white noise here), and S/N, (7), (8), and (9) SCUBA-2 \afluxa flux, error, and S/N, (10) and (11) SCUBA-2 \afluxa to \afluxb flux ratio and error, and (12) offset from the cluster center in arcmin. In Column~(10), we also indicate whether the source is part of a close pair (labeled ``PAIR'') or corresponds to the brightest cluster galaxy (labeled ``BCG"). (This table is available in its entirety in machine-readable form.) } \end{deluxetable*} \begin{deluxetable*}{lccrcrrcclccc} \setcounter{table}{2} \renewcommand\baselinestretch{1.0} \tablewidth{0pt} \tablecaption{A370 Band~7 (870~$\mu$m) ALMA $>4.5\sigma$ \label{a370_band7}} \scriptsize \tablehead{No. and Name & R.A. & Decl. & Pk Flux & Error & S/N & Tot Flux & SCUBA-2 & m160 & $z$ & Median & $\sigma$ & Zitrin \\ & & & & & & & $f_{850}:f_{450}$ & & & Magnif. & & Magnif. 
\\ & J2000.0 & J2000.0 & \multicolumn{2}{c}{(mJy)} & & (mJy) & (mJy) & & & & \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11) & (12) & (13)} \startdata 1 ALMA023951--13559 & 2 39 51.94 & -1 35 59.0 & 11.00 & 0.35 & 30.8 & 14.3 & 19.0 : 37.0 & 22.7 & 2.808 & 3.93 & 0.52 & \nodata \cr 2 ALMA023951--13558 & 2 39 51.80 & -1 35 58.5 & 6.16 & 0.37 & 16.4 & 8.0 & 18.0 : 40.0 & 20.7 & 2.806$^a$ & 3.61 & 0.46 & \nodata \cr 3 ALMA023957--13453 & 2 39 57.58 & -1 34 53.7 & 4.05 & 0.24 & 16.6 & 4.2 & 4.1 : 10.0 & 23.5 & 2.04 & 2.10 & 0.41 & 2.44 \cr 4 ALMA023956--13426 & 2 39 56.57 & -1 34 26.3 & 5.25 & 0.25 & 20.4 & 6.2 & 6.8 : 37.0 & 19.5 & 1.062$^a$ & 2.87 & 1.10 & 2.78 \cr 5 ALMA023949--13551 & 2 39 49.12 & -1 35 51.8 & 1.55 & 0.24 & 6.3 & 1.9 & 1.8 : 4.6 & 22.8 & 2.507 & 1.87 & 0.46 & 3.09 \cr 6 ALMA023950--13542 & 2 39 50.19 & -1 35 42.1 & 2.49 & 0.24 & 10.0 & 2.6 & 1.2 : 9.4 & 23.5 & 2.01 & 2.48 & 2.96 & 2.12 \cr 7 ALMA023947--13517 & 2 39 47.11 & -1 35 17.7 & 2.20 & 0.39 & 5.5 & 2.1 & 2.4 : 7.0 & 23.2 & 3.472 & 1.56 & 0.38 & 1.65 \cr 8 ALMA023958--13424 & 2 39 58.16 & -1 34 24.7 & 1.58 & 0.25 & 6.1 & 2.1 & 0.6 : 11.0 & 20.7 & 1.256 & 1.33 & 0.27 & \nodata \cr 9 ALMA023946--13332 & 2 39 46.88 & -1 33 32.8 & 2.43 & 0.27 & 8.7 & 2.6 & 2.2 : 13.0 & 21.8 & 2.487 & 1.55 & 0.29 & 1.00 \cr 10 ALMA023954--13320 & 2 39 54.24 & -1 33 20.7 & 2.04 & 0.24 & 8.3 & 3.5 & 2.4 : 13.0 & 21.1 & 1.523 & 3.17 & 0.97 & \nodata \cr \enddata \tablecomments{ The columns are (1) number and name, (2) and (3) ALMA R.A. 
and decl., (4), (5), and (6) ALMA peak flux, error, and S/N, (7) ALMA $2''$ aperture flux (except for blended source numbers 1 and 2, which are peak flux times 1.3), (8) SCUBA-2 \afluxb and \afluxa fluxes, (9) m160 magnitude, (10) redshift (speczs have three digits after the decimal point, and photzs have two), (11) and (12) median magnification and standard deviation for all the models submitted by the HFF teams, as obtained from the online tool provided by Dan Coe (\url{https://archive.stsci.edu/prepds/frontier/lensmodels/webtool/magnif.html}), and (13) adopted Zitrin magnification. Note that all speczs are from our Keck observations. Sources~1 and 2 also had previous speczs from Ivison et al.\ (1998), and source~4 had a previous specz from Barger et al.\ (1999) and Soucail et al.\ (1999). The m160 magnitudes and photzs are from Brada\v{c} et al.\ (2019). $^a$Type~2 AGN. } \end{deluxetable*} \begin{deluxetable*}{lccccrcclccc} \setcounter{table}{3} \renewcommand\baselinestretch{1.0} \tablewidth{0pt} \tablecaption{A370 Band~6 (1.2~mm) ALMA $>4.5\sigma$ \label{a370_band6}} \scriptsize \tablehead{No. and Name & R.A. & Decl. & Pk Flux & Error & S/N & SCUBA-2 & m160 & $z$ & Median & $\sigma$ & Zitrin \\ & & & & & & $f_{850}:f_{450}$ & & & Magnif. & & Magnif. \\ & J2000.0 & J2000.0 & \multicolumn{2}{c}{(mJy)} & & (mJy) & & & & & \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11) & (12)} \startdata 1 ALMA023956--13426$^b$ & 2 39 56.57 & -1 34 26.3 & 1.50 & 0.06 & 23.4 & 6.8 : 37.0 & 19.5 & 1.062$^a$ & 2.87 & 1.10 & 2.78 \cr 2 ALMA023950--13542$^b$ & 2 39 50.19 & -1 35 42.1 & 0.92 & 0.18 & 5.2 & 1.2 : 9.4 & 23.5 & 2.01 & 2.48 & 2.96 & 2.12 \cr 3 ALMA023958--13424$^b$ & 2 39 58.14 & -1 34 24.6 & 0.55 & 0.12 & 4.5 & 0.6 : 11.0 & 20.7 & 1.256 & 1.33 & 0.27 & \nodata \cr \enddata \tablecomments{ The columns are (1) number and name, (2) and (3) ALMA R.A. 
and decl., (4), (5), and (6) ALMA peak flux, error, and S/N, (7) SCUBA-2 \afluxb and \afluxa fluxes, (8) m160 magnitude, (9) redshift (speczs have three digits after the decimal point, and photzs have two), (10) and (11) median magnification and standard deviation for all the models submitted by the HFF teams, as obtained from the online tool provided by Dan Coe (\url{https://archive.stsci.edu/prepds/frontier/lensmodels/webtool/magnif.html}), and (12) adopted Zitrin magnification. Note that both speczs are from our Keck observations. The m160 magnitudes and photz are from Brada\v{c} et al.\ (2019). $^a$Type~2 AGN. $^b$This source is also in the band~7 sample (see Table~\ref{a370_band7}). } \end{deluxetable*} \begin{deluxetable*}{lccccrccclccc} [ht] \setcounter{table}{4} \renewcommand\baselinestretch{1.0} \tablewidth{0pt} \tablecaption{MACSJ1149.5+2223 Band~7 (870~$\mu$m) ALMA $>4.5\sigma$ \label{macsj1149_band7}} \scriptsize \tablehead{No. and Name & R.A. & Decl. & Pk Flux & Error & S/N & Tot Flux & SCUBA-2 & m160 & $z$ & Median & $\sigma$ & Zitrin \\ & & & & & & & $f_{850}:f_{450}$ & & & Magnif. & & Magnif. 
\\ & J2000.0 & J2000.0 & \multicolumn{2}{c}{(mJy)} & & (mJy) & (mJy) & & & & \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11) & (12) & (13)} \startdata 1 ALMA014941-222316 & 11 49 41.48 & 22 23 16.1 & 0.94 & 0.14 & 6.8 & 1.3 & 1.4 : 5.0 & 24.2 & 2.52 & 1.97 & 0.48 & 2.04 \cr 2 ALMA014943-222400 & 11 49 43.64 & 22 24 0.39 & 1.08 & 0.19 & 5.7 & 1.0 & 2.2 : 1.9 & \nodata & \nodata & \nodata & \nodata & \nodata \cr 3 ALMA014933-222226 & 11 49 33.87 & 22 22 26.9 & 0.85 & 0.15 & 5.5 & 1.0 & 1.4 : 6.1 & 17.4 & 0.554 & 1.00 & 0.02 & \nodata \cr 4 ALMA014942-222339 & 11 49 42.37 & 22 23 39.5 & 0.60 & 0.13 & 4.6 & 0.6 & 1.3 : 6.4 & 21.1 & 1.632 & 1.43 & 0.21 & \nodata \cr 5 ALMA014941-222436 & 11 49 41.46 & 22 24 36.1 & 1.49 & 0.17 & 9.0 & 2.0 & 1.5 : 8.5 & 22.9 & 1.720 & 1.20 & 0.14 & 1.29 \cr 6 ALMA014930-222427 & 11 49 30.67 & 22 24 27.7 & 2.57 & 0.15 & 17.0 & 4.3 & 4.6 : 15.0 & 20.4 & 1.489 & 1.68 & 0.29 & 2.03 \cr 7 ALMA014937-222430 & 11 49 37.28 & 22 24 30.3 & 0.67 & 0.14 & 4.7 & 0.2 & 2.4 : 6.0 & 20.1 & 1.020 & 1.53 & 0.16 & 1.60 \cr 8 ALMA014936-222424 & 11 49 36.09 & 22 24 24.2 & 0.91 & 0.11 & 8.3 & 1.6 & 1.5 : 5.4 & 20.5 & 1.603 & 3.07 & 0.72 & 3.42 \cr 9 ALMA014934-222445 & 11 49 34.42 & 22 24 45.3 & 0.81 & 0.16 & 5.1 & 1.3 & 1.8 : 5.3 & 20.1 & 0.976 & 3.89 & 2.11 & 2.72 \cr 10 ALMA014935-222231 & 11 49 35.46 & 22 22 31.9 & 1.02 & 0.11 & 9.2 & 1.2 & 1.8 : 5.5 & 22.0 & \nodata & \nodata & \nodata & \nodata \cr 11 ALMA014930-222253 & 11 49 30.82 & 22 22 53.7 & 1.26 & 0.15 & 8.5 & 1.2 & 1.4 : 8.3 & 18.6 & 0.31 & 1.05 & 0.11 & 1.11 \cr 12 ALMA014931-222252 & 11 49 31.30 & 22 22 52.2 & 0.62 & 0.13 & 4.9 & 1.2 & 1.3 : 8.8 & 19.0 & 0.540 & 1.01 & 0.06 & 1.16 \cr \enddata \tablecomments{ The columns are as in Table~\ref{a370_band7}. Note that all of the speczs are from our Keck observations, except for sources~7 and 12, which are from the HST grism reductions of Rawle et al.\ (2016). 
The m160 magnitudes and photzs are from the ASTRODEEP catalog of Di Criscienzo et al.\ (2017), except for sources~4 and 5, where we measured corrected $2''$ diameter m160 magnitudes from the BUFFALO F160W image. Source~2 lies outside the BUFFALO image, but it is extremely faint in the Subaru/Suprime-Cam $z'$ image (Umetsu et al.\ 2014). } \end{deluxetable*} \begin{deluxetable*}{lccccrcccccc} \setcounter{table}{5} \renewcommand\baselinestretch{1.0} \tablewidth{0pt} \tablecaption{MACSJ1149.5+2223 Band~6 (1.2~mm) ALMA $>4.5\sigma$ \label{macsj1149_band6}} \scriptsize \tablehead{No. and Name & R.A. & Decl.& Pk Flux & Error & S/N & SCUBA-2 & m160 & $z$ & Median & $\sigma$ & Zitrin \\ & & & & & & $f_{850}:f_{450}$ & & & Magnif. & & Magnif. \\ & J2000.0 & J2000.0 & \multicolumn{2}{c}{(mJy)} & & (mJy) & & & & & \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11) & (12)} \startdata 1 ALMA014941-222436$^a$ & 11 49 41.46 & 22 24 36.2 & 0.51 & 0.06 & 7.8 & 1.4 : 7.5 & 22.9 & 1.720 & 1.20 & 0.14 & 1.29 \cr 2 ALMA014926-222455 & 11 49 26.57 & 22 24 55.7 & 3.05 & 0.07 & 42.6 & 6.8 : 12.0 & \nodata & \nodata & \nodata & \nodata & \nodata \cr 3 ALMA014930-222427$^a$ & 11 49 30.69 & 22 24 27.7 & 0.55 & 0.07 & 7.5 & 4.7 : 15.0 & 20.4 & 1.489 & 1.68 & 0.29 & 2.03 \cr 4 ALMA014933-222552 & 11 49 33.33 & 22 25 52.2 & 1.89 & 0.07 & 26.7 & 3.7 : 8.9 & \nodata & \nodata & \nodata & \nodata & \nodata \cr 5 ALMA014946-222542 & 11 49 46.82 & 22 25 42.1 & 1.25 & 0.08 & 16.1 & 6.0 : 20.0 & \nodata & \nodata & \nodata & \nodata & \nodata \cr 6 ALMA014945-222232 & 11 49 45.01 & 22 22 32.6 & 0.87 & 0.09 & 9.2 & 2.9 : 8.2 & \nodata & \nodata & \nodata & \nodata & \nodata \cr 7 ALMA014927-222402 & 11 49 27.52 & 22 24 2.80 & 0.53 & 0.07 & 7.8 & 2.3 : 7.5 & 21.1 & \nodata & \nodata & \nodata & \nodata \cr 8 ALMA014928-222300 & 11 49 28.07 & 22 23 0.69 & 0.82 & 0.08 & 10.5 & 3.0 : 8.4 & 25.2 & \nodata & \nodata & \nodata & \nodata \cr \enddata \tablecomments{ The columns are as in 
Table~\ref{a370_band6}. Note that both speczs are from our Keck observations. The m160 magnitudes are from the ASTRODEEP catalog of Di Criscienzo et al.\ (2017), except for source~1, where we measured the corrected $2''$ diameter m160 magnitude from the BUFFALO F160W image. $^a$This source is also in the band~7 sample (see Table~\ref{macsj1149_band7}). } \end{deluxetable*} \begin{deluxetable*}{lccccccccccc} \setcounter{table}{6} \renewcommand\baselinestretch{1.0} \tablewidth{0pt} \tablecaption{MACSJ0717.5+3745 Band~7 (870~$\mu$m) ALMA $>4.5\sigma$ \label{macsj0717_band7}} \scriptsize \tablehead{No. and Name & R.A. & Decl. & Pk Flux & Error & S/N & SCUBA-2 & m160 & $z$ & Median & $\sigma$ & Zitrin \\ & & & & & & $f_{850}:f_{450}$ & & & Magnif. & & Magnif. \\ & J2000.0 & J2000.0 & \multicolumn{2}{c}{(mJy)} & & (mJy) & & & & & \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11) & (12)} \startdata 1 ALMA071730-374433 & 7 17 30.69 & 37 44 33.0 & 0.52 & 0.09 & 6.0 & 2.8 : 2.7 & 23.7 & 4.52 & 6.94 & 2.02 & 8.71 \cr \enddata \tablecomments{ The columns are as in Table~\ref{a370_band6}. The m160 magnitude and photz are from the ASTRODEEP catalog of Di Criscienzo et al.\ (2017). We do not include the Brada{\v c} models in the median, as they give a very high magnification. } \end{deluxetable*} \begin{deluxetable*}{lccccrcccccc} \setcounter{table}{7} \renewcommand\baselinestretch{1.0} \tablewidth{0pt} \tablecaption{MACSJ0717.5+3745 Band~6 (1.2~mm) ALMA $>4.5\sigma$ \label{macsj0717_band6}} \scriptsize \tablehead{No. and Name & R.A. & Decl. & Pk Flux & Error & S/N & SCUBA-2 & m160 & $z$ & Median & $\sigma$ & Zitrin \\ & & & & & & $f_{850}:f_{450}$ & & & Magnif. & & Magnif. 
\\ & J2000.0 & J2000.0 & \multicolumn{2}{c}{(mJy)} & & (mJy) & & & & & \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11) & (12)} \startdata 1 ALMA071724-374329 & 7 17 24.55 & 37 43 29.5 & 2.62 & 0.18 & 13.8 & 4.9 : 18.0 & \nodata & \nodata & \nodata & \nodata & \nodata \cr 2 ALMA071742-374225 & 7 17 42.86 & 37 42 25.7 & 3.02 & 0.20 & 14.6 & 2.8 : 5.0 & \nodata & \nodata & \nodata & \nodata & \nodata \cr 3 ALMA071742-374231 & 7 17 42.55 & 37 42 31.0 & 2.26 & 0.22 & 10.1 & 2.4 : 6.5 & \nodata & \nodata & \nodata & \nodata & \nodata \cr 4 ALMA071726-374247 & 7 17 26.01 & 37 42 47.5 & 1.03 & 0.17 & 6.0 & 3.8 : 11.0 & \nodata & \nodata & \nodata & \nodata & \nodata \cr \enddata \tablecomments{ The columns are as in Table~\ref{a370_band6}.} \end{deluxetable*} \begin{deluxetable*}{lrrrcrcccl} \setcounter{table}{8} \tablewidth{0pt} \tablecaption{ALMA Archive Sample \label{archivetable} } \scriptsize \tablehead{Cluster & R.A. & Decl. & $f_{850}$ & Error & $f_{450}$ & Error & Note & m160& Redshift \\ & J2000.0 & J2000.0 & \multicolumn{2}{c}{(mJy)} & \multicolumn{2}{c}{(mJy)} & & & \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10)} \startdata A1689& 13 11 30.80& -1 19 11.3& 3.6 & 0.3 & 13.0 & 2.4 & & 24.4& \nodata \cr A1689& 13 11 31.45& -1 19 32.4& -1.2 & 0.3 & 0.6 & 2.3 & & 17.4& 0.187 \cr A1689& 13 11 30.03& -1 20 28.4&-0.7 & 0.3 &0.1 & 2.2 & & 18.1& 0.200 \cr A1689& 13 11 29.93& -1 19 18.7& 0.8 & 0.3 & 0.4 & 2.4 & & 25.3& 7.130 \cr A2390& 21 53 36.78& 17 41 42.3& 7.2 & 0.3 & 4.9 & 2.4 & BCG &\nodata&\nodata\cr A2744& 0 14 19.80& -30 23 07.7 & 1.7 & 0.3 & 0.8 & 3.3 & & 24.4& 2.96\cr A2744& 0 14 18.35& -30 24 47.3& 6.7 & 0.3 & 10.0 & 3.4 & & 25.2& 2.482\cr A2744& 0 14 20.40& -30 22 54.3& 3.6 & 0.4 & 7.8 & 3.3 & & 23.5& 3.058\cr A2744& 0 14 17.58& -30 23 00.5 & 3.3 & 0.3 & 18.0 & 3.4 & & 21.9& 1.498\cr A2744& 0 14 19.12& -30 22 42.2& 1.8 & 0.4 & 6.5 & 3.4 & & 23.5& 2.409\cr A2744& 0 14 17.28& -30 22 58.5& 4.1 & 0.3 & 18.0 & 3.4 & & 22.0& 
1.55\cr A2744& 0 14 22.10& -30 22 49.6&-0.4 & 0.4 & 5.2 & 3.4 & & 25.4& 2.644\cr A2744& 0 14 19.50& -30 22 48.7& 1.8 & 0.4 & -6.0 & 3.4 & & 27.0& 4.16\cr A2744& 0 14 19.79& -30 22 37.8& 0.5 & 0.4 & 5.1 & 3.5 & & 25.5& 5.56\cr MACSJ0416& 4 16 10.79& -24 04 47.4& 3.7 & 0.3 & 11.0 & 2.2 & & 21.4& 2.086\cr MACSJ0416& 4 16 06.96& -24 04 00.0 & 1.5 & 0.3 & 2.8 & 2.2 & & 22.0& 1.953\cr MACSJ0416& 4 16 08.81& -24 05 22.5& 0.5 & 0.3 & 2.0 & 2.4 & & 23.6& 1.50\cr MACSJ0416& 4 16 11.66& -24 04 19.4&-0.8 & 0.3 & 5.4 & 2.2 & & 23.2& 2.13\cr MACSJ0416& 4 16 09.44& -24 05 35.3&-0.1 & 0.4 & -2.8 & 2.5 & & 25.9& 8.311\cr MACSJ1423& 14 23 47.87& 24 04 42.2& 1.7 & 0.2 & 6.5 & 1.6 & BCG &\nodata&\nodata\cr MACSJ2129& 21 29 21.32& -7 41 15.4& 8.8 & 0.4 & 28.0 & 3.3 & &\nodata&\nodata\cr MACSJ2129& 21 29 22.35& -7 41 31.0&-0.6 & 0.4 & -2.8 & 3.2 & & 20.3& 1.537\cr RXJ1347& 13 47 30.62& -11 45 09.6& 3.7 & 0.3 & 3.2 & 1.9 & BCG &\nodata&\nodata\cr RXJ1347& 13 47 27.82& -11 45 55.9& 12.0 & 0.3 & 47.0 & 2.0 & LENS & 20.2& 1.28\cr RXJ1347& 13 47 27.64& -11 45 51.0& 14.0 & 0.3 & 51.0 & 1.9 & LENS & 21.4& 1.28\cr \enddata \tablecomments{ The columns are (1) cluster, (2) and (3) ALMA R.A. and decl., (4) and (5) SCUBA-2 \afluxb flux and error (we have included a confusion noise of 0.33~mJy added in quadrature to the white noise here), (6) and (7) SCUBA-2 \afluxa flux and error, (8) note as to whether the source is a brightest cluster galaxy (labeled ``BCG'') or a multiply-lensed source (labeled ``LENS"; see Figure~\ref{alma_lens2}), (9) m160 magnitude, and (10) redshift (speczs have three or more digits after the decimal point, and photzs have two). Note that in A1689, the speczs for the second, third, and fourth sources are from Colless et al.\ (2003), Lin et al.\ (2018), and Wong et al.\ (2022), respectively. 
For A2744, the speczs are from Mu\~{n}oz Arancibia et al.\ (2022), and the photzs for the remaining sources are from the ASTRODEEP catalogs of Merlin et al.\ (2016) and Castellano et al.\ (2016). For MACSJ0416, the first two sources have speczs from the Grism Lens-Amplified Survey from Space (GLASS) survey (Treu et al.\ 2015), as quoted in Laporte et al.\ (2017). The fifth source has a specz from Bakx et al.\ (2020). The third and fourth sources have photzs from the ASTRODEEP catalogs of Merlin et al.\ (2016) and Castellano et al.\ (2016). For MACSJ2129, the specz is from Molino et al.\ (2017). For RXJ1347, the photz for the multiply-lensed source is from Zitrin et al.\ (2015). However, Brada\v{c} et al.\ (2008) placed it at $z=4$ based on their gravitational lens modeling. } \end{deluxetable*} \clearpage \startlongtable \begin{deluxetable*}{lrrrcrrcrrc} \centerwidetable \renewcommand\baselinestretch{1.0} \tablewidth{0pt} \tablecaption{SCUBA-2 High-Redshift Candidates\label{faintsample}} \scriptsize \tablehead{No. and Name & R.A. 
& Decl.& $f_{850}$ & Error & S/N & $f_{450}$ & Error & S/N & $f_{450}/f_{850}$ & Radius \\ & J2000.0 & J2000.0 & \multicolumn{2}{c}{(mJy)} & & \multicolumn{2}{c}{(mJy)} & & & (arcmin) \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11) } \startdata 1 A1689 & 13 11 21.8 & -1 19 51.3 & 11.0 & 0.5 & 21.0 & 14.7 & 2.7 & 5.3 & 1.31 & 1.7\cr 2 A1689 & 13 11 19.2 & -1 21 51.3 & 5.4 & 0.6 & 9.1 & 9.5 & 3.4 & 2.7 & 1.75 & 2.9\cr 3 A1689 & 13 11 38.0 & -1 16 36.3 & 4.8 & 0.7 & 7.2 & 8.1 & 3.3 & 2.4 & 1.68 & 4.3\cr 4 A1689 & 13 11 32.2 & -1 19 52.3 & 3.5 & 0.5 & 6.9 & 3.6 & 2.3 & 1.5 & 1.02 & 1.0\cr 5 A1689 & 13 11 32.7 & -1 18 21.4 & 2.8 & 0.6 & 5.0 & 5.4 & 2.7 & 1.9 & 1.88 & 2.0\cr 6 A1689 & 13 11 28.0 & -1 19 07.4 & 2.6 & 0.5 & 5.3 & 0.9 & 2.5 & 0.4 & 0.34 & 1.1\cr 7 A2390 & 21 53 23.5 & 17 40 07.2 & 5.8 & 0.5 & 10.0 & 11.0 & 3.7 & 2.9 & 1.90 & 3.4\cr 8 A2390 & 21 53 37.4 & 17 43 39.2 & 3.5 & 0.5 & 7.2 & 6.7 & 2.5 & 2.5 & 1.89 & 1.6\cr 9 A2390 & 21 53 17.9 & 17 41 27.0 & 3.3 & 0.6 & 5.8 & 3.8 & 3.6 & 1.0 & 1.15 & 4.1\cr 10 A2390 & 21 53 32.6 & 17 44 32.1 & 3.1 & 0.5 & 6.1 & 6.2 & 2.7 & 2.2 & 1.99 & 2.4\cr 11 A2390 & 21 53 27.2 & 17 40 40.2 & 3.1 & 0.5 & 6.0 & -1.0 & 3.4 & -0.3 & -0.31 & 2.4\cr 12 A2390 & 21 53 26.1 & 17 44 21.1 & 2.8 & 0.5 & 5.3 & 5.5 & 3.5 & 1.5 & 1.90 & 3.1\cr 13 A2744 & 0 14 19.4 & -30 23 07.1 & 2.5 & 0.5 & 5.1 & 4.2 & 3.1 & 1.3 & 1.64 & 0.8\cr 14 A370 & 2 39 57.0 & -1 37 18.9 & 6.2 & 0.6 & 10.0 & 7.9 & 3.8 & 2.0 & 1.25 & 3.2\cr 15 A370 & 2 40 3.03 & -1 31 16.0 & 5.4 & 0.7 & 8.2 & 8.9 & 3.3 & 2.6 & 1.62 & 4.0\cr 16 A370 & 2 39 51.6 & -1 30 33.0 & 5.4 & 0.6 & 9.4 & 8.2 & 3.6 & 2.2 & 1.50 & 3.7\cr 17 A370 & 2 39 45.3 & -1 38 07.0 & 4.0 & 0.6 & 6.9 & -0.7 & 3.5 & -0.2 & -0.18 & 4.1\cr 18 A370 & 2 39 35.2 & -1 34 55.0 & 3.6 & 0.6 & 6.3 & 5.8 & 4.1 & 1.4 & 1.60 & 4.3\cr 19 A370 & 2 39 47.1 & -1 32 20.0 & 2.8 & 0.6 & 5.0 & 0.5 & 3.5 & 0.1 & 0.16 & 2.3\cr 20 MACSJ0416 & 4 16 5.61 & -24 07 18.7 & 8.8 & 0.6 & 14.0 & 12.8 & 3.5 & 3.5 & 1.45 & 
2.9\cr 21 MACSJ0416 & 4 16 19.1 & -24 03 59.7 & 4.4 & 0.6 & 7.5 & 8.0 & 2.8 & 2.7 & 1.82 & 2.6\cr 22 MACSJ0717 & 7 17 54.2 & 37 42 53.9 & 12.0 & 0.5 & 23.0 & 12.6 & 2.6 & 4.8 & 1.04 & 4.1\cr 23 MACSJ0717 & 7 17 43.0 & 37 40 35.9 & 8.8 & 0.5 & 16.0 & 16.4 & 2.4 & 6.8 & 1.86 & 4.4\cr 24 MACSJ0717 & 7 17 34.3 & 37 48 01.0 & 6.2 & 0.5 & 12.0 & 12.0 & 2.3 & 5.0 & 1.93 & 3.2\cr 25 MACSJ0717 & 7 17 51.3 & 37 44 50.9 & 5.9 & 0.5 & 12.0 & 10.8 & 2.3 & 4.5 & 1.82 & 3.2\cr 26 MACSJ0717$^a$ & 7 17 30.8 & 37 44 37.9 & 4.3 & 0.4 & 9.8 & 5.1 & 1.6 & 3.1 & 1.18 & 0.8\cr 27 MACSJ0717 & 7 17 40.6 & 37 47 10.0 & 2.8 & 0.5 & 5.9 & 4.8 & 2.2 & 2.1 & 1.69 & 2.6\cr 28 MACSJ0717 & 7 17 38.7 & 37 40 27.0 & 2.7 & 0.5 & 5.1 & -0.2 & 2.4 & -0.1 & -0.08 & 4.4\cr 29 MACSJ0717 & 7 17 53.5 & 37 46 26.9 & 2.5 & 0.5 & 5.0 & 3.8 & 2.5 & 1.4 & 1.47 & 4.0\cr 30 MACSJ1149 & 11 49 44.8 & 22 21 45.7 & 9.4 & 0.5 & 19.0 & 12.8 & 1.7 & 7.3 & 1.36 & 2.9\cr 31 MACSJ1149 & 11 49 17.5 & 22 23 19.7 & 7.5 & 0.5 & 14.0 & 13.1 & 2.0 & 6.3 & 1.73 & 4.3\cr 32 MACSJ1149 & 11 49 49.5 & 22 24 41.7 & 7.2 & 0.5 & 14.0 & 12.7 & 1.9 & 6.6 & 1.76 & 3.1\cr 33 MACSJ1149 & 11 49 46.3 & 22 26 31.7 & 7.0 & 0.5 & 14.0 & 13.6 & 1.9 & 7.1 & 1.94 & 3.4\cr 34 MACSJ1149 & 11 49 28.9 & 22 27 10.7 & 6.2 & 0.5 & 12.0 & 11.5 & 1.9 & 6.0 & 1.84 & 3.5\cr 35 MACSJ1149 & 11 49 27.6 & 22 26 02.8 & 5.5 & 0.5 & 11.0 & 9.5 & 1.8 & 5.1 & 1.69 & 2.8\cr 36 MACSJ1149 & 11 49 38.8 & 22 27 54.7 & 5.2 & 0.6 & 9.0 & 7.0 & 2.6 & 2.6 & 1.33 & 3.9\cr 37 MACSJ1149 & 11 49 50.3 & 22 22 02.8 & 4.6 & 0.5 & 9.3 & 8.0 & 2.1 & 3.7 & 1.70 & 3.8\cr 38 MACSJ1149 & 11 49 49.2 & 22 27 06.8 & 4.1 & 0.5 & 8.1 & 7.6 & 2.1 & 3.4 & 1.80 & 4.3\cr 39 MACSJ1149 & 11 49 55.3 & 22 23 52.7 & 2.6 & 0.5 & 5.0 & -0.1 & 2.0 & -0.1 & -0.06 & 4.4\cr 40 MACSJ1149 & 11 49 43.7 & 22 24 02.8 & 2.3 & 0.5 & 5.0 & 3.9 & 1.6 & 2.3 & 1.62 & 1.7\cr 41 MACSJ1423 & 14 23 40.6 & 24 07 39.1 & 7.5 & 0.5 & 15.0 & 12.5 & 2.2 & 5.5 & 1.66 & 3.0\cr 42 MACSJ1423 & 14 23 28.4 & 24 05 38.9 & 5.8 & 0.5 & 11.0 
& 7.8 & 2.7 & 2.7 & 1.33 & 4.4\cr 43 MACSJ1423 & 14 24 3.17 & 24 02 53.1 & 3.8 & 0.5 & 7.4 & 6.6 & 2.4 & 2.6 & 1.72 & 4.1\cr 44 MACSJ1423 & 14 23 28.4 & 24 04 24.0 & 3.5 & 0.5 & 6.7 & 3.9 & 2.6 & 1.4 & 1.09 & 4.4\cr 45 MACSJ1423 & 14 23 50.0 & 24 08 36.1 & 3.2 & 0.6 & 5.8 & 6.1 & 3.1 & 1.9 & 1.87 & 3.6\cr 46 MACSJ1423 & 14 24 3.84 & 24 04 52.9 & 2.8 & 0.6 & 5.1 & 4.3 & 2.6 & 1.6 & 1.52 & 3.6\cr 47 MACSJ1423 & 14 24 7.40 & 24 04 30.0 & 2.8 & 0.5 & 5.2 & 4.9 & 2.5 & 1.9 & 1.72 & 4.4\cr 48 MACSJ1423 & 14 23 40.4 & 24 06 08.1 & 2.3 & 0.5 & 5.0 & 4.1 & 2.0 & 1.9 & 1.76 & 2.0\cr 49 MACSJ2129 & 21 29 26.1 & -7 45 29.8 & 8.7 & 0.7 & 12.0 & 12.9 & 4.9 & 2.6 & 1.47 & 3.9\cr 50 RXJ1347 & 13 47 21.4 & -11 48 59.9 & 9.6 & 0.6 & 15.0 & 17.9 & 3.1 & 5.6 & 1.84 & 4.4\cr 51 RXJ1347 & 13 47 17.4 & -11 45 50.0 & 7.0 & 0.6 & 12.0 & 12.0 & 2.7 & 4.4 & 1.71 & 3.0\cr 52 RXJ1347 & 13 47 41.8 & -11 47 20.0 & 6.8 & 0.7 & 10.0 & 12.5 & 3.2 & 3.8 & 1.82 & 3.7\cr 53 RXJ1347 & 13 47 14.0 & -11 44 34.9 & 4.4 & 0.6 & 7.2 & 6.7 & 2.9 & 2.2 & 1.52 & 3.8\cr 54 RXJ1347 & 13 47 31.2 & -11 48 12.9 & 3.6 & 0.6 & 6.3 & 5.6 & 2.8 & 1.9 & 1.53 & 3.1\cr 55 RXJ1347 & 13 47 23.0 & -11 48 59.0 & 3.4 & 0.6 & 5.4 & 5.4 & 3.1 & 1.7 & 1.58 & 4.2\cr \enddata \tablecomments{ The columns are (1) number and name, (2) and (3) SCUBA-2 \afluxb R.A. and decl., (4), (5), and (6) SCUBA-2 \afluxb flux, error (we have included a confusion noise of 0.33~mJy added in quadrature to the white noise here), and S/N, (7), (8), and (9) SCUBA-2 \afluxa flux, error, and S/N, (10) SCUBA-2 \afluxa to \afluxb flux ratio, and (11) offset from the cluster center in arcmin. $^a$This is a multiply-lensed source that is blended at the SCUBA-2 resolution (see Figure~\ref{alma_lens1}). } \end{deluxetable*} \appendix Here we show the \afluxb (left) and \afluxa (right) SCUBA-2 images for the remaining 9 clusters (MACSJ1149 is shown in Figure~\ref{macs1149_sample}), with the detected ($>4\sigma$) \afluxb sources marked (white circles).
Title: Long-Term Simulations of Dynamical Ejecta: Homologous Expansion and Kilonova Properties
Abstract: Accurate numerical-relativity simulations are essential to study the rich phenomenology of binary neutron star systems. In this work, we focus on the material that is dynamically ejected during the merger process and on the kilonova transient it produces. Typically, radiative transfer simulations of kilonova light curves from ejecta make the assumption of homologous expansion, but this condition might not always be met at the end of usually very short numerical-relativity simulations. In this article, we adjust the infrastructure of the BAM code to enable longer simulations of the dynamical ejecta with the aim of investigating when the condition of homologous expansion is satisfied. In fact, we observe that the deviations from a perfect homologous expansion are about 30% at roughly 100 ms after the merger. To determine how these deviations might affect the calculation of kilonova light curves, we extract the ejecta data for different reference times and use them as input for radiative transfer simulations. Our results show that the light curves for extraction times later than 80 ms after the merger deviate by less than 0.4 mag and are mostly consistent with numerical noise. Accordingly, deviations from the homologous expansion for the dynamical ejecta component are negligible for the purpose of kilonova modelling.
https://export.arxiv.org/pdf/2208.13460
\title{Long-Term Simulations of Dynamical Ejecta: Homologous Expansion and Kilonova Properties} \author{Anna \surname{Neuweiler}$^{1}$} \author{Tim \surname{Dietrich}$^{1,2}$} \author{Mattia \surname{Bulla}$^{3,4}$} \author{Swami Vivekanandji \surname{Chaurasia}$^{3}$} \author{Stephan \surname{Rosswog}$^{3,5}$} \author{Maximiliano \surname{Ujevic}$^{6}$} \affiliation{${}^1$Institut f\"{u}r Physik und Astronomie, Universit\"{a}t Potsdam, Haus 28, Karl-Liebknecht-Str. 24/25, 14476, Potsdam, Germany} \affiliation{${}^2$Max Planck Institute for Gravitational Physics (Albert Einstein Institute), Am M\"uhlenberg 1, Potsdam 14476, Germany} \affiliation{${}^3$The Oskar Klein Centre, Department of Astronomy, Stockholm University, AlbaNova, SE-10691 Stockholm, Sweden} \affiliation{${}^4$Department of Physics and Earth Science, University of Ferrara, via Saragat 1, 44122 Ferrara, Italy} \affiliation{${}^5$Sternwarte Hamburg, Gojenbergsweg 112, 21029 Hamburg, Germany} \affiliation{${}^6$Centro de Ci\^{e}ncias Naturais e Humanas, Universidade Federal do ABC, Santo Andr\'{e} 09210-170, SP, Brazil} \date{\today} \section{Introduction} \label{sec:intro} About 90 gravitational wave (GW) events have been detected since the Advanced LIGO and Advanced Virgo detectors started operating~\citep{LIGOScientific:2018mvr,LIGOScientific:2020ibl,LIGOScientific:2021}. The observed GWs originated from mergers of three different types of systems: binary black holes (BBHs), e.g.,~\citep{Abbott:2016blz,Abbott:2016nmj}, binary neutron stars (BNSs)~\citep{TheLIGOScientific:2017qsa, Abbott:2020uma}, and black hole-neutron star (BHNS) binaries~\cite{LIGOScientific:2021qlt}. Systems containing at least one neutron star (NS) are especially interesting, as additional electromagnetic (EM) counterparts might be present. The observation of a phenomenon via different messengers can provide valuable insights into both the involved physics and the astronomical environment. 
The first and so far only event for which GW and EM counterparts were unambiguously detected was GW170817, which occurred on August 17$^\mathrm{th}$, 2017~\citep{TheLIGOScientific:2017qsa}. The kilonova AT2017gfo~\citep{Abbott:2017wuw} and the gamma-ray burst GRB170817A~\citep{Goldstein:2017mmi, Savchenko:2017ffs} are associated with the same source. \par Because of their high compactness, NSs allow for the study of matter at densities that are inaccessible to terrestrial laboratories. In the last few decades, NS mergers have been considered a prime site for the formation of the heaviest elements in our Universe~\cite{Lattimer:1974a, Eichler:1989ve, Freiburghaus:1999, Rosswog:1998hy, Korobkin:2012uy, Wanajo:2014wha}. The formation of about half of all elements heavier than iron involves neutron capture reactions that must be rapid (r-process) compared to $\beta$-decay~\cite{Burbidge:1957vc, Cameron:1957} and therefore requires a neutron-rich environment; see \cite{cowan21} for a recent review. Analogous to type Ia supernovae, the radioactive decay of the formed r-process nuclei is expected to power an EM transient in the optical, infrared, and ultraviolet bands. In the literature this transient is called a kilonova~\citep{Li:1998bw, Metzger:2016pju} or macronova~\citep{Kulkarni:2005jw}. Indeed, the EM counterpart AT2017gfo showed signatures indicative of a transient powered by r-process nuclei, including lanthanides and actinides~\cite{Barnes:2016umi, Kasen:2017sxr, Tanaka:2017qxj, Tanvir:2017pws, Rosswog:2017sdn, wu19, Miller:2019dpt, kasliwal22}. \par Simulations based on numerical relativity (NR) are crucial for the study of these systems and a correct interpretation of the observational data. By solving Einstein's field equations, NR enables accurate simulations of merging BBH, BNS, and BHNS systems. From the simulations, GW signals can be extracted and used to develop models to analyse the detected data. 
Similarly, the output describing the ejected material can be used to model spectra and light curves for EM transients, e.g., \cite{Tanaka:2013ana,Tanaka:2013ixa,Kawaguchi:2016ana,Dietrich:2016fpt,Perego:2017wtu,Kawaguchi:2019nju}, that can be compared to observations. \par Due to their computational cost, NR simulations typically cover only a few tens of milliseconds after the merger~\cite{Hotokezaka:2012ze,Kawaguchi:2015bwa,Kyutoku:2015,Sekiguchi:2016bjd,Radice:2016dwd,Kiuchi:2017zzg}. However, the kilonova itself can last several days up to a few weeks. Consequently, most studies using radiative transfer codes or (semi-)analytic models to calculate kilonova light curves, e.g., \cite{Kasen:2017sxr, Tanaka:2017qxj, Banerjee:2020, Kawaguchi:2020osi, Korobkin:2021, Bulla:2019muo}, naturally assume a homologous expansion, i.e., that the radial velocity of each ejecta element remains constant. As the ejecta expands, the density and thus the speed of sound decrease rapidly, until the latter is only a tiny fraction of the expansion speed. Under these circumstances, the different parts of the ejecta can no longer ``communicate'' through pressure waves and consequently can no longer influence each other, i.e., they are out of sonic contact. This means that the velocity structure can no longer change and the motion is homologous, but this condition might not be met at the end of a typical NR simulation. A first effort to include hydrodynamic effects, using a spherically symmetric Lagrangian radiation-hydrodynamics code to calculate kilonova light curves from NR data, found different results with and without the assumption of homologous expansion~\cite{Wu:2021ibi}. Refs.~\cite{Kawaguchi:2020vbf,Kawaguchi:2022bub} examined the ejecta evolution of BNS mergers from NR simulations on a fixed gravitational background, assuming axisymmetry, to assess the onset of homologous expansion. 
It was shown that the deviation of the radial velocity distribution from homologous expansion falls below $1$\,\% only $0.1$\,days after the merger, i.e., the expansion becomes homologous only after this time. We note that this study considered multiple ejecta components, in particular post-merger components, which might delay the homologous expansion phase. \par Using three-dimensional NR simulations, we want to readdress the problem by investigating how this assumption affects the calculation of the kilonova light curves, focusing on the dynamical ejecta component. Dynamical ejecta from NS merger simulations have been hydrodynamically evolved to late times before \cite{Rosswog:2013kqa,Grossman:2013lqa}. While these earlier studies were not fully relativistic, their Lagrangian nature made it possible to evolve the ejecta up to $100$~years after the merger. Long-term evolutions are harder in Eulerian approaches, but first steps have already been taken to perform seconds-long NR simulations, e.g., using the Cowling approximation~\cite{Hayashi:2021oxy} or assuming axisymmetry~\cite{Fujibayashi:2020dvr}. \par The aim of this study is to adapt our NR code \textsc{bam}~\cite{Bruegmann:2006at,Thierfelder:2011yi} for such long-term evolutions of the dynamical ejecta component and to investigate the degree of homology of the expanding material.\footnote{We note that~\cite{Fernandez:2014bra, Fernandez:2016sbf} showed that the interaction of multiple ejecta components affects the ejecta profile and must be included for realistic kilonova models. Hence, in future studies, we plan to simulate secular ejecta that are emitted on longer time scales up to seconds after the merger, which cannot be considered with the present method.} In particular, we modify the grid structure of our simulations after the merger, i.e., we coarsen the resolution to reduce the computational costs and to allow for faster simulations. 
Additionally, the size of the grid is increased to track the outflowing material for a longer period. Since these changes cause the strong-field region to be insufficiently resolved, we apply the Cowling approximation, i.e., we freeze the spacetime. We probe our new method by simulating two BNS systems with different Equations of State (EOSs), SLy and H4. For the simulations we use the NR code \textsc{bam}~\cite{Bruegmann:2006at,Thierfelder:2011yi}. The results are then transferred to the radiative transfer code \textsc{possis} \cite{Bulla:2019muo} to calculate light curves and to analyse the properties of the kilonova.\par The article is structured as follows. In Sec.~\ref{sec:methods}, we discuss the techniques used in our simulations and describe the implemented changes. In Sec.~\ref{sec:simulation}, we present the results from both BNS setups. In particular, we study the impact of the different EOSs and investigate the homologous nature of the expansion. Furthermore, the light curves of the kilonova are modelled to determine the impact of the homologous expansion assumption in Sec.~\ref{sec:kilonova}. We summarise the main aspects and give a short outlook in Sec.~\ref{sec:summary}. Unless otherwise specified, we employ dimensionless units with $G=c= \mathrm{M}_\odot =1$. Further, we use a metric with signature $\left(-+++\right)$. \section{Methods} \label{sec:methods} \subsection{Standard Compact Binary Evolution} \subsubsection{Spacetime and Matter Evolution} \textsc{bam} employs the method of lines for the dynamical evolution of the gravitational field, where we apply a fourth-order Runge-Kutta scheme and a Courant-Friedrichs-Lewy (CFL) coefficient of 0.25. For our NR simulations, Einstein's field equations are written in $3 + 1$ form. We use the Z4c reformulation \citep{Hilditch:2012fp, Bernuzzi:2009ex}, together with the 1+log slicing~\citep{Bona:1994b} and a Gamma driver shift condition~\citep{Alcubierre:2002kk} to ensure a long-term, stable evolution. 
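As a schematic illustration of such a method-of-lines update with a fourth-order Runge-Kutta scheme and a CFL-limited time step, consider the following minimal Python sketch for a toy advection equation (this is not the \textsc{bam} implementation; the grid, right-hand side, and characteristic speed are illustrative placeholders):

```python
import numpy as np

def rk4_step(q, t, dt, rhs):
    """One classical fourth-order Runge-Kutta step for dq/dt = rhs(q, t)."""
    k1 = rhs(q, t)
    k2 = rhs(q + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = rhs(q + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = rhs(q + dt * k3, t + dt)
    return q + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def cfl_timestep(dx, cfl=0.25, c_char=1.0):
    """Time step obeying |dt/dx| <= cfl for a characteristic speed c_char (= c = 1 here)."""
    return cfl * dx / c_char

# Toy example: linear advection du/dt = -du/dx on a uniform grid.
dx = 0.01
x = np.arange(0.0, 1.0, dx)
u = np.sin(2.0 * np.pi * x)
advection_rhs = lambda q, t: -np.gradient(q, dx)
dt = cfl_timestep(dx)
u = rk4_step(u, 0.0, dt, advection_rhs)
```

The same pattern carries over to any balance-law system once the spatial discretisation of the fluxes provides the right-hand side.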
\par We perform pure General Relativistic Hydrodynamic (GRHD) simulations. The state of the fluid is fully described by the primitive variables $\mathbf{w}$, which comprise the proper rest-mass density $\rho_0$, the specific internal energy $\epsilon$, the pressure $p$, and the fluid velocity $v^i$. The evolution equations are derived from the energy-momentum conservation law and the conservation of particles. Introducing the conservative variables $\mathbf{q}$, which are the conserved rest-mass density $D$, the momentum density $S_i$, and the energy density $\tau$ as seen by the Eulerian observer, the resulting evolution system is written in the form of a balance law, i.e., $\partial_t {\bf q} + \partial_k {\bf f}^k({\bf q}) = {\bf s}({\bf q})$. The conservative variables $\mathbf{q}$ are related to the primitive variables $\mathbf{w}$ by: \begin{equation} D = \rho_0 W, \hspace{0.5cm} S_i = \rho_0 h W^2 v_i, \hspace{0.5cm} \tau = \rho_0 h W^2 - p - D, \end{equation} \noindent with the Lorentz factor $W=(1- v_i v^i)^{-1/2}$ and the specific enthalpy $h = 1+ \epsilon + p/\rho_0$. For a detailed discussion and derivation of the evolution equations, we refer to \cite{font:2000}.\par In order to close the evolution system, an EOS with $p = p\left(\rho_0, \epsilon\right)$ is needed. For the performed BNS simulations, we use piecewise-polytropic fits of the SLy EOS~\citep{Douchin:2001sv} and the H4 EOS~\citep{Lackey:2006} following \cite{Read:2008iy}. Of these two EOSs, SLy is softer and H4 is stiffer. The zero-temperature EOSs are extended to include thermal effects by adding a thermal pressure $P_{\rm th} = \left(\Gamma_{\rm th} - 1\right)\rho_0 \epsilon_{\rm th}$, see \citep{Bauswein:2010dn}. For the presented BNS simulations, we set $\Gamma_{\rm th} = 1.75$. \subsubsection{Vacuum and Low-Density Treatment} The simulation of the vacuum region surrounding the system is numerically challenging. 
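The algebraic map from primitive to conservative variables is straightforward in the forward direction; a minimal Python sketch is given below (with a hypothetical ideal-gas closure rather than the piecewise-polytropic fits, and a flat-space lowering of the velocity). The inverse map, needed at every step in a GRHD code, instead requires a nonlinear root solve:

```python
import numpy as np

# Hypothetical ideal-gas closure p = (Gamma - 1) * rho0 * eps, used here only
# for illustration (not the SLy/H4 fits employed in the paper).
GAMMA = 1.75

def prim2con(rho0, eps, vx, vy, vz):
    """Map primitives (rho0, eps, v^i) to conservatives (D, S_i, tau)."""
    p = (GAMMA - 1.0) * rho0 * eps
    v2 = vx**2 + vy**2 + vz**2        # flat-space sketch: v_i v^i = delta_ij v^i v^j
    W = 1.0 / np.sqrt(1.0 - v2)       # Lorentz factor
    h = 1.0 + eps + p / rho0          # specific enthalpy
    D = rho0 * W
    Sx, Sy, Sz = (rho0 * h * W**2 * v for v in (vx, vy, vz))
    tau = rho0 * h * W**2 - p - D
    return D, (Sx, Sy, Sz), tau

D, S, tau = prim2con(rho0=1e-3, eps=0.1, vx=0.2, vy=0.0, vz=0.0)
```

For a fluid at rest the sketch reduces to $D=\rho_0$, $S_i=0$, $\tau=0$, which makes the degeneracy of the low-density limit, and hence the difficulty of the inverse reconstruction discussed next, easy to see.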
One reason is the reconstruction of the primitive variables \textbf{w} from the conservative variables \textbf{q}, due to the presence of the rest-mass density $D$ in the denominator~\citep{Thierfelder:2011yi}. Hence, the standard approach is to fill the vacuum with a cold, static, low-density artificial atmosphere. The atmosphere density $\rho_{\rm atm}$ is typically defined as a fraction $f_{\rm atm}$ of the initial central density $\rho_c$ of the NS as $\rho_{\rm atm} = f_{\rm atm} \rho_c$. As soon as the density of a grid cell falls below a density threshold $\rho_{\rm thr}$, defined as a fraction $f_{\rm thr}$ of the atmosphere density, i.e., $\rho_{\rm thr} = f_{\rm thr} \rho_{\rm atm}$, it is set to the atmosphere value $\rho_{\rm atm}$. \par Because the density difference between the NS and the artificial atmosphere spans several orders of magnitude, the dynamical impact is often claimed to be negligible. This may be true for properties connected to the bulk motion such as the GW emission or the timing of the merger, but the assumption certainly breaks down for outflowing ejecta. Since the density continues to decrease as the ejecta expands, it could eventually fall below the threshold $\rho_{\rm thr}$ and be set to $\rho_{\rm atm}$. Consequently, the artificial atmosphere potentially distorts the ejecta simulation and should be avoided in order to obtain reliable results. \par In \textsc{bam}, besides the artificial atmosphere method, the ``vacuum method'', introduced in \cite{Poudel2020}, is implemented. The idea of the ``vacuum method'' is to set all matter variables to zero if the pressure $p$ in the conservative-to-primitive reconstruction cannot be found. The variables of a grid cell are also set to zero if quantities are not physical, e.g., if the density is negative with $D < 0$ or if the specific internal energy $\epsilon$ is complex. Furthermore, the flux computation at the interface between matter cells and vacuum cells must be adjusted to achieve physical results. 
This method allows for simulations with ``real vacuum'' and is the preferred choice in this work. \subsubsection{Grid Structure} A common challenge in numerical simulations is to sufficiently resolve different length scales: the strong-field region inside and close to the NSs, and the far-field region where we extract GWs. For this purpose, \textsc{bam} uses an Adaptive Mesh Refinement (AMR) technique following the ``moving boxes'' approach, employing a hierarchy of cell-centered nested Cartesian boxes. The numerical grid comprises $L$ refinement levels, from $l=0$ being the coarsest to $l=L-1$ being the finest level. Following a $2:1$ refinement strategy, the resolution increases by a factor of two on every level. Accordingly, the grid spacing $h_l$ on level $l$ is determined by $h_l = 2^{-l} h_0$ for a fixed spacing $h_0$ on the coarsest level. Of two successive refinement levels, the coarser, larger level $l$ is called the parent level and the finer, smaller level $l+1$ the child level. \par Each refinement level consists of one or more Cartesian boxes with equal grid spacing. For the inner refinement levels with $l > l_m$, the boxes can move and adjust dynamically during the evolution to ensure that the NSs are always covered by the refinement box with the highest resolution. The Cartesian refinement boxes have a fixed number of grid points in each direction. There is a distinction between an outer box with $n$ grid points and an inner moving box with $n_m$ grid points. As the grid spacing decreases for higher levels, the numerical domain of level $l$ is generally larger compared to its child level $l+1$. To increase the numerical domain but maintain the finest resolution, additional coarser levels can be attached to the grid structure, which is exploited in the new implementation. 
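The level spacings and box extents of such a $2:1$ refinement hierarchy follow directly from $h_l = 2^{-l} h_0$; a minimal sketch (the concrete numbers below are hypothetical and only chosen for illustration):

```python
def level_spacings(h0, L):
    """Grid spacing h_l = 2**(-l) * h0 on each of the L refinement levels."""
    return [h0 * 2.0 ** (-l) for l in range(L)]

def box_extent(n, h):
    """Physical side length of a Cartesian box with n points and spacing h."""
    return n * h

# Hypothetical setup: L = 9 levels, coarsest spacing h0 = 120 (code units).
h = level_spacings(120.0, 9)
print(h[0], h[-1])            # 120.0 on level 0, 120/256 = 0.46875 on level 8
print(box_extent(128, h[0]))  # a 128-point coarsest box spans 15360.0 per side
```

Appending coarser levels, as done in the new implementation, simply prepends larger spacings $2 h_0, 4 h_0, \dots$ to this list while leaving the retained fine levels unchanged.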
\par To ensure the stability of the simulation, the CFL condition sets an upper bound on the time step $\Delta t$ for a given resolution $\Delta x$ through the relation $\left| \Delta t/\Delta x \right| \leq a$, where the CFL coefficient $a$ depends on the characteristic velocity of the simulated system and the employed numerical method. Because each refinement level uses a different grid spacing $\Delta x$, each level has a different upper limit for $\Delta t$. In principle, the smallest time step, determined by the finest refinement level, could be applied to all other levels. However, this would noticeably increase the computational cost and slow down the simulation. For this reason, the Berger-Oliger scheme~\citep{Berger:1984zza} is used in \textsc{bam}, see \cite{Brugmann:2008zz}. The basic idea of \textsc{bam}'s Berger-Oliger implementation is simple: given the $2:1$ refinement strategy, the child level $l+1$ performs two time steps with $\Delta t_{l+1} = 2^{-1} \Delta t_l$, while the parent level $l$ evolves only one step with $\Delta t_l$. When successive levels are aligned in time, restriction and prolongation steps are applied to match the evolution with the different resolutions at the refinement levels. \par Every refinement level requires boundary conditions, which are typically set by physical or symmetry conditions. Higher refinement levels use so-called buffer regions that are populated by prolongation of data from the parent level to the child level whenever the two levels are aligned in time. We use six buffer points and perform linear interpolation in time to update the buffer region for the substeps of the child level, see \citep{Brugmann:2008zz}.\par The prolongation step to fill the buffer region generally carries numerical truncation errors. For this reason, we apply a correction step that ensures flux conservation across refinement boundaries. 
This is referred to as conservative mesh refinement (CAMR) and follows the Berger-Colella scheme~\citep{Berger:1989a}. For details on the CAMR implementation in \textsc{bam}, we refer to \cite{Dietrich:2015iva}. \subsection{Introducing a New Grid Structure} Because NR simulations are computationally expensive, usually only a few tens or hundreds of milliseconds around the merger are covered. The longest NR simulations to date cover a few seconds, but are restricted to the Cowling approximation~\cite{Hayashi:2021oxy} or to axisymmetry~\cite{Fujibayashi:2020dvr}. Long-term simulations are essential for a more comprehensive study of the ejected material and for a consistent understanding of the merger and post-merger processes. For this purpose, the computational costs must be reduced, which can be achieved with a lower grid resolution. However, since the simulation of the merger requires a well-resolved strong-field region, the resolution can only be reduced afterwards. Once the resolution is reduced, we use the Cowling approximation, i.e., we stop the evolution of the gravitational field and the spacetime is ``frozen in time''.\par Fig.~\ref{fig:checkpoints} shows the time sequence of the simulations for one example. We start with a simulation using our standard grid structure. In the top row, we show the merging process in one of our simulations, which takes only a few milliseconds. The formation of a stable remnant system, here a black hole (BH) with an accretion disk, requires a few tens of milliseconds. Modifying the existing checkpoint algorithm\footnote{The general purpose of the checkpoint algorithm is to enable a restart of an existing simulation. As NR simulations can take several weeks or months, a running simulation may be aborted by processor problems or simply by limited walltime on a High Performance Computing system. For this reason, regular checkpoints are saved containing all the information about the grid, spacetime, and fluid variables at a given time. 
With the checkpoint the simulation can be continued from this time step.}, we use a written checkpoint after the collapse to change the grid structure and continue the ``modified'' simulation with frozen spacetime and reduced resolution to allow for faster computation. In Fig.~\ref{fig:newgrid}, the grid modification is illustrated in two dimensions. The original grid consists of several nested refinement levels including inner moving boxes. The first step to reduce the resolution is to remove the finest refinement levels. The number of removed levels, $l_\textnormal{rm}$, determines the coarseness of the subsequent simulation. The next step is to extend the numerical domain by adding $l_\textnormal{add}$ new coarser refinement levels. In total, the modified grid structure comprises $L-l_\textnormal{rm}+l_\textnormal{add}$ refinement levels. For consistency, the initial level labels are shifted by $l_\textnormal{add}$, i.e., $l=0$ becomes $l=l_\textnormal{add}$, $l=1$ becomes $l=l_\textnormal{add}+1$, $l=2$ becomes $l=l_\textnormal{add}+2$, etc., and the new coarsest level starts again at $l=0$.\par For the extended region, we assume Schwarzschild spacetime~\citep{Schwarzschild:1916}. For the initialisation, we use the remnant mass $M_{\rm rem}$ as an additional input parameter in the code. To ensure a smooth transition between the Schwarzschild spacetime in the outer regions and the original spacetime, a Planck-taper window function $f_\textnormal{PT}\left(r\right)$ is applied, which is set to $f_\textnormal{PT}\left(r\right)=0$ for $r\geq R_2$ and to $f_\textnormal{PT}\left(r\right)=1$ for $r< R_1$. In the transition region $R_1 < r < R_2$, we use: \begin{equation} f_\textnormal{PT}\left(r\right) = \frac{1}{1+\exp\left(\frac{R_2-R_1}{R_2-r}-\frac{R_1-R_2}{R_1-r}\right)}. 
\label{eq:PT} \end{equation} \noindent Concretely, we multiply our original spacetime data by this function $f_\textnormal{PT}$ and the Schwarzschild metric by $\left(1-f_\textnormal{PT}\right)$ to obtain a smooth transition. Thus, the parameter $R_1$ defines the spatial range within which we keep the original spacetime data, and $R_2$ defines the distance from which pure Schwarzschild spacetime is assumed. \par Additionally, we implement a mask to distinguish grid cells containing bound and unbound matter when the grid structure is changed. We define unbound matter through: \begin{equation} u_0 < -1 \hspace{0.5cm} \mathrm{and} \hspace{0.5cm} v_r > 0. \label{eq:unbound} \end{equation} \noindent The first condition, the geodesic criterion, refers to the time component of the four-velocity $u_0$ and requires an unbound trajectory of the fluid element, provided it follows a geodesic. The second condition demands an outward-pointing radial velocity $v_r$. We denote the conserved rest-mass density of unbound matter by $D_u$. \par The effects of the grid modifications are most severe in the dense, bound matter region around the remnant. Fluctuations in the metric are strongest here and may still affect the behaviour of the matter. In fact, the freezing of the metric and the reduced resolution cause some bound matter around the remnant to expand and become unbound, which can distort the results. For this reason, we remove this part from the simulation when the grid is changed. To verify that the evolution of the unbound matter is not affected by the modifications, we have run the ``normal'' simulations alongside the ``modified'' simulations for some additional time. The comparison showed that the results are qualitatively the same. \section{Binary neutron star simulations} \label{sec:simulation} \begin{table}[t!] 
\centering \caption{Grid parameters of the ``normal'' simulations, from left to right: Simulation name, the total number of refinement levels $L$, the finest non-moving level $l_m$, the number of grid points in each direction for fixed boxes $n$ and for moving boxes $n_m$, the grid spacing on the coarsest level $h_0$ and on the finest level $h_{L-1}$ given in M$_\odot$, and the applied EOS.} \label{tab:simulation_grid} \begin{tabular}{l|c c c c c c c c} \hline Simulation & $L$ & $l_m$ & $n$ & $n_m$ & $h_0$ & $h_{L-1}$ & EOS \\ \hline \hline H4-096 & 9 & 5 & 128 & 96 & 120 & 0.234 & H4 \\ \hline H4-128 & 9 & 5 & 170 & 128 & 90 & 0.176 & H4 \\ \hline H4-144 & 9 & 5 & 192 & 144 & 80 & 0.156 & H4 \\ \hline \hline SLy-096 & 9 & 5 & 128 & 96 & 120 & 0.234 & SLy \\ \hline SLy-128 & 9 & 5 & 170 & 128 & 90 & 0.176 & SLy \\ \hline SLy-144 & 9 & 5 & 192 & 144 & 80 & 0.156 & SLy \\ \hline \end{tabular} \end{table} \subsection{Configurations} We construct initial data for two equal-mass BNS simulations, both with gravitational masses $m_A = m_B = 1.35$\,M$_\odot$, using the pseudospectral code \texttt{SGRID} \citep{Tichy:2006qn,Tichy:2009yr}. We employ the SLy EOS and the H4 EOS. The two NSs have baryonic masses of $m_{b,A} = m_{b,B} = 1.49$\,M$_\odot$ for SLy and $m_{b,A} = m_{b,B} = 1.47$\,M$_\odot$ for H4. We perform the simulations with three different resolutions: low, medium, and high with $96$, $128$, and $144$\,grid points covering the NS, see Tab.\,\ref{tab:simulation_grid}. Since our analysis focuses on the processes after the merger, we choose a small initial separation: $46.96$\,km for the simulations with SLy and $37.70$\,km for the simulations with H4. The two NSs merge already after two orbits at $t_\mathrm{merger} \approx 4.7$\,ms for the simulations with H4, and after seven orbits at $t_\mathrm{merger} \approx 19.7$\,ms for the simulation using the SLy EOS. \begin{table}[t!] 
\centering \caption{Parameters determining the grid changes of the ``modified'' simulations, from left to right: Simulation name (consisting of the name of the corresponding ``normal'' simulation and the time of the grid modification relative to the merger time in milliseconds), number of removed refinement levels $l_{\rm rm}$, number of added refinement levels $l_{\rm add}$, remnant mass $M_{\rm rem}$ used to set the Schwarzschild spacetime, and $R_2$ and $R_1$ specifying the transition region of the Planck Taper window function. Further, we list the ejecta mass $M_{\rm ej}$. } \label{tab:mod_paramter} \begin{tabular}{l|c c c c c|c} \hline Simulation & $l_{\rm rm}$ & $l_{\rm add}$ & $M_{\rm rem}$ & $R_2$ & $R_1$ & $M_{\rm ej}$ \\ & & & $\left[{\rm M}_\odot\right]$ & $\left[{\rm km}\right]$ & $\left[{\rm km}\right]$ & $\left[{\rm M}_\odot\right]$ \\ \hline \hline H4-096-30ms & 6 & 3 & 2.64 & 1916 & 1342 & 0.0035 \\ \hline H4-128-30ms & 6 & 3 & 2.65 & 2211 & 1548 & 0.0022 \\ \hline H4-128-34ms & 6 & 3 & 2.65 & 2211 & 1548 & 0.0021 \\ \hline H4-128-39ms & 6 & 3 & 2.64 & 2211 & 1548 & 0.0022 \\ \hline H4-144-35ms & 6 & 3 & 2.65 & 2211 & 1548 & 0.0030 \\ \hline \hline SLy-096-26ms & 6 & 3 & 2.62 & 2211 & 1548 & 0.0141 \\ \hline SLy-128-25ms & 6 & 3 & 2.63 & 2948 & 2064 & 0.0131 \\ \hline SLy-144-25ms & 6 & 3 & 2.60 & 2948 & 2064 & 0.0178 \\ \hline \end{tabular} \end{table} In both cases, a hypermassive NS (HMNS)\footnote{A HMNS is defined as a NS that exceeds the maximum mass of a uniformly rotating NS supported by the EOS, see \cite{Baumgarte:1999cq}, and avoids collapse only due to differential rotation.} forms. As the system is dynamically unstable, it usually forms a BH within a few tens of milliseconds. In the simulations with H4, the lifetime of the HMNS is $\tau_\mathrm{HMNS} \approx 25$\,ms, and in the simulations with SLy, it is $\tau_\mathrm{HMNS} \approx 16$\,ms. 
The time of the collapse varies for different resolutions: for the simulations with H4 by $\pm 2$\,ms and for the simulations with SLy by $\pm 7$\,ms. Generally, the results for $\tau_\mathrm{HMNS}$ are in agreement with \cite{Dietrich:2015iva}. \par The determination of the lifetime $\tau_\mathrm{HMNS}$ is crucial for the choice of an appropriate time for the grid change $t_{\rm ch}$. We assume a stationary spacetime after the collapse and therefore choose the time for the grid modification to be $t_{\rm ch} > t_{\rm merger} + \tau_{\rm HMNS}$. For the simulations with H4 we use $t_{\rm ch}=30$\,ms after merger and for the simulations with SLy we use $t_{\rm ch}=25$\,ms after merger. Because the HMNS lifetime of the H4-144 simulation is slightly longer, we choose a later change time of $t_{\rm ch} = 35$\,ms after merger. The same applies to the SLy-096 simulation, for which we take $t_{\rm ch} = 26$\,ms. Furthermore, we select two additional change times for the H4-128 simulation to determine the influence of $t_{\rm ch}$.\par The parameters for changing the grid, such as the number of removed ($l_{\rm rm}$) and added ($l_{\rm add}$) refinement levels, are listed in Tab.~\ref{tab:mod_paramter}. The bound mass is removed from the simulation when the grid is changed, and thus only the dynamical ejecta is evolved in the posterior simulation. With the employed changes, the ``modified'' simulations after the grid change are about $6$ times faster than the ``normal'' ones. \subsection{Dynamical Ejecta} The post-merger dynamics depend strongly on the binary parameters, such as the total mass, the mass ratio, and the spins of the NSs, see, e.g., \cite{Nedora:2020qtd}. We focus here on the effects of the different EOSs and discuss the influence of different resolutions. 
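The spacetime blending via the Planck Taper window of Eq.~\eqref{eq:PT} can be checked numerically. The sketch below is illustrative only (the function name is ours, not part of \textsc{bam}); the transition radii are taken from the H4-128 rows of Tab.~\ref{tab:mod_paramter}.

```python
import math

def f_pt(r, R1, R2):
    """Planck Taper window of Eq. (PT): 1 inside R1, 0 outside R2."""
    if r <= R1:
        return 1.0
    if r >= R2:
        return 0.0
    # Exponent: the first term diverges to +inf at R2 (window -> 0),
    # the second drives it to -inf at R1 (window -> 1).
    z = (R2 - R1) / (R2 - r) - (R1 - R2) / (R1 - r)
    try:
        return 1.0 / (1.0 + math.exp(z))
    except OverflowError:
        return 0.0

R1, R2 = 1548.0, 2211.0  # km, as for H4-128 in Tab. 2
print(f_pt((R1 + R2) / 2, R1, R2))  # -> 0.5 at the midpoint
```

The blended metric $f_{\rm PT}\,g_{\rm orig} + (1-f_{\rm PT})\,g_{\rm Schw}$ then interpolates smoothly between the original data inside $R_1$ and pure Schwarzschild outside $R_2$.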
There are, broadly speaking, two important mechanisms that produce the dynamical ejecta: heating due to shocks at the collision interface and core bounces, and the torques of the system causing tidal ejecta. For our equal-mass BNS merger with SLy, i.e., the softer EOS, we find that shock heating is more dominant than for the H4 setups, see, e.g., \cite{Sekiguchi:2016bjd}.\par In Fig.~\ref{fig:BNSmass_res}, the evolution of the ejecta masses and the total rest-masses for each resolution are compared. The upper panel shows the results of the simulations with H4 and the lower panel shows the results of the simulations with SLy. The ejecta mass of the simulations using SLy, $M_{\rm ej} \approx 1.5\times 10^{-2}$\,M$_\odot$, is significantly larger than that of the simulations using H4, $M_{\rm ej} \approx 2\times 10^{-3}$\,M$_\odot$. Previous studies also showed for equal-mass BNS mergers that the ejecta mass can be larger for softer EOSs than for stiffer EOSs, e.g., \cite{Hotokezaka:2012ze, Bauswein:2013yna, Dietrich:2015iva, Sekiguchi:2016bjd,rosswog22b}. The physical reason is that the stars are more compact for softer EOSs and merge with greater velocities at smaller orbital distances, which makes the encounters more violent. \par Because the bound matter is removed when the grid is changed, the total rest-mass in the ``modified'' simulations coincides with the ejecta mass of the ``normal'' simulation at $t_{\rm ch}$. For the simulations with H4, the difference in ejecta mass between the medium and low resolution is $1.3 \times 10^{-3}$\,M$_\odot$, which is a factor of $1.625$ larger than the difference of $0.8 \times 10^{-3}$\,M$_\odot$ between the high and medium resolution. 
Also for the simulations with SLy the ejecta mass varies with resolution: low and medium resolution differ by $1.0 \times 10^{-3}$\,M$_\odot$ and medium and high by $4.7 \times 10^{-3}$\,M$_\odot$.\par The ``modified'' simulations for H4 as well as for SLy show almost perfect conservation of the total rest-mass. However, the ejecta mass is not constant in the ``modified'' simulations. This is not surprising and can be explained by the following considerations. On the one hand, the Cowling approximation and the assumption of Schwarzschild spacetime at large distances compromise the geodesic criterion, see Eq.~\eqref{eq:unbound}, and thus the determination of the unbound matter. On the other hand, the removal of matter at the centre leads to a lack of pressure. In fact, part of the matter falls back and no longer fulfils the second condition of Eq.~\eqref{eq:unbound}. As a consequence, the ejecta mass decreases initially. We observe this drop in the ejecta mass until $\Delta t \approx 25$\,ms after the grid change for all simulations. For this reason, we use the total rest-mass density $D$ instead of $D_u$ in the following analysis of the ejecta evolution. Since we removed the bound part of the matter in the ``modified'' simulation, this corresponds to the dynamical ejecta component. \par To ensure that our results are independent of the time of the grid change, we used three different times $t_{\rm ch}$ for the grid modification of the H4-128 simulation. We show two-dimensional plots of the rest-mass density $D$ in Fig.~\ref{fig:BNSsnaps_changetimes} to compare the qualitative differences in the evolution. The snapshots show the distributed ejecta in the $x$-$y$ plane at $t=59$\,ms after the merger. The images show almost identical results for the different change times $t_{\rm ch}$, i.e., the behaviour is consistent and independent of the time of the grid change. 
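The unbound-matter mask of Eq.~\eqref{eq:unbound} amounts to an elementwise test on the grid data; a minimal NumPy sketch with hypothetical cell values (the array names are ours):

```python
import numpy as np

# Hypothetical per-cell values: time component of the four-velocity u_0,
# radial velocity v_r, and conserved rest-mass density D.
u0 = np.array([-1.2, -0.9, -1.5, -1.1])
vr = np.array([ 0.3,  0.2, -0.1,  0.4])
D  = np.array([ 1.0,  2.0,  3.0,  4.0])

# Geodesic criterion u_0 < -1 combined with outward motion v_r > 0
unbound = (u0 < -1.0) & (vr > 0.0)
D_u = np.where(unbound, D, 0.0)  # conserved density of unbound matter
print(unbound)  # -> [ True False False  True]
```

As discussed above, matter that falls back and loses its outward radial velocity drops out of this mask, which is why the analysis switches from $D_u$ to $D$ after the grid change.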
\subsection{Expansion of the Ejecta} We study the expansion of the dynamical ejecta component by analysing the time evolution of the rest-mass density distribution. Fig.~\ref{fig:BNSsnaps_expansion} shows snapshots of the rest-mass density $D$ in the $x$-$y$ and in the $x$-$z$ plane for different times after the merger. In addition, contour lines for the distribution of radial velocities are plotted from $v_r = 0.1$\,$c$ (white line) to $v_r=0.6$\,$c$ (dark green line) in $\Delta v_r = 0.1$\,$c$ steps. For both systems, the overall distribution appears to be fairly spherical. Accordingly, shock heating seems to be the primary source of the dynamical ejecta. If tidal disruption had been more dominant, the tidal forces would have concentrated the ejecta in the orbital plane, resulting in a flattened, spheroidal distribution. There are deviations from the spherical distribution. In particular, the negative-$x$ and negative-$y$ quadrants of the SLy plots show a fissured structure. This material is already ejected at the beginning of the simulation by artificial shocks at the surface of the NSs. Since we later choose the azimuth $\Phi = 0$ for the calculation of the light curves, i.e., along the positive $x$-axis, these numerical errors should not affect our final results.\par The velocity fronts maintain a spherical shape throughout the entire simulation, which is expected for a homologous expansion. The consistency of the overall structure is also an essential feature. For a more detailed analysis of the expansion, we compute the ejecta mass $m_r$ inside a sphere with radius $r$ via: \begin{equation} m_{r} := \int_0^r \int_0^{2\pi} \int_0^{\pi} D\, r'^2 \sin{\Theta'}\, d\Theta'\, d\Phi'\, dr'. \label{eq:mr} \end{equation} \noindent When the ejecta expands, the radius $r$ of the sphere containing $m_r$ increases. In the case of homologous expansion, the radius should increase linearly in time. The mass spheres are traced by considering $m_r$ as a function of $r$ evolved in time. 
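A discretised version of Eq.~\eqref{eq:mr} simply sums $D$ times the cell volume over all cells inside radius $r$. A sketch on a hypothetical Cartesian grid holding a uniform-density ball (test data of our own choosing, not simulation output):

```python
import numpy as np

# Discrete analogue of the mass-in-sphere integral: sum the conserved
# density D times the cell volume dx^3 over all cells with radius <= r.
# Test setup: a uniform ball of density 1 and radius 5 (hypothetical).
dx = 0.2
x = np.arange(-8.0, 8.0, dx)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
R = np.sqrt(X**2 + Y**2 + Z**2)
D = np.where(R < 5.0, 1.0, 0.0)

def m_r(r):
    return D[R <= r].sum() * dx**3

# For the uniform ball, m_r(r) approaches (4/3) pi r^3 for r < 5 and
# saturates at the total mass for larger r.
```

Tracking the radius at which $m_r$ reaches a fixed value as a function of time then yields the mass-sphere trajectories discussed below.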
However, as discussed above, the evolution of the ejecta in the central region might be biased by the implemented modifications. Therefore, we consider instead the mass outside the sphere, $M_{\rm ej}-m_r$. The results are shown in Fig.~\ref{fig:BNSexpansion}. The evolution of the radius of the mass spheres in time is clearly visible. In fact, we find an almost linear dependence, indicating a homologous expansion of the dynamical ejecta in both simulations. In addition, we compute the mean radial velocities $\Bar{v}_r$ for each shell of mean radius $r$. The radial profiles of the velocity $\Bar{v}_r$ are included in Fig.~\ref{fig:BNSexpansion} as contour lines from $\Bar{v}_r =0.1$\,$c$ (white line) to $\Bar{v}_r=0.6$\,$c$ (dark green line) in $\Delta \Bar{v}_r = 0.1$\,$c$ steps. The contour lines of the radial velocity are almost perfectly linear and agree well with the expansion of the mass spheres in both systems. Thus, our analysis indicates that homologous expansion is reached during our simulation. \\ For a more quantitative investigation, we use the approach of \cite{Rosswog:2013kqa} and define a homology parameter: \begin{equation} \chi := \frac{\Bar{a}t}{\Bar{v}}, \label{eq:homology} \end{equation} \noindent with the average acceleration $\Bar{a}$ and average velocity $\Bar{v}$ of the dynamical ejecta. The homology parameter $\chi$ specifies whether the expansion is accelerated or homologous, i.e., $\chi \longrightarrow 1$ for a constant acceleration and $\chi \longrightarrow 0$ for a constant velocity. The results for $\chi \left(t\right)$ are summarized in Fig.~\ref{fig:homology}. Overall, the values of $\chi$ are higher before the grid change and lower afterwards. More precisely, after the grid modification, the parameter is $\lesssim 0.5$ in the simulations with H4 and $\lesssim 0.2$ in the simulations with SLy. 
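The homology parameter of Eq.~\eqref{eq:homology} can be estimated from a time series of average ejecta velocities by finite differencing the acceleration; the numbers below are hypothetical and only illustrate the limiting behaviour:

```python
# Homology parameter chi = a_bar * t / v_bar, with the average
# acceleration estimated by finite differences of the average velocity.
# (Illustrative sketch; times and velocities below are hypothetical.)

def homology_parameter(t, v):
    """t, v: lists of times and average ejecta velocities."""
    chi = []
    for i in range(1, len(t)):
        a_bar = (v[i] - v[i - 1]) / (t[i] - t[i - 1])
        chi.append(a_bar * t[i] / v[i])
    return chi

# A decelerating approach to constant velocity drives chi toward zero,
# the homologous limit:
t = [10.0, 20.0, 40.0, 80.0]
v = [0.20, 0.22, 0.23, 0.232]
print(homology_parameter(t, v))
```

In this toy series $\chi$ decreases monotonically, mirroring the trend seen in Fig.~\ref{fig:homology} after the grid change.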
In particular, at around $100$\,ms after merger, the expansion deviates by $\sim (10 - 30)$\,\% from a perfectly homologous expansion. The homology parameter in \cite{Rosswog:2013kqa} is generally smaller and has values of about $10^{-2}$ at $100$\,ms after the merger. Another difference is that, whereas the parameter tends to decrease in our simulations, $\chi$ initially increases in \cite{Rosswog:2013kqa} and reaches a maximum of $\sim 10^{-1}$ after one second; cf. Fig.~$7$ in \cite{Rosswog:2013kqa}. This is because the latter work also implemented the nuclear heating from the r-process according to \cite{Korobkin:2012uy}, which continuously injects thermal energy into the ejecta and thus delays reaching the homologous phase. \section{Kilonova Properties} \label{sec:kilonova} We use the three-dimensional Monte Carlo radiative transfer code \textsc{possis}~\cite{Bulla:2019muo} to model kilonova light curves based on our ejecta simulations (see Appendix~\ref{app:possis}). \textsc{possis} requires input data of the ejecta, including the density, velocity, and electron fraction at a reference time $t_{0}$, to calculate kilonova light curves. Subsequently, the grid is evolved for each time step $t_j$ assuming homologous expansion. The velocity $\Vec{v}_i$ of each fluid cell $i$ remains constant, while the grid coordinates evolve following a homologous expansion. \par To probe how the deviations from a perfect homologous expansion influence the computation of the light curves, we extract the ejecta quantities at six different times after the grid change for each simulation. The snapshots in Fig.\,\ref{fig:BNSsnaps_expansion} represent the six reference times $t_0$ for which the ejecta data is extracted to start the radiative transfer simulations. If the assumption of homologous expansion is correct, the results should be independent of the extraction time $t_0$. 
We note that at the time when our work started, \textsc{bam} could not evolve the electron fraction $Y_e$ (see \cite{Gieg:2022mut} for the implementation of the $Y_e$ evolution in \textsc{bam}), which is why we have to make assumptions considering a shocked and an unshocked component, which can be associated with a lanthanide-free and a lanthanide-rich component, respectively. Previous studies showed that both components are required to reproduce the kilonova observation AT2017gfo, i.e., to explain the early blue part of the light curve and the long-term near-infrared emission \citep{Kasen:2017sxr}. We define an entropy indicator $\hat{S} = P/P\left(T=0\right)$. The entropy indicator $\hat{S}$ is high if the thermal component of the pressure is large. Thus, for the ejecta caused by shock heating, $\hat{S}$ is expected to be higher than for the ejecta caused by torque. Accordingly, we set the electron fraction $Y_e$ lower for low $\hat{S}$ and higher for high $\hat{S}$. More precisely, using a threshold $\hat{S}_{\rm th} = 50$, the electron fraction of grid cells with $\hat{S} > \hat{S}_{\rm th}$ is set to $Y_e = 0.3$ and for $\hat{S} < \hat{S}_{\rm th}$ to $Y_e = 0.15$.\par We first calculated the light curves using $D_u$ as input. However, the determination of $D_u$, see Eq.\,\eqref{eq:unbound}, is impaired by the Cowling approximation and the implemented modifications, as discussed above, leading to an artificial decrease of the ejecta mass, see Fig.\,\ref{fig:BNSmass_res}. The light curves for the corresponding reference times are consequently less bright. To avoid this bias, we use instead the total rest-mass density $D$ of our simulations as input for \textsc{possis} in the presented results. 
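The two-component electron-fraction assignment described above is a simple threshold on the entropy indicator; a minimal sketch (the function name is ours, the threshold and $Y_e$ values are those quoted in the text):

```python
import numpy as np

# Cells with entropy indicator S_hat above the threshold are treated as
# shocked (lanthanide-free, Y_e = 0.3), the rest as unshocked
# (lanthanide-rich, Y_e = 0.15).
S_HAT_TH = 50.0

def assign_ye(s_hat):
    return np.where(s_hat > S_HAT_TH, 0.30, 0.15)

s_hat = np.array([5.0, 80.0, 49.0, 120.0])  # hypothetical cell values
print(assign_ye(s_hat))  # shocked cells (S_hat > 50) get Y_e = 0.3
```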
Since we remove the bound matter with the grid modification, this total mass consists of the dynamical ejecta component only.\par Fig.~\ref{fig:lightcurvesH4144} and Fig.~\ref{fig:lightcurvesSLy144} show the results of the simulations with the highest resolution, i.e., H4-144-35ms and SLy-144-25ms. We focus on the infrared bands J, H, and K, since these are the dominant frequency bands for the simulated dynamical ejecta component. Shorter wavelengths in the optical bands, such as the u, g, r, i, and z bands, show similar behaviour but are less bright. \par The light curves for the simulations with H4 show an earlier and sharper peak at $\sim 4$\,days after the merger in the H and K bands. This is followed by a relatively steady decline. The peak of the light curves for the simulations with SLy is less sharp and rather flat, and is only reached after $\sim 5$\,days in the H and K bands. Overall, the light curves for the simulations with SLy are brighter than for the simulations with H4, due to the higher ejecta mass. The differences between the viewing angles $\theta_{\rm obs}$ are small, which can be explained by the small deviations from spherical symmetry of the ejecta input, see Fig.~\ref{fig:BNSsnaps_expansion}.\par Our results show that the light curves for different extraction times are very consistent in each case. The light curves for $t_0 < 60$\,ms have an earlier and faster drop, but generally differ by only $\lesssim 1$\,mag until $\sim 12$\,days after the merger. For $t_0 > 80$\,ms the differences are mostly $\lesssim 0.4$\,mag and for $t_0 > 100$\,ms even within the Monte Carlo noise range. This shows that the assumption of a homologous expansion from $t_0 > 80$\,ms after the merger is well justified. \begin{table}[t!] \centering \caption{A selection of NR simulations in the literature for which the ejecta data have been used in studies modelling kilonova light curves. 
Listed are references for BNS or BHNS simulations, and when/how the ejecta properties were extracted.} \label{tab:lc_review} \begin{tabular}{l|c r} \hline Reference & System & Extraction of Ejecta \\ \hline \citet{Hotokezaka:2012ze} & BNS & at $t_0 \approx 10$\,ms after the merger \\ \citet{Sekiguchi:2016bjd} & BNS & at $t_0 \approx 30$\,ms after the merger \\ \citet{Kiuchi:2017zzg} & BNS & at $t_0 \approx 30$\,ms after the merger \\ \citet{Radice:2016dwd} & BNS & on a sphere with $r \approx 295$\,km \\ \citet{Kawaguchi:2015bwa} & BHNS & at $t_0 \approx 10$\,ms after the merger \\ \citet{Kyutoku:2015} & BHNS & at $t_0 \approx 10$\,ms after the merger \\ \hline \end{tabular} \end{table} Most previous studies calculating kilonova light curves using radiative transfer codes or semi-analytical light curve models based on ejecta data from NR simulations use an idealised geometry. In this context, the light curve models of \cite{Tanaka:2013ana, Tanaka:2013ixa, Perego:2017wtu, Dietrich:2016fpt, Kawaguchi:2016ana, Kawaguchi:2019nju} utilize the NR simulations listed in Tab.~\ref{tab:lc_review}. In these, the ejecta is extracted already at $\left(10-30\right)$\,ms after merger. Also for \cite{Radice:2016dwd}, which uses an extraction sphere of $r \approx 295$\,km, an extraction time of $t_0 \approx 10$\,ms can be associated, assuming a low ejecta velocity of $v = 0.1$\,$c$, and an even earlier one for higher velocities. Also in recent studies based on smoothed-particle hydrodynamics~\cite{Kullmann:2021gvo, Just:2021vzy}, homologous expansion is assumed at $\left(10-20\right)$\,ms after the merger. Our results show that the assumption of a homologous expansion at this time might bias the light curves, and that a later extraction time would be better. \section{Conclusion} \label{sec:summary} In this article, we have presented a simple method to perform longer simulations of the dynamical ejecta with \textsc{bam}. 
By changing the grid structure of our code and applying the Cowling approximation after the collapse of the merger remnant and the formation of a BH, we were able to reduce the computational cost and speed up the simulation. This allowed us to perform long-term simulations of the ejecta in a reasonable computational time. We demonstrated our new method by simulating two equal-mass BNS systems with different EOSs. With our new framework, the speed of the simulations increased by a factor of six.\par We used our simulations to test when homologous expansion of the ejecta is reached. Our results show that, although the expansion generally appears very homologous, deviations of around $\left(10 - 30\right)$\,\% from a perfectly homologous expansion are still present at $100$\,ms after the merger. To investigate how this affects the light curves, we used our data as input for radiative transfer simulations and modelled kilonova light curves. The results show that $\sim 80$\,ms after the merger the differences in the light curves are negligible. Thus, previous studies that used NR simulations and extracted ejecta properties already $(10 - 30)$\,ms after the merger appear rather optimistic, as the expansion may not be fully homologous yet.\par While our results focus on the dynamical ejecta component and equal-mass systems, additional work is needed for an accurate description of the ejecta evolution and kilonova light curves, in particular through the inclusion of other ejecta components. \begin{acknowledgments} We thank P.~Biswas, B.~Br\"ugmann, M.~Emma, M.~Mattei, V.~Nedora, H.~Pfeiffer, and F.~Schianchi for helpful discussions. M.B. acknowledges support from the Swedish Research Council (Reg. no. 2020-03330). S.V.C. was supported by the research environment grant ``Gravitational Radiation and Electromagnetic Astrophysical Transients (GREAT)'' funded by the Swedish Research Council (VR) under Grant No. Dnr. 2016-06012. 
SR has been supported by the Swedish Research Council (VR) under grant number 2020-05044, by the research environment grant ``Gravitational Radiation and Electromagnetic Astrophysical Transients'' (GREAT) funded by the Swedish Research Council (VR) under Dnr 2016-06012, and by the Knut and Alice Wallenberg Foundation under grant Dnr. KAW 2019.0112. The simulations were performed on the national supercomputer HPE Apollo Hawk at the High Performance Computing (HPC) Center Stuttgart (HLRS) under the grant number GWanalysis/44189, on the GCS Supercomputer SuperMUC\_NG at the Leibniz Supercomputing Centre (LRZ) [project pn29ba], and on the HPC systems Lise/Emmy of the North German Supercomputing Alliance (HLRN) [project bbp00049]. \end{acknowledgments} \appendix \section{\textsc{possis}} \label{app:possis} \textsc{possis} is a Monte Carlo radiative transfer code~\cite{Bulla:2019muo} that requires input for a three-dimensional grid at a reference time $t_{0}$. The input data represent a snapshot of the ejecta. Subsequently, the grid is evolved for each time step $t_j$ assuming a homologous expansion. In particular, the velocity $\Vec{v}_i$ of each fluid cell $i$ remains constant, while the grid coordinates evolve. The density at time $t_j$ within the cells is determined by: \vspace{-0.2cm} \begin{equation} \rho_{ij} = \rho_{i,0} \left(\frac{t_j}{t_0}\right)^{-3}, \end{equation} \noindent with the rest-mass density $\rho_{i,0}$ as initial density at $t_0$.\par The code generates photon packets at each time step that propagate through the ejecta material. Each packet has properties assigned containing information about the energy and frequencies as well as the direction of the propagation. The initial energy is determined by the relevant radioactive decay processes. The total energy $E_{\rm tot}\left(t_j\right)$ is then divided equally among all the photon packets generated. 
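The homologous density scaling stated above follows directly from mass conservation in a grid whose coordinates grow linearly in time; a one-line sketch with hypothetical numbers (the function name is ours, not part of \textsc{possis}):

```python
# Homologous dilution: with fixed cell velocities the grid coordinates
# scale linearly in time, so each cell density drops as (t_j/t_0)^(-3).
def evolve_density(rho_i0, t0, tj):
    return rho_i0 * (tj / t0) ** (-3)

# Doubling the time reduces the density by a factor of 8 (hypothetical
# reference density and times):
rho = evolve_density(8.0e-10, t0=0.1, tj=0.2)  # -> 1e-10
```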
For the radiation transfer simulations performed, $N_{\rm ph} = 10^6$ photon packets are used.\par The initial frequency of each photon packet is determined by Kirchhoff's law, i.e., by sampling over the thermal emissivity: \vspace{-0.2cm} \begin{equation} S\left(\nu\right) = \kappa\left(\nu\right) B\left(\nu , T \right). \end{equation} \noindent Here $\kappa \left(\nu\right)$ is the opacity and $B\left(\nu , T \right)$ is the Planck function at temperature $T$. Thus, the wavelength of the photons depends on the ejecta temperature $T$ and the opacity $\kappa$ of the material, which in turn depends on the electron fraction $Y_e$. We use the opacities of \cite{Tanaka:2020}. \par The photon packets are propagated throughout the ejecta taking into account interactions such as scattering and absorption, which change the properties of the respective photon packet, i.e., the direction, the frequency, and the energy. Finally, synthetic observables such as flux and polarization spectra are computed using the event-based technique discussed in \cite{Bulla:2019muo} for different observation angles $\theta_{\rm obs}$. For this work, eleven different angles are considered, from $\theta_{\rm obs} = 0$ (perpendicular to the orbital plane) to $\theta_{\rm obs} = \pi / 2$ (in the orbital plane) in $\Delta \cos \theta_{\rm obs} = 0.1$ steps. We set the azimuth angle $\Phi$ to $0$, i.e., we observe within the $x$-$z$ plane. \bibliography{paper20220831.bbl}
Title: Atmospheric Thermal Emission Effect on Chandrasekhar's Finite Atmosphere Problem
Abstract: The solutions of the \textit{diffuse reflection finite atmosphere problem} are very useful in the astrophysical context. Chandrasekhar was the first to solve this problem analytically, by considering atmospheric scattering. These results have wide applications in the modeling of planetary atmospheres. However, they cannot be used to model an atmosphere with emission. We solved this problem by including the \textit{thermal emission effect} along with scattering. Here, our aim is to provide a complete picture of the generalized finite atmosphere problem in the presence of scattering and thermal emission, and to give a physical account of the same. For that, we take an analytical approach using the invariance principle method to solve the diffuse reflection finite atmosphere problem in the presence of atmospheric thermal emission. We established the general integral equations of the modified scattering function $S(\tau; \mu, \phi; \mu_0, \phi_0)$, the transmission function $T(\tau; \mu, \phi; \mu_0, \phi_0)$, and their derivatives with respect to $\tau$ for a thermally emitting atmosphere. We customize these equations for the case of isotropic scattering and introduce two new functions $V(\mu)$ and $W(\mu)$, analogous to Chandrasekhar's $X(\mu)$ and $Y(\mu)$ functions, respectively. We also derive a transformation relation between the modified S and T functions and give a physical account of the $V(\mu)$ and $W(\mu)$ functions. Our final results are consistent with those of Chandrasekhar in the low-emission limit (i.e., scattering only). From the consistency of our results, we conclude that the consideration of the thermal emission effect in the diffuse reflection finite atmosphere problem gives more general and accurate results than considering scattering alone.
https://export.arxiv.org/pdf/2208.06656
\title{Atmospheric Thermal Emission Effect on Chandrasekhar's Finite Atmosphere Problem} \correspondingauthor{Soumya Sengupta} \email{soumya.s@iiap.res.in} \author[0000-0002-7006-9439]{Soumya Sengupta} \affiliation{Indian Institute of Astrophysics, Koramangala 2nd Block, Sarjapura Road, Bangalore 560034, India} \affiliation{Pondicherry University, R.V. Nagar, Kalapet, 605014, Puducherry, India} \keywords{Diffuse radiation, Atmospheric effects, Radiative transfer equation, Radiative transfer} \section{Introduction}\label{sec: intro} Chandrasekhar did pioneering work on the process of radiative transfer, which lies at the heart of both observations and modeling in the astrophysical context \cite{chandrasekhar1960radiative}. One of his most interesting and useful methods is \textit{the Invariance Principle} technique, which has a great many applications in atmospheric modeling. Although this principle was first introduced by \cite{ambartsumian1943cr,ambartsumian1944problem}, \cite{chandrasekhar1960radiative} used it to solve the semi-infinite and finite atmosphere problems in a most elegant way by introducing the scattering function $S(\tau;\mu,\phi;\mu_0,\phi_0)$ and the transmission function $T(\tau;\mu,\phi;\mu_0,\phi_0)$. The final results of those treatments can be represented in terms of the H-function (semi-infinite case) \cite{chandrasekhar1947radiative1} and the X- and Y-functions (finite case) \cite{chandrasekhar1948radiative}. The values of the H-function \citep{chandrasekhar1947radiative2} and the X- and Y-functions \citep{chandrasekhar1952x1,chandrasekhar1952x2} in the case of isotropic scattering are directly used in atmosphere modeling. A simple transformation rule between S and T was established by \cite{coakley1973simple}. 
Although the results provided in \cite{chandrasekhar1960radiative} have direct applications in stellar and planetary problems, the treatment is not complete in some sense, as it does not consider atmospheric emission and scattering simultaneously. \cite{bellman1967chandrasekhar} included thermal emission in the planetary atmosphere problem and initiated a new technique called invariant embedding \citep{bellman1992introduction}. In the context of exoplanetary transmission spectra modeling, \cite{sengupta2020optical} and \cite{chakrabarty2020effects} showed the crucial effects of scattering and atmospheric re-emission, respectively. Recently, \cite{sengupta2021effects} considered scattering and atmospheric emission simultaneously to study the modifications in Chandrasekhar's semi-infinite atmosphere problem. However, the effect of emission on the finite atmosphere problem, which is more general than the semi-infinite one, remained unsolved. In this work we solve the finite atmosphere problem in the case of isotropic scattering and emission by the same analytical procedure as in \cite{sengupta2021effects}. For that we assume local thermodynamic equilibrium in the vertical atmospheric layers, which ensures that each layer contributes blackbody emission according to Kirchhoff's law \citep{chandrasekhar1960radiative,seager2010exoplanet}. We use the invariance principle method \citep{ambartsumian1944problem,chandrasekhar1960radiative} to derive the modified scattering and transmission functions and the emergent radiation, and to show that our results are more general than Chandrasekhar's. This treatment is also free from the isothermal atmosphere condition, which was a limitation of the work in \cite{sengupta2021effects}. In section~\ref{sec: Invariance Principle for Finite atmosphere} we state the mathematical formulation of the invariance principles for a finite atmosphere following \cite{chandrasekhar1960radiative}. 
Section~\ref{sec: general integral equations} is devoted to deriving the general integral equations of the scattering function (S) and transmission function (T) in the case of thermal emission with scattering. The modified forms of these functions, specifically for the isotropic scattering case, are shown in section~\ref{sec: specific form of integral equations}. We then establish a simple transformation rule between $S(\mu)$ and $T(\mu)$ in section~\ref{sec: transformation rule} and give their physical interpretations in section~\ref{sec: physical meaning}. The consistency of our new results with the literature is discussed in section~\ref{sec: consistency}, and we conclude with a detailed discussion in section~\ref{sec: Discussion}. \section{Invariance Principle for Finite atmosphere}\label{sec: Invariance Principle for Finite atmosphere} The radiative transfer equation in the plane-parallel approximation can be written as \begin{equation}\label{eq: radiative transfer} \mu \frac{dI_\nu(\tau_\nu,\mu,\phi)}{d\tau_\nu} = I_\nu(\tau_\nu,\mu,\phi) - \xi_\nu(\tau_\nu,\mu,\phi) \end{equation} Here $I_\nu(\tau_\nu,\mu,\phi)$ is the specific intensity at a particular frequency $\nu$, direction cosine $\mu$, azimuthal angle $\phi$ and optical depth range $\tau_\nu$ to $\tau_\nu + d\tau_\nu$. With the same parameters, the source function is written as $\xi_\nu(\tau_\nu,\mu,\phi)$. For an atmosphere with simultaneous scattering and absorption, the optical depth can be defined as \citep{domanus1974fundamental,sengupta2020optical,sengupta2021effects} \begin{equation}\label{eq: optical depth} d\tau_\nu = -[\kappa_\nu(z) + \sigma_\nu(z)]dz = -\chi_\nu(z)dz \end{equation} Here $\kappa_\nu(z)$, $\sigma_\nu(z)$ and $\chi_\nu(z)$ are the volumetric absorption, scattering and extinction coefficients, respectively, at a particular frequency $\nu$ and depth $z$.
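To make the optical depth definition concrete, here is a toy numerical sketch; the exponential profiles, scale height and coefficient values are illustrative assumptions, not quantities from this work.

```python
import numpy as np

# Toy sketch of the optical depth definition above:
# d(tau) = -[kappa(z) + sigma(z)] dz, accumulated from the top of the
# atmosphere (tau = 0) downward.  Profiles and scale height H are
# assumed for illustration only.
z = np.linspace(0.0, 100.0, 2001)      # height grid [km]
H = 10.0                               # assumed scale height [km]
kappa = 1e-2 * np.exp(-z / H)          # absorption coefficient [km^-1]
sigma = 1e-3 * np.exp(-z / H)          # scattering coefficient [km^-1]
chi = kappa + sigma                    # extinction coefficient [km^-1]

# Trapezoid-rule optical depth per slab, summed from z down to the top:
dtau = 0.5 * (chi[:-1] + chi[1:]) * np.diff(z)
tau = np.append(np.cumsum(dtau[::-1])[::-1], 0.0)   # tau = 0 at the top

print(tau[0])    # total optical thickness of the layer (~0.11 here)
```

The analytic value for these profiles is $\chi(0)\,H\,(1-e^{-z_{\max}/H})$, which the quadrature reproduces closely.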
Note that, for simplicity in the subsequent calculations, we suppress the subscript $\nu$ by carrying out all calculations at a particular frequency. This should not be confused with the grey atmosphere approximation; no such assumption is made in the present work. A finite atmosphere is bounded by the optical depths $\tau = 0$ and $\tau=\tau_1$ \citep{chandrasekhar1960radiative}. To solve the problem of pure diffuse reflection from such an atmosphere, \cite{chandrasekhar1947radiative} used the invariance principle method. We will use the same methodology, following \cite{chandrasekhar1960radiative}, to solve the more general problem in which atmospheric thermal emission is included along with the diffuse scattering. When radiation of flux $\pi F$ is incident on an atmosphere of optical thickness $\tau_1$ along the direction ($-\mu_0,\phi_0$), the diffusely reflected and transmitted intensities can be represented as \begin{equation}\label{eq: diffuse reflection1} I(0,\mu,\phi) = \frac{F}{4\mu}S(\tau_1,\mu,\phi;\mu_0,\phi_0); \hspace{0.5cm} I(\tau_1,-\mu,\phi) = \frac{F}{4\mu}T(\tau_1,\mu,\phi;\mu_0,\phi_0) \end{equation} respectively, where $S(\tau_1,\mu,\phi;\mu_0,\phi_0)$ and $T(\tau_1,\mu,\phi;\mu_0,\phi_0)$ are the scattering and transmission functions. Note that these two intensities refer only to light that has suffered at least one scattering process and do not include any direct transmission along the $(-\mu_0,\phi_0)$ direction. For a detailed discussion, we refer the reader to \textit{Radiative Transfer} by \cite{chandrasekhar1960radiative}.
The four mathematical expressions of the invariance principle for the finite atmosphere problem can be written as \citep{chandrasekhar1960radiative}: \textbf{Principle I} \begin{equation}\label{eq: Principle I} I(\tau,+\mu,\phi) = \frac{F}{4\mu}e^{-\tau/\mu_0}S(\tau_1-\tau,\mu,\phi;\mu_0,\phi_0) + \frac{1}{4\pi\mu}\int_0^1\int_0^{2\pi}I(\tau,-\mu',\phi')S(\tau_1-\tau,\mu,\phi;\mu',\phi')d\mu'd\phi' \end{equation} \textbf{Principle II} \begin{equation}\label{eq: Principle II} I(\tau,-\mu,\phi) = \frac{F}{4\mu}T(\tau,\mu,\phi;\mu_0,\phi_0) + \frac{1}{4\pi\mu}\int_0^1\int_0^{2\pi}I(\tau,+\mu',\phi')S(\tau,\mu,\phi;\mu',\phi')d\mu'd\phi' \end{equation} \textbf{Principle III} \begin{equation}\label{eq: Principle III} \begin{split} \frac{F}{4\mu}S(\tau_1;\mu,\phi;\mu_0,\phi_0) = \frac{F}{4\mu}S(\tau;\mu,\phi;\mu_0,\phi_0)+e^{-\tau/\mu}I(\tau,+\mu,\phi) + \frac{1}{4\pi\mu}\int_0^1\int_0^{2\pi}I(\tau,+\mu',\phi')T(\tau,\mu,\phi;\mu',\phi')d\mu'd\phi' \end{split} \end{equation} \textbf{Principle IV} \begin{equation}\label{eq: Principle IV} \begin{split} \frac{F}{4\mu}T(\tau_1;\mu,\phi;\mu_0,\phi_0) =& \frac{F}{4\mu}e^{-\tau/\mu_0}T(\tau_1-\tau;\mu,\phi;\mu_0,\phi_0)+e^{-(\tau_1-\tau)/\mu}I(\tau,-\mu,\phi)\\ &+ \frac{1}{4\pi\mu}\int_0^1\int_0^{2\pi}I(\tau,-\mu',\phi')T(\tau_1-\tau,\mu,\phi;\mu',\phi')d\mu'd\phi' \end{split} \end{equation} These equations are derived and shown diagrammatically in \cite{chandrasekhar1947radiative,chandrasekhar1960radiative,peraiah2002an}.
The boundary conditions used to calculate $S(\tau_1;\mu,\phi;\mu',\phi')$ and $T(\tau_1;\mu,\phi;\mu',\phi')$ are \begin{equation}\label{eq: boundary condition1} I(0,-\mu,\phi) = 0\hspace{1cm} \text{and} \hspace{1cm} I(\tau_1,+\mu,\phi) = 0 \end{equation} \cite{chandrasekhar1960radiative} used these boundary conditions in eqn.\eqref{eq: radiative transfer} and derived the four invariance principles \eqref{eq: Principle I}--\eqref{eq: Principle IV} in terms of the source functions $\xi(0,\mu,\phi)$ and $\xi(\tau_1,\mu,\phi)$. We use those relations directly in this paper. \section{The general integral equations for Scattering and Thermally Emitting Atmosphere}\label{sec: general integral equations} When there is atmospheric emission as well as scattering, the source function $\xi$ can be written as \citep{sengupta2021effects} \begin{equation}\label{eq: source function with emission} \begin{split} \xi(\tau,\mu,\phi) = \beta(\tau,\mu,\phi)+\frac{1}{4}Fe^{-\tau/\mu_0}p(\mu,\phi;-\mu_0,\phi_0) + \frac{1}{4\pi}\int_{-1}^1\int_0^{2\pi} p(\mu,\phi;\mu'';\phi'')I(\tau,\mu'',\phi'')d\phi''d\mu'' \end{split} \end{equation} Here $p(\mu,\phi;\mu'';\phi'')$ and $\beta(\tau,\mu,\phi)$ are the phase function and the atmospheric emission, respectively. The atmospheric emission $\beta$ can be expanded \citep{bellman1967chandrasekhar,sengupta2021effects} as \begin{equation}\label{eq: general emission term} \beta(\tau;\mu;\phi) = \sum_{m=0}^{N} \beta^m (\tau,\mu)\cos m(\phi-\phi_0) \end{equation} For a planetary atmosphere, the emission can be caused by different mechanisms (see, for example, \cite{bellman1967chandrasekhar,chakrabarty2020effects,malkevich1963angular,sengupta2021effects,seager2010exoplanet}). In the current study, we consider an atmosphere in which each horizontal layer is in local thermodynamic equilibrium and emits only Planck (blackbody) radiation \citep{seager2010exoplanet}, as shown in figure~\ref{fig: scattering and thermal emission in finite atmosphere}.
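Since each layer is assumed to emit blackbody radiation, the relative strength of the emission term compared with the incident flux is set by the Planck function. A minimal numerical sketch follows; the frequency, temperatures and flux value are arbitrary illustrative assumptions, not values from this work.

```python
import numpy as np

# Planck specific intensity B_nu(T); constants in SI units.
h = 6.62607015e-34    # Planck constant [J s]
c = 2.99792458e8      # speed of light [m s^-1]
kB = 1.380649e-23     # Boltzmann constant [J K^-1]

def planck_B(nu, T):
    """B_nu(T) [W m^-2 Hz^-1 sr^-1]."""
    return (2.0 * h * nu**3 / c**2) / np.expm1(h * nu / (kB * T))

# The ratio of layer emission to incident flux (illustrative nu, F):
nu = 3.0e14           # ~1 micron
F = 1.0e-8
for T in (800.0, 1500.0):
    print(T, planck_B(nu, T) / F)   # the ratio grows steeply with T
```

Using `np.expm1` keeps the expression numerically stable in the Rayleigh-Jeans regime, where $h\nu \ll k_B T$.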
Hence, considering $m=0$ with no $\mu$ dependence, eqn.\eqref{eq: general emission term} reduces to \begin{equation}\label{eq: thermal emission term} \beta(\tau,\mu,\phi) \approx B(T_\tau) \end{equation} Here $T_\tau$ represents the absolute temperature of the atmospheric layer at optical depth $\tau$. It is worth noting that for thermal emission the exact expression for $\beta$ is $\frac{\kappa}{\chi}B(T_\tau)$, but in the low-scattering limit (i.e. $\kappa \gg \sigma$), $\kappa \approx \chi$ and eqn.~\eqref{eq: thermal emission term} is valid \citep{sengupta2021effects}. Under the low-scattering approximation, eqn.\eqref{eq: source function with emission} becomes \begin{equation}\label{eq: source function at tau=0} \begin{split} \xi(0,\mu,\phi) = B(T_0)+\frac{1}{4}F[p(\mu,\phi;-\mu_0,\phi_0) + \frac{1}{4\pi}\int_0^1\int_0^{2\pi} p(\mu,\phi;\mu'';\phi'')S(\tau_1;\mu'',\phi'';\mu_0,\phi_0)d\phi''\frac{d\mu''}{\mu''}] \end{split} \end{equation} \begin{equation}\label{eq: source function at tau=tau_1} \begin{split} \xi(\tau_1,\mu,\phi) = B(T_{\tau_1})+\frac{1}{4}F[e^{-\tau_1/\mu_0}p(\mu,\phi;-\mu_0,\phi_0) + \frac{1}{4\pi}\int_0^1\int_0^{2\pi} p(\mu,\phi;-\mu'';\phi'')T(\tau_1;\mu'',\phi'';\mu_0,\phi_0)d\phi''\frac{d\mu''}{\mu''}] \end{split} \end{equation} Now using eqns. \eqref{eq: source function at tau=0} and \eqref{eq: source function at tau=tau_1} in the eqns.
23--26 (p.~168) of \cite{chandrasekhar1960radiative}, we get (for a detailed derivation see Appendix~\ref{apndx sec: Derivation of scattering function}), \begin{equation}\label{eq: finite atmosphere scattering function1.1} \begin{split} &[(\frac{1}{\mu}+\frac{1}{\mu_0})S(\tau_1;\mu,\phi;\mu_0,\phi_0)+\frac{\partial S(\tau_1;\mu,\phi;\mu_0,\phi_0)}{\partial \tau_1}] = 4U(T_0)[1+\frac{1}{4\pi}\int_0^1\int_0^{2\pi} S(\tau_1;\mu,\phi;\mu',\phi')\frac{d\mu'}{\mu'}d\phi']\\ &+ p(\mu,\phi;-\mu_0,\phi_0)+ \frac{1}{4\pi}\int_0^1\int_0^{2\pi} p(\mu,\phi;\mu'';\phi'')S(\tau_1;\mu'',\phi'';\mu_0,\phi_0)d\phi''\frac{d\mu''}{\mu''}\\ &+ \frac{1}{4\pi}\int_0^1\int_0^{2\pi} S(\tau_1;\mu,\phi;\mu',\phi')[p(-\mu',\phi';-\mu_0,\phi_0)+ \frac{1}{4\pi}\int_0^1\int_0^{2\pi} p(-\mu',\phi';\mu'';\phi'')S(\tau_1;\mu'',\phi'';\mu_0,\phi_0)d\phi''\frac{d\mu''}{\mu''}]\frac{d\mu'}{\mu'}d\phi' \end{split} \end{equation} \begin{equation}\label{eq: finite atmosphere scattering function2.1} \begin{split} &[\frac{\partial S(\tau_1;\mu,\phi;\mu_0,\phi_0)}{\partial \tau_1}] = 4U(T_{\tau_1})[e^{-\tau_1/\mu}+\frac{1}{4\pi}\int_0^1\int_0^{2\pi} T(\tau_1;\mu,\phi;\mu',\phi')\frac{d\mu'}{\mu'}d\phi']\\ &+ [\exp\{-\tau_1(\frac{1}{\mu_0}+\frac{1}{\mu})\}p(\mu,\phi;-\mu_0,\phi_0) + \frac{1}{4\pi}e^{-\tau_1/\mu}\int_0^1\int_0^{2\pi} p(\mu,\phi;-\mu'';\phi'')T(\tau_1;\mu'',\phi'';\mu_0,\phi_0)d\phi''\frac{d\mu''}{\mu''}]\\ &+ \frac{1}{4\pi}\int_0^1\int_0^{2\pi} T(\tau_1;\mu,\phi;\mu',\phi')[e^{-\tau_1/\mu_0}p(\mu',\phi';-\mu_0,\phi_0) + \frac{1}{4\pi}\int_0^1\int_0^{2\pi} p(\mu',\phi';-\mu'';\phi'')T(\tau_1;\mu'',\phi'';\mu_0,\phi_0)d\phi''\frac{d\mu''}{\mu''}]\frac{d\mu'}{\mu'}d\phi' \end{split} \end{equation} \begin{equation}\label{eq: finite atmosphere transmission function1.1} \begin{split} &[\frac{1}{\mu}T(\tau_1;\mu,\phi;\mu_0,\phi_0) + \frac{\partial T(\tau_1;\mu,\phi;\mu_0,\phi_0)}{\partial \tau_1}] = 4U(T_{\tau_1})[1+\frac{1}{4\pi}\int_0^1\int_0^{2\pi}
S(\tau_1;\mu,\phi;\mu',\phi')\frac{d\mu'}{\mu'}d\phi']\\ &+ [e^{-\tau_1/\mu_0}p(-\mu,\phi;-\mu_0,\phi_0) + \frac{1}{4\pi}\int_0^1\int_0^{2\pi} p(-\mu,\phi;-\mu'';\phi'')T(\tau_1;\mu'',\phi'';\mu_0,\phi_0)d\phi''\frac{d\mu''}{\mu''}]\\ &+ \frac{1}{4\pi}\int_0^1\int_0^{2\pi} S(\tau_1;\mu,\phi;\mu',\phi')[e^{-\tau_1/\mu_0}p(\mu',\phi';-\mu_0,\phi_0) + \frac{1}{4\pi}\int_0^1\int_0^{2\pi} p(\mu',\phi';-\mu'';\phi'')T(\tau_1;\mu'',\phi'';\mu_0,\phi_0)d\phi''\frac{d\mu''}{\mu''}] \frac{d\mu'}{\mu'}d\phi' \end{split} \end{equation} \begin{equation}\label{eq: finite atmosphere transmission function2.1} \begin{split} &[\frac{1}{\mu_0}T(\tau_1;\mu,\phi;\mu_0,\phi_0) + \frac{\partial T(\tau_1;\mu,\phi;\mu_0,\phi_0)}{\partial \tau_1}] = 4U(T_0)[e^{-\tau_1/\mu}+\frac{1}{4\pi}\int_0^1\int_0^{2\pi} T(\tau_1;\mu,\phi;\mu',\phi')\frac{d\mu'}{\mu'}d\phi']\\ &+ [p(-\mu,\phi;-\mu_0,\phi_0) + \frac{1}{4\pi}\int_0^1\int_0^{2\pi} p(-\mu,\phi;\mu'';\phi'')S(\tau_1;\mu'',\phi'';\mu_0,\phi_0)d\phi''\frac{d\mu''}{\mu''}]e^{-\tau_1/\mu}\\ &+ \frac{1}{4\pi}\int_0^1\int_0^{2\pi} T(\tau_1;\mu,\phi;\mu',\phi')[p(-\mu',\phi';-\mu_0,\phi_0) + \frac{1}{4\pi}\int_0^1\int_0^{2\pi} p(-\mu',\phi';\mu'',\phi'')S(\tau_1;\mu'',\phi'';\mu_0,\phi_0)d\phi''\frac{d\mu''}{\mu''}]\frac{d\mu'}{\mu'}d\phi' \end{split} \end{equation} where $U(T_0)=\frac{B(T_0)}{F}$ and $U(T_{\tau_1})=\frac{B(T_{\tau_1})}{F}$. Equations \eqref{eq: finite atmosphere scattering function1.1}--\eqref{eq: finite atmosphere transmission function2.1} represent the integral equations governing the problem of diffuse reflection and transmission, in the presence of atmospheric thermal emission, for a plane-parallel atmosphere of finite optical depth. \section{The integral equations in isotropic scattering}\label{sec: specific form of integral equations} It is evident that these four integral equations depend explicitly on the phase function $p(\mu,\phi;\mu',\phi')$.
The different types of phase functions are discussed in \cite{chandrasekhar1960radiative,sengupta2021effects}. Here we specifically study the effect of thermal emission in the isotropic scattering case only, which can be treated in terms of the single-scattering albedo $\tilde{\omega}_0$ \citep{sengupta2021effects} as $$p(\mu,\phi;\mu_0,\phi_0)=\tilde{\omega_0}$$ This axial symmetry of the phase function is inherited by the scattering and transmission functions, which can therefore be expressed in axisymmetric form as $S(\tau_1;\mu;\mu')$ and $T(\tau_1;\mu;\mu')$. Then eqn.\eqref{eq: finite atmosphere scattering function2.1} becomes \begin{equation*} \begin{split} &[\frac{\partial S(\tau_1;\mu,\mu_0)}{\partial \tau_1}]= 4U(T_{\tau_1})[e^{-\tau_1/\mu}+\frac{1}{2}\int_0^1 T(\tau_1;\mu;\mu')\frac{d\mu'}{\mu'}]\\ &+ [\exp\{-\tau_1(\frac{1}{\mu_0}+\frac{1}{\mu})\}\tilde{\omega_0}+ \frac{1}{2}e^{-\tau_1/\mu}\int_0^1 \tilde{\omega_0}T(\tau_1;\mu'',\mu_0)\frac{d\mu''}{\mu''}] + \frac{1}{2}\int_0^1 T(\tau_1;\mu,\mu')[e^{-\tau_1/\mu_0}\tilde{\omega_0}+\frac{1}{2}\int_0^1 \tilde{\omega_0}T(\tau_1;\mu'',\mu_0)\frac{d\mu''}{\mu''}]\frac{d\mu'}{\mu'} \end{split} \end{equation*} \begin{equation}\label{eq: finite atmosphere isotropic scattering function1} \boxed{ [\frac{\partial S(\tau_1;\mu,\mu_0)}{\partial \tau_1}]=4U(T_{\tau_1})W(\mu)+\tilde{\omega_0}W(\mu_0)W(\mu) } \end{equation} Here we define two new functions, \begin{equation}\label{eq: V-function 1} \begin{split} V(\mu)&= 1+\frac{1}{2}\int_0^1 S(\tau_1;\mu,\mu')\frac{d\mu'}{\mu'} \end{split} \end{equation} and \begin{equation}\label{eq: W-function 1} \begin{split} W(\mu)&= e^{-\tau_1/\mu}+\frac{1}{2}\int_0^1 T(\tau_1;\mu,\mu')\frac{d\mu'}{\mu'} \end{split} \end{equation} Similarly, eqns.\eqref{eq: finite atmosphere scattering function1.1}, \eqref{eq: finite atmosphere transmission function1.1} and \eqref{eq: finite atmosphere transmission function2.1} can be expressed in terms of the V and W functions as follows,
\begin{equation}\label{eq: final scattering function} \boxed{ (\frac{1}{\mu}+\frac{1}{\mu_0})S(\tau_1;\mu,\mu_0)= 4[U(T_0)V(\mu)-U(T_{\tau_1})W(\mu)]+\tilde{\omega_0}[V(\mu)V(\mu_0)-W(\mu)W(\mu_0)] } \end{equation} \begin{equation}\label{eq: finite atmosphere isotropic transmission function1} \begin{split} [\frac{1}{\mu_0}T(\tau_1;\mu,\mu_0) + \frac{\partial T(\tau_1;\mu,\mu_0)}{\partial \tau_1}] = 4U(T_0)W(\mu)+\tilde{\omega_0}V(\mu_0)W(\mu) \end{split} \end{equation} \begin{equation}\label{eq: finite atmosphere isotropic transmission function2} \begin{split} [\frac{1}{\mu}T(\tau_1;\mu,\mu_0) + \frac{\partial T(\tau_1;\mu,\mu_0)}{\partial \tau_1}] = 4U(T_{\tau_1})V(\mu)+\tilde{\omega_0}W(\mu_0)V(\mu) \end{split} \end{equation} Subtracting eqn.\eqref{eq: finite atmosphere isotropic transmission function2} from eqn.\eqref{eq: finite atmosphere isotropic transmission function1} gives \begin{equation}\label{eq: finite atmosphere isotropic transmission function3} \boxed{ (\frac{1}{\mu_0}-\frac{1}{\mu})T(\tau_1;\mu,\mu_0)= 4[U(T_0)W(\mu)-U(T_{\tau_1})V(\mu)]+\tilde{\omega_0}[V(\mu_0)W(\mu)-W(\mu_0)V(\mu)] } \end{equation} Subtracting eqn.\eqref{eq: finite atmosphere isotropic transmission function1}$\times\frac{1}{\mu}$ from eqn.\eqref{eq: finite atmosphere isotropic transmission function2}$\times\frac{1}{\mu_0}$ gives \begin{equation}\label{eq: finite atmosphere isotropic transmission function4} \boxed{ (\frac{1}{\mu_0}-\frac{1}{\mu})\frac{\partial T(\tau_1;\mu,\mu_0)}{\partial \tau_1}=4[\frac{1}{\mu_0}U(T_{\tau_1})V(\mu)-\frac{1}{\mu}U(T_0)W(\mu)]+\tilde{\omega_0}[\frac{1}{\mu_0}W(\mu_0)V(\mu)- \frac{1}{\mu}V(\mu_0)W(\mu)] } \end{equation} Thus, the functional forms of the V and W functions (eqns.\eqref{eq: V-function 1} and \eqref{eq: W-function 1}) can be rewritten as \begin{equation}\label{eq: V-function 2} \begin{split} V(\mu)= 1+\frac{1}{2}\mu\int_0^1 \{4[U(T_0)V(\mu)-U(T_{\tau_1})W(\mu)]+\tilde{\omega_0}[V(\mu)V(\mu')-W(\mu)W(\mu')]\}\frac{d\mu'}{\mu'+\mu} \end{split}
\end{equation} and \begin{equation}\label{eq: W-function 2} \begin{split} W(\mu)=e^{-\tau_1/\mu}+\frac{1}{2}\mu\int_0^1 \{4[U(T_0)W(\mu)-U(T_{\tau_1})V(\mu)]+\tilde{\omega_0}[V(\mu')W(\mu)-W(\mu')V(\mu)]\}\frac{d\mu'}{\mu-\mu'} \end{split} \end{equation} The final emitted radiation from $\tau=0$ and $\tau=\tau_1$ can be expressed from eqn. \eqref{eq: diffuse reflection1} as \begin{equation}\label{eq: final radiation from tau=0} \begin{split} I(0,\mu) = \frac{\mu_0}{\mu+\mu_0} [B(T_0)V(\mu)-B(T_{\tau_1})W(\mu)]+ \frac{F}{4}\frac{\mu_0}{\mu+\mu_0}\tilde{\omega_0}[V(\mu)V(\mu_0)-W(\mu)W(\mu_0)] \end{split} \end{equation} and \begin{equation}\label{eq: final radiation from tau=tau1} \begin{split} I(\tau_1,-\mu)=\frac{\mu_0}{\mu-\mu_0}[B(T_0)W(\mu)-B(T_{\tau_1})V(\mu)]+\frac{F}{4}\frac{\mu_0}{\mu-\mu_0}\tilde{\omega_0}[V(\mu_0)W(\mu)-W(\mu_0)V(\mu)] \end{split} \end{equation} Eqns.\eqref{eq: final radiation from tau=0} and \eqref{eq: final radiation from tau=tau1} can be written in matrix form as \begin{equation}\label{eq: Invariance principle matrix form1} \begin{split} \begin{bmatrix} (\mu+\mu_0)I(0,+\mu)\\ (\mu-\mu_0)I(\tau_1,-\mu) \end{bmatrix} &= \begin{bmatrix} V(\mu) & W(\mu)\\ W(\mu) & V(\mu) \end{bmatrix} \begin{bmatrix} \frac{F}{4}\tilde{\omega_0}\mu_0V(\mu_0)+\mu_0B(T_0)\\ -\frac{F}{4}\tilde{\omega_0}\mu_0W(\mu_0)-\mu_0B(T_{\tau_1}) \end{bmatrix} \end{split} \end{equation} \section{A simple transformation rule}\label{sec: transformation rule} \cite{chandrasekhar1960radiative} introduced the two crucial functions $S(\tau_1;\mu,\phi;\mu_0,\phi_0)$ and $T(\tau_1;\mu,\phi;\mu_0,\phi_0)$ while considering diffuse scattering in a finite atmosphere.
The transformation rule between these two functions was established by \cite{coakley1973simple} as \begin{equation}\label{eq: S-T Transformation rule} \begin{split} &S(\tau_1,\mu,\phi;-\mu_0,\phi_0)e^{-\tau_1/\mu_0} = T(\tau_1,\mu,\phi;\mu_0,\phi_0)\\ &T(\tau_1,\mu,\phi;-\mu_0,\phi_0)e^{-\tau_1/\mu_0} = S(\tau_1,\mu,\phi;\mu_0,\phi_0) \end{split} \end{equation} Here we show that these rules remain true when thermal emission is included in the problem, under certain circumstances. We replace $\mu_0$ by $-\mu_0$ in eqn.\eqref{eq: finite atmosphere scattering function1.1}, multiply both sides by $e^{-\tau_1/\mu_0}$, and get \begin{equation}\label{eq: finite atmosphere scattering function1.1.1} \begin{split} \therefore &[(\frac{1}{\mu}-\frac{1}{\mu_0})S(\tau_1;\mu,\phi;-\mu_0,\phi_0)e^{-\tau_1/\mu_0} + \frac{\partial S(\tau_1;\mu,\phi;-\mu_0,\phi_0)}{\partial \tau_1} e^{-\tau_1/\mu_0}]\\ &= 4U(T_0)[1+\frac{1}{4\pi}\int_0^1\int_0^{2\pi} S(\tau_1;\mu,\phi;\mu',\phi')\frac{d\mu'}{\mu'}d\phi']e^{-\tau_1/\mu_0}\\ &+ p(\mu,\phi;\mu_0,\phi_0)e^{-\tau_1/\mu_0} + \frac{1}{4\pi}\int_0^1\int_0^{2\pi} p(\mu,\phi;\mu'';\phi'')S(\tau_1;\mu'',\phi'';-\mu_0,\phi_0) e^{-\tau_1/\mu_0} d\phi''\frac{d\mu''}{\mu''}\\ &+ \frac{1}{4\pi}\int_0^1\int_0^{2\pi} S(\tau_1;\mu,\phi;\mu',\phi')[p(-\mu',\phi';\mu_0,\phi_0)e^{-\tau_1/\mu_0}\\ &+ \frac{1}{4\pi}\int_0^1\int_0^{2\pi} p(-\mu',\phi';\mu'';\phi'')S(\tau_1;\mu'',\phi'';-\mu_0,\phi_0)e^{-\tau_1/\mu_0}d\phi''\frac{d\mu''}{\mu''}]\frac{d\mu'}{\mu'}d\phi' \end{split} \end{equation} Now if we make use of eqn.\eqref{eq: S-T Transformation rule} and the symmetry properties of the phase function, $p(\mu,\phi;-\mu_0,\phi_0) = p(-\mu,\phi;\mu_0,\phi_0)$ and $p(-\mu,\phi;-\mu_0,\phi_0) = p(\mu,\phi;\mu_0,\phi_0)$, then equation \eqref{eq: finite atmosphere scattering function1.1.1} becomes \begin{equation}\label{eq: finite atmosphere scattering function1.1.2} \begin{split} \therefore &\frac{1}{\mu}T(\tau_1;\mu,\phi;\mu_0,\phi_0) + 
\frac{\partial T(\tau_1;\mu,\phi;\mu_0,\phi_0)}{\partial \tau_1}= 4U(T_0)e^{-\tau_1/\mu_0}[1+\frac{1}{4\pi}\int_0^1\int_0^{2\pi} S(\tau_1;\mu,\phi;\mu',\phi')\frac{d\mu'}{\mu'}d\phi']\\ &+ p(-\mu,\phi;-\mu_0,\phi_0)e^{-\tau_1/\mu_0} + \frac{1}{4\pi}\int_0^1\int_0^{2\pi} p(\mu,\phi;\mu'';\phi'')T(\tau_1;\mu'',\phi'';\mu_0,\phi_0) d\phi''\frac{d\mu''}{\mu''}\\ &+ \frac{1}{4\pi}\int_0^1\int_0^{2\pi} S(\tau_1;\mu,\phi;\mu',\phi')[p(\mu',\phi';-\mu_0,\phi_0)e^{-\tau_1/\mu_0}\\ &+ \frac{1}{4\pi}\int_0^1\int_0^{2\pi} p(\mu',\phi';-\mu'';\phi'')T(\tau_1;\mu'',\phi'';\mu_0,\phi_0)d\phi''\frac{d\mu''}{\mu''}]\frac{d\mu'}{\mu'}d\phi' \end{split} \end{equation} Observing the similarity of this equation with eqn.\eqref{eq: finite atmosphere transmission function1.1}, we can write the condition on the thermal emission as \begin{equation}\label{eq: reduced thermal emission} U(T_{\tau_1}) = U(T_0) e^{-\tau_1/\mu_0}, \hspace{1cm} \text{i.e.} \hspace{1cm} B(T_{\tau_1}) = B(T_0) e^{-\tau_1/\mu_0} \end{equation} So the transformation rule between the S- and T-functions, eqn.\eqref{eq: S-T Transformation rule}, is valid when the thermal emissions from different atmospheric layers are connected by eqn.\eqref{eq: reduced thermal emission}. We refer to this blackbody emission as the \textit{reduced thermal emission}. For the isotropic scattering case, the transformation rules for V and W can be obtained using eqns.\eqref{eq: S-T Transformation rule}, \eqref{eq: V-function 1} and \eqref{eq: W-function 1} as \begin{equation}\label{eq: V-W transformation rule} V(-\mu)e^{-\tau_1/\mu} = W(\mu) \hspace{1cm} \hspace{1cm} W(-\mu)e^{-\tau_1/\mu} = V(\mu) \end{equation} \section{The Physical meaning of V and W functions}\label{sec: physical meaning} We have introduced two new functions, $V(\mu)$ and $W(\mu)$, which are the analogues of Chandrasekhar's $X(\mu)$- and $Y(\mu)$-functions in the presence of thermal emission for the diffusely reflecting finite atmosphere problem.
The physical meanings of the X- and Y-functions have been discussed in \cite{chandrasekhar1960radiative,van1948scattering,peraiah2002an}. Here we discuss the additional effects introduced in the V and W functions. Let there be a point source of unit brightness above the layer $\tau=0$ (with a total emitted flux of $4\pi$ from the point source). The flux will be scattered and transmitted multiple times by the atmospheric layers at $\tau=0$ and $\tau =\tau_1$ (see fig~\ref{fig: V_mu and W_mu explanation}). In addition, there are contributions of thermal emission $B(T_0)$ and $B(T_{\tau_1})$, respectively, from those layers. For an observer at a large distance, the combination of the point source and the illuminated atmosphere will again appear as a point source, and only the combined effect can be observed. If that distant observer is in the $(+\mu,\phi)$ direction from the atmosphere (i.e. above the atmospheric layer $\tau =0$ in fig.~\ref{fig: V_mu and W_mu explanation}), then $V(\mu)$ will be the total observed brightness. In the same way, if the observer is in the $(-\mu,\phi)$ direction from the atmosphere (i.e. below the atmospheric layer $\tau =\tau_1$ in fig.~\ref{fig: V_mu and W_mu explanation}), then $W(\mu)$ will be the total observed brightness. In both cases, the factor $\frac{1}{\mu}$ is positive. In other words, $V(\mu)$ and $W(\mu)$ represent the relative change of the incident and transmitted flux along the $(+\mu)$ and $(-\mu)$ directions, respectively, due to the presence of the atmosphere. This relative change shows the combined effect of scattering, transmission and thermal emission by the atmospheric layers. Clearly, in the absence of thermal emission its contribution is removed, and the observed brightness is a combination of atmospheric scattering and transmission of the point-source flux only.
In such circumstances, the $V(\mu)$ and $W(\mu)$ functions reduce to Chandrasekhar's $X(\mu)$ and $Y(\mu)$ functions (see section~\ref{sec: consistency} for more discussion), and figure~\ref{fig: V_mu and W_mu explanation} reduces to the figure given in \cite{van1948scattering}. In the case of a semi-infinite atmosphere, the bottom layer is extended to $\tau_1\to \infty$, as shown in figure~\ref{fig: M_mu explanation}. In such circumstances, the distant observer can observe the combined effect from the $(+\mu,\phi)$ direction only. Hence the $W(\mu)$ function vanishes, and the $V(\mu)$ function gives the combined effect of scattering and thermal emission. In that case the $V(\mu)$-function reduces to the well-known $M(\mu)$-function introduced by \cite{sengupta2021effects} for the semi-infinite atmosphere. Hence the V and W functions represent the relative change of the flux from a point source due to the presence of atmospheric scattering, transmission and thermal emission. \section{Consistency Check}\label{sec: consistency} In this section we show how our results reduce to previous literature results under specific limiting conditions. It is expected that, when the atmospheric thermal emission is much smaller than the incident flux (i.e. $B(T_0) , B(T_{\tau_1}) \ll F$), our solutions should match the results of the scattering-only case derived by \cite{chandrasekhar1960radiative}. In the limit of no thermal emission, $U(T_0) , U(T_{\tau_1})\to 0$, and equations \eqref{eq: V-function 2} and \eqref{eq: W-function 2} reduce to those of Chandrasekhar's X and Y functions as shown in \cite{chandrasekhar1960radiative} (p.~181, eqns.~84--85): $$\lim_{U\to 0} V(\mu)\to X(\mu)$$ and $$\lim_{U\to 0}W(\mu)\to Y(\mu)$$
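The limiting X- and Y-functions can themselves be computed numerically. The sketch below is our own rough scheme, not a method from this paper or from \cite{chandrasekhar1960radiative}: it iterates the standard isotropic X--Y integral equations on a Gauss-Legendre grid, handling the removable singularity of the Y-equation at $\mu'=\mu$ crudely by dropping the diagonal quadrature term.

```python
import numpy as np

# Fixed-point iteration for Chandrasekhar's X and Y functions
# (the U -> 0 limit of the V and W functions): isotropic scattering,
# single-scattering albedo w0, optical thickness tau1.
def solve_XY(w0, tau1, n=32, iters=200):
    x, wq = np.polynomial.legendre.leggauss(n)
    mu = 0.5 * (x + 1.0)          # Gauss nodes mapped to (0, 1)
    wq = 0.5 * wq
    X = np.ones(n)                # X = 1, Y = e^{-tau1/mu} at w0 = 0
    Y = np.exp(-tau1 / mu)
    for _ in range(iters):
        Xn, Yn = np.empty(n), np.empty(n)
        for i in range(n):
            kx = wq / (mu[i] + mu)
            d = mu[i] - mu
            d[i] = 1.0            # placeholder; diagonal dropped below
            ky = wq / d
            ky[i] = 0.0           # crude handling of mu' = mu
            Xn[i] = 1.0 + 0.5 * w0 * mu[i] * np.sum(kx * (X[i] * X - Y[i] * Y))
            Yn[i] = (np.exp(-tau1 / mu[i])
                     + 0.5 * w0 * mu[i] * np.sum(ky * (Y[i] * X - X[i] * Y)))
        X, Y = Xn, Yn
    return mu, X, Y

mu, X, Y = solve_XY(w0=0.3, tau1=1.0)
print(X.min(), X.max())   # X(mu) >= 1 once scattering is switched on
```

For $\tilde{\omega}_0 = 0$ the iteration leaves $X=1$ and $Y=e^{-\tau_1/\mu}$ untouched, matching the small-optical-depth discussion below.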
Hence in the limit $U\to 0$, eqns.\eqref{eq: final scattering function} and \eqref{eq: finite atmosphere isotropic transmission function3} become \begin{equation} \begin{split} &(\frac{1}{\mu}+\frac{1}{\mu_0})S(\tau_1;\mu,\mu_0)= \tilde{\omega_0}[X(\mu)X(\mu_0)-Y(\mu)Y(\mu_0)]\\ &\text{and}\\ & (\frac{1}{\mu_0}-\frac{1}{\mu})T(\tau_1;\mu,\mu_0)= \tilde{\omega_0}[X(\mu_0)Y(\mu)-Y(\mu_0)X(\mu)] \end{split} \end{equation} These equations are the same as those given in \cite{chandrasekhar1960radiative} (p.~181, eqns.~80--81). The no-emission limit also affects the final radiation emerging from the two boundaries at $\tau=0$ and $\tau=\tau_1$: equations \eqref{eq: final radiation from tau=0} and \eqref{eq: final radiation from tau=tau1} reduce to the scattering-only case. With no thermal emission, the matrix equation \eqref{eq: Invariance principle matrix form1} becomes \begin{equation}\label{Invariance principle matrix form without emission} \begin{split} \begin{bmatrix} (\mu+\mu_0)I(0,+\mu)\\ (\mu-\mu_0)I(\tau_1,-\mu) \end{bmatrix} &= \begin{bmatrix} X(\mu) & Y(\mu)\\ Y(\mu) & X(\mu) \end{bmatrix} \begin{bmatrix} \frac{F}{4}\tilde{\omega_0}\mu_0X(\mu_0)\\ -\frac{F}{4}\tilde{\omega_0}\mu_0Y(\mu_0) \end{bmatrix} \end{split} \end{equation} This is the same form as derived by \cite{chandrasekhar1960radiative} (p.~201, eqns.~108--109) for pure diffuse scattering. We now consider the two limiting cases of optical depth. \begin{enumerate} \item Semi-infinite optical depth $(\tau_1\to \infty)$: In this limit the expression for the function $V(\mu)$ reduces to \begin{equation}\label{eq: V at tau_1 infinity limit} V(\mu) = 1+2U(T)M(\mu)\mu \log(1+\frac{1}{\mu})+ \frac{\tilde{\omega_0}}{2}\mu M(\mu)\int_0^1 \frac{M(\mu')}{\mu+\mu'}d\mu'=M(\mu) \end{equation} This expression is the same as that of the M-function derived in \cite{sengupta2021effects} in the context of a semi-infinite atmosphere.
Hence the finite atmosphere problem reduces to the semi-infinite atmosphere problem in this limit. Using the transformation rule \eqref{eq: V-W transformation rule}, the W-function can be represented as \begin{equation}\label{eq: W at tau_1 infinity limit} W(\mu) = \lim_{\tau_1\to\infty} V(-\mu)e^{-\tau_1/\mu}\to 0 \end{equation} \item Small optical depth $(\tau_1\to 0)$: In this case the W function becomes \begin{equation}\label{eq: W at tau_1 zero limit} W(\mu)\to e^{-\tau_1/\mu} \end{equation} and using the transformation rule we get \begin{equation}\label{eq: V at tau_1 zero limit} V(\mu) = W(-\mu)e^{-\tau_1/\mu} \to 1 \end{equation} These values are the same as those shown in \cite{peraiah2002an} for the Y and X functions, respectively. \end{enumerate} \section{Discussion}\label{sec: Discussion} The finite atmosphere diffuse reflection problem was first introduced by \cite{chandrasekhar1960radiative} for a scattering-only atmosphere, with no atmospheric emission considered. Here, for the first time, we include the thermal emission effect simultaneously with isotropic scattering from each atmospheric layer in the finite atmosphere diffuse reflection problem. The thermal emission modifies Chandrasekhar's results through the factor $U(T)$, where U is the ratio of the blackbody emission (B) to the irradiating flux (F) (see sections~\ref{sec: general integral equations} and \ref{sec: specific form of integral equations}). Moreover, the modified scattering and transmission functions obey the same transformation rules as established by \cite{coakley1973simple} (as shown in sec.~\ref{sec: transformation rule}). We then show that our results are consistent with those of \cite{chandrasekhar1960radiative} in the limit of low atmospheric thermal emission (i.e. $B\ll F$). Hence, our treatment of thermal emission and scattering for the finite atmosphere problem is more general than Chandrasekhar's.
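As a concrete check of the Coakley transformation rule in the no-emission limit, one can use closed-form single-scattering approximations of S and T for an isotropic finite layer. These first-order expressions are standard results of general radiative transfer theory, quoted here for illustration rather than derived in this paper.

```python
import numpy as np

# First-order (single-scattering) S and T for an isotropic layer of
# optical thickness tau1 and albedo w0 -- standard textbook forms,
# used only to illustrate the Coakley transformation rule
#   S(tau1; mu, -mu0) e^{-tau1/mu0} = T(tau1; mu, mu0).
def S1(tau1, mu, mu0, w0=0.4):
    return w0 * mu * mu0 / (mu + mu0) * (1.0 - np.exp(-tau1 * (1/mu + 1/mu0)))

def T1(tau1, mu, mu0, w0=0.4):
    return w0 * mu * mu0 / (mu - mu0) * (np.exp(-tau1/mu) - np.exp(-tau1/mu0))

tau1, mu, mu0 = 1.0, 0.7, 0.3     # illustrative values
lhs = S1(tau1, mu, -mu0) * np.exp(-tau1 / mu0)
rhs = T1(tau1, mu, mu0)
print(lhs, rhs)   # the two sides agree
```

The companion rule, $T(\tau_1;\mu,-\mu_0)\,e^{-\tau_1/\mu_0}=S(\tau_1;\mu,\mu_0)$, holds for the same expressions by the identical algebra.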
In the exoplanetary context, Chandrasekhar's results are used to model the reflection, transmission and emission spectra of highly irradiated, weakly emitting planets \citep{madhusudhan2012analytic}. When thermal emission and scattering occur at comparable levels in a planetary atmosphere (e.g. weakly irradiated ultra-hot Jupiters), our results will provide more accurate modeling than Chandrasekhar's. The thermal emission from each atmospheric layer travels through the other layers as well and undergoes scattering and transmission there. For example, the emission from the layer at $\tau=0$ is $B(T_0)$, which is scattered along the direction $(\mu,\phi)$ and contributes to the final radiation $I(0,\mu)$ through the term $B(T_0)V(\mu)$ (see eqn.\eqref{eq: final radiation from tau=0}). In the same way, it contributes to the radiation along the direction $(-\mu,\phi)$ through the term $B(T_0)W(\mu)$. Thus, the flux F irradiating the atmosphere and the atmospheric thermal emission follow the same rules of scattering and transmission. This theory is therefore applicable to planetary atmospheres where (1) the atmospheric thermal emission is comparable to the irradiating stellar flux, and (2) the atmosphere exhibits infrared scattering effects. We have also revisited the connection between the scattering and transmission functions (see eqn.~\eqref{eq: S-T Transformation rule}) as established by \cite{coakley1973simple}. This transformation rule describes the interchange between the S- and T-functions depending on the orientation of the incident beam in the case of pure diffuse scattering in a finite atmosphere \citep{coakley1973simple}. In this work, we first show that this relation remains true for a thermally emitting atmosphere. Secondly, a transformation rule for the V and W functions (see eqn.
\eqref{eq: V-W transformation rule}), as well as a connection between the thermal emission fluxes at different atmospheric layers, which we call the \textit{reduced thermal emission}, is established. It ensures that if a beam is incident from the upper side of the layer $\tau=0$, the V and W functions can be represented as shown in figure~\ref{fig: V_mu and W_mu explanation}, whereas if the light beam is incident from the lower side of the $\tau = \tau_1$ layer, the positions of the V and W functions in figure~\ref{fig: V_mu and W_mu explanation} interchange. This shows the symmetry of the solutions provided here. The transformation rules are, however, applicable only if the atmospheric emissions at different optical depths are connected by the relation given in eqn.~\eqref{eq: reduced thermal emission}. The applicability of the transformation rule in the presence of thermal emission also ensures that, when emission and scattering from an atmosphere are comparable, the two effects stand on an equal footing; hence both should be considered for a rigorous modeling. The $V(\mu)$ and $W(\mu)$ functions are analogous to Chandrasekhar's X and Y functions mentioned in \cite{chandrasekhar1960radiative}. They represent the relative changes of the radiation from the layer $\tau = 0$ along the direction $(\mu,\phi)$ and from $\tau=\tau_1$ along $(-\mu,\phi)$, respectively, due to the presence of the atmosphere. Hence, the presence of the atmosphere can be understood in terms of diffuse reflection and atmospheric thermal emission from the corresponding layer. In other words, they act as the source functions for the direction $(\mu,\phi)$ at $\tau=0$ and for the direction $(-\mu,\phi)$ at $\tau=\tau_1$, respectively. We showed (in section~\ref{sec: consistency}) that in the semi-infinite limit (i.e. $\tau_1\to \infty$), our finite atmosphere results reduce to the semi-infinite results obtained in \cite{sengupta2021effects}.
Hence the $M(\mu)$-function (see \cite{sengupta2021effects} for details) is the semi-infinite counterpart of the more general $V(\mu)$-function, as shown in figure~\ref{fig: M_mu explanation}, which illustrates this limiting case. The work of \citet{sengupta2021effects} considered atmospheric thermal emission in the semi-infinite case and is thus limited by the condition of translationally invariant thermal emission in the atmosphere; for a planetary atmosphere this means that the theory applies only to planets with an isothermal atmosphere. In this work we removed that limitation by considering the finite-atmosphere problem, which does not require translationally invariant thermal emission; in this case the scattering function $S(\tau,\mu,\phi;\mu_0,\phi_0)$ varies with the optical depth of the atmosphere. This provides the opportunity to model atmospheric spectra for any atmospheric temperature structure with simultaneous emission and scattering. In this work we considered only the isotropic scattering case, which is the first step in modifying the finite-atmosphere scattering problem in the presence of thermal emission; the work can be extended to the general cases of scattering with the same recipe, and the modifications follow accordingly. For a numerical treatment, one may use the Henyey-Greenstein phase function \citep{henyey1941diffuse}, \begin{equation*} p(\cos\Theta) = \frac{1-g^2}{(1+g^2-2g\cos\Theta)^{\frac{3}{2}}} \end{equation*} where $g\in[-1,1]$ is the asymmetry parameter, as shown in \cite{bellman1967chandrasekhar,batalha2019exoplanet}. The atmospheric thermal emission considered in our work is the simplest possible emission process: we assumed the low-scattering limit $\sigma\ll\kappa$ in order to use the simple Planck function as the atmospheric emission term (see eqn.~\eqref{eq: thermal emission term}).
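As a quick numerical aside (ours, not part of the derivation above), the Henyey-Greenstein phase function in its conventional form, with exponent $3/2$ in the denominator, satisfies $\frac{1}{2}\int_{-1}^{1}p(\mu)\,d\mu=1$ for any asymmetry parameter $g$. A minimal Python sketch verifying this normalization:

```python
import numpy as np

def henyey_greenstein(cos_theta, g):
    # p(cos T) = (1 - g^2) / (1 + g^2 - 2 g cos T)^(3/2)
    return (1.0 - g**2) / (1.0 + g**2 - 2.0 * g * cos_theta) ** 1.5

mu = np.linspace(-1.0, 1.0, 200001)
for g in (-0.7, 0.0, 0.5, 0.9):
    p = henyey_greenstein(mu, g)
    # trapezoidal rule for (1/2) * integral of p(mu) over [-1, 1]
    norm = 0.5 * np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(mu))
    print(f"g = {g:+.1f}: norm = {norm:.4f}")   # ~1.0000 in every case
```

The analytic antiderivative of $(1+g^2-2g\mu)^{-3/2}$ makes the integral exactly $2$, so the half-integral is unity; the fine grid is only needed to resolve the forward peak at large $|g|$.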
The Planck-emission assumption simplifies the mathematical derivations significantly. This restriction can be removed by replacing $B(T_\tau)$ with $\frac{\kappa}{\chi}B(T_\tau)$, and the results follow accordingly, while the physical interpretations remain unaltered. In exoplanetary atmosphere modelling, the atmospheric emission cannot always be simplified to Planck emission: the upper atmosphere of an exoplanet does not satisfy the condition of local thermodynamic equilibrium \citep{seager2010exoplanet,sengupta2021effects}. In such cases, different types of atmospheric emission can be described by the general atmospheric emission function $\beta$ shown in eqn.~\eqref{eq: general emission term}; by an appropriate choice of the $\beta$-parameter, thermal re-emission and anisotropic emission can be treated, as discussed in \cite{sengupta2021effects}. Finally, the polarization effect is not considered in this work; it can be included for a finite atmosphere in the same way as in the semi-infinite atmosphere case discussed in \cite{sengupta2021effects}. \appendix \section{Derivation of the scattering function and its derivative}\label{apndx sec: Derivation of scattering function} In this section we show the derivation of the scattering function $S(\tau_1,\mu,\phi;\mu_0,\phi_0)$ and its derivative $\frac{\partial S(\tau,\mu,\phi;\mu_0,\phi_0)}{\partial \tau}|_{\tau = \tau_1}$, which were written directly in eqns.~\eqref{eq: finite atmosphere scattering function1.1} and \eqref{eq: finite atmosphere scattering function2.1}. For simplicity of notation we write $\frac{\partial S(\tau_1,\mu,\phi;\mu_0,\phi_0)}{\partial \tau_1}$ instead of $\frac{\partial S(\tau,\mu,\phi;\mu_0,\phi_0)}{\partial \tau}|_{\tau = \tau_1}$ in the main text of section~\ref{sec: general integral equations}.
To begin, we use the relations between the scattering function and the source function given in \cite{chandrasekhar1960radiative} (page 168; eqns.~(23) and (25)): \begin{equation}\label{eq: scattering function in chandrasekhar 23} \frac{1}{4}F[(\frac{1}{\mu}+\frac{1}{\mu_0})S(\tau_1;\mu,\phi;\mu_0,\phi_0) + \frac{\partial S(\tau;\mu,\phi;\mu_0,\phi_0)}{\partial \tau}|_{\tau = \tau_1}] = \xi(0,+\mu,\phi) + \frac{1}{4\pi}\int_0^1\int_0^{2\pi} S(\tau_1;\mu,\phi;\mu',\phi')\xi(0,-\mu',\phi') \frac{d\mu'}{\mu'} d\phi' \end{equation} and \begin{equation}\label{eq: scattering function in chandrasekhar 25} \frac{1}{4}F \frac{\partial S(\tau;\mu,\phi;\mu_0,\phi_0)}{\partial \tau}|_{\tau = \tau_1} = e^{-\tau_1/\mu}\xi(\tau_1,+\mu,\phi) + \frac{1}{4\pi} \int_0^1\int_0^{2\pi} T(\tau_1;\mu,\phi;\mu',\phi')\xi(\tau_1,+\mu',\phi')\frac{d\mu'}{\mu'}d\phi' \end{equation} The source functions $\xi(0,\mu,\phi)$ and $\xi(\tau_1,\mu,\phi)$ in the thermal emission case are given in eqns.~\eqref{eq: source function at tau=0} and \eqref{eq: source function at tau=tau_1}.
Making use of them with the boundary conditions (eqns.\eqref{eq: boundary condition1}) the above two equations can be written as, \begin{equation}\label{apndx eq: scattering function} \begin{split} &\frac{F}{4}[(\frac{1}{\mu}+\frac{1}{\mu_0})S(\tau_1;\mu,\phi;\mu_0,\phi_0)+\frac{\partial S(\tau;\mu,\phi;\mu_0,\phi_0)}{\partial \tau}|_{\tau = \tau_1}]= B(T_0)[1+\frac{1}{4\pi}\int_0^1\int_0^{2\pi} S(\tau_1;\mu,\phi;\mu',\phi')\frac{d\mu'}{\mu'}d\phi']\\ &+ \frac{1}{4}F[p(\mu,\phi;-\mu_0,\phi_0)+ \frac{1}{4\pi}\int_0^1\int_0^{2\pi} p(\mu,\phi;\mu'';\phi'')S(\tau_1;\mu'',\phi'';\mu_0,\phi_0)d\phi''\frac{d\mu''}{\mu''}\\ &+ \frac{1}{4\pi}\int_0^1\int_0^{2\pi} S(\tau_1;\mu,\phi;\mu',\phi')\{p(-\mu',\phi';-\mu_0,\phi_0)+ \frac{1}{4\pi}\int_0^1\int_0^{2\pi} p(-\mu',\phi';\mu'';\phi'')S(\tau_1;\mu'',\phi'';\mu_0,\phi_0)d\phi''\frac{d\mu''}{\mu''}\}\frac{d\mu'}{\mu'}d\phi'] \end{split} \end{equation} and \begin{equation}\label{apndx eq: derivative scattering function} \begin{split} &\frac{F}{4}[\frac{\partial S(\tau;\mu,\phi;\mu_0,\phi_0)}{\partial \tau}|_{\tau = \tau_1}]= B(T_{\tau_1})[e^{-\tau_1/\mu}+\frac{1}{4\pi}\int_0^1\int_0^{2\pi} T(\tau_1;\mu,\phi;\mu',\phi')\frac{d\mu'}{\mu'}d\phi']\\ &+ \frac{1}{4}F[exp\{-\tau_1(\frac{1}{\mu_0}+\frac{1}{\mu})\}p(\mu,\phi;-\mu_0,\phi_0) + \frac{1}{4\pi}e^{-\tau_1/\mu}\int_0^1\int_0^{2\pi} p(\mu,\phi;-\mu'';\phi'')T(\tau_1;\mu'',\phi'';\mu_0,\phi_0)d\phi''\frac{d\mu''}{\mu''}\\ &+ \frac{1}{4\pi}\int_0^1\int_0^{2\pi} T(\tau_1;\mu,\phi;\mu',\phi')\{e^{-\tau_1/\mu_0}p(\mu',\phi';-\mu_0,\phi_0) + \frac{1}{4\pi}\int_0^1\int_0^{2\pi} p(\mu',\phi';-\mu'';\phi'')T(\tau_1;\mu'',\phi'';\mu_0,\phi_0)d\phi''\frac{d\mu''}{\mu''}\} \frac{d\mu'}{\mu'}d\phi'] \end{split} \end{equation} Hence, multiplying eqns.\eqref{apndx eq: scattering function} and \eqref{apndx eq: derivative scattering function} by $\frac{4}{F}$ and replacing the quantities $\frac{B(T_0)}{F}, \frac{B(T_{\tau_1})}{F}$ by $U(T_0)$ and $U(T_{\tau_1})$ respectively, we will get the 
equations~\eqref{eq: finite atmosphere scattering function1.1} and \eqref{eq: finite atmosphere scattering function2.1}, respectively. In a similar fashion, the transmission function $T(\tau_1;\mu,\phi;\mu_0,\phi_0)$ and its derivative $\frac{\partial T(\tau;\mu,\phi;\mu_0,\phi_0)}{\partial \tau}|_{\tau = \tau_1}$ can be derived. \bibliography{paper3} \bibliographystyle{aasjournal}
Title: A New Position Calibration Method for MUSER Images
Abstract: The Mingantu Spectral Radioheliograph (MUSER), a new generation of solar dedicated radio imaging-spectroscopic telescope, has realized high-time, high-angular, and high-frequency resolution imaging of the sun over an ultra-broadband frequency range. Each pair of MUSER antennas measures the complex visibility in the aperture plane for each integration time and frequency channel. The corresponding radio image for each integration time and frequency channel is then obtained by inverse Fourier transformation of the visibility data. In general, the phase of the complex visibility is severely corrupted by instrumental and propagation effects. Therefore, robust calibration procedures are vital in order to obtain high-fidelity radio images. While there are many calibration techniques available -- e.g., using redundant baselines, observing standard cosmic sources, or fitting the solar disk -- to correct the visibility data for the above-mentioned phase errors, MUSER is configured with non-redundant baselines and the solar disk structure cannot always be exploited. Therefore it is desirable to develop alternative calibration methods in addition to these available techniques whenever appropriate for MUSER to obtain reliable radio images. In the case of a point-like calibration source containing an unknown position error, we have for the first time derived a mathematical model to describe the problem and proposed an optimization method to calibrate this unknown error by studying the offset of the positions of radio images over a certain time interval. Simulation experiments and actual observational data analyses indicate that this method is valid and feasible. For MUSER's practical data the calibrated position errors are within the spatial angular resolution of the instrument. This calibration method can also be used in other situations for radio aperture synthesis observations.
https://export.arxiv.org/pdf/2208.10217
\baselineskip18pt \abstract{ The Mingantu Spectral Radioheliograph (MUSER), a new generation of solar dedicated radio imaging-spectroscopic telescope, has realized high-time, high-angular, and high-frequency resolution imaging of the sun over an ultra-broadband frequency range. Each pair of MUSER antennas measures the complex visibility in the aperture plane for each integration time and frequency channel. The corresponding radio image for each integration time and frequency channel is then obtained by inverse Fourier transformation of the visibility data. However, the phase of the complex visibility is in general severely corrupted by instrumental and propagation effects. Therefore, robust calibration procedures are vital in order to obtain high-fidelity radio images. While there are many calibration techniques available -- e.g., using redundant baselines, observing standard cosmic sources, or fitting the solar disk -- to correct the visibility data for the above-mentioned phase errors, MUSER is configured with non-redundant baselines and the solar disk structure cannot always be exploited. Therefore it is desirable to develop alternative calibration methods in addition to these available techniques whenever appropriate for MUSER to obtain reliable radio images. In the case of a point-like calibration source containing an unknown position error, we have for the first time derived a mathematical model to describe the problem and proposed an optimization method to calibrate this unknown error by studying the offset of the positions of radio images over a certain time interval. Simulation experiments and actual observational data analyses indicate that this method is valid and feasible. For MUSER's practical data the calibrated position errors are within the spatial angular resolution of the instrument.
This calibration method can also be used in other situations for radio aperture synthesis observations.\\[2ex] {\bf Keywords} instrumentation: interferometers --- Sun: radio radiation --- techniques: interferometric --- techniques: image processing --- methods: data analysis --- methods: observational --- Sun: activity --- (Sun:) solar-terrestrial relations} \section{Introduction} \label{sect:intro} For an interferometer, radio signals from a cosmic source are received by the antenna of an observation station and transmitted indoors, where mixing, amplification and filtering are performed. The signal is then digitized by the digital receiver. Afterwards, the complex signals of each pair of antennas are correlated to produce observed visibilities \citep{Thompson+2017}, each equivalent to a Fourier component of the radio brightness distribution. The term ``calibration'' refers to the estimation of, and correction for, the instrumental gains and errors in the visibilities \citep{Grobler+2014}. The purpose of calibration is to solve for the unknown gain and phase errors of the equipment, as well as the unknown propagation interference \citep{Wijnholds+2010}. The available calibration methods for radio telescopes fall mainly into three basic categories: direct calibration, calibration referenced to calibrator sources in the sky, and self-calibration \citep{Bastian1989,Fomalont+Perley1999,Thompson+2017}. Direct calibration measures the amplitude gain and delay phase in the system link by constructing a link loop. Calibration against a calibrator source uses the radio telescope to observe both the target source and a known calibration source, and then removes the influence of the instrument and the propagation path on the basis of the two sets of observational data. The self-calibration method uses the characteristics of the target source and of the antenna array, e.g., the closure relationships, to build a model that achieves the desired calibration result.
The calibration and imaging methods can be treated within a common mathematical framework, in which calibration is reduced to the mutual fitting of the observed values to a sky model and an instrument model, such as the model provided by the radio interferometry measurement equation \citep{Hamaker+1996, Smirnov+2011, Rau+2009}. Unresolved point sources are normally employed as calibrators because their closure phase should be zero and their closure amplitude unity; thus, they are useful for checking the accuracy of calibration and for examining instrumental effects \citep{Thompson+2017}. Future radio telescopes will have a large number of antennas and a large field of view. In this case, further considerations must be taken into account for calibration, e.g., dealing with parameters that have a strong directional dependence \citep{Wijnholds+2010}. For the calibration of a radioheliograph, a radio telescope designed to observe the sun, a redundant baseline design was adopted by the Nobeyama Radioheliograph \citep{Nakajima+1994}, the Nan\c{c}ay Radioheliograph \citep{NRH1993} and the Siberian Radioheliograph \citep{Altyntsev+2020}, where the number of redundant correlations is greater than the number of antennas. The least-squares method is used to solve for antenna gains and to correct for the phase errors of each antenna, based on the principle that the phases recorded on equal baselines should be the same \citep{Nakajima+1994, Altyntsev+2020}. When the upgraded Very Large Array (VLA, \citealt{Thompson+1980}) observes the sun, the complex gain and delay phase of the telescope system are calibrated by observing a standard cosmic source \citep{Chen+2012}. Since small antennas may have insufficient sensitivity to observe cosmic calibrator sources, the Expanded Owens Valley Solar Array (EOVSA, \citealt{Nita+2016}) introduced a 27-meter antenna equipped with He-cooled receivers to calibrate the small antennas using standard calibrator sources.
The Mingantu Spectral Radioheliograph (MUSER), stationed in Inner Mongolia, China, is a new-generation radioheliograph capable of observing the Sun with high time, angular, and frequency resolution \citep{Yan+2009,Yan+2021}. Observations by MUSER during 2014-2019 have been presented by \citet{Zhang+2021}. The outcomes of calibration and data processing for MUSER at decimetric wavelengths, including delay measurements, polarisation calibration, and additional results of calibration and data processing, have been reported in \citet{Wang+2013}. The approach to calibrating MUSER is to observe strong radio sources in the sky as point-source calibrators. Since MUSER antennas are insensitive to cosmic calibrator sources, radio beacons on satellites are the strongest sources in the sky for MUSER. In the MUSER frequency band, some geosynchronous-orbit satellites and GPS satellites are available, so satellites can be observed as calibrator sources at several discrete frequencies. Some strong radio sources or intense radio bursts on the sun can also be used as calibrator sources across the full frequency band \citep{Wang+2013, Wang+Yan2019,Wang+2019}. However, when a satellite is used as a calibrator, its nominal position may not be as accurate in real time as a celestial source's position, except for GPS or other navigation and positioning satellites, though the accuracy of this nominal position may still meet the needs of the satellite's original applications. This nominal satellite position contains an error which will cause a solar radio image to deviate from the center of the field of view. Fortunately, the solar disk in the solar radio image can be employed to determine the disk center by fitting a solar disk model, so as to obtain the offset of the solar image \citep{Mei+2017,Chen+2017,Wang+2019}.
Nevertheless, this approach may not work in general: the solar disk structure in a solar image is not always obvious, owing to sparse sampling of the synthesis array or to observing at low radio frequencies. Therefore, we have for the first time derived a mathematical model to describe the problem of the calibrator position deviation, and proposed a new method to determine the calibrator's position error by driving to zero the RMS error of the deviated positions of radio images away from the image center over a certain time interval. As described in \citet{Thompson+2017}, the closure phase for a point-source is always zero, even if it is not at the phase-tracking center or if the station coordinates have errors; therefore the position of the point-source cannot be deduced from closure phase measurements alone. While a point-source is an ideal calibrator in radio interferometry and radio aperture synthesis, its position cannot be determined from aperture synthesis theory exclusively, but must be prescribed in advance by other means. In Section~\ref{section2}, we briefly introduce the characteristics of MUSER. The new mathematical model, as well as the resulting new position calibration method, is presented in Section~\ref{section3}. In Section \ref{section4} simulation results are shown to validate the method, and the calibration results of real MUSER observations are also demonstrated. Finally, we discuss the merits of the new method and provide our conclusions in Section \ref{section5}. \section{Brief Description of MUSER Imaging} \label{section2} The main characteristics and performance of MUSER are listed in Table~\ref{characterestics} \citep{Yan+2009,Yan+2021}. Presently MUSER consists of two arrays, named MUSER-I and MUSER-II.
MUSER-I contains 40 antennas of 4.5 m diameter operating in the frequency range from 0.4 GHz to 2 GHz, and MUSER-II contains 60 antennas of 2 m diameter operating in the frequency range from 2 GHz to 15 GHz. These 100 antennas are arranged on three logarithmic spiral arms, as shown in Fig.~\ref{antennas position}, and the longest baseline is about 3 km \citep{Yan+2009}. The inset panel in Fig.~\ref{antennas position} shows the dense antenna distribution in the central area within a 200 m range.
\begin{table}
\caption{MUSER characteristics and performance.}
\label{characterestics}
\begin{tabular*}{\columnwidth}{l@{\hspace*{22pt}}l@{\hspace*{22pt}}l}
\hline
MUSER Array & MUSER-I & MUSER-II\\
\hline
Frequency range: & 400 MHz - 2 GHz & 2 GHz - 15 GHz\\[2pt]
Array antennas: & 40 $\times$ $\phi$4.5 m & 60 $\times$ $\phi$2 m\\[2pt]
Single dish beam: & 9.5$^\circ$ - 1.9$^\circ$ & 4.3$^\circ$ - 0.6$^\circ$\\[2pt]
Frequency resolution: & 64 channels & 520 channels\\[2pt]
Angular resolution: & 51.6$^{\prime\prime}$ - 10.3$^{\prime\prime}$ & 10.3$^{\prime\prime}$ - 1.3$^{\prime\prime}$\\[2pt]
Time resolution: & 25 ms & 206.25 ms\\[2pt]
Dynamic range: & 25 dB (snapshot) & 25 dB (snapshot)\\[2pt]
Polarizations: & Dual circular L, R & Dual circular L, R\\[2pt]
Maximum baseline: & $\sim$3 km & $\sim$3 km\\[2pt]
\hline
\end{tabular*}
\end{table}
As an aperture synthesis radio telescope, MUSER images the sun following the standard aperture synthesis imaging process of radio astronomy. There are, however, also some characteristics specific to MUSER imaging. A high-performance imaging pipeline and several algorithms have been developed for MUSER to produce solar radio images \citep{Mei+2017,Chen+2017,Wang+2019,Chen+2019}. For example, for the deconvolution of extended sources such as the sun, deconvolution using a generative adversarial network has been proposed \citep{Xu+2020}, with simulation results showing that it outperforms the H{\"o}gbom CLEAN algorithm \citep{Thompson+2017}.
In addition, the Cornwell multi-scale CLEAN \citep{Cornwell2008} has been employed as the deconvolution method in MUSER solar radio imaging \citep{Zhao+2017}. Image structure information was obtained by combining the weighting functions of natural and uniform weighting \citep{Wang+Yan2019}, and the quasi-periodic pulsations before and during a solar flare were analyzed with restored radio images observed by MUSER \citep{Chen+2019}. Since the two MUSER arrays have no redundant baselines, the redundant-baseline calibration procedure utilised by other radioheliographs cannot be employed for MUSER. It is also difficult for MUSER to observe weak radio sources in the sky, as the MUSER arrays are composed of small antennas. Known strong radio sources are therefore needed as calibrators for phase calibration. For this reason, geosynchronous satellites with strong signals, such as meteorological satellites, have been used as calibrator sources for MUSER, with the spherical wavefront effects due to the small distance of the satellites from the Earth also taken into account \citep{Wang+2019}. As mentioned above, though, while the nominal position of the satellite may satisfy its original purpose, it may be offset from the actual position of the satellite at the time of measurement, which causes issues for source positions. Tracking a satellite over 9 hours reveals satellite position errors of $\sim1.1^{\circ}$ in declination and $\sim1^{\prime}$ (arcmin) in hour angle \citep{Wang+2019}. This satellite position error will cause solar radio images obtained by MUSER to deviate from the center of the field of view. Therefore other methods, such as fitting the solar disk model, have been applied to correct the offset of the solar images \citep{Mei+2017,Chen+2017,Wang+2019}.
Within the framework of radio interferometry and aperture synthesis theory, we present a new method for determining the position error of the satellite (or calibrator), i.e., determining this unknown deviation from the phase-tracking center. \section{Method of Determining Calibrator's Position Deviation} \label{section3} In the following subsections, we derive the mathematical basis for calibration with a point-source calibrator offset from the phase tracking center and describe the numerical procedure used to solve the problem. \subsection{Calibration with a source offset from the phase tracking center} \label{subsect3.1} Under the approximation that the synthesized field of view is small, the radio brightness distribution on the sky can be obtained from the following inverse Fourier transform: \begin{equation} I^D(l,m)=\iint S(u,v) V(u,v)e^{j2\pi(ul+vm)} dudv, \label{eq:dirty} \end{equation} where $V(u,v)$ is called the visibility function and $S(u,v)$ is the sampling function produced by all baselines over the ($u$,$v$) plane. The visibilities $V(u,v)$, obtained through the correlation of the radio signals of any two antennas in the interferometric array, correspond to the Fourier components of the radio intensity distribution in the observed small sky area. $I^D(l,m)$ is the so-called dirty image. The measured visibilities must be calibrated to remove the influence of the instrumental gain and other effects. This can be achieved by observing a calibrator source: when pointing at the calibrator position, i.e., the phase tracking center, all the visibility amplitudes are unity and all the visibility phases are zero if the calibrator is a point-source of unit flux density. Hence, the complex gain of each antenna can be determined \citep{Bastian1989,Fomalont+Perley1999}.
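The earlier point that the closure phase of a point source vanishes, even when the source is offset from the phase tracking center, so its position cannot be recovered from closure quantities alone, can be checked with a small numerical sketch. The antenna coordinates and the offset below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical snapshot antenna positions (east, north) in wavelengths.
ants = rng.uniform(-1500.0, 1500.0, size=(3, 2))

def visibility(p, q, l, m):
    """Visibility on baseline p-q for a unit point source at offset (l, m)."""
    u, v = ants[p] - ants[q]
    return np.exp(-2j * np.pi * (u * l + v * m))

# A point source well away from the phase tracking center (radians).
l_d, m_d = 2.1e-4, -3.7e-4
v12 = visibility(0, 1, l_d, m_d)
v23 = visibility(1, 2, l_d, m_d)
v31 = visibility(2, 0, l_d, m_d)

closure = np.angle(v12 * v23 * v31)
print(f"closure phase = {closure:.3e} rad")   # ~0 for any (l_d, m_d)
```

Because the three baselines sum to zero, the position-dependent phases cancel identically in the triple product, which is exactly why the offset must be determined by other means.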
Normally an interferometric array is designed to be stable during observations, and the phase errors of each interferometric element are assumed to be stochastically stationary when pointing in different directions. When pointing at the calibrator source, the phase tracking center is set to the calibrator position, which is at the origin of the image plane $(l,m)$. However, if there exists an offset term ($l_d,m_d$) from the origin (i.e., from the nominal position of the calibrator, or the phase tracking center), the actual observed visibilities for the calibrator become: \begin{equation} V_{cal}(u,v)e^{j2\pi(ul_d+vm_d)}=\iint I_{cal}(l+l_d,m+m_d)e^{-j2\pi(ul+vm)}dldm, \label{eq:basecal} \end{equation} and the observed response of the interferometric element with antenna pair ($p$-$q$) for the calibrator source is: \begin{equation} {r_{cal}^{pq}}=|V_{cal}^{pq}|g^{pq}\exp[{-j(\varphi_{Vcal}^{pq}+\varphi_{err}^{pq})+j2\pi(u_{cal}^{pq}l_d+v_{cal}^{pq}m_d)}], \label{eq:calerr} \end{equation} where $u_{cal}^{pq},v_{cal}^{pq}$ are the corresponding $u,v$ coordinates when pointing at the calibrator source for the antenna pair ($p$-$q$), and $\varphi_{err}^{pq}$ is the phase error of the corresponding interferometric element due to instrumental effects. This deviation is transferred to the target radio images through the phase calibration process with a point calibrator source. The calibrated response of the interferometric element with antenna pair ($p$-$q$) for the solar image becomes \citep{Wang+2019}: \begin{equation} {r_{sun}^{pq}}=|V_{sun}^{pq}|g^{pq}\exp[{-j\varphi_{Vsun}^{pq}+j2\pi(u_{cal}^{pq}l_d+v_{cal}^{pq}m_d)}], \label{eq:sunerr} \end{equation} where $|V_{sun}^{pq}|$ is the visibility amplitude of the sun and $\varphi_{Vsun}^{pq}$ is the corresponding visibility phase of the sun. The solar visibilities thus calibrated with a calibrator offset from the phase tracking center can then be expressed as follows.
\begin{equation} V(u,v)=V_{sun}(u,v)e^{j2\pi(u_{cal}l_d+v_{cal}m_d)}=V_{sun}(u,v)e^{j2\pi[u(\frac{u_{cal}}{u}l_d)+v(\frac{v_{cal}}{v}m_d)]}. \label{eq:solvis} \end{equation} When pointing at the calibrator, $u_{cal},v_{cal}$ differ from baseline to baseline. We can decompose the baseline ratios between pointing at the calibrator source and at the target, the sun, into a term invariant over all baselines plus a variable function: \begin{equation} \frac{u_{cal}}{u}=\xi_0+\frac{\xi(u,v)}{u}~,~~~\frac{v_{cal}}{v}=\eta_0+\frac{\eta(u,v)}{v}, \label{eq:deviaterm} \end{equation} in which $\xi_0, \eta_0$ are constants and $\xi(u,v), \eta(u,v)$ are functions of $u, v$, because a given baseline in general traces an ellipse in the $(u,v)$ plane, with hour angle as the variable \citep{Fomalont+Perley1999,Thompson+2017}. Therefore $\xi_0, \eta_0$ also change with time, although they are constants at any given instant. By substituting equation~(\ref{eq:solvis}) into equation~(\ref{eq:dirty}) with the decomposed expression~(\ref{eq:deviaterm}) and carrying out the inverse Fourier transform, we find that the final dirty image of the observation, with $l_d,m_d$ as parameters, is the deviated solar image convolved with a blurring function: \begin{equation*} I^D(l,m;l_d,m_d)= \iint S(u,v) V_{sun}(u,v)e^{j2\pi[u(\xi_0l_d)+v(\eta_0m_d)]}\times e^{j2\pi[\xi(u,v)l_d+\eta(u,v)m_d]}e^{j2\pi(ul+vm)} dudv \end{equation*} \begin{equation} \null~ \hspace{-3.6cm}= I_{sun}^d(l+\xi_0l_d,m+\eta_0m_d)*H(l,m;l_d,m_d), \label{eq:sunshift} \end{equation} in which the solar image $I_{sun}^d$ deviated from the phase tracking center is \begin{equation} I_{sun}^d(l+\xi_0l_d,m+\eta_0m_d) =\iint S(u,v) V_{sun}(u,v)e^{j2\pi[u(\xi_0l_d)+v(\eta_0m_d)]} e^{j2\pi(ul+vm)} dudv, \label{eq:basesun} \end{equation} and $H(l,m;l_d,m_d)$ is defined formally by the following expression, \begin{equation} H(l,m;l_d,m_d)=\iint e^{j2\pi[\xi(u,v)l_d+\eta(u,v)m_d]}e^{j2\pi(ul+vm)} dudv.
\label{eq:blur} \end{equation} Though it is difficult to obtain an analytic expression for $H(l,m;l_d,m_d)$, we may nevertheless estimate its influence on the final dirty image of the sun as a blurring effect. Since the Fourier transform kernel of the form $e^{-j\phi}$ in the integrand of equation~(\ref{eq:blur}) has modulus 1, $H(l,m;l_d,m_d)$ may modulate the solar dirty image distribution without changing the maximum intensity of the original dirty image or the total energy of the original signal. Furthermore, in the case that $l_d=0$ and $m_d=0$, the kernel in equation~(\ref{eq:blur}) becomes the constant 1, so that $H(l,m;l_d,m_d)$ reduces to a $\delta$-function. Equation~(\ref{eq:sunshift}) is derived for the first time to address the problem of calibrator position deviation from the phase tracking center, and it provides a solid mathematical foundation for our new position calibration procedure. In practice, the dirty image obtained after calibration with a satellite signal is the target dirty image shifted by the deviation ($\xi_0l_d$,$\eta_0m_d$) and modulated by the blurring function $H(l,m;l_d,m_d)$. As mentioned before, this shift also changes with time. The position error ($l_d, m_d$) of the satellite at the time of calibration is largely unknown, but it has fixed values. Therefore the phases $2\pi(u_{cal}l_d+v_{cal}m_d)$ introduced into the corresponding baselines by the satellite position deviation during phase calibration are also unknown; furthermore, they are inseparable from the corresponding visibility phase terms. Normally, it is difficult to directly obtain the exact position error of the satellite, or the calibrator.
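The shift term in equation~(\ref{eq:sunshift}) can be illustrated with a gridded FFT stand-in for the aperture-synthesis integral (all numbers below are invented for illustration): applying a residual phase ramp to the visibilities displaces the recovered dirty image by exactly the corresponding offset.

```python
import numpy as np

N = 64
true_img = np.zeros((N, N))
true_img[10, 20] = 1.0                 # stand-in "sun": one bright pixel

V = np.fft.fft2(true_img)              # gridded "visibilities"

# Residual phase ramp corresponding to an uncorrected offset of
# (dy, dx) pixels, a stand-in for (eta0*m_d, xi0*l_d).
dy, dx = 5, -3
fy, fx = np.meshgrid(np.fft.fftfreq(N), np.fft.fftfreq(N), indexing="ij")
ramp = np.exp(-2j * np.pi * (fy * dy + fx * dx))

dirty = np.fft.ifft2(V * ramp).real
peak = np.unravel_index(np.argmax(dirty), dirty.shape)
print(peak)                            # (15, 17): shifted by (dy, dx)
```

This is the Fourier shift theorem at work; the additional baseline-dependent residual $\xi(u,v), \eta(u,v)$, which this toy grid omits, is what produces the blurring function $H$ on top of the pure shift.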
Our strategy for eliminating the influence of the unknown position error ($l_d, m_d$) of the calibrator is based on introducing a deviation compensation ($l_a, m_a$) into the observed target visibility phases through equation~(\ref{eq:sunshift}). We use a minus sign to express the expectation that the added phases $2\pi[u({u_{cal}}/{u})l_a+v({v_{cal}}/{v})m_a]$ act with opposite sign to the unknown calibrator deviation, with $\Delta l=l_d-l_a$ and $\Delta m=m_d-m_a$. The dirty image of the observation is then modified as follows: \begin{equation} I^D(l,m;\Delta l,\Delta m) =I_{sun}^d(l+\xi_0\Delta l,m+\eta_0\Delta m)*H(l,m;\Delta l,\Delta m). \label{eq:modelshift} \end{equation} Expression~(\ref{eq:modelshift}) is the mathematical basis for our new position calibration procedure. If we can eliminate the influence of ($l_d, m_d$) by adjusting ($l_a$,$m_a$) in the modified image $I^D(l,m;\Delta l,\Delta m)$, with $\Delta l$ and $\Delta m$ approaching zero, we obtain the desired final result. Of course, if one has {\it a~priori} knowledge of ($l_d, m_d$), one can directly remove its influence by compensating with $l_a=l_d$ and $m_a=m_d$. In general, however, we do not know the exact deviation of the satellite calibrator at the time of the calibration. Therefore, we need a criterion to judge whether $\Delta l=0$ and $\Delta m=0$, which will be presented in the next subsection. \subsection{Procedure of new position calibration technique} \label{subsect3.2} As mentioned above, the solar visibility data calibrated by a calibrator with the unknown offset ($l_d, m_d$) relative to the phase tracking center, together with an arbitrarily added compensation ($l_a$,$m_a$), will generally cause the recovered solar radio image to deviate from the center of the field of view by the unknown offset ($\xi_0\Delta l$, $\eta_0\Delta m$).
Fig.~{\ref{deviation}} schematically shows the location of the dirty image, in which the solar disk center $C$ is offset from the phase tracking center $O$ by the corresponding unknown offset in the projected two-dimensional sky plane. In this paper we do not consider the situation where the solar disk itself can be fitted for the position calibration, although such fitting can be applied to verify the calibrated results wherever appropriate. $S$ denotes a reference source position on the solar spherical surface but projected on the two-dimensional solar disk. Clearly, if we adjust the added compensation ({$l_a$,$m_a$}) for the current observation, the solar image including the reference source $S$ and the center $C$ will also change accordingly. It should be noted that if the calibrator deviates from the phase tracking centre in an unexpected way, all locations on the target image will also deviate from the phase tracking centre with unknown offsets. The problem is determining the appropriate compensation in this situation. If one has {\it a priori} knowledge of the location of the reference radio source on the solar disk, one can determine the actual offset of the satellite or calibrator by adjusting the compensation over just one solar image. However, this is not the case for the general calibration problem as considered here, i.e., we make no assumptions regarding the target solar images and the unknown calibrator offset relative to the phase tracking center. We instead consider the target images during a certain period of time. During this period, we may expect three kinds of motion of a radio source in the target images. Firstly, the source moves with time in the same way as the center does in the target solar image, due to the influence of the unknown deviation errors ($\Delta l$,$\Delta m$) as described by equation~({\ref{eq:modelshift}}). Secondly, the source rotates as the sun rotates about its axis, in addition to the first kind of motion. 
Finally, the source may have its own relative motion on the solar surface, in addition to the solar rotation and the first kind of motion. In all three cases, the source location will be modulated by the blurring function {$H(l,m;l_d,m_d)$}. If the reference source is a stable structure on the sun during a period of observation, the third kind of position variation for this source should be absent. Then the position change of the stable reference source with respect to the solar center is solely related to the rotation of the sun, in addition to the influence of the unknown deviation error. Our goal is to eliminate the influence of the first kind of motion, leaving only the second, for a stable source. As shown in Figure~{\ref{deviation}}, while the unknown deviation error ($\Delta l$,$\Delta m$) influences both $S$ and $C$, the distance variation between $S$ and $C$ should be due only to the rotation of the sun if the reference source is a stable structure on the sun during the observation time. This serves as our criterion for the new calibration procedure. As mentioned above, the solar disk center $C$, which is in general unknown, varies with respect to the phase center $O$ for images observed at different times due to the time-dependence of ({$\xi_0,\eta_0$}) in equation~{(\ref{eq:deviaterm})} if the position error ($\Delta l$,$\Delta m$) has not been eliminated. Consequently, a source position $S$ on the solar disk also varies with respect to the phase center $O$, for the same reason, namely the existence of the error term ($\Delta l$,$\Delta m$). We need to verify whether the difference between ${\bf R}_{sc}$ and ${\bf R}_s$ decreases as we adjust the compensation deviation ($l_a, m_a$) from the calibrator phase tracking center. If the compensation deviation matches the unknown deviation ($l_d, m_d$) of the calibrator from the phase tracking center, the difference should be zero, and the solar center $C$ and phase tracking center $O$ are the same. 
That is precisely what we want as a final result. Since ${\bf R}_{sc}$ is unknown, we actually evaluate the difference between ${\bf R}_s$ and the corresponding location's theoretical trajectory due to the solar rotation with respect to the phase center $O$ over the period of observation. Let the locations of $S$ in the image plane ($x,y$) be denoted as ($x_s,y_s$). For the time interval of the observation, we have a series of solar images with the calibrated phase containing the position error of the calibrator source. Hence we get a series of reference source positions $ {\bf R}_s^n=(x_s^n,y_s^n)$, where the superscript $n$ refers to the discrete time instant $t_n$, $n=1, 2 , ... , N$, during the observation period and $N$ is the total number of discrete times. The discrete times $t_n$ need not be uniformly distributed over the observation interval. It should be pointed out that, if $S$ represents a stable structure on the sun, its projected position on the solar disk should follow the trajectory due to the solar rotation with respect to the rotation axis crossing the solar center during the observation period. Since we do not know the exact position of the solar center, we can nevertheless calculate, {\it ad hoc}, the theoretical trajectories due to the solar rotation with respect to the phase tracking center $O$. The observed positions $ {\bf R}_s^n$ and the calculated theoretical trajectories should coincide with each other if the influence of the calibrator deviation $(l_d,~m_d)$ is eliminated, i.e., the solar center $C$ is coincident with the phase tracking center $O$. In reality, during the elapsed period $ {\bf R}_s^n$ may not always be equal to ${\bf R}_{sc}$, which should follow the correct trajectory due to the solar rotation with respect to the solar center. 
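Such a theoretical trajectory can be sketched under simplifying assumptions: rigid solar rotation at the sidereal Carrington rate, with the $P$ and $B_0$ angles already rotated out so that the rotation axis coincides with the image $y$-axis. The following Python helper is only an illustration under those assumptions, not the authors' actual trajectory model:

```python
import numpy as np

# Sidereal Carrington rotation: ~360 deg / 25.38 days (rigid-rotation approx.).
OMEGA = np.deg2rad(360.0 / 25.38) / 86400.0   # rad/s

def trajectory(x0, y0, r_sun, t):
    """Track of a stable source seen at (x0, y0) at t = 0 on a disk of
    projected radius r_sun, assuming the rotation axis is the image y-axis."""
    t = np.asarray(t, dtype=float)
    y = np.full_like(t, y0)                    # the latitude circle keeps y fixed
    rho = np.sqrt(max(r_sun**2 - y0**2, 0.0))  # projected radius of that circle
    lon0 = np.arcsin(np.clip(x0 / rho, -1.0, 1.0)) if rho > 0 else 0.0
    x = rho * np.sin(lon0 + OMEGA * t)         # longitude advances with time
    return x, y

# A 4-hour observation of a source at (100", 150") on a 960" radio disk:
t = np.linspace(0.0, 4 * 3600.0, 5)
x, y = trajectory(100.0, 150.0, 960.0, t)
```

Under these assumptions the track is fully determined by the observed radius and the radio-disk radius, as in the model ${\bf T}_s({R}_s^n,R_{\circ})$ below; differential rotation and projection angles would refine the sketch.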
We therefore seek to minimize the root mean square (RMS) difference between the observed positions $ {\bf R}_s^n$ and the calculated trajectories with respect to the phase tracking center $O$ as the criterion to judge whether the influence of the calibrator deviation $(l_d,~m_d)$ is eliminated and the solar center $C$ is thus determined. The mathematical model for our new position calibration method can therefore be expressed as the following optimization problem. Find {$(l_d, m_d)$} such that the RMS value in the following expression reaches a minimum: \begin{equation} \Delta R=\sqrt{\frac{1}{N}\sum_{n=1}^{N} |{\bf R}_s^n-{\bf T}_s({ R}_s^n,R_{\circ})|^2}={\rm min}, \label{eq:model} \end{equation} where ${\bf T}_{s}({ R}_{s}^n,R_{\circ})$ denotes the above-mentioned theoretical trajectory expression of the source $S$. Obviously, it is a function of ${R}_{s}^n=|{\bf R}_{s}^n|=\sqrt{(x_s^n)^2+(y_s^n)^2}$ and the {radius of the radio sun} $R_{\circ}$. As mentioned above, $N$ is the total number of discrete time instants during the observation period. In practice, the optimization problem~(\ref{eq:model}) is solved by an iterative procedure to find the unknown $(l_d,~m_d)$ and eliminate the effects of the introduced phase $2\pi(u_{cal}l_d+v_{cal}m_d)$ contained in the initial data for each baseline, as described by the following steps. {\it Step 0}. Denote the estimated deviation as $(l_a^{(k)},~m_a^{(k)})$, with $k=0$ representing the initial status; the initial values are taken as {$l_a^{(0)}=0$} and {$m_a^{(0)}=0$}. Set a prescribed small positive value $\epsilon$ as the convergence threshold. {\it Step 1}. Set $k=k+1$ as the index of the current iteration. For an observational interval (e.g., of several hours), a series of full-Sun dirty images can be obtained by equation~(\ref{eq:sunshift}), which contain the influence of the introduced phase $2\pi(u_{cal}l_d+v_{cal}m_d)$ in the calibrated visibility due to the calibrator deviation from the phase tracking center. 
We add a compensation phase term $$-2\pi(u_{cal}l_a^{(k)}+v_{cal}m_a^{(k)})$$ in the corresponding visibility so that the introduced additional phase becomes $$2\pi[u_{cal}(l_d-l_a^{(k)})+v_{cal}(m_d-m_a^{(k)})].$$ A stable radio source on the sun is chosen as the reference position $S$, so that we obtain a set of $(x_s^n,y_s^n)$ from the dirty images, avoiding any solar radio burst intervals. {In practice, $l_d, m_d$ are unknown and inseparable from the corresponding visibility phase terms. Therefore, at this stage we do not actually know whether $l_a^{(k)}, m_a^{(k)}$ compensates the deviation or not}. {\it Step 2}. Calculate the theoretical trajectories of the reference position $S$ in the same way, as if the reference source $S$ rotates on the solar disk with respect to the phase tracking center $O$ in Fig.~\ref{deviation}. {\it Step 3.} Evaluate the objective function, i.e., the RMS value $\Delta R$ of the difference between the observed positions and the theoretical trajectory positions of the reference source for the observational period, as described in the optimization model~(\ref{eq:model}). If the RMS value $\Delta R$ does not decrease and the difference between consecutive trial deviations $$\sqrt{|l_a^{(k)}-l_a^{(k-1)}|^2+|m_a^{(k)}-m_a^{(k-1)}|^2}\geq \epsilon,$$ we modify $(l_a^{(k)},m_a^{(k)})$ to further reduce the influence of the additional phase term $2\pi(u_{cal}l_d+v_{cal}m_d)$ due to the calibrator deviation and repeat from {\it Step 1}. Otherwise continue to the next step. {\it Step 4.} If the difference between consecutive iterations is less than $\epsilon$ and the RMS value $\Delta R$ does not increase, i.e., the convergence criterion has been satisfied, the RMS value $\Delta R$ has reached its minimum. The unknown deviation ($l_d, m_d$) has then been inferred to be approximately ($l_a^{(k)}, m_a^{(k)}$). 
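Steps 0--4 above can be sketched as the following schematic Python implementation. The helper functions (`make_images`, `track_source`, `model_track`) are hypothetical stand-ins for the real imaging pipeline, and a crude coordinate-descent search replaces whatever optimizer is used in practice:

```python
import numpy as np

def delta_R(l_a, m_a, vis_data, make_images, track_source, model_track):
    """RMS objective of the optimization model: distance between observed
    reference-source positions and the theoretical rotation track."""
    imgs = make_images(vis_data, l_a, m_a)   # compensate phases, re-image
    obs = track_source(imgs)                 # (N, 2) observed positions
    theo = model_track(obs)                  # (N, 2) theoretical track
    return np.sqrt(np.mean(np.sum((obs - theo) ** 2, axis=1)))

def calibrate(vis_data, make_images, track_source, model_track,
              step=1.0, eps=1e-3):
    """Crude coordinate-descent search for the calibrator offset; the trial
    step is halved until it falls below the threshold eps (Steps 0-4)."""
    l_a = m_a = 0.0                          # Step 0: initial compensation
    best = delta_R(l_a, m_a, vis_data, make_images, track_source, model_track)
    while step >= eps:
        improved = False
        for dl, dm in ((step, 0), (-step, 0), (0, step), (0, -step)):
            r = delta_R(l_a + dl, m_a + dm, vis_data,
                        make_images, track_source, model_track)
            if r < best:                     # Step 3: Delta R decreased
                best, l_a, m_a, improved = r, l_a + dl, m_a + dm, True
        if not improved:
            step /= 2.0                      # Step 4: tighten the search
    return l_a, m_a, best

# Toy check: a synthetic "pipeline" whose only effect is an offset (0.7, -0.3).
def _make_images(vis, l_a, m_a):
    return (l_a, m_a)
def _track_source(imgs):
    return np.array([[imgs[0] - 0.7, imgs[1] + 0.3]])
def _model_track(obs):
    return np.zeros_like(obs)

l_hat, m_hat, resid = calibrate(None, _make_images, _track_source, _model_track)
```

The toy check recovers the offset to within the threshold; any descent or simplex optimizer could replace the coordinate search.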
Hence the influence due to the calibrator deviation from the phase tracking center has been largely eliminated, and the iterative procedure can be terminated. Optimization techniques can be incorporated in the above procedure. We can also rotate all time-series images to the image location corresponding to a fixed instant in order to evaluate the distribution of the reference source during the observation period. Obviously, the reference source positions at different instants should converge to the same place on the image corresponding to that instant if it is a stable source. The correctness and merits of the proposed phase calibration process in removing calibrator position discrepancies are demonstrated in the following section. \section{Application to Eliminate Calibrator Position Deviation} \label{section4} {We have conducted several simulation case studies and processed realistic MUSER observational data to validate the proposed new method}. \subsection{Simulation} \label{subsect4.1} We first perform a simulation experiment. The model consists of a uniform disk and two Gaussian sources with different widths in their intensity distribution. {In the first image of this simulation test, the source on the left, at ${1}/{2}\,R_{\circ}$ and a position angle of 154$^{\circ}$ from the x-axis, is chosen as the reference source, and its peak intensity is 20 times that of the background. The ``date'' of the simulation data is taken to be November 22, 2015, and the observation period is from 02:05 UT to 06:05 UT. On that particular day, the solar $P$ angle was around 19$^{\circ}$. For MUSER, the local meridian time is roughly 4 UT. The working frequency is 1.7125 GHz}. The first row of solar model images in Fig.~\ref{simulation1} {is thus established. From left to right, the three images are assumed to correspond to (a) 02:05:00, (b) 04:05:00, and (c) 06:05:00 UT. 
The solar rotation has been taken into account, which causes the position of the radio source to vary over the three different instants. If MUSER-I were to observe the aforementioned solar model images, we would obtain the dirty maps shown in the middle row of} Fig.~\ref{simulation1}(d-f). A set of phases $2\pi(u_{cal}l_d+v_{cal}m_d)$ with the initial deviation $l_d=-1.4^{\prime}, m_d=4^{\prime}$ is then added to the phases of the visibilities corresponding to the model dirty image at different time instants. {The deviation values are significantly larger than the angular resolution of $0.245^{\prime}$ at the 1.7125 GHz MUSER observing frequency, emulating the practical situation in which a set of additional phases is introduced in phase calibration by a calibrator position error deviating from the phase tracking center.} The three images in the bottom row of Fig.~\ref{simulation1}(g-i) are the respective dirty images affected by the calibrator position deviation, and they are the initial data for our analyses. We can readily apply the iterative algorithm described in section~\ref{subsect3.2} to solve the mathematical model~(\ref{eq:model}) for the simulated data. Fig.~\ref{iteration} shows the {\it a posteriori} iteration history of the RMS error $\Delta R$ of the reference source positions, as expressed in model~(\ref{eq:model}). It can be seen that the RMS error $\Delta R$ decreases to a small fraction of the angular resolution at the MUSER observing frequency after about ten iterations. The image deviation from the phase tracking center also converges {\it a posteriori} to the expected value after a few iterations. The position-calibrated dirty images and restored clean images are shown in Fig.~\ref{simulres}. These results are satisfactory when compared with the original model, as seen in the top two rows of Fig.~\ref{simulation1}. 
It should be noted that all the restored radio images were obtained with the H{\"o}gbom CLEAN algorithm \citep{Thompson+2017} unless stated otherwise. To imitate real-world scenarios, we introduced varying amounts of normally distributed random noise to the simulation cases, at 5\%, 10\%, 20\%, and 50\% of the quiet-Sun intensity. The situation in which not all antenna baselines are functioning has also been considered. Normally these baselines are flagged so that they are not involved in the aperture synthesis imaging process. We examined three situations in which the flagged baselines are confined to short baselines, long baselines, or all baselines. Here, baselines shorter than 200~m are regarded as short; otherwise they are regarded as long. For the sake of convenience, we simply flag antennas and consider the central antennas in the inserted panel of Fig.~\ref{antennas position} {to be short-baseline antennas, even though they may form long baselines with other antennas outside the central area. The number of central-area antennas in the MUSER-I array, as shown in} Fig.~\ref{antennas position}, is 19 and the number of remaining antennas is 21. As a result, in the actual simulations, we simply flag fractions of 10\%, 30\% and 50\% of the total antennas, confined to (i) only the core area, (ii) only the outside area, and (iii) all antennas. In this way, we may evaluate the practicality of the proposed position calibration method. In general, the problem converges after about twenty iterations in all cases. 
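The short/long classification and the fractional antenna flagging described above can be sketched as follows; the antenna layout and helper names are illustrative stand-ins, not the actual MUSER-I coordinates:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative antenna layout (metres); the real MUSER-I array has 40 antennas.
antennas = rng.uniform(-1500, 1500, size=(40, 2))

# Baseline lengths for all antenna pairs; 200 m is the short/long threshold.
i, j = np.triu_indices(len(antennas), k=1)
lengths = np.linalg.norm(antennas[i] - antennas[j], axis=1)
short = lengths < 200.0

def flag_antennas(n_total, fraction, candidates, rng):
    """Flag `fraction` of all antennas, drawn only from `candidates`
    (e.g. the core-area antennas), mimicking cases (i)-(iii) in the text."""
    n_flag = min(int(round(fraction * n_total)), len(candidates))
    return rng.choice(candidates, size=n_flag, replace=False)

core = np.arange(19)                      # 19 central-area antennas
flagged = flag_antennas(40, 0.3, core, rng)
print(len(flagged))  # 12
```

Flagged antennas would then simply be excluded from the gridding and imaging step.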
Fig.~\ref{sim} {shows the {\it a posteriori} results of both the objective function RMS error ($\Delta R$) of the reference source positions and the recovered phase centre error versus different noise levels, ranging from noise-free through 5\%, 10\%, 20\% to 50\% of the quiet-Sun intensity, and varied flagged-antenna fractions, ranging from 0\%, 10\%, 30\% to 50\% of the total antennas, limited either to (i) the central area only, (ii) the outside area only, or (iii) with no restrictions.} From the simulation results it can be seen that in most cases the recovered position errors are within a small fraction of the corresponding spatial angular resolutions with the available baselines. Only in a few cases does the calibrated position error reach around one angular resolution element. It should be noted that in those cases many antennas were flagged out in the exterior of the central area, i.e., most long baselines were flagged. Because longer baselines lead to higher angular resolution, the majority of information corresponding to increased angular resolution is lost. Nevertheless, the recovered position errors at different noise levels are still only around one angular resolution element even under these circumstances. The above results indicate that the proposed new position calibration method is valid and practical. \subsection{Calibration of MUSER Observational data} \label{subsect4.2} To validate this method using actual observational data, we chose observations from November 22, 2015. The SDO/HMI magnetogram and AIA 131~\AA\ EUV image are shown in Fig.~\ref{event1}. It can be seen that there were several solar active regions on that day: AR12457 N11E36 ($-562^{\prime\prime},159^{\prime\prime}$), AR12456 N06W55 ($793^{\prime\prime},82^{\prime\prime}$), and AR12454 N13W53 ($757^{\prime\prime},199^{\prime\prime}$). There was a GOES SXR C5.6 class flare in AR12454 starting at 05:31 UT, peaking at 05:38 UT and ending at 05:41 UT. 
The radio source observed by MUSER-I at AR12457 was chosen as the reference source because it was relatively stable over the observational period. In general, we use the radio signal of a geosynchronous weather satellite operating at around 1.7~GHz as a calibrator for MUSER data calibration \citep{Wang+2019}. This satellite has another working frequency around 1.68~GHz. The difference in the angular resolution of the MUSER-I array between the two frequencies is less than 1.5 percent. Hence we apply both frequencies for the phase calibration. Since both signals come from the same calibrator position, the location differences of any source between the target images at the two frequencies should measure the original position variations corresponding to the different frequencies; before the phase calibration, these differences had an RMS value $\Delta R$ of 0.727, in units of the angular resolution. The present method is then used to calibrate the solar radio images at these two frequencies. The solar radio model of \citet{Tan+2015} has been employed to account for the frequency-dependent radii. The reference radio source at different times is tracked, expressed in terms of the distance of its location from the solar center, i.e., its radius on the solar disk, $R$, and displayed in Fig.~{\ref{track}} with triangle and star symbols denoting 1.7125 GHz and 1.6875 GHz, respectively. After the calibration, the RMS value $\Delta R$ of the location differences of this reference source between the two frequencies changes slightly to 0.696, in units of the angular resolution. The dashed and solid lines in Fig.~{\ref{track}} show the corresponding trajectories of the reference radio source based on the theoretical calculation described in the previous section. 
To evaluate the effect of integration time and time cadence on the final results, we calculate the variations of the restored phase-center relative errors based on images with different integration times and different time cadences, respectively. The corresponding results are shown in Fig.~\ref{sim2}. The results in Fig.~\ref{sim2} (a) show that the relative errors fluctuate between $-0.2\%$ and $0.3\%$ when the integration time changes from 100 ms to 2000 ms. Similar results are achieved, as shown in Fig.~\ref{sim2} (b), when the time interval varies from a few minutes to 20 minutes. In general, the relative errors are within 0.5 percent, which indicates that neither the integration time nor the time cadence influences the restored result significantly. This allows flexibility in selecting the appropriate observational intervals and integration time whenever the reference source is stable. MUSER-II observational data for the quiet sun on July 5, 2016 were also calibrated, using another geosynchronous satellite operating at around 4 GHz as a calibrator. There were no radio bursts that day. The radio source observed by MUSER-II near the east limb of the solar disk was selected as the reference source in this case. The same iteration approach as in the earlier applications was used, and the solar images were thus recovered, with the effects caused by deviation from the phase tracking centre eliminated by the current method. The MUSER-II synthesized image from 03:31 UT to 05:28 UT at 4.1875 GHz on July 5, 2016 after the position calibration is shown in Fig.~\ref{MUSER2}. For comparison, the SDO/AIA EUV images at 171~\AA\ (b), 193~\AA\ (c), 304~\AA\ (e) and 131~\AA\ (f), and the NoRH radio image at 17 GHz (d) around 04:30 UT are also shown in Fig.~\ref{MUSER2}. It can be seen that the observed features are in close agreement. Therefore, the results obtained by the proposed method in both simulations and with realistic observational data are satisfactory. 
They also demonstrate the desired performance of MUSER. \section{Discussions and Conclusions} \label{section5} A general method has been proposed to calibrate solar image position errors arising from calibrator offsets from the phase tracking center. For example, when geosynchronous satellites are used in MUSER calibrations, the present method can effectively resolve the problems of satellite deviation from the nominal phase reference position or phase tracking center. However, the currently available frequencies from satellites do not cover the full frequency bands used for MUSER observations. For imaging at other frequencies, we need to seek strong radio sources or intense radio bursts on the sun as calibrator sources \citep{Wang+2013}. As to the amplitude calibration, the working frequency of geosynchronous satellites is usually narrow-band, whereas each MUSER frequency channel has a bandwidth of 25~MHz, i.e., much wider than the artificial signal. Therefore, it is not possible to use observations of artificial geosynchronous satellites for the amplitude calibration of MUSER visibilities. Instead, the standard techniques \citep{Bastian1989,Wang+2013,Mei+2017,Thompson+2017}, including self-calibration, have been employed to calibrate the relative amplitude of MUSER images. Meanwhile, larger antennas for the MUSER calibration system have been under construction. In the next couple of years, two 20-meter 400 MHz - 2 GHz antennas and one 16-meter 2 GHz - 15 GHz antenna with a He-cooled receiver will be incorporated into the MUSER arrays for calibrations \citep{Yan+2021}. As described in the previous section, there was a C5.6 class flare on November 22, 2015 that peaked at 05:38 UT. For an impulsive radio burst on the sun, if we assume the radio burst originated from a compact area at its onset, we may treat the source as a $\delta$-function for phase calibration. During the observation of the sun, the phase center is always the center of the solar disk. 
Now we choose a source that deviates from the phase center as the calibrator. Then we can apply the present method to eliminate the deviation from the phase tracking center, or the image center, with the help of the observational data during a quiet period not severely affected by the flare. The model for the frequency-dependent solar radius of \citet{Tan+2015} is adopted in this method for calculating theoretical trajectories for frequencies from 0.4 GHz to 15 GHz. Fig.~\ref{MUSER3} shows the restored multi-frequency images from MUSER-I at 1.26 GHz, 1.46 GHz, 1.66 GHz, 1.86 GHz and 1.96 GHz and the comparison with observations at other wavelengths, such as EUV images from SDO/AIA and the radio image from the Nobeyama Radioheliograph at 17 GHz. These results indicate that MUSER observations are both reliable and significant in revealing solar atmospheric observational features. The goal of this research is to demonstrate that the proposed new position calibration model and solution technique are reliable. The interpretation of the restored MUSER radio images will be presented elsewhere (e.g., \citealt{Chen+2019,Zhang+2021}). In summary, the conclusions are as follows. 1. A mathematical formula describing the phase calibration problem in aperture synthesis imaging arising from a calibrator position offset from the phase tracking center has been established. According to the aperture synthesis principle, the phase tracking center is the calibrator position and it is at the origin of the sky radio image plane. If a calibrator offset (with either a known or unknown value) from the phase tracking center is employed in the phase calibration, this offset will be transferred to the sky radio images. 
Then it is shown, for the first time, that the observed dirty image of the sky radio intensity distribution can be formulated explicitly as a convolution product between a shifted sky radio image with unknown deviation and a {blurring function, as expressed in equation~({\ref{eq:sunshift}}). This blurring function has a modulus of unity and approaches a $\delta$-function when the deviation reduces to zero. Therefore, the shifted sky radio image is merely modulated by the introduced blurring function, and it becomes the correct sky radio image as the deviation goes to zero. The newly derived mathematical formula can also be applied to other synthesis imaging analyses.} 2. The corresponding position calibration procedure has been proposed to determine the calibrator offset from the phase tracking center, based on the above-mentioned formula, by investigating the offset of the positions of radio images over a certain period of time. This is achieved by selecting a stable radio source in the field of view as a reference spot. Then, the reference source position should follow its original geometric relationship with respect to the non-deviated origin during the observation interval; e.g., a stable spot on the sun will vary its position solely due to the solar rotation, while a radio source in the sky map will simply keep its position unchanged. This constitutes the criterion for the proposed optimization model of the new position calibration procedure, e.g., equation~(\ref{eq:model}) for the solar observations. {Simulation tests show that the proposed method can effectively eliminate errors due to known or unknown calibrator offsets from the phase tracking centre to small fractions of the corresponding angular resolution, under a variety of conditions with different noise levels and sampling configurations}. This demonstrates that the proposed new position calibration method is valid and practical. 3. 
MUSER observational data have been treated by the proposed method, and the calibrated results are robust under different integration times and cadences. When the restored MUSER radio images are compared with other solar observations, the mutual co-alignments agree in exhibiting the observed features in these images, supporting the calibration and MUSER's desired performance. A scientific discussion of the MUSER observations, however, will be given elsewhere. The present study contributes to MUSER calibration, and the future update of the MUSER calibration system will enable MUSER to be used in a broader range of solar and heliospheric physics applications. \section*{Acknowledgements} This work was supported by NSFC grants (11790301, 11790305, 11773043, U2031134, 12003049), and the National Key R\&D Program of China (2021YFA1600500, 2021YFA1600503, 2018YFA0404602). MUSER was supported by the National Major Scientific Research Facility Program of China under grant number ZDYZ2009-3. The MUSER calibration system is a part of the Chinese Meridian Project funded by China's National Development and Reform Commission. The HMI/SDO magnetogram, AIA/SDO and NoRH images are obtained from their respective web sites, which are sincerely acknowledged. NoRH was operated by ICCON from 2015 to 2020. We thank the MUSER Team for MUSER operation. Mr. Tulsi Thapa is acknowledged for modifying the English. Dr. Tim Bastian is greatly appreciated for reading and improving the English of the manuscript. \label{lastpage} \bibliographystyle{raa} \bibliography{arViv}
Title: Fast neutrino cooling in the accreting neutron star MXB 1659-29
Abstract: Modelling of crust heating and cooling across multiple accretion outbursts of the low mass X-ray binary MXB 1659-29 indicates that the neutrino luminosity of the neutron star core is consistent with direct Urca reactions occurring in $\sim 1\%$ of the core volume. We investigate this scenario with neutron star models that include a detailed equation of state parametrized by the slope of the nuclear symmetry energy $L$, and a range of neutron and proton superfluid gaps. We find that the predicted neutron star mass depends sensitively on $L$ and the assumed gaps. We discuss which combinations of superfluid gaps reproduce the inferred neutrino luminosity. Larger values of $L\gtrsim 80\ {\rm MeV}$ require superfluidity to suppress dUrca reactions in low mass neutron stars, i.e. that the proton or neutron gap is sufficiently strong and extends to high enough density. However, the largest gaps give masses near the maximum mass, making it difficult to accommodate colder neutron stars. We consider models with reduced dUrca normalization as an approximation of alternative, less efficient, fast cooling processes in exotic cores. We find solutions with a larger emitting volume, providing a more natural explanation for the observed neutrino luminosity, provided the fast cooling process is within a factor of $\sim 1000$ of dUrca. The heat capacities of our models span the range from fully-paired to fully-unpaired nucleons, meaning that long-term observations of core cooling could distinguish between models. We discuss the impact of future constraints on neutron star mass, radius and the density dependence of the symmetry energy.
https://export.arxiv.org/pdf/2208.04262
\title{Fast neutrino cooling in the accreting neutron star MXB 1659-29} \correspondingauthor{Melissa Mendes} \email{melissa.mendessilva@mail.mcgill.ca} \author[0000-0002-5250-0723]{Melissa Mendes} \affiliation{Department of Physics and McGill Space Institute, McGill University, 3600 rue University, Montreal, QC, H3A 2T8, Canada} \author[0000-0002-7371-3656]{Farrukh J. Fattoyev} \affiliation{Department of Physics, Manhattan College, Riverdale, New York 10471, USA} \author[0000-0002-6335-0169]{Andrew Cumming} \affiliation{Department of Physics and McGill Space Institute, McGill University, 3600 rue University, Montreal, QC, H3A 2T8, Canada} \author[0000-0001-7351-9338]{Charles Gale} \affiliation{Department of Physics, McGill University, 3600 rue University, Montreal, QC, H3A 2T8, Canada} \keywords{Accretion (14), Low-mass x-ray binaries (939), Neutron star cores (1107), Neutron stars (1108), X-ray transient sources (1852)} \section{Introduction} Neutron stars in transiently-accreting low mass X-ray binaries (LMXBs) are remarkable laboratories to probe the physics of dense matter (for a review see \citealt{Wijnands2017}). While accreting, the neutron star crust is heated by accretion-induced nuclear reactions, with most of the energy flowing inwards to the neutron star core. After accretion ends, in the quiescent phase, the neutron star surface temperature can be measured, which in turn gives an estimate of the neutron star core temperature. The quiescent temperatures and luminosities of LMXB neutron stars have been used to infer the efficiency of neutrino emission processes and the superfluid state of their cores \citep{Yakovlev2004ARAA,Heinke2007,Levenfish2007,Heinke2009,Wijnands2013,Beznogov2015a,Beznogov2015b,Han2017,Potekhin2019}, the thermal conductivity and superfluidity of the neutron star crust \citep{Shternin2007,Brown2009,PageReddy2012}, and the heat capacity of the core \citep{Brownetal2017,Degenaar2021}. 
There is growing evidence that a number of neutron stars have highly-efficient fast neutrino processes in their cores, based on very low quiescent temperatures \citep{Heinke2007,Heinke2009,Han2017, Potekhin2019}. Characterized by a local emissivity $\propto T^6$, where $T$ is the local temperature, the most efficient fast neutrino process is the direct Urca (\durca) process in nucleonic matter, in which neutrinos are produced by the reactions \citep{Lattimer1991} \begin{equation}\label{eq:\durca_reactions} n \rightarrow p+e^{-}+\bar{\nu}_{e}, \quad p+e^{-} \rightarrow n+\nu_{e}. \end{equation} Momentum conservation in these reactions means that they can proceed only if the proton fraction is sufficiently large ($Y_p\gtrsim 1/9$). This means that determining that the \durca\ process is happening in a neutron star core directly constrains the proton fraction and therefore the value of the nuclear symmetry energy at high density (see discussion in \citealt{Lattimer2018}). It also means that the central density, and therefore the mass, of the neutron star is large enough to achieve the critical value of $Y_p$. With more exotic compositions, such as a meson condensate or quark matter, other fast processes are possible, with the same $\propto T^6$ scaling but a typically smaller normalization (see \citealt{Yakovlev2001} for a review). The core temperature is therefore a very interesting quantity that can be constrained by observations and depends on the unknown composition of neutron star cores. The LMXB \src\ has shown multiple accretion outbursts in which the neutron star crust has been observed to thermally relax in quiescence \citep{Parikh2019}. This is unusual: most LMXBs with multiple outbursts have short, frequent outbursts that do not significantly heat the crust at depth, while LMXBs with large outbursts that heat the crust significantly have typically shown only one outburst, because the recurrence time between outbursts is very long. 
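The threshold $Y_p\gtrsim 1/9$ quoted above follows from a standard Fermi-momentum argument, sketched here for $npe$ matter (muons, present at higher densities, raise the threshold slightly):

```latex
\begin{align*}
  & k_{F,n} \le k_{F,p} + k_{F,e}
    && \text{(momentum conservation at the Fermi surfaces)} \\
  & k_{F,e} = k_{F,p}
    && \text{(charge neutrality, } n_e = n_p\text{)} \\
  \Rightarrow\ & k_{F,n} \le 2\,k_{F,p}
    \;\Rightarrow\; n_n \le 8\,n_p
    && (k_F \propto n^{1/3}) \\
  \Rightarrow\ & Y_p = \frac{n_p}{n_n + n_p} \ge \frac{1}{9}.
\end{align*}
```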
\src\ went into outburst in the late 1970s, in 2001 and again in 2015, with each outburst lasting $\sim 2$ years \citep{Cackett2006,Parikh2019}. By modelling the sequence of outbursts in \src\ and the relaxation of the surface temperature in quiescence, \cite{Brownetal2018} showed that the core must be cooled by a fast neutrino process. The rate at which the neutron star cools after each outburst depends on the temperature of the neutron star crust, which is set by the temperature of the core, itself determined by the balance between accretion heating and neutrino cooling. The composition of the neutron star envelope is also constrained by the shape of the cooling curve, reducing the uncertainty in mapping the surface temperature to the core temperature \citep{Brownetal2017} (for \src, the inferred core temperature is $\approx 2.5\times 10^7\ {\rm K}$). Slow neutrino processes, \textit{i.e.}~less efficient processes such as modified Urca with emissivity $\propto T^8$, cannot provide the required neutrino luminosity at this core temperature. Computing an average value of $L_\nu/T^6$ for the core and comparing with the expected value for \durca, \cite{Brownetal2018} found that the neutrino cooling luminosity of \src\ is consistent with \durca\ occurring over $\sim 1$\% of the core volume. Such a low effective emitting volume gives an interesting constraint on the neutron star in \src. Possible explanations are that (1) the neutron star has a mass within a few percent of the mass at which \durca\ reactions become possible, (2) superfluidity suppresses the \durca\ reactions throughout most of the available volume, reducing the overall luminosity, or (3) a less efficient fast process operates throughout the core. \cite{Brownetal2018} pointed out that these possibilities could in principle be distinguished because they make different predictions for the cooling rate of the core in quiescence: the heat capacity of the core depends on its composition and the extent of superfluidity.
In this paper, we explore the different scenarios for the neutrino emission of \src\ using detailed neutron star models that include a variety of superfluid gap models. We use an equation of state parametrized by the slope of the nuclear symmetry energy $L$, since this parameter determines the proton fraction at high density and therefore the onset density for \durca\ reactions. We describe the input microphysics that we use in \S \ref{sec:formalism}. In \S \ref{sec:results}, we investigate the values of $L$ and neutron star mass that are required to reproduce the inferred neutrino luminosity of \src\ under different assumptions about the superfluid gap. In \S \ref{sec:heatcapacity}, we calculate the heat capacity of our models since this is a potential observable that can distinguish the different scenarios. We end with a discussion of our results in \S \ref{sec:discussion}, including the possibility of less efficient fast emission processes that might occur if exotic particles are present in the core, and potential future observational and experimental constraints. \section{Details of the calculation and input microphysics} \label{sec:formalism} \subsection{Equation of state and neutron star structure} The family of equations of state (EOS) we use to describe the core of neutron stars is based on the relativistic mean-field (RMF) model FSUGold2 \citep{FSUGoldReference}. This EOS was one of the first to reproduce not only ground-state properties of finite nuclei, but also the maximum observed neutron star mass at the time. A detailed description of the EOS family we generate is available in \cite{Mendes2021}; we summarize its main characteristics here for convenience. We consider a minimal model in which neutrons, protons, electrons and muons are the only particle constituents of neutron stars.
Consider the expansion of the total energy per nucleon $E(\rho, \alpha)$ at zero temperature, \begin{equation} E(\rho, \alpha) = E_{\rm SNM}(\rho) + E_{\rm sym}(\rho)\, \alpha^{2}+\mathcal{O}\left(\alpha^{4}\right), \label{EnNucleon} \end{equation} where $\rho = \rho_{\rm n} + \rho_{\rm p}$ is the total baryon number density and $\alpha=(\rho_{\rm n} - \rho_{\rm p})/\rho$ is the neutron-proton asymmetry parameter. Next, consider the Taylor series \citep{Piekarewicz:2008nh} that characterize both the energy per nucleon in symmetric nuclear matter (SNM), $E_{\rm SNM}(\rho)$, and the symmetry energy, $E_{\rm sym}(\rho)$, near the nuclear saturation density $\rho_{\rm sat} = 0.15$ fm$^{-3}$, \begin{equation} \begin{split} & E_{\rm SNM}(\rho) = B + \frac{1}{2} K x^2 + \cdots \,\\ & E_{\rm sym}(\rho) = J + L x+ \frac{1}{2} K_{\rm sym} x^2 + \cdots\,\\ & \mathrm{where}\ x = (\rho - \rho_{\rm sat})/(3\rho_{\rm sat}). \end{split} \end{equation} The members of our EOS family share identical SNM bulk parameters, such as the energy per nucleon $B=-16.26$ MeV and incompressibility coefficient $K=237.7$ MeV. The symmetry energy $\tilde{J}$ at a subsaturation density of $\rho =0.1$ fm$^{-3}$ is also fixed, to ensure that binding energies and charge radii of finite nuclei are well reproduced. Their different slopes of the symmetry energy $L$, varying from $47$ MeV to $112.7$ MeV, provide distinct neutron skin thicknesses and neutron star properties, such as radii, all consistent with the current experimental and observational data \citep{Adhikari:2021phr, Riley:2019yda, Miller:2019cac, Abbott:PRL2017, Abbott:2018exr}. In particular, increasing $L$ leads to a larger symmetry energy at supersaturation densities, which increases the proton fraction $Y_{\rm p} = \rho_{\rm p}/\rho$ in the innermost region of the star. In addition, increasing $L$ leads to larger neutron star radii. The mass-radius curves for different $L$ values are shown in Figure~\ref{fig:mxr}.
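As a quick numerical illustration of how $L$ controls the stiffness of $E_{\rm sym}$, the truncated expansions above can be evaluated directly. This is only a sketch: $B$, $K$ and the range of $L$ match the text, but the values of $J$ and $K_{\rm sym}$ below are illustrative placeholders, not the calibrated FSUGold2-family parameters.

```python
RHO_SAT = 0.15  # fm^-3, nuclear saturation density

def x(rho):
    """Expansion variable x = (rho - rho_sat) / (3 rho_sat)."""
    return (rho - RHO_SAT) / (3.0 * RHO_SAT)

def e_snm(rho, B=-16.26, K=237.7):
    """Energy per nucleon of SNM in MeV, truncated at order x^2."""
    return B + 0.5 * K * x(rho)**2

def e_sym(rho, J, L, K_sym=0.0):
    """Symmetry energy in MeV; J and K_sym are illustrative, not fitted."""
    return J + L * x(rho) + 0.5 * K_sym * x(rho)**2

# Soft (L = 47 MeV) vs stiff (L = 112.7 MeV) member at twice saturation density:
soft  = e_sym(2 * RHO_SAT, J=32.0, L=47.0)
stiff = e_sym(2 * RHO_SAT, J=32.0, L=112.7)
print(soft, stiff)  # larger L -> larger E_sym at supersaturation -> larger Y_p
```

A larger $E_{\rm sym}$ at supersaturation density penalizes neutron-proton asymmetry more strongly, which is why the stiff members of the family reach the \durca\ proton-fraction threshold at lower density.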
The outer crust is described by the EOS from \cite{Baym1971} and the inner crust, by the EOS from \cite{Negele1973}. We assume a non-rotating spherically-symmetric neutron star, and solve the Tolman-Oppenheimer-Volkoff (TOV) equations \begin{equation}\label{TOV} \begin{split} \frac{\mathrm{d} P}{\mathrm{d} r} &=-\frac{\mathcal{E}(r)}{c^2}\frac{G m(r)}{r^2} \left[1 + \frac{P(r)}{\mathcal{E}(r)}\right] \\ & \qquad \quad \left[1+\frac{4 \pi r^{3} P(r)}{m(r) c^2}\right] \left[1-\frac{2Gm(r)}{c^2r}\right]^{-1} \\ \frac{\mathrm{d} m}{\mathrm{d} r} &= 4 \pi r^2 \frac{\mathcal{E}(r)}{c^2}\\ \frac{\mathrm{d} \phi}{\mathrm{d} r} &=- \frac{1}{\mathcal{E}(r)+P(r)} \frac{\mathrm{d} P}{\mathrm{d} r}, \end{split} \end{equation} where $m(r)$ is the mass within radius $r$, $P(r)$ the pressure, $\mathcal{E}(r)$ the energy density, $\phi(r)$ the gravitational potential, and $G$ the gravitational constant. At the surface of the star, $r = R$ and $m=M$, the pressure vanishes, $P(R) = 0$, and $\phi(R)=\frac{1}{2} \ln (1-{2 G M}/{c^2R})$. \subsection{Neutrino emissivity} Since our goal is to reproduce the inferred neutrino luminosity of \src, we consider the fast cooling process of \durca\ only. If there are no muons participating, \durca\ cooling takes place through the reactions in equation (\ref{eq:\durca_reactions}), which conserve momentum only if \begin{equation}\label{eq:triangle} k_{F n} \leq k_{F p}+k_{F e}, \end{equation} which implies that for \durca\ reactions the proton fraction must exceed a threshold value \begin{equation}\label{condition} Y_{p} \geq Y_{\mathrm{p\, \durca}} = \left[Y_{n}^{1/3}-Y_{e}^{1/3}\right]^{3}, \end{equation} as explained in \cite{Yakovlev2001}. Here, $k_{\mathrm{Fx}}$ is the Fermi momentum of species $\mathrm{x}$, a function of its number density $\rho_\mathrm{x}$ and hence of the particle fraction $Y_{\mathrm{x}} = \rho_\mathrm{x}/\rho$.
When muons participate, additional \durca\ reactions take place \begin{equation}\label{muonsincluded} n \rightarrow p+\mu^{-}+\bar{\nu}_{\mu}, \quad p+\mu^{-} \rightarrow n+\nu_{\mu}, \end{equation} which have their own threshold, given by replacing $k_{F e}$ with $k_{F \mu}$ in equation (\ref{eq:triangle}). As well as introducing an additional neutrino-producing reaction, muons also modify the electron fraction $Y_e = Y_p - Y_\mu$ which enters equation (\ref{condition}), and so modify the electron \durca\ channel even before the threshold for muon \durca\ is reached. The threshold proton fraction corresponds to a threshold density $\rho_\mathrm{\durca}$ for \durca\ processes to occur. Only in regions of the core with $\rho>\rho_\mathrm{\durca}$ is \durca\ allowed, and this also implies that only neutron stars massive enough to have a central density $\rho_c>\rho_\mathrm{\durca}$ can cool by \durca. The \durca\ neutrino luminosity, as seen by an observer at infinity, is given by \begin{equation}\label{integration} L_{\nu_{\mathrm{\durca}}}^{\infty} = \int_0^{R_\mathrm{core}}\frac{4 \pi r^2 \epsilon_{0}^{\mathrm{\durca}, \mathrm{total}} e^{2 \, \phi (r)}}{\left(1-2 G m(r)/c^ 2 r\right)^{1/2}}\, dr, \end{equation} where the integral is over the neutron star core and the local neutrino emissivity is \citep{Yakovlev2001} \begin{equation} \label{emissivity} \begin{split} \epsilon_{0}^{\mathrm{\durca}, e^{-}} &=\frac{457 \pi}{10080} G_{\mathrm{F}}^{2} \cos ^{2} \theta_{\mathrm{C}}\left(1+3 g_{\mathrm{A}}^{2}\right)\\ & \qquad \times \frac{m_{n}^{*} m_{\mathrm{p}}^{*} m_{e}}{\hbar^{10} c^{3}}\left(k_{\mathrm{B}} T\right)^{6} \Theta_{\mathrm{npe}}\\ \epsilon_{0}^{\mathrm{\durca}, \mu^{-}} &= \epsilon_{0}^{\mathrm{\durca},e^{-}} \Theta_{\mathrm{np\mu}}\\ \epsilon_{0}^{\mathrm{\durca}, \mathrm{total}} &= \epsilon_{0}^{\mathrm{\durca}, e^{-}}+ \epsilon_{0}^{\mathrm{\durca},\mu^{-}},\\ \end{split} \end{equation} where we use the weak coupling constant $G_{\mathrm{F}}=
1.436\times 10^{-62}\, \mathrm{J} \mathrm{m}^3 $ and Cabibbo angle $\sin \theta_{\mathrm{C}} = 0.228$ \citep{Zyla2020PDG}, and in-medium axial-vector coupling constant from \cite{Carter:2002}, $g_{\mathrm{A}}=-1.2601\,(1-\rho/(4.15\,(\rho_{0}+\rho)))$. We account for in-medium interactions through $m_{x}^{*}$, which represents the Landau effective mass of species $x$ defined as $m^{*}=\sqrt{m_{\rm D}^2+(\hbar k_{\rm F}/c)^2}$ with $m_{\rm D}$ being the nuclear interaction-dependent Dirac effective mass (see, e.g. \citealt{Chen:2007ih}) and $k_{\rm F}$ is the Fermi wave-number of the nucleon. $\Theta_{\mathrm{npe(\mu)}}$ is a step function that restricts direct Urca reactions to the regions with $\rho>\rho_\mathrm{\durca}$. \subsection{Treatment of superfluidity and superconductivity} \label{sec:intro_superfluidity} Including superfluidity and superconductivity in the neutron star core model changes the neutrino luminosity, since the local neutrino emissivity is exponentially reduced by a reduction factor $R_L$, giving $\epsilon^{\mathrm{\durca}}=\epsilon_{0}^{\mathrm{\durca}} R_L$ \citep{Yakovlev2001}. We consider both proton singlet (PS) (${ }^{1} \mathrm{S}_{0}$) and neutron triplet (NT) $({ }^{3} \mathrm{P}_{2},\,m_{J}=0)$ pairing in the core.
For proton singlets, the reduction factor is given by \citep{Yakovlev2001} \begin{equation} \label{reduced1} \begin{split} R_{\mathrm{L}} &=\left[0.2312+\sqrt{(0.7688)^{2}+(0.1438 ~ v_{\mathrm{S}})^{2}}\right]^{5.5}\\ & \qquad \times \exp \left(3.427-\sqrt{(3.427)^{2}+v_{\mathrm{S}}^{2}}\right), \\ v_{\mathrm{S}} &=\sqrt{1-\tau}\left(1.456-\frac{0.157}{\sqrt{\tau}}+\frac{1.764}{\tau}\right), \end{split} \end{equation} while for neutron triplets, \begin{equation} \label{reduced2} \begin{split} R_{\mathrm{L}} &=\left[0.2546+\sqrt{(0.7454)^{2}+(0.1284 \, v_{\mathrm{T}})^{2}}\right]^{5}\\ & \qquad \times \exp \left(2.701-\sqrt{(2.701)^{2}+v_{\mathrm{T}}^{2}}\right),\\ v_{\mathrm{T}} &=\sqrt{1-\tau}\left(0.7893+\frac{1.188}{\tau}\right). \end{split} \end{equation} Here $\tau = T/T_c$, where $T_c$ is the critical temperature, calculated according to each gap model parametrization, and $T$ is the local temperature at radius $r$, $T(r) = \tilde{T} \exp(-\phi(r))$, with $\tilde{T}$ the temperature of the isothermal core as measured at infinity. When proton singlet superconductivity and neutron triplet superfluidity are simultaneously active, we use the approximation \begin{equation} R_{L} \sim \min \left(R_{L,\mathrm{singlet}}, R_{L,\mathrm{triplet}}\right) \end{equation} which is valid in the limit of strong superfluidity \citep{Yakovlev1994}. A more accurate calculation could be performed with combinations of the asymptotic expressions described in \cite{Yakovlev1994}, however, since $T_c \gg T$ except in a narrow range of density, they would only provide minor corrections. 
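To make the strength of this suppression concrete, here is a minimal sketch evaluating the reduction factors of equations (\ref{reduced1}) and (\ref{reduced2}) at a few values of $\tau = T/T_c$:

```python
import math

def r_l_singlet(tau):
    """Durca reduction factor R_L for 1S0 pairing (eq. [reduced1]), 0 < tau <= 1."""
    v = math.sqrt(1.0 - tau) * (1.456 - 0.157 / math.sqrt(tau) + 1.764 / tau)
    return ((0.2312 + math.sqrt(0.7688**2 + (0.1438 * v)**2))**5.5
            * math.exp(3.427 - math.sqrt(3.427**2 + v**2)))

def r_l_triplet(tau):
    """Durca reduction factor R_L for 3P2 (m_J = 0) pairing (eq. [reduced2])."""
    v = math.sqrt(1.0 - tau) * (0.7893 + 1.188 / tau)
    return ((0.2546 + math.sqrt(0.7454**2 + (0.1284 * v)**2))**5
            * math.exp(2.701 - math.sqrt(2.701**2 + v**2)))

for tau in (0.999, 0.5, 0.1):
    print(tau, r_l_singlet(tau), r_l_triplet(tau))
# R_L -> 1 as T -> T_c, and becomes exponentially small for T << T_c
```

At $\tau=0.1$ the singlet factor is already below $10^{-3}$, which is why a gap with $T_c \gg T$ effectively switches \durca\ off wherever it is open.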
The heat capacity of neutrons and protons is similarly reduced when they are superfluid or superconducting, $\mathrm{C}_{p,n}^{\mathrm{superfluid}} = \mathrm{C}_{p,n}\,R_C$ \citep{Yakovlev:1994}, where, for proton singlets, \begin{equation} \label{reduced3} \begin{split} R_C &=\left[0.4186+\sqrt{(1.007)^{2}+(0.5010 \, u_{\mathrm{S}})^{2}}\right]^{2.5} \\ & \qquad \times \exp \left(1.456-\sqrt{(1.456)^{2}+u_{\mathrm{S}}^{2}}\right),\\ u_{\mathrm{S}} &=\sqrt{1-\tau}(1.456-0.157 / \sqrt{\tau}+1.764 / \tau), \end{split} \end{equation} and for neutron triplets, \begin{equation} \label{reduced4} \begin{split} R_{C} &= \left[0.6893+\sqrt{(0.790)^{2}+(0.03983 \, u_{\mathrm{T}})^{2}}\right]^{2} \\ & \qquad \times \exp \left(1.934-\sqrt{(1.934)^{2}+\frac{u_{\mathrm{T}}^{2}}{16 \pi}}\right),\\ u_{\mathrm{T}} &=\sqrt{1-\tau}(5.596+8.424 / \tau). \end{split} \end{equation} The total heat capacity is given by \begin{equation} \label{heatcap} C^{\mathrm{core}}_{\mathrm{total}} = \int_0^{R_\mathrm{core}}\frac{4 \pi r^2 \sum C_{x}}{\left(1-2 G m(r)/c^ 2 r\right)^{1/2}}\, dr, \end{equation} where $C_{x}$ is the contribution to the local heat capacity from each particle species \citep{Yakovlev:1994}, \begin{equation} C_x=\frac{m_x^*p_{F, x}}{3 \hbar^{3}} k_{B}^{2} T. \end{equation} \subsection{Gap models} To explore a range of different superfluid gap models, we use the analytic fits of \cite{Hoetal2015} (see their eq.~[2] and Table~II) to nine proton singlet (PS) and eight neutron triplet (NT) gap models. The PS gap models are AO \citep{Amundsen1985singlet}, BCLL \citep{Baldo1992}, BS \citep{Baldo2007}, CCDK \citep{Chen1993}, CCYms/CCYps \citep{Chao1972}, EEHO \citep{Elgaroy1996b}, EEHOr \citep{Elgaroy1996a}, and T \citep{Takatsuka1973}. The NT gap models are AO \citep{Amundsen1985triplet}, BEEHS \citep{Baldo1998}, EEHO \citep{Elgaroy1996NT}, EEHOr \citep{Elgaroy1996a}, SYHHP \citep{Shternin2011}, T \citep{Takatsuka1972}, and TTav/TToa \citep{Takatsuka2004}. 
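For orientation, the local heat capacity $C_x$ above can be evaluated per unit volume. The sketch below does this for ultrarelativistic electrons, for which the Landau effective mass reduces to $m_e^* \simeq \hbar k_{F,e}/c$; the electron density used is an illustrative round number, not taken from our EOS models.

```python
import math

HBAR = 1.0546e-34   # J s
KB   = 1.381e-23    # J / K
C    = 2.998e8      # m / s

def c_v_relativistic(n, T):
    """Heat capacity per unit volume (J K^-1 m^-3) of a degenerate,
    ultrarelativistic Fermi gas: C = m* p_F k_B^2 T / (3 hbar^3),
    with m* = hbar k_F / c."""
    k_f = (3.0 * math.pi**2 * n)**(1.0 / 3.0)  # Fermi wavenumber, m^-1
    m_star = HBAR * k_f / C
    p_f = HBAR * k_f
    return m_star * p_f * KB**2 * T / (3.0 * HBAR**3)

n_e = 0.01 * 1e45   # electrons per m^3 (0.01 fm^-3, an illustrative core value)
c_e = c_v_relativistic(n_e, T=2.5e7)
print(c_e)          # J K^-1 m^-3
```

Multiplying a value of this order by a core volume of order $10^{12}\ {\rm m}^3$ gives a lepton heat capacity of order $10^{35}$--$10^{36}$ erg K$^{-1}$, the floor that remains when nucleon pairing strongly suppresses the baryonic contribution (\S \ref{sec:heatcapacity}).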
For additional references and details of the fits, see \cite{Hoetal2015}. Unless it is clear from the context, we will prefix the gap name by either NT or PS to make clear whether we are referring to a neutron triplet or proton singlet model, e.g.~NT~AO refers to the neutron triplet AO model. Figure \ref{fig:parametrizations} shows these gap models as a function of $k_F$. To help characterize the region of the star which is superfluid, we define the opening $\rho_\mathrm{opening}$ and closing $\rho_{\rm closing}$ densities to correspond to the densities where the local temperature equals the critical temperature, $T_c=T(r)$. The opening and closing densities depend on the EOS (which maps $k_F$ to density for each species), so we compute them for each neutron star model. Suppression of the \durca\ emissivity will occur in the density range $\rho_{\rm opening}\lesssim\rho\lesssim\rho_{\rm closing}$. Our list of gap models covers a range of amplitudes and widths of the critical temperatures for nuclear pairings in neutron star cores, as well as early (low density) and late (high density) openings and closings, and so will allow us to explore the range of expected behavior. \section{Models of \src\ with dUrca neutrino cooling} \label{sec:results} In this section, we attempt to reproduce the inferred neutrino luminosity of \src\ with neutron star models in which the neutrino emission is by the nucleonic \durca\ process, and considering different gap models for neutron and proton superfluidity. We start by considering models without superfluidity (\S \ref{sec:noSF}), then consider the effect of neutron and proton pairing separately (\S \ref{sec:SF}) and in combination (\S \ref{PsNt}). Since the mass of the neutron star in \src\ is unconstrained, we take the approach of calculating the range of allowed masses that are consistent with the inferred neutrino luminosity $L_\nu^\infty$ of \src. 
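The opening and closing densities can be located numerically once $T_c(k_F)$ is specified. The sketch below uses a bell-shaped $T_c(k_F)$ of the same general four-parameter form as the fits of \cite{Hoetal2015} and finds, by bisection, where it crosses an assumed local temperature; all parameter values here are illustrative, not one of the tabulated gap models.

```python
import math

# Illustrative gap parameters: amplitude (K) and wavenumber scales (fm^-1).
T0, K0, K1, K2, K3 = 1.0e9, 1.0, 0.5, 3.0, 0.5

def t_crit(k):
    """Bell-shaped critical temperature vs Fermi wavenumber; zero outside (K0, K2)."""
    if not (K0 < k < K2):
        return 0.0
    return (T0 * (k - K0)**2 / ((k - K0)**2 + K1**2)
               * (k - K2)**2 / ((k - K2)**2 + K3**2))

def bisect(f, a, b, tol=1e-10):
    """Find a root of f in [a, b], assuming f(a) and f(b) have opposite signs."""
    fa = f(a)
    for _ in range(200):
        m = 0.5 * (a + b)
        fm = f(m)
        if fm == 0.0 or (b - a) < tol:
            return m
        if (fm > 0) == (fa > 0):
            a, fa = m, fm
        else:
            b = m
    return 0.5 * (a + b)

T_LOCAL = 3.0e7                      # K, assumed local core temperature
k_peak = 0.5 * (K0 + K2)             # T_c peaks mid-interval for these parameters
k_open  = bisect(lambda k: t_crit(k) - T_LOCAL, K0 + 1e-9, k_peak)
k_close = bisect(lambda k: t_crit(k) - T_LOCAL, k_peak, K2 - 1e-9)
print(k_open, k_close)               # pairing is active for k_open < k_F < k_close
```

In the paper the analogous step maps these wavenumbers to $\rho_{\rm opening}$ and $\rho_{\rm closing}$ through the EOS of each neutron star model.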
We take the central value inferred by \cite{Brownetal2018}, $L_{\nu}^{\infty} = 3.9 \times 10^{34} \, \mathrm{erg}/\mathrm{s}$, and also consider upper and lower values $L_{\nu}^{\infty} = 2 \times 10^{34} \, \mathrm{erg}/\mathrm{s}$ and $L_{\nu}^{\infty} = 7.8 \times 10^{34} \, \mathrm{erg}/\mathrm{s}$ which correspond approximately to the $1$-$\sigma$ range found by \cite{Brownetal2018} (see their Fig.~2). We also set the core temperature at infinity to $\tilde{T}=2.5\times 10^7\ {\rm K}$ \citep{Brownetal2018} (note that $\tilde{T}$ is independent of radial coordinate in an isothermal star). \subsection{Models with no pairing}\label{sec:noSF} We first consider models without nuclear pairing, so that neutrino cooling occurs from all parts of the neutron star core where the density exceeds the \durca\ threshold density. This situation is indicated schematically in Fig.~\ref{illustration}a. The top panel of Figure \ref{fig:MLnopairing} shows the allowed masses for \src, i.e.~the neutron star mass that has the same $L_\nu^\infty$ as \src, as a function of the slope of the symmetry energy $L$. The required mass decreases with $L$, and lies just above the \durca\ threshold mass, shown as a dashed line in Figure \ref{fig:MLnopairing}. At the lowest values of $L$, we find $M \approx 1.8\ M_{\odot}$, whereas for $L\gtrsim 80\ {\rm MeV}$, the mass falls below $1.0\ M_{\odot}$. This result is a consequence of the high efficiency of \durca\ processes which mean that only a small volume of the core is needed to supply the required luminosity. The bottom panel of Figure \ref{fig:MLnopairing} shows the volume fraction of the core involved in \durca\ reactions, that is, the percentage of core volume above the \durca\ threshold. For stars with the inferred luminosity, the \durca\ volume fraction is around $1$--$4 \%$, similar to the estimate of \cite{Brownetal2018}. Note that our solutions span a wide range of allowed neutron star masses. 
A common assumption is that only massive neutron stars can cool by \durca\ reactions, but we see that for the EOS used here the threshold mass is small for high $L$ values. For completeness, we show results for low neutron star masses $\mathrm{M} < 1.0 \, \mathrm{M}_{\odot}$ even though these low masses are unlikely for astrophysical neutron stars (e.g.~see \citealt{Ozel2012} for a discussion of the observed neutron star mass distribution). This implies that non-superfluid cores can explain \src\ only if $L\lesssim 80\ {\rm MeV}$ for the EOSs used here. A similar limit on $L$ comes from the fact that some observed neutron stars are inconsistent with fast cooling (e.g.~\citealt{Wijnands2017}), whereas our non-superfluid models predict that all neutron stars would have \durca\ if $L$ were larger than $80\ {\rm MeV}$. However, these conclusions are relaxed when we include nuclear pairing, as we show in the next section. \subsection{Effect of superfluidity}\label{sec:SF} We next include the reduction in neutrino emissivity due to nuclear pairing. By suppressing neutrino emission in regions of the core that are above the \durca\ threshold density, superfluidity can lead to neutrino emission from either a reduced region of the core, or from a shell surrounding the superfluid core. These possibilities are shown schematically in Fig.~\ref{illustration}b and c. We first consider neutron and proton pairing separately to explore the role of each. The results are shown in Figure \ref{fig:newR}, where again we show the allowed neutron star mass as a function of $L$. The effects of nuclear pairing on the allowed masses are substantial, especially for neutron superfluidity. Due to the superfluid suppression of \durca\ emissivity, the effective onset of \durca\ is delayed to higher density and the mass is increased in most cases (compare Fig.~\ref{fig:MLnopairing} and Fig.~\ref{fig:newR}).
In addition, for a given gap model, there is a wider range of inferred masses (a wider color band around the solid lines) than in the no pairing case, because superfluidity smooths the transition to \durca\ emission, which means that $L_\nu^\infty$ increases more slowly with increasing $M$ than in the no pairing case. Even though the NT critical temperatures are lower than PS, as shown in Fig. \ref{fig:parametrizations}, neutron superfluidity has a larger effect on the required masses because the opening and closing densities for NT are more likely to occur in the region where \durca\ reactions are allowed. Hence, in calculating $L_\nu^\infty$, the opening and closing densities and the width in density of a gap model are more important than its amplitude (as noted for example in the study of isolated cooling neutron stars by \citealt{BeloinHan:2018}). The ordering of the PS curves in the upper panel of Figure~\ref{fig:newR} follows the ordering of the gap closing density. For $L \leq 52$ MeV ($M \geq 1.75 \, \mathrm{M}_{\odot}$), all the PS gaps close before the central density reaches the \durca\ threshold and the PS gap results are the same as the no pairing results. Two PS gaps, BS and CCYps, close before the onset of \durca\ for all $L$ and hence give the same results as the no-pairing case. The other gaps predict increasing mass as the gap closing density increases: in order of increasing density, these are AO, BCLL, (CCYms,EEHOr), (T,EEHO), and CCDK. The pairs (CCYms, EEHOr) and (T,EEHO) have very similar gap closing densities (see Fig.~\ref{fig:parametrizations}) and therefore give very similar allowed mass ranges. So we see that the role of the PS gap is to delay the effective onset of \durca\ to higher density (from $\rho_\mathrm{\durca}$ to $\rho_{\rm closing}$) and therefore higher masses. The NT gaps are more complicated because they have different orderings of $\rho_{\rm opening}$, $\rho_{\rm closing}$, relative to $\rho_\mathrm{\durca}$.
EEHOr has $\rho_{\rm closing}<\rho_\mathrm{\durca}$ for all $L$ and so gives the same results as the no pairing case. Apart from the gap SYHHP, which we discuss below, the curves again increase in mass following the ordering of the closing densities: (T,EEHO), TTav, BEEHS, TToa, AO, where again we bracket together T and EEHO which have similar closing densities and give similar mass constraints. Because the neutron gaps close at a much higher density than the proton gaps, the NT results for gaps that have $\rho_\mathrm{closing}>\rho_\mathrm{\durca}>\rho_\mathrm{opening}$ give larger neutron star masses than the PS gaps: allowed masses are $\gtrsim 1.65$--$1.8\ M_\odot$, depending on $L$. The gap model NT SYHHP is an interesting case. The distinct shape of this curve in Figure \ref{fig:newR} is a direct consequence of the shape of this particular gap, which is narrow and peaks at higher density than the other NT gaps in Figure \ref{fig:parametrizations} (this gap model is a phenomenological model developed to fit the observed cooling of Cas A, see \citealt{Shternin2011}). In Figure~\ref{nt_syhhp}, we show the regions of $M$ and $L$ where \durca\ reactions are allowed somewhere in the neutron star core (orange shaded regions) or are suppressed by superfluidity (unshaded region). At large $L \gtrsim 70$ MeV, the NT SYHHP gap opens after the onset of \durca\ reactions ($\rho_\mathrm{opening}>\rho_\mathrm{\durca}$), leading to a range of masses $\lesssim 1.2$--$1.4\ M_\odot$ which cool by \durca\ without any superfluid suppression. At smaller values of $L \lesssim 70$ MeV, the core is superfluid already at the densities where \durca\ reactions are allowed. Masses close to the maximum mass, however, have central densities that exceed the closing density of the gap, $\rho_c>\rho_\mathrm{closing}$, so \durca\ reactions can then proceed.
In Figure \ref{fig:newR}, where we are looking for solutions with a particular value of $L_\nu^\infty$, we see the transition from solutions at high $L$ close to the \durca\ threshold mass to solutions at low $L$ close to the maximum mass. In the first case, the emission is from the core of the star that is not at high enough density to be superfluid; in the second case, the emission is from the core of the star which has a high enough density that the gap has closed. We are able to find a solution that matches \src's neutrino luminosity except for one case, the largest $L=112.7$ MeV in our EOS table with gap model NT AO. In this particular case, even a maximum mass star is not able to reproduce the upper value of $L_\nu^\infty$, although it can reproduce the central value. This result holds when we include proton superfluidity and NT AO neutron superfluidity together (\S \ref{PsNt}); therefore the NT AO gap model at very large values of $L$ is disfavored. The reason for this is the very broad shape of the NT AO gap, as well as its large amplitude (Fig.~\ref{fig:parametrizations} shows that AO, TToa and BEEHS all have roughly the same width but only AO fails to fit the data). The introduction of pairing relaxes the conclusion from the no-pairing models that $L \gtrsim 80 \, \mathrm{MeV}$ requires low neutron star masses that are likely not realizable in nature. Once pairing is included, Figure \ref{fig:newR} shows that the masses are significantly increased for many of the gap models. As mentioned above, the exceptions are the NT gaps EEHOr and SYHHP (which either close before or open after $\rho_\mathrm{\durca}$, respectively), and the four PS gaps BCLL, AO, BS and CCYps (which close before $\rho_\mathrm{\durca}$), which all allow solutions near the \durca\ threshold. \subsection{Combination of neutron and proton pairing} \label{PsNt} We now include both proton and neutron pairing in the core. Three representative cases are shown in Figure \ref{fig:comb1}.
The top panel of Figure \ref{fig:comb1} shows the behavior that we find for most combinations of NT and PS pairings, namely that the neutron superfluid suppression dominates and the effect of proton superconductivity is negligible (a similar conclusion was reached by \citealt{Han2017}): the NT gap model alone produces the same result as the combination of NT and PS (in this case, the solutions are all near the maximum mass for the broad gap model NT AO). The reason that the proton gap does not change the results is that neutron superfluidity is usually active in larger volumes of the core of the neutron star, despite its lower critical temperature, when compared with proton superconductivity gap models. In that case, we obtain the same results as before, with the allowed range of inferred masses again spanning $\approx 1$--$5\%$. There are some pairings of PS and NT gaps, however, for which the choice of the proton pairing gap does change the results. In that case, the results from a model with both PS and NT gaps included can be quite different from those with neutron superfluidity only. Two examples are shown in the middle and bottom panels of Figure \ref{fig:comb1}, both involving the PS CCDK gap which extends to higher density than the other PS gaps (see Fig.~\ref{fig:parametrizations}). In the example in the middle panel, for $L\lesssim 70$ MeV, stars with the inferred luminosity have more of their core volume under $\mathrm{PS} \, \mathrm{CCDK}$ pairing than under $\mathrm{NT} \, \mathrm{T}$ pairing, so their calculated neutrino luminosity versus mass curve reproduces the previously found $\mathrm{PS} \, \mathrm{CCDK}$ curve. For $L \gtrsim 70$ MeV, $\mathrm{NT} \, \mathrm{T}$ dominates instead and the $L_\nu^\infty$--$M$ curve reproduces the $\mathrm{NT} \, \mathrm{T}$ curve.
Note that the width of the calculated mass curve remains narrow, indicating that the range of allowed masses of the star is still small. In the lower panel of Figure \ref{fig:comb1}, we show an example in which the solution transitions between NT-dominated high mass solutions (for $L \leq 80 \, \mathrm{MeV})$ to PS-dominated intermediate mass solutions (for $L > 90 \, \mathrm{MeV})$. The transition is significantly shifted in $L$ compared to the NT calculation alone. Note that, at the transition, the range of inferred masses is considerably larger than before, up to $\approx 12\%$ variation in mass. The reason that the results for PS+NT are different from either PS or NT alone is that there are regions in the star where both proton and neutron superfluidity provide comparable suppression factors rather than one or the other dominating. Figure \ref{rxn} shows a specific example of how the reduction factor $R_L$ (see \S \ref{sec:intro_superfluidity}) varies with density for a star with mass $1.74\ M_\odot$ and for $L=85\ {\rm MeV}$ for this choice of gap models. This shows that the proton and neutron reduction rates become comparable at densities higher than the \durca\ threshold, so that both play a role, suppressing emission over a large fraction of the core. To show the different emission regions inside the star in more detail, Figure \ref{fig:lxr} shows the cumulative neutrino luminosity profile for a particular case from the lower panel of Figure \ref{fig:comb1} ($L=90\ {\rm MeV}$ and $1.6\ M_\odot$). The black curve shows the \durca\ luminosity without any superfluidity; the other curves show how this is suppressed as superfluidity is introduced, either NT only, PS only, or NT+PS. The NT+PS curve follows the NT-only curve for the innermost $\approx 4\ {\rm km}$, showing that the NT-pairing suppression dominates there. That region is within the green shaded area on the plot, corresponding to active neutron triplet pairing. 
At $\approx 5$ km, that gap closes and the proton gap then dominates, represented by the pink shaded area. Its large reduction factor stops the luminosity from accumulating and the curve goes flat, so that essentially all of the luminosity is generated in the innermost part of the core. Note that between $4 \, \mathrm{km}$ and $5 \, \mathrm{km}$ proton reduction rates dominate, even though both nucleon pairings are active. This handover between the regions where neutron and proton pairing dominate is the signature of the transitions seen in the lower panel of Figure \ref{fig:comb1}. The fact that in most cases the neutron gap dominates over the proton gap (as in the top panel of Fig.~\ref{fig:comb1}) means that the number of NT and PS gap model combinations that predict low mass stars $(\mathrm{M} \leq 1.0 \, \mathrm{M}_{\odot})$ at large $L$ is actually small. Examples are PS EEHOr+NT SYHHP or PS CCYms+NT EEHOr, which have a late opening of the NT gap or a weak NT superfluidity, respectively. Most of the nuclear pairing combinations investigated favor intermediate to high masses at large $L$. Furthermore, the range of allowed masses is consistently $\approx 5 \%$ for most cases, so that even though superfluidity can change the density range in which significant \durca\ cooling happens, the emitting volume is always a small fraction of the core volume. \section{The heat capacity of \src} \label{sec:heatcapacity} In the previous sections we have shown that a variety of different models can account for the neutrino luminosity of \src. One way to distinguish these different models is through the heat capacity of the neutron star core, which depends on the degree of superfluidity \citep{Brownetal2018}. In this section, we calculate the total heat capacity of our solutions to quantify the nuclear pairing reductions. In Figure \ref{heatcap1}, we show the total heat capacity of stars that match the neutrino luminosity of \src\ with either no pairing or NT superfluidity only.
The value of heat capacity depends primarily on the neutron star mass required to produce the inferred neutrino luminosity (compare each gap model with the corresponding curves in the lower panel of Fig.~\ref{fig:newR}). In some cases, such as the case with no or weak pairing, where the best-fitting mass decreases with increasing $L$, the heat capacity decreases with $L$. In other cases with strong NT superfluid suppression, the allowed masses are larger and tend to increase with $L$; the heat capacity in those cases increases towards larger $L$. Figure \ref{fig:cnu} shows the heat capacity as a function of the neutrino luminosity (following \citealt{Brownetal2017} and \citealt{Brownetal2018}) for different combinations of PS and NT gaps. Unlike neutrino luminosity, for which only the part of the core above the threshold density contributes, the entire volume of the core contributes to the heat capacity. Therefore the superfluid reduction to heat capacity is always visible, even for weak gap models. In addition, the heat capacities of stars with combined superconductivity and superfluidity are significantly smaller than when only one of these pairings is present. As shown in eqs.~(\ref{reduced3}) and (\ref{reduced4}), the stronger the gap model, that is, the larger its critical temperature compared to the star's temperature, the smaller the star's total heat capacity. The combination of the weakest gap models, $\mathrm{PS} \, \mathrm{BS} + \mathrm{NT} \, \mathrm{EEHOr}$, gives the heat capacity closest to the no-pairing case. The combination of the strongest ones, $\mathrm{PS} \, \mathrm{CCDK} + \mathrm{NT} \, \mathrm{AO}$, approaches the value of heat capacity coming from the leptons only (grey band), i.e.~close to full suppression of the nucleonic contribution to the heat capacity. Other gap models will generate stars with heat capacities between these two limits, as in the example shown, PS CCDK $+$ NT SYHHP.
The high efficiency of the \durca\ process together with the small emitting volumes in the case of \src\ lead to the very shallow gradient of the curves in Figure~\ref{fig:cnu}, i.e.~the heat capacity is quite insensitive to $L_\nu^\infty$ since $L_\nu^\infty$ changes rapidly with neutron star mass. The diagonal lines in Figure \ref{fig:cnu} show the variation in neutron star surface temperature that would be expected over a decade, given the neutrino cooling luminosity $L_\nu$ and the heat capacity $C$. To calculate those curves, we use equation~(24) of \cite{Brownetal2017}, and following their work we assume that the change in core temperature $\tilde{T}$ is related to the change in effective (surface) temperature $T_\mathrm{eff}$ by $\Delta \tilde{T}/\tilde{T} \approx 1.8 \, \Delta T_\mathrm{eff}^{\infty}/T_\mathrm{eff}^{\infty}$. Our results show that, because the different gap models span almost the full range of heat capacity between the unpaired and lepton-only values, constraints on $\Delta T_\mathrm{eff}^{\infty}/T_\mathrm{eff}^{\infty}$ at the percent level can discriminate between different gap models, clearly indicating whether \src\ is strongly or weakly superfluid. This result, valid for all studied EOS, signals that future measurements of cooling in quiescence of \src\ would constrain nuclear pairing in the neutron star core. \section{Discussion} \label{sec:discussion} \subsection{Cooling of \src\ by \durca} We find a range of models that reproduce the inferred neutrino luminosity $L_\nu^\infty$ of \src\ with \durca\ neutrino emission. The predicted neutron star mass depends sensitively on the choice of $L$ and gap model (Fig.~\ref{fig:newR}). 
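The conversion from ($L_\nu$, $C$) to a decade-long surface temperature drift can be sketched as follows. The luminosity and heat capacity values are assumed order-of-magnitude placeholders, and the factor of 1.8 is the core-to-surface conversion quoted above:

```python
# Sketch of the expected surface-temperature drift over a decade, given a
# neutrino luminosity L_nu and total heat capacity C (illustrative values),
# assuming simple cooling dT_core/dt = -L_nu / C and the quoted relation
# dT_core/T_core ~ 1.8 dT_eff/T_eff.

SECONDS_PER_YEAR = 3.156e7

def surface_drift(L_nu, C, T_core, dt_years=10.0):
    """Fractional change in effective temperature T_eff over dt_years."""
    dT_core = L_nu * dt_years * SECONDS_PER_YEAR / C  # core temperature drop [K]
    frac_core = dT_core / T_core                      # fractional core change
    return frac_core / 1.8                            # fractional T_eff change

# Assumed numbers: L_nu and C chosen between the lepton-only and unpaired
# regimes; the result lands at the percent level discussed in the text.
drift = surface_drift(L_nu=3e33, C=2e36, T_core=2.5e7)
print(f"decade drift in T_eff: {100 * drift:.2f} %")
```

Because the drift scales as $L_\nu/C$, a percent-level measurement of $\Delta T_\mathrm{eff}^{\infty}/T_\mathrm{eff}^{\infty}$ pins down the heat capacity once $L_\nu$ is known.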
Since only a small volume of the core undergoing \durca\ reactions is needed to explain the inferred luminosity ($\approx 1$--$4$\% of the core volume), these solutions tend to lie close to either the mass where \durca\ reactions first turn on, i.e.~where the central density first exceeds the \durca\ threshold, or the mass where superfluidity turns off, i.e.~where the central density is large enough that the critical temperature $T_c$ falls below the core temperature, quenching superfluidity and allowing \durca\ reactions to proceed. For our EOS, the \durca\ threshold mass $M_\mathrm{\durca}$ varies from $\approx 1.85\ M_\odot$ at $L\approx 50\ {\rm MeV}$ to $\approx 1.1\ M_\odot$ at $L\approx 80\ {\rm MeV}$. In this range of $L$ it is possible that the mass of \src\ lies near $M_\mathrm{\durca}$, and superfluidity is not needed. However, for $L\gtrsim 80\ {\rm MeV}$, $M_\mathrm{\durca}$ drops below the smallest mass expected for astrophysical neutron stars, \textit{i.e.} it becomes low enough that all observed neutron stars should be cooling by \durca\ if they are not superfluid. In that case, suppression of \durca\ by superfluidity is essential to allow a solution in which \src\ lies just above the mass where \durca\ cooling turns on (as well as providing a range of lower masses where neutron stars are able to cool by slow neutrino processes). We find that neutron pairing plays a much more important role than proton pairing in moving the onset of \durca\ cooling to larger masses. This is because the neutron triplet pairing occurs at a higher density than proton singlet (Fig.~\ref{fig:parametrizations}), and so is most likely the cause of superfluidity in the high density regions where \durca\ operates. Proton gap models that close at high density can play a role, e.g.~the CCDK model shown in Figure \ref{fig:parametrizations}, particularly at intermediate values of $L\sim 60$--$80\ {\rm MeV}$. 
In cases where superfluidity moves the onset of \durca\ cooling to higher masses, the predicted mass for \src\ directly reflects the density at which the gap closes (Fig.~\ref{fig:newR}). We were able to find some combinations of gap models that could not explain \src\ in the case where $L\gtrsim 80\ {\rm MeV}$. Two examples are the combinations PS EEHOr $+$ NT SYHHP and PS CCYms $+$ NT EEHOr. In these cases, the proton gap closes at low density, and the neutron gap is either very weak (in the case of NT EEHOr) or opens at higher density than other models (NT SYHHP), allowing a region of normal matter near the \durca\ threshold, and giving a mass near the threshold mass, i.e.~$\lesssim 1\ M_\odot$. However, in the majority of cases using other gap combinations we found that superfluidity acts effectively to increase the predicted mass to values above $\approx 1.65$--$1.8\ M_\odot$ depending on $L$. In some cases, the superfluid gap extends to high enough density that the predicted mass lies close to the maximum neutron star mass for any value of $L$. Particular examples are the neutron gap models NT AO, TToa, and BEEHS. In one case, with the NT gap AO (the gap with the broadest density range and largest amplitude) and the largest value of $L$ in our EOS table, $L=112.7$ MeV, we were not able to fit the 1$\sigma$ upper limit on $L_\nu^\infty$. Even in cases where we could fit \src\ adequately with a mass close to the maximum mass, such solutions could cause problems in explaining colder sources that require even higher neutrino luminosities, since they do not leave much room to increase the mass and therefore the neutrino luminosity further. In particular, the sources \saxj\ and 1H~1905+00 have quiescent luminosities significantly below \src\ (e.g.~see Fig.~3 of \citealt{Potekhin2019}). Indeed, \cite{Potekhin2019} found that a suppressed triplet pairing was necessary to explain \saxj: their standard model of PS BS + NT BEEHS was not able to produce cold enough stars.
In this sense \src, with its intermediate quiescent luminosity and small \durca-active volume, is an interesting data point to add to \saxj\ and 1H~1905+00 when constraining \durca\ emission (e.g.~the study of \citealt{Han2017}), since a combination of gap models that can match \saxj, for example, may not match \src, and vice versa. Note that while \src\ has been included in studies of the population of accreting transients (e.g.~\citealt{Potekhin2019}), the atmosphere is often assumed to have a heavy composition (Fe), which \cite{Brownetal2018} found to be inconsistent with the cooling data. \subsection{Modelling uncertainties} A concern for models that explain the neutrino luminosity of \src\ with \durca\ is that the allowed range of neutron star masses is rather small. For the great majority of nuclear pairing combinations, we found a mass range of $\lesssim 5\%$. For some specific EOS and gap model combinations, the mass range can reach $10\%$; however, these cases are limited to intermediate $L$, where the dominant pairing transitions from neutron to proton (e.g.~SYHHP gap in Fig.~\ref{fig:newR} lower panel). Given the small number of sources available, it is perhaps unlikely that the mass of \src\ happens to lie within this small range, although quantifying this probability would require detailed modelling of the evolutionary history. The range of allowed masses is set by the uncertainty in the derived neutrino luminosity and core temperature. Based on the modelling of \cite{Brownetal2018}, we have taken a range of about a factor of two in $L_\nu^\infty$. Since the emitting volume is small, this translates to a narrow range of allowed neutron star mass for any given choice of $L$ and gap model. \cite{Brownetal2018} derived the uncertainty in the inferred neutrino luminosity of \src\ by marginalizing over the other parameters of their model, such as the accretion rate and crust impurity parameter.
However, relaxing some of the assumptions of that model would broaden the allowed range of neutrino luminosity. The normalization of the accretion rate and the corresponding deep crustal heating rate were included in the marginalization procedure of \cite{Brownetal2018}, accounting for uncertainties in deep crustal heating --- the predicted energy injection ranges from $\approx 0.5$--$2\ {\rm MeV}$ per accreted nucleon in different models \citep{Haensel2008,Fantina2018,Gusakov2021}. However, \cite{Brownetal2018} assumed that the average accretion rate over the last 30 years of observations of \src\ is representative of the longer term average accretion rate (on the timescale to reach thermal equilibrium of the core, hundreds of years for a cold core). Relaxing this assumption would allow for a wider range of accretion rates (e.g.~\citealt{Potekhin2021}). In addition, the marginalization carried out by \cite{Brownetal2018} did not include distance uncertainties, although they estimated that this would change the inferred \durca\ prefactor $L_\nu/\tilde{T}^6$ by less than a factor of two. An additional source of uncertainty that we have not included here is in the core temperature. Following \cite{Brownetal2018}, we have taken a fixed value $\tilde{T}=2.5\times 10^7\ {\rm K}$. In fact, for a given measured neutron star surface temperature $T^\infty_{\rm eff}$, the inferred core temperature depends on the envelope model and the assumed neutron star mass and radius. The value of core temperature we assume here is for a neutron star surface temperature $T_{\rm eff}^\infty=55\ {\rm eV}$ and a neutron star with mass $M=1.4\ M_\odot$, radius $R=12\ {\rm km}$, and a pure He envelope; including the mass and radius dependence would result in variations of up to $\approx 20$\% in $\tilde{T}$ \citep{Brownetal2017}, or factors of a few in the emitting volume.
In addition, the spectral models used to fit the data and obtain the measured $T_{\rm eff}^\infty$ assume fixed values of mass and radius. Moreover, although \cite{Brownetal2017} found that an iron envelope could not reproduce the shape of the observed cooling curve for \src, \cite{Potekhin2021} were able to reproduce the cooling with a carbon envelope for a large enough accretion rate, which could be explored further. Although we do not expect it to change our conclusion that the neutron star mass must lie close to the \durca\ onset mass (as allowed by superfluidity), a fully self-consistent study that updates the spectral model and envelope model for each choice of $M$ and $R$ (or $L$) would be worthwhile to quantify the emitting volume more accurately. \subsection{Alternative fast emission processes} The small emitting volume for \durca\ provides motivation for considering a less efficient fast process that would result in a larger emission volume and therefore might give a more natural explanation for the observations of \src. Less efficient fast cooling could arise from an uncertainty in the \durca\ prefactor, or from reactions involving other particles such as hyperons or $\Delta$ resonances, or from quark matter. To consistently implement \durca\ cooling from exotic particles would require that we update our EOS to account for the different particle content, and also adjust the \durca\ threshold density accordingly, which is beyond the scope of the current paper. However, as a first check on how our results might change with a less efficient process, Figure \ref{QM_triplet} shows the results of an illustrative calculation in which we keep the same EOS and \durca\ threshold but scale the nucleonic \durca\ emissivity by the constant factor $f_\mathrm{red}$ everywhere in the star.
Since a more exotic cooling process likely has a higher threshold density than \durca, the neutrino luminosity we calculate here can be viewed as an approximate upper limit on the emissivity for that case. The results in Figure \ref{QM_triplet} show that as $f_{\rm red}$ is made smaller, the predicted neutron star mass increases. For example, for the weak superfluid gap EEHOr (which closes before the onset of \durca\ reactions) at $L=112.7\ {\rm MeV}$, the mass rises from $<1\ M_\odot$ (near the \durca\ threshold mass) towards the maximum neutron star mass as $f_{\rm red}$ is reduced to values below $0.01$. This is to be expected, since a less efficient process requires a larger emitting volume to generate enough neutrino luminosity. However, our calculations show that, because of redshift corrections and increasing central densities near the maximum mass, the volume fraction increase necessary to reproduce the star's luminosity is not exactly inversely proportional to the reduction factor. This can be seen in Figure~\ref{QM_triplet}, where, for example, for $L=112.7$ MeV, solutions can be found for $f_{\rm red}$ as small as $2\times 10^{-3}$ even though the volume fraction for $f_{\rm red}=1$ is $2$\%. It is also noticeable that the allowed range of neutron star masses reproducing the inferred luminosity of \src\ is larger in many cases for $f_{\rm red}<1$, making the model more likely to reproduce the observed \src\ temperature; however, this is not always the case. Depending on the nuclear pairing model considered and the star's volume fraction subject to it, the range of masses can also be reduced. In general, we find that we can reproduce the inferred luminosity of \src\ for any combination of proton and neutron pairing and $L$ as long as $f_\mathrm{red}$ is larger than $\sim 3\times 10^{-3}$\,--\,$3\times 10^{-2}$.
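The mismatch between naive volume scaling and the general-relativistic result can be seen with a two-line estimate. Treating the luminosity as simply proportional to $f_{\rm red}$ times the emitting volume (a flat-space assumption, using the $\approx 2\%$ volume fraction quoted above for $f_{\rm red}=1$):

```python
def naive_volume_fraction(f_red, v1=0.02):
    """Flat-space estimate: L ∝ f_red × V, so the required volume
    fraction scales as v1 / f_red (v1 ≈ 2% at f_red = 1)."""
    return v1 / f_red

# At f_red = 2e-3 this naive estimate asks for 10x the core volume,
# yet solutions exist there: redshift corrections and the rise of the
# central density near the maximum mass make up the difference.
```

The naive estimate exceeds unity precisely where solutions are still found, which is why the inverse proportionality quoted above breaks down.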
In principle this constrains alternative fast neutrino emission mechanisms, e.g.~from pions or kaons, which could be suppressed relative to nucleonic \durca\ by a factor of 1000 or more \citep{Yakovlev2001}. This suggests that it would be interesting to further explore models with alternative fast processes that incorporate consistent equations of state and \durca\ thresholds. \subsection{Future observational and experimental constraints} Our calculations of the neutron star total heat capacity, combined with its inferred neutrino luminosity, have shown that a future measurement of surface temperature variation over a long time interval could help discriminate between core nuclear pairing models. Figure~\ref{fig:cnu} shows that our models span the full range of heat capacity, from close to the minimal heat capacity where only leptons contribute, to the larger values where the nucleons in the core are unpaired. A precise value for that temperature variation, at the few-percent level, would exclude strong or weak combinations of pairing models, helping to determine the state of matter in the neutron star core. Achieving these observations requires sensitive X-ray observations over many years, and also requires that the source remain in quiescence for this long. \src\ is promising for this, with a mean outburst rate of about one every 14 years so far \citep{Maccarone2022}. We have taken $M$ and $L$ to be free parameters, but our results show that constraints on $L$ and $M$ from future experiments and observations would strongly restrict cooling models. For example, if it were shown experimentally that $L\gtrsim 80\ {\rm MeV}$, certain gap model combinations would immediately be ruled out for \src\ in the context of our EOS, i.e.~we need the gap to close at high enough density that the transition to \durca\ is delayed. The mass of the neutron star in \src\ is currently unconstrained.
\cite{Ponti2018} discuss the possibility of measuring the neutron star mass in \src\ using X-ray spectroscopy of the inner regions of the accretion disk. They find that a mass measurement with an uncertainty of about 5\% may be possible with next generation X-ray telescopes such as Athena (e.g.~\citealt{Nandra2013}). Another possibility is to use spectral fitting of the neutron star thermal spectrum, either in quiescence or during Type I X-ray bursts, to infer constraints on mass and radius, although these methods currently have significant systematic uncertainties \citep{Ozel2016}. The thermal relaxation of the neutron star crust after accretion outbursts also depends on $M$, primarily through its effect on the crust thickness \citep{Brown2009}. In combination with a determination of the neutron star radius, this could lead to tighter constraints on the mass. Comparing the radius range $\approx 11.5$--$13\ {\rm km}$ recently inferred by \cite{Raaijmakers2021} from a variety of astrophysical data, including from NICER \citep{Riley2021}, with Figure~14 of \cite{Brown2009} suggests $M\lesssim 1.6\ M_\odot$ for \src. For the EOS studied here, this would require $L\gtrsim 60\ {\rm MeV}$ and pairing strong enough to delay the onset of \durca\ to this mass (see Figs.~\ref{fig:newR} and \ref{fig:comb1}). There are various experimental and astrophysical constraints on the value of the slope of the symmetry energy, as summarized in \cite{Li:2013ola}. While several experimental results point toward a smaller value of the slope, in the range of $40$ to $60$ MeV \citep{Lattimer:2012nd, Drischler:2020hwi}, recent experimental measurements of the neutron skin thickness of $^{208}$Pb \citep{Adhikari:2021phr} imply that $L$ can be much larger, $L = 106 \pm 37$ MeV \citep{Reed:2021nqk}. On the other hand, the neutron skin of $^{48}$Ca was recently measured to be very small, suggesting an $L$ value much smaller than all previous constraints \citep{Zhang:2022bni}.
The lower limit of $L=47$ MeV chosen in this study is a characteristic of the FSUGold2 parametrization, below which a self-consistent solution cannot be found. Our exploration suggests that $L < 47$ MeV would push the \durca\ threshold even closer to the central density of the maximum-mass neutron star, giving similar results to $L=47$ MeV. While challenging, there are future prospects for a more precise electroweak determination of the neutron skin at the future Mainz Energy-recovery Superconducting Accelerator (MESA) \citep{Becker:2018ggl} that should allow $L$ to be constrained more stringently. From the astrophysical side, the prospects of measuring the radius of a neutron star and its tidal deformability, both of which are very sensitive to the value of $L$, have never been better. NICER aims to measure the radii of neutron stars with known masses at the $3\%$ level, which should significantly constrain the value of $L$ \citep{Miller:2016kae}. Moreover, future gravitational wave data from binary neutron star mergers should give a strong constraint on the tidal deformability, which in turn will constrain $L$ \citep{Fattoyev:2017jql}. \section*{Acknowledgements} We thank David Blaschke, Sangyong Jeon, J\'{e}r\^{o}me Margueron and Adriana Raduta for useful discussions and comments, and Ed Brown, Chuck Horowitz, Dany Page, and Sanjay Reddy for many conversations about transient LMXBs and \src\ in particular. This work is supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC). MM is supported by the Schlumberger Foundation through a Faculty for the Future Fellowship. AC is a member of the Centre de Recherche en Astrophysique du Qu\'{e}bec (CRAQ). FF is supported by the Summer Grant from the Office of the Executive Vice President and Provost of Manhattan College. Computations were made on the Beluga supercomputer at McGill University, managed by Calcul Qu\'{e}bec and Compute Canada.
\software{This work made use of the Python libraries Matplotlib \citep{Hunter2007}, NumPy \citep{harris2020} and SciPy \citep{SciPy2020}.} \bibliographystyle{aasjournal} \bibliography{paper-bib}
Title: The Mars Microphone onboard SuperCam
Abstract: The Mars Microphone is one of the five measurement techniques of SuperCam, an improved version of the ChemCam instrument that has been functioning aboard the Curiosity rover for several years. SuperCam is located on the Rover's Mast Unit, to take advantage of the unique pointing capabilities of the rover's head. In addition to being the first instrument to record sounds on Mars, the SuperCam Microphone can address several original scientific objectives: the study of sound associated with laser impacts on Martian rocks to better understand their mechanical properties, and the improvement of our knowledge of atmospheric phenomena at the surface of Mars: atmospheric turbulence, convective vortices, dust lifting processes and wind interactions with the rover itself. The microphone will also help our understanding of the sound signature of the different movements of the rover: operations of the robotic arm and the mast, driving on the rough floor of Mars, monitoring of the pumps, etc. The SuperCam Microphone was delivered to the SuperCam team in early 2019 and integrated at the Jet Propulsion Laboratory (JPL, Pasadena, CA) with the complete SuperCam instrument. The Mars 2020 mission launched in July 2020 and landed on Mars on February 18, 2021. The mission operations are expected to last until at least August 2023. The microphone is operating perfectly.
https://export.arxiv.org/pdf/2208.01940
\title{The Mars Microphone onboard SuperCam} \titlerunning{SuperCam Microphone} % \author{David Mimoun \and Alexandre Cadu \and Naomi Murdoch \and Baptiste Chide \and Anthony Sournac \and Yann Parot \and Pernelle Bernardi \and P. Pilleri \and Alexander Stott \and Martin Gillier \and Vishnu Sridhar \and Sylvestre Maurice \and Roger Wiens \and the SuperCam team} \institute{D. Mimoun \at Institut Sup\'{e}rieur de l'A\'{e}ronautique et de l'Espace (ISAE-SUPAERO), Universit\'{e} de Toulouse, 31055 Toulouse Cedex 4, France \\ \email{david.mimoun@isae.fr} % \\ A. Cadu \at Institut Sup\'{e}rieur de l'A\'{e}ronautique et de l'Espace (ISAE-SUPAERO), Universit\'{e} de Toulouse, 31055 Toulouse Cedex 4, France \\ \email{alexandre.cadu@isae.fr} % \\ N. Murdoch \at Institut Sup\'{e}rieur de l'A\'{e}ronautique et de l'Espace (ISAE-SUPAERO), Universit\'{e} de Toulouse, 31055 Toulouse Cedex 4, France \\ \email{naomi.murdoch@isae.fr} % \\ B. Chide \at Institut de Recherche En Astrophysique et Planétologie, Toulouse, France \\ \email{baptiste.chide@irap.omp.fr} % \\ A. Sournac \at Institut Sup\'{e}rieur de l'A\'{e}ronautique et de l'Espace (ISAE-SUPAERO), Universit\'{e} de Toulouse, 31055 Toulouse Cedex 4, France \\ \email{Anthony.Sournac@isae.fr} % \\ Y. Parot \at Institut de Recherche En Astrophysique et Planétologie, Toulouse, France \\ \email{yann.parot@irap.omp.fr} % \\ P. Bernardi \at Laboratoire d'Etudes Spatiale et d'Instrumentation en Astrophysique (LESIA), Paris, France % \email{pernelle.bernardi@obspm.fr} % \\ P. Pilleri \at Institut de Recherche En Astrophysique et Planétologie, Toulouse, France \\ \email{paolo.pilleri@irap.omp.fr} % \\ A. Stott \at Institut Sup\'{e}rieur de l'A\'{e}ronautique et de l'Espace (ISAE-SUPAERO), Universit\'{e} de Toulouse, 31055 Toulouse Cedex 4, France \\ \email{Alexander.Stott@isae.fr} % \\ M. 
Gillier \at Institut Sup\'{e}rieur de l'A\'{e}ronautique et de l'Espace (ISAE-SUPAERO), Universit\'{e} de Toulouse, 31055 Toulouse Cedex 4, France \\ \email{Martin.Gillier@isae.fr} % \\ V. Sridhar \at Jet Propulsion Laboratory, 4800 Oak Grove Dr, Pasadena, CA 91109, United States \\ \email{vishnu.sridhar@jpl.nasa.gov} % \and S. Maurice \at Institut de Recherche En Astrophysique et Planétologie, Toulouse, France \\ \email{sylvestre.maurice@irap.omp.eu}% \and R. C. Wiens \at Los Alamos National Laboratories, NM 87544, USA, \\ \email{rwiens@lanl.gov}% } \date{January 2022} \section{The Mars Microphone} In July 2020, NASA launched a rover that landed on Mars in February 2021 and has been operating since then on the surface of Mars, in the remnants of the Jezero crater delta, which may include ancient sedimentary deposits and water-altered materials \citep{mangold2021perseverance}. The Mars 2020 rover, named Perseverance, is dedicated to the study of Mars habitability, to the search for potential traces of ancient life, and to the assessment of the potential of the visited sites to have supported life \citep{farley2020mars}. As with the previous Mars Science Laboratory (MSL) “Curiosity” rover, the mission includes a long-duration science laboratory. The Mars 2020 “Perseverance” rover is capable of making in-situ, multi-criteria evaluations of samples, and of encapsulating them in “sample-return” containers left behind for future sample return missions \citep{muirhead2020mars}. Of course, the assessment of present and past habitability includes multidisciplinary measurements; habitability criteria require a thorough evaluation in various thematic fields, such as biology, climatology, mineralogy, geology and geochemistry. Among the scientific instruments on board “Perseverance”, SuperCam \citep{Wiens2021} provides a rich set of tools to help the M2020 “Perseverance” rover reach these scientific goals.
The SuperCam instrument is an evolution of the successful ChemCam instrument on MSL-“Curiosity” \citep{maurice2012chemcam}. SuperCam is an instrument package capable of four different remote-sensing techniques: Laser-Induced Breakdown Spectroscopy (LIBS), Raman and time-resolved fluorescence (TRF), passive visible and infrared (VISIR) reflectance spectroscopy, and remote micro-imagery (RMI). A fifth technique, sound recording, has been added to complement the LIBS measurements and also to open a new window of measurements on Mars: prior to Mars 2020, no sounds had ever been recorded on the surface of Mars. \subsection{A brief history of planetary microphones} The SuperCam Microphone is by no means the first microphone to have been flown on a space mission, but it is the first to operate successfully on Mars and to record the first sounds of Mars. The idea of having sounds from Mars, and more generally from other worlds, enjoys incredible popularity among the general public. The short history of planetary microphones probably began with the Groza-2 instrument on the Venera 13 and 14 missions to Venus \citep{ksanfomaliti1982acoustic}. A successful attempt to record the `sound' of the Huygens probe entry, descent and landing on Titan was made in 2005. Even though this `sound' was actually reconstructed from accelerometer data recorded by the Huygens Atmospheric Structure Instrument during its descent through the atmosphere of Titan, it has still been a popular success, downloaded thousands of times from the European Space Agency (ESA) website; see e.g. \citep{leighton2004sound}. The original Mars Microphone instrument, funded by the Planetary Society \citep{delory2007development}, was built for the ill-fated NASA Mars Polar Lander (MPL) mission, which lost contact with Earth shortly after its descent to the Martian surface and was never recovered.
However, the worldwide interest in the Mars Microphone project was so intense that immediately following the loss of MPL, an opportunity to fly the microphone experiment was provided by the Centre National d’Etudes Spatiales (CNES), the French Space Agency, on the NetLander mission \citep{dehant2004network} to Mars in 2007. But NetLander was cancelled in 2001, and the Mars Microphone was therefore relocated once again, this time to the MARDI camera on the Phoenix mission \citep{smith2004phoenix}, with the same team providing the microphone. Unfortunately the MARDI camera was not operated during the Phoenix mission, for fear of major electrical interference with a high-priority instrument. Finally, a last proposal was made by the ISAE team to ESA for the (also ill-fated) Schiaparelli descent module. The microphone was planned to be integrated into the DREAMS payload package \citep{esposito2013dreams} as an add-on to other atmospheric science payloads. The microphone goals were to detect, during the short life of the lander on the Martian surface (up to three days in the most optimistic case), atmosphere-related events such as convective vortices, sand saltation noise or any other atmospheric noise. However, ESA expressed reservations about the possible science outcome of the instrument, and it was finally not implemented. \subsection{SuperCam instrument overview} \label{Supercam-description} The SuperCam instrument is an evolution of the successful ChemCam instrument on MSL-Curiosity \citep{maurice2012chemcam}. In addition to the geologic investigation capabilities linked to the Laser Induced Breakdown Spectroscopy, or LIBS, technique (see Section \ref{LIBS-science}), it implements a new Raman biologic spectroscopic analysis, which is coupled to an Infra-Red (IR) spectrometer. Another improvement is the addition of colour to the Remote Micro Imager (RMI), which provides context for the instrument.
The SuperCam package consists of three separate major units: the “Body Unit”, the “Mast Unit” and the “Calibration Targets” (see Figure \ref{fig:SuperCam}). The Mast Unit (MU) consists of a telescope with a focusing stage, a pulsed laser and its associated electronics, an infrared spectrometer, and a color CMOS micro-imager. A new development for SuperCam is the separate optical paths for LIBS (“red line”) and Raman spectroscopy (“green line”), the latter using a frequency-doubled beam. The Body Unit (BU) consists of three spectrometers covering the UV, violet, and visible and near-infrared ranges needed for LIBS. The UV and violet spectrometers are identical to those of ChemCam. The visible spectrometer uses a transmission grating and an intensifier so that it can double as the Raman spectrometer. The intensifier allows the rapid time gating needed to remove the background light so that the weak Raman emission signals can be easily observed. A fiber optic cable, as well as a signal and power cable, connects the Mast and Body Units. In addition, a set of calibration targets (CT) mounted on the Rover enables periodic calibration of the instrument. A complete description of the instrument can be found in \citep{Maurice2021}. The Mast Unit was provided by IRAP (with funding from CNES), while Los Alamos National Laboratories (LANL, US) provided the Body Unit. The IRAP and LANL portions are entirely separate mechanically, greatly simplifying the interface controls as well as development across international boundaries. The University of Valladolid (UVa) in Spain is the lead for the SuperCam on-board Calibration Targets. \section{SuperCam Microphone Science Objectives} \subsection{Science Objectives derived from SuperCam} The SuperCam Microphone goal is to record audio signals on the surface of Mars from both natural and artificial origins.
Contrary to previous attempts to operate a microphone on Mars, which were primarily for outreach purposes, the primary science objective is to support the SuperCam LIBS investigation. LIBS stands for "Laser Induced Breakdown Spectroscopy" and is the key technique that was developed in the framework of the ChemCam experiment \citep{maurice2012chemcam} to analyse, at a distance, the composition of Martian rocks. It uses a powerful laser to ablate rocks and create a plasma: the emitted radiation is then collected by a telescope and its spectrum is analysed (see e.g. Figure \ref{fig:LIBS-sound}). The sound recording of the LIBS laser shots provides a unique opportunity to obtain the properties of Martian rocks and soils, mostly related to the rock hardness \citep{Maurice2021, Chide2019, Murdoch2019}. This objective is directly linked to the primary science of SuperCam; however, the SuperCam Microphone also provides several other scientific opportunities. By providing wind and turbulence measurements, and potentially recording dust devils at a close distance \citep{Chide2021,murdoch2021predicting,murdoch2021ATM}, the SuperCam Microphone will also contribute to the Mars 2020 atmospheric science goals linked to the circulation, weather and climate, the dust cycle and even aeolian processes. From an engineering perspective, the SuperCam Microphone can also enable backup determination of the SuperCam telescope focus \citep{lanza2021expected}, and monitoring of artificial sounds emitted by other payloads (e.g. MOXIE \citep{hecht2021mars}, or Z-CAM \citep{bell2021mars}), or by the rover operations themselves. With reference to the SuperCam science objectives and goals \citep{Maurice2021}, the SuperCam Microphone complements the experiment across several of them. Finally, it is important to recall that this instrument opens a new window in our Mars observation capabilities, adding for the first time the sense of "hearing" to a rover.
The SuperCam Microphone is, therefore, a powerful outreach tool, which draws considerable public attention to planetary science. \subsection{Sound propagation in the Martian Atmosphere} \label{sound_propagation} Among the various reasons given by mission or review boards not to support the selection of microphones on missions to Mars, one comes back very often: many scientists believe that a Mars microphone would record hardly anything, due to the combination of the low atmospheric pressure and the expected attenuation of sound in a carbon dioxide atmosphere. As a matter of fact, pioneering work was done, notably by \citep{williams2001} and \citep{bass2001absorption}, for the Mars Polar Lander mission: sound propagation on Mars is expected to be similar to that in the Earth's stratosphere, with an average atmospheric pressure between 6 and 8 millibars and a mean temperature of about 240 K. These acoustic models predict a frequency-dependent sound speed and an attenuation whose inflection points at a few kHz result from carbon dioxide molecular relaxation processes (see Figure \ref{fig:attenuation_models}): in the cold, carbon dioxide Martian atmosphere, they predict a strong attenuation across the audible frequency range. We have used this model and considered that the acoustic pressure amplitude follows Equation \ref{equ:attenuation_model}: \begin{equation} p(r,f)=p_0\left(\frac{r_0}{r}\right)^{\beta} e^{-\alpha(f)\,r} \label{equ:attenuation_model} \end{equation} where $p$ is the pressure at a distance $r$, $r_0$ is the reference distance, $p_0$ is the pressure at the source, and $f$ is the frequency. The $\beta$ coefficient derives from the geometric attenuation ($\beta=1$ for a spherical wave and $\beta=0$ for a plane wave front). $\alpha(f)$ is a frequency-dependent attenuation coefficient that is graphically represented in Figure \ref{fig:attenuation_models}.
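This attenuation model is straightforward to evaluate numerically. A minimal Python sketch, using a placeholder absorption coefficient $\alpha$ (an assumed order of magnitude for a few-kHz tone; the real, frequency-dependent values must be read from the curves of Figure \ref{fig:attenuation_models}):

```python
import math

def acoustic_pressure(r, p0=1.0, r0=1.0, beta=1.0, alpha=0.0):
    """Acoustic pressure at distance r: geometric spreading (beta=1 for a
    spherical wave, beta=0 for a plane wave) times exponential absorption
    with coefficient alpha [1/m], as in the attenuation equation above."""
    return p0 * (r0 / r) ** beta * math.exp(-alpha * r)

# alpha = 0.1 1/m is a placeholder, not a value from the cited models.
attenuated = [acoustic_pressure(r, alpha=0.1) for r in (1.0, 4.0, 10.0)]
```

With a spherical wave front and any positive $\alpha$, the amplitude falls off faster than $1/r$, which is why the high-frequency audible range is only expected to carry over tens of meters.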
It is predicted that most sounds in the frequency range audible to the human ear ($\sim$20 Hz--20 kHz) will not propagate over more than some tens of meters, particularly in the higher frequency range (see Figure \ref{fig:attenuation_sensitivity}). However, the situation improves at lower frequencies and in the infrasound region ($<$20 Hz). Such low acoustic frequencies, produced, for example, by dust devils (e.g. \citep{DustDevilInfrasoundSignatures}) or bolide impacts (e.g. \citep{williams2001}), could propagate over kilometer ranges. This process has been described e.g. in \citep{Martire2020Infrasounds} in the context of the NASA InSight mission \citep{banerdt2016, banerdt2020}. In order to assess the potential of sound recordings on Mars, and build a science-focused Mars microphone, we have, therefore, chosen a double approach: design an instrument based on these analytical models, and confirm the performance of the instrument with tests in a Martian environment (see Section \ref{Test_Campaigns}). As can be derived from Figure \ref{fig:attenuation_sensitivity}, sound is not expected to propagate significantly at frequencies above 10 kHz, and the maximum distance at which a sound of typical amplitude (e.g. 60 dB) can be recorded is relatively limited. To this end, a nominal sampling frequency of 25 kHz is chosen for the SuperCam Microphone in order to capture most of the audible signal. An optional 100 kHz sampling frequency has been added to allow the optimization of low-pass filters. The start of each laser shot is easily detected through its electromagnetic signature on the recording. The difference between the time of this spike and the beginning of the sound arrival can be used to determine the average speed of sound along the targeted direction, provided that the distance between the SuperCam Mast Unit and the target is known. These distances can be obtained thanks to the 3D model of the environment built from the Z-Cam stereo views.
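The timing method just described reduces to a one-line computation. A minimal sketch, with assumed illustrative numbers (a 4 m target and a sound speed near 240 m/s) rather than flight data:

```python
def sound_speed_from_delay(distance_m, t_spike_s, t_arrival_s):
    """Average sound speed toward the target: the shot time is tagged by its
    electromagnetic spike on the recording, the acoustic arrival is read off
    the waveform, and the distance comes from the Z-Cam-based 3D model."""
    return distance_m / (t_arrival_s - t_spike_s)

# Assumed example: 4 m target, acoustic delay of ~16.7 ms after the spike.
c = sound_speed_from_delay(4.0, 0.0, 4.0 / 240.0)
```

At typical LIBS working distances of a few meters, the millisecond-scale delay is easily resolved at the 25 kHz or 100 kHz sampling rates.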
This method for estimating the speed of sound is described in \citep{chide2020speed}. \subsection{Recording the LIBS measurements} \label{LIBS-science} The first objective of the SuperCam Microphone is to record sounds resulting from the interaction of the laser with the rock targeted by the LIBS technique. LIBS is a chemical analysis technique that uses a short laser pulse to create a micro-plasma on the surface of the sample. The light emitted by the micro-plasma is then analyzed by a spectrometer, providing the spectrum of the sample to be studied. This technique was invented when the laser first appeared in the 1960s. The term LIBS was introduced in the 1980s in reference to the breakdown of the air by laser pulses during the creation of the plasma. This technique, which allows the remote chemical analysis of samples (therefore without contact with potentially dangerous samples, such as radioactive ones), is now experiencing renewed interest due to the availability of lasers delivering more powerful pulses. It is also a perfect tool for remote sensing when transporting samples to a scientific laboratory is challenging. The process by which the laser sparks create a sound is described in Figure \ref{fig:LIBS-sound}. The interest of this technique is two-fold: first of all, it allows remote analysis of the target, without bringing the robotic arm into contact with the mineral to be analyzed; in addition, the first laser impacts vaporize the layer of dust that covers the rock to be analyzed, allowing an analysis of the rock thus uncovered. However, in the process, the structure of the target is lost. Listening to LIBS sparks provides new information on the ablation process that is independent of the LIBS spectrum. In the LIBS literature, the acoustic wave is known to be a product of laser-induced evaporation at high radiation power density on a sample surface: \begin{equation} \Delta P \approx \frac{m_{abl}}{v_{acw}\, r} \end{equation} where $v_{acw}$ and $r$ are the velocity of the acoustic wave front and the distance, respectively. Thus, the intensity of the acoustic signal, acquired as the peak-to-peak amplitude of the acoustic waveform, is proportional to the ablated mass $m_{abl}$, as demonstrated by \citep{chaleard1997correction} for aluminum alloys and by \citep{GRAD1993370} for various ceramics. In our experiments with the SuperCam Microphone, \citep{Murdoch2019} used soil simulant targets to demonstrate that the acoustic signal associated with the plasma formation during the LIBS experiment varies as a function of the target compaction. Then \citep{Chide2019} compared in detail the shot-to-shot evolution of the acoustic energy with the laser-induced crater morphology and plasma emission lines. The chosen targets were a set of geological samples of various origins and hardness; the depth and volume of the craters created by the LIBS impacts were profiled and analysed together with the associated sound recordings. The observable here is the acoustic energy recorded by the microphone. A good proxy of this acoustic energy is the integral of the waveform. The decrease of the acoustic energy as a function of the number of shots is well correlated with the target hardness/density (Figure \ref{fig:LIBS-hardness}). This can be explained by the fact that the acoustic source lies in the hole created by the sparks: the shape and volume of the hole vary as a function of the number of shots, changing the geometric properties of the source. Therefore, listening to laser-induced sparks can complement the LIBS experiment by providing constraints on the hardness/density of the targeted rock. An interesting consequence of the sensitivity of the acoustic signal to target hardness/density is that a break in the shot-to-shot energy evolution can be interpreted as a proxy for the existence of a rock coating at the surface of the target, as described in \citep{lanza2020listening}.
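The shot-to-shot energy observable can be illustrated on synthetic data. The sketch below uses the integral of the squared waveform as the energy proxy (a simple variant of the waveform integral mentioned above) and an assumed exponential amplitude decay mimicking a soft target whose crater deepens with successive shots; neither the decay rate nor the tone is flight data:

```python
import numpy as np

def shot_energy(waveform, fs_hz):
    """Per-shot acoustic-energy proxy: integral of the squared waveform
    over the recording window."""
    w = np.asarray(waveform, dtype=float)
    return np.sum(w ** 2) / fs_hz

# Synthetic illustration: a 4 kHz tone whose amplitude decays shot after shot.
fs = 100_000
t = np.arange(0, 0.01, 1.0 / fs)
energies = [shot_energy(np.exp(-n / 10.0) * np.sin(2 * np.pi * 4e3 * t), fs)
            for n in range(30)]
```

Plotting such per-shot energies against shot number is the kind of decay curve that, in the real data, is compared with crater profiles to constrain target hardness.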
\subsection{Atmospheric Science} The key atmospheric science goal of the microphone is to characterize the Martian atmospheric dynamics at high frequency (much higher than Perseverance’s Mars Environmental Dynamics Analyzer - MEDA - instrument suite, which makes measurements at up to 2 samples per second (sps) for the wind and 1 sps for the pressure \citep{rodriguez2021mars}). The microphone measures high-frequency variations in the dynamic pressure, and such pressure fluctuations are crucial to understanding the Martian climate, including its diurnal and seasonal evolution. Therefore, the atmospheric science investigations of the microphone can be linked to the Mars 2020 mission high-level atmospheric investigations and science goals: \begin{itemize} \item What controls the circulation, weather and climate? \item What controls the dust cycle? \item Aeolian processes and rates \end{itemize} \subsubsection{Measurement of the atmospheric turbulence} Atmospheric turbulence is a key property of the Martian atmosphere. The thinness of the atmosphere, combined with the thermal properties of the sandy surface, paves the way for strong instabilities of the boundary layer, resulting in convective turbulence (e.g. \citep{tillman1994boundary}). Pressure fluctuations and wind gusts (both observable with the microphone) are manifestations of convective motions in the atmosphere. These convective motions are linked to dust motion and lifting, and there is also a link between the observed pressure fluctuations and the atmospheric opacity measurements (e.g. \citep{ullan2017analysis}). The turbulent properties of the Planetary Boundary Layer (PBL) are key to understanding the conditions of the Martian atmosphere (see \citep{spiga2018} and \citep{chatain2021seasonal}).
Following our work on the Mars atmospheric turbulence spectrum \citep{murdoch2016,mimoun2017,temel2022,murdoch2022} based on wind, pressure and infrasound measurements, we plan to use the SuperCam Microphone to extend the measurements of the InSight and Perseverance meteorological suites to higher frequencies. The microphone will allow us to characterise the Martian dynamic pressure fluctuations at high frequency for the first time, giving us information about the spectral content of high-frequency turbulence. In addition, the combination of MEDA and microphone data will allow us to investigate the full energy spectrum of the atmosphere, and its potential variations at hourly, daily and seasonal scales. We should also be able to identify the transition frequency (or frequencies) between the various regimes of the Martian atmosphere at the Jezero site. \subsubsection{Measurement of the wind speed} \label{wind_speed} A first attempt to measure the wind speed with a planetary microphone was made on Venus \citep{ksanfomaliti1983wind}. This measurement is based on the following relationship, as described in \citep{morgan1992investigation}:\\ \begin{equation} P = \rho U V \end{equation} \noindent where $P$ is the sound pressure, $\rho$ the atmospheric density, $U$ the velocity fluctuation and $V$ the mean wind speed. \citep{lorenz2017wind} used the Aarhus Martian wind tunnel to demonstrate the potential for using a microphone as a wind speed sensor on Mars. Specifically, they report that the Root Mean Square (RMS) voltage measured by the microphone varies with the wind speed. Similarly, during the end-to-end tests of the SuperCam Microphone in the same wind tunnel (see Section \ref{Test_Campaigns} and \citep{Murdoch2019,Chide2021}), we have been able to correlate the wind speed with the microphone measurements (Figure \ref{fig:Wind_RMS}), and to quantify the influence of the SuperCam Mast-Unit orientation with respect to the wind direction.
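A wind-speed retrieval of this kind amounts to fitting and inverting an empirical RMS-versus-speed law. The sketch below uses hypothetical calibration pairs (placeholders, not the Aarhus wind-tunnel data) and a power-law fit in log-log space:

```python
import numpy as np

# Hypothetical calibration pairs (wind speed in m/s, microphone RMS in V);
# the real pairs come from the wind-tunnel campaigns.
speeds = np.array([2.0, 4.0, 6.0, 8.0])
rms = np.array([0.02, 0.08, 0.18, 0.32])

# Fit RMS = a * U**b in log-log space, then invert to estimate wind speed.
b, log_a = np.polyfit(np.log(speeds), np.log(rms), 1)
a = np.exp(log_a)

def wind_speed_from_rms(v_rms):
    """Invert the fitted power law to turn a measured RMS into a speed."""
    return (v_rms / a) ** (1.0 / b)
```

The exponent and prefactor obtained in flight would depend on the mast orientation relative to the wind, which is why the post-landing 360° calibration against MEDA is needed.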
As the wind tunnel is not fully representative of the environment on Mars, an in-situ calibration must be performed after landing, with a 360° sampling of the sound measurement while simultaneously measuring the wind speed and direction using MEDA. \subsubsection{Acoustic detection of dust devils and convective vortices} Dust devils are convective vortices, usually of a few meters to tens of meters in diameter, that lift and transport dust particles. This phenomenon has been witnessed on Earth for centuries in arid regions, or more generally in regions where the convective activity of the atmosphere is strong \citep{Balme2006}. Vortices and dust devils are also common on Mars (e.g. \citep{Ferri2003,Murphy2016,perrin2020monitoring}). As described in \citep{lorenz2015dust}, convective vortices have an infrasonic counterpart that has already been detected on Earth. We intend to follow up this research by attempting to record the sound pressure level associated with dust devil encounters. However, \citep{murdoch2021predicting} explain that the dominant vortex signal on the SuperCam Microphone will likely be the pressure fluctuations induced by the wind dynamic pressure (Figure \ref{fig:dust_devils}, left). Such a signal has also been observed with microphones in terrestrial field experiments \citep{lorenz2017wind, murdoch2021predicting}. In any case, in order to measure the vortex winds, or to record any possible infrasound generated by vortices despite the sound attenuation of the Martian atmosphere, a very close-range vortex encounter will be necessary. This will probably need a dedicated measurement campaign (and some luck!). \subsection{Rover and other sound monitoring} The primary objectives of the SuperCam Microphone are related to Mars 2020 science.
However, these objectives can be complemented by several opportunistic secondary objectives of roughly two types. First, "engineering support", with the recording of the various sounds produced by the Mars Perseverance Rover. As in everyday life, listening to the noise from mechanical devices provides a quick diagnosis of the inner workings of a complex piece of engineering. The SuperCam Microphone team has been in touch with various other instrument teams (e.g. for the MOXIE pumps \citep{hecht2021mars}, or Mastcam-Z) to help them record a "reference" noise recording that will serve to investigate any later issue in instrument operation. Table \ref{tab:noise_source} summarizes possible sound recordings. Second, there has long been interest in simply listening to sounds from Mars, in order to engage the public in planetary science. Wind blowing on another planet, rover sound recordings, and even the Mars 2020 helicopter recordings are sounds of great interest for public outreach. \begin{table}[h!]
\label{tab:various-noise} \begin{tabular}{ | m{2.5cm} | m{2.5cm}| m{2.5cm} | m{2.5cm} |} \hline \textbf{Potential Sound Source} & \textbf{Interest} & \textbf{Likelihood of success} & \textbf{Planning before launch} \\ \hline \textbf{MOXIE} & MOXIE pumps behaviour & Good & Coordination with MOXIE Team\\ \hline \textbf{Mastcam-Z} & Motor monitoring during motion & Good & No\\ \hline \textbf{Helicopter} & Helicopter sound and video & Weak (significant sound attenuation) & No \\ \hline \textbf{Drill} & Drilling process surveillance & Good & No\\ \hline \textbf{Rover wheels} & Interaction with the soil & Weak (microphone location not adapted) & No\\ \hline \end{tabular} \caption{Possible sources of noise, with initial likelihood of success} \label{tab:noise_source} \end{table} \subsection{Requirements summary} \subsubsection{Science related requirements} Due to its strong coupling with the SuperCam LIBS investigation of rock hardness, we have designed the microphone to be able to record the pressure wave generated by the LIBS shot, which has a typical maximum amplitude of 5 Pa, at a distance of 4 meters from the rover mast, in a Martian atmosphere. In order to support the SuperCam geological investigation with classical signal processing methods, the signal-to-noise ratio (SNR) of the recording must be greater than 10 dB. The expected bandwidth, as described in Section \ref{LIBS-science}, ranges from 100 Hz to 10 kHz. The optimal sampling frequency is 100 kHz, to satisfy anti-aliasing criteria. A degraded mode allows a 25 kHz sampling frequency in order to save telemetry and increase the recording duration. The amplification gain must be tunable to cope with various signal-to-noise ratios.
This variability in the signal-to-noise ratio comes from the potentially small amplitude of the acoustic signals, linked to the variation in distance of the various targets (the requirement is to measure up to 4 meters), to the variability of the LIBS acoustic counterpart depending on the target material properties, and to the background noise (wind). We have chosen to keep the analog-to-digital (A/D) conversion dynamic range above 60 dB to ensure that the quantization effects are negligible with respect to the other error contributors. As the Martian thermal environment is harsh, the microphone also includes a temperature sensor with a relative accuracy of 1 K in order to compensate for potential deviations in transfer function and noise. Test results (see Section \ref{Test_Campaigns}) have, however, demonstrated that the sensitivity to temperature is negligible with respect to the calibration capabilities, for both the microphone and its proximity electronics. The microphone design must also cope with the Martian wind as a potential source of perturbation with respect to the other signals of interest. The acquired signal must not be saturated by the effects of wind up to 1 $\sigma$ of the speed distribution as stated in the Mars 2020 Environment Requirements Document, which is 6 m/s. The test results (see Section \ref{Test_Campaigns}) have shown that the saturation of the electronics occurs beyond a wind speed of 8 m/s and that the wind frequency content is limited to the lowest frequencies (below 1 kHz), whereas the LIBS signal is mainly above 2 kHz. Usual filtering methods can then be used to separate those signals, provided that saturation is avoided \citep{Murdoch2019}. \subsubsection{Functional and design requirements} The microphone was integrated late in the development of the SuperCam Mast-Unit. It was agreed with the Mars 2020 project that the microphone could be descoped at any time if it became a threat to other investigations.
Fortunately, it survived the many challenges inherent to a space project. This specificity led to a strong design constraint: instead of having its own acquisition chain, the microphone "piggy-backs" on the existing acquisition system of the SuperCam instrument, and uses the same A/D channel as the laser housekeeping. This leads to minor operational constraints: the SuperCam team is not able to use the laser temperature housekeeping together with the microphone acquisition, and the total volume of an acquisition (and therefore its duration) is limited. The total amount of data for one acquisition cannot exceed 8 MB, which is the memory size allocated to one RMI (Remote Micro-Imager) image. This leads to a maximum recording time of 41 or 167 seconds, depending on the chosen sampling frequency. As part of the telemetry reduction effort, a filtering and decimation algorithm has been included in the SuperCam Body Unit (BU) flight software. The decimation factor is fixed to 4, while the 65-coefficient FIR filter ensures a rejection greater than 60 dB beyond 10 kHz. The synchronization of the microphone recordings with the LIBS shots is mandatory, and is managed at a higher level in the SuperCam Mast Unit. Thus, the amount of data necessary for the LIBS analysis can be reduced down to the minimum time window required to match the sound propagation delay and the LIBS signal duration constraints. In order to save additional data, a specific "pulsed" mode has been introduced, in order to record only the LIBS waveforms (as a consequence, this mode cannot be used to study wind turbulence). The microphone is mounted beside the SuperCam telescope input window holder, in order to face the direction of the laser target and detect the acoustic wave without any obstacle. The microphone is, however, omnidirectional at the lowest frequencies, even though it is limited by the surrounding large-scale elements (Remote Warm Electronics Box (RWEB), Mast Unit, rover body, etc.)
once integrated into the rest of the system. The microphone, directly exposed to the Martian environment, will undergo daily temperature variations from -80 °C to 0 °C and is qualified over a range from -135 °C to +60 °C. The electronics, unable to operate properly at such low temperatures, are attached to the Optics Box (OBOX) in order to be protected inside the Remote Warm Electronics Box (RWEB) surrounding the SuperCam instrument, and are qualified over a temperature range from -55 °C to +60 °C. \subsubsection{Requirements flowdown} The summary of science and design requirements has allowed us to set up the requirements flow-down for the microphone (Figure \ref{fig:requirements-flowdown}). The requirements for the SuperCam Microphone also included small mass and size, a rugged, robust design, and resistance to extreme conditions, including radiation exposure and low temperatures. \section{Instrument design} \subsection{Functional description} The microphone is composed of two main parts: the microphone sensor, located outside of the RWEB, and the front-end electronics (FEE), located inside the RWEB. The purpose of the FEE is to amplify the microphone output for acquisition by the housekeeping Analog-to-Digital Converter (ADC) of the instrument Digital Processing Unit (DPU). Figure \ref{fig:microphone_hardware} describes the various parts of the SuperCam Microphone. The microphone sensor is exposed to the external environment, and is embedded in a cylinder required to pass through the RWEB. The microphone and the PT100 used for the temperature housekeeping are sealed in glue potting to avoid any unwanted motion. Figure \ref{fig:microphone_functional} depicts the functional overview of the microphone subsystem. The sound wave (pressure) is converted into a voltage by the microphone finger outside the RWEB. The resulting voltage is amplified by the Front End Electronics (FEE) located inside the RWEB.
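The on-board filtering-and-decimation stage described in the design requirements (decimation by 4 through a 65-coefficient FIR rejecting more than 60 dB beyond 10 kHz) can be sketched with a windowed-sinc stand-in; the actual flight coefficients are not reproduced here, and the Blackman design and 9 kHz cutoff below are assumptions:

```python
import numpy as np

def lowpass_fir(num_taps, cutoff_hz, fs_hz):
    """Windowed-sinc (Blackman) low-pass FIR with unit DC gain; a stand-in
    for the 65-coefficient flight filter, whose coefficients are not public."""
    n = np.arange(num_taps) - (num_taps - 1) / 2
    fc = cutoff_hz / fs_hz                      # normalized cutoff
    h = 2 * fc * np.sinc(2 * fc * n)
    h *= np.blackman(num_taps)
    return h / h.sum()

def gain_at(h, f_hz, fs_hz):
    """Magnitude of the FIR frequency response at a single frequency."""
    w = 2 * np.pi * f_hz / fs_hz
    return abs(np.sum(h * np.exp(-1j * w * np.arange(len(h)))))

taps = lowpass_fir(65, 9e3, 100e3)  # cutoff below the new 12.5 kHz Nyquist
sig = np.sin(2 * np.pi * 1e3 * np.arange(2000) / 100e3)
decimated = np.convolve(sig, taps, mode="same")[::4]   # 100 kHz -> 25 kHz
```

Filtering before keeping every fourth sample is what prevents energy above the new Nyquist frequency from aliasing into the 25 kHz product.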
The FEE integrates a selectable gain (4 values) to match as closely as possible the input voltage range of the 12-bit laser housekeeping ADC, for any signal amplitude. The recording is stored in the DPU memory, before being downloaded by the BU software, possibly compressed, and then stored in a non-volatile memory until the ground data download operations. The electret microphone component (Figure \ref{fig:microphone_accommodation}) is a commercial off-the-shelf (COTS) component. It is the same commercial microphone sensor as used on the Mars Polar Lander and Phoenix missions \citep{delory2007development, smith2004phoenix}. It sits outside the RWEB, at the tip of a 3 cm sandblasted aluminum "finger" (Figure 39). Hence, the echo from the RWEB itself arrives between 222 µs and 277 µs after the direct signal. A temperature sensor is also potted inside the microphone stand. The temperature probe, a PT100 thermo-resistor, is powered by the same FEE, and the resulting voltage is amplified to provide a sufficiently large signal to the 16-bit ADC of the DPU, dedicated to the precise housekeeping data acquisition of the SuperCam Mast Unit. The microphone temperature data are stored together with the other SuperCam MU housekeeping data. A shielded cable connects the microphone and temperature sensors to the FEE inside the OBOX (Figure \ref{fig:microphone_accommodation}). The harness consists of five Manganin wires to limit the thermal leak. When the electret microphone is at -120 °C and the OBOX at -35 °C, the power drawn from the survival heaters by the microphone is only $\sim$30 mW, four times smaller than for standard copper wires of the same diameter. The role of the FEE is to amplify and filter the analog signal and to collect temperature data. A first stage amplifies the signal by a fixed factor of 15, and a second stage applies a selectable gain of 2, 4, 16 or 64. The digitization of the signal is performed by a fast 12-bit ADC on the instrument DPU board.
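The 12-bit conversion quoted above can be checked against the 60 dB dynamic-range requirement stated in the requirements section, since an ideal N-bit converter spans $20\log_{10}(2^N)$ dB between full scale and one LSB:

```python
import math

def adc_dynamic_range_db(n_bits):
    """Ideal dynamic range of an N-bit converter (full scale over one LSB)."""
    return 20 * math.log10(2 ** n_bits)

dr = adc_dynamic_range_db(12)  # ~72.2 dB, above the 60 dB requirement
```

The roughly 12 dB margin leaves room for the real converter's noise and non-linearity while keeping quantization a negligible error contributor.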
To avoid failure propagation, two short-circuit protections are implemented, for the microphone and for the operational amplifiers. We had initially considered implementing a grid to protect the microphone sensor from Martian dust, but the decrease in sensitivity of the overall assembly when the grid was present led us to favour performance and remove the grid from the design. \subsubsection{Microphone component properties} \label{Microphone_component} The SuperCam microphone is a Knowles EK-23132 microphone (Figure \ref{fig:microphone_accommodation}). This is an electret-based sensor, using a charged membrane whose movement due to pressure fluctuations alters the capacitance of the sensor, which is then read as a signal. The EK-23132, originally selected for the NASA Mars Polar Lander (MPL) \citep{delory2007development}, is the lowest noise microphone manufactured by Knowles, and in our experience has a sensitivity superior to that of similar microphones made by other manufacturers. It is designed to be inherently rugged to withstand severe environmental conditions, and has a low vibration and shock sensitivity. The microphone contains a Bipolar Junction Transistor (BJT) that amplifies the charge variations caused by the membrane motion, transforming this signal into a voltage level through an output bypass resistor. The EK-23132 sensitivity is 29.6 mV/Pa at 1 kHz without any stage of amplification. Its dimensions are 5.6 mm $\times$ 3 mm. Figure \ref{fig:EK3132-sensitivity} shows the EK-23132 sensitivity and frequency response. Theoretically, the sensitivity of the microphone scales as the acoustic impedance $\rho c$, where $\rho$ is the density of the gas and $c$ the sound speed. $\rho c$ for Mars is $\sim$0.01 times its value for Earth [Sparrow, 1999]; thus, reducing $\rho$ by a factor of 100 at constant $c$ in air achieves a similar effect.
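The $\rho c$ scaling can be checked with representative near-surface values; the densities and sound speeds below are assumed round numbers for illustration, not measured ones:

```python
# Representative near-surface values (assumed round numbers).
rho_earth, c_earth = 1.2, 340.0     # kg/m^3, m/s
rho_mars, c_mars = 0.020, 240.0     # kg/m^3, m/s

impedance_ratio = (rho_mars * c_mars) / (rho_earth * c_earth)  # ~0.012
```

The result is close to the factor of $\sim$0.01 quoted in the text, and shows that the reduction is dominated by the much lower Martian atmospheric density rather than by the modest change in sound speed.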
\subsubsection{Proximity Electronics} The microphone front-end electronics (FEE) has five functions: \begin{enumerate} \item Amplification of the microphone signal \item Gain switching (2, 4, 16 and 64) \item PT100 temperature resistance-to-voltage conversion \item Microphone power supply (+3.3 V) \item Interface with the OBOX \end{enumerate} The FEE ensures the polarization of the components and the amplification of the microphone signal. Amplification is done in two stages. The first amplification is fixed; the second amplifier has a selectable gain of 2, 4, 16 or 64 and a bandwidth from 100 Hz to 10 kHz. The high gains (16 and 64) have been designed to optimize the SNR with respect to the LIBS sound recordings made in the lab. The low gains (2 and 4) are meant to record environmental noise while avoiding saturation due to, e.g., wind gusts. \begin{center} \begin{table}[h!] \begin{tabularx}{0.8\textwidth} { | >{\raggedright\arraybackslash}X | >{\centering\arraybackslash}X | >{\centering\arraybackslash}X| >{\centering\arraybackslash}X | } \hline \textbf{Gain Number} & \textbf{Measured FEE Gain [V/V]} & \textbf{Total Sensitivity [V/Pa]} & \textbf{Total Sensitivity [LSB/Pa]} \\ \hline \textbf{Gain 1} & 29 & 0.6 & 491\\ \hline \textbf{Gain 2} & 57 & 1.2 & 983\\ \hline \textbf{Gain 3} & 240 & 5.2 & 4262\\ \hline \textbf{Gain 4} & 972 & 21.0& 17213\\ \hline \end{tabularx} \caption{Gains of the microphone electronics and conversion to physical units. This table includes all amplification gains, fixed and tunable.} \label{tab:MIC-GAINS} \end{table} \end{center} The FEE is directly controlled by the SuperCam Digital Processing Unit (DPU). The measured total gain and the total sensitivity at 1 kHz of the microphone flight model and its electronics are presented in Table \ref{tab:MIC-GAINS}. These are the beginning-of-life values but, as mentioned above, we do expect some evolution of the microphone sensitivity over time. The FEE also provides the output of the PT100 temperature probe.
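The total sensitivities of Table \ref{tab:MIC-GAINS} are what translate raw ADC counts into pressure on the ground. A minimal sketch of that conversion (the 2455-count example is simply 5 Pa times the Gain 1 sensitivity, not a flight measurement):

```python
# Total sensitivities from Table MIC-GAINS, in ADC counts (LSB) per pascal.
SENSITIVITY_LSB_PER_PA = {1: 491, 2: 983, 3: 4262, 4: 17213}

def counts_to_pascal(counts, gain):
    """Convert a raw (offset-removed) ADC sample into acoustic pressure [Pa]."""
    return counts / SENSITIVITY_LSB_PER_PA[gain]

p = counts_to_pascal(2455, 1)  # a 5 Pa LIBS wave at Gain 1 spans ~2455 LSB
```

At Gain 4 the same 5 Pa wave would exceed the 12-bit range, which is why the low gains are reserved for the strongest signals.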
The total sensitivity of the temperature measurement is 3.96 mV/K. The output voltage is shifted by a nominal offset of 1.029 V at 0 °C. The whole design has been made keeping in mind the protection of the SuperCam main electronics. Therefore, a protection against failures, shorts, saturation and single event transients (SET) has been implemented at the input of the FEE. The total power consumption of the microphone FEE is about 20 mW, on $\pm$5 V. Figure \ref{fig:microphone_hardware} shows the flight model of the FEE during its final inspection before delivery. The FEE box mechanical design is very straightforward: a simple 0.5 mm aluminium box enclosing the FEE (Figure \ref{fig:microphone_hardware}, left). The FEE box is connected to the chassis ground. \subsubsection{Microphone Assembly Thermal design} The temperature range of the FEE is standard: the parts and processes selected allow the design to cope with the required temperature range. However, the microphone temperature range extends down to -130 °C (the lowest possible temperature on Mars); therefore, a dedicated qualification has been performed for the microphone assembly. Due to the external location of the microphone, the thermal architecture has to minimize the microphone thermal leaks. If a standard strategy using cables and shielding were applied, the conductance of the cable between the microphone assembly and its FEE would be too high to cope with the instrument safe-mode heating power requirements. We have, therefore, chosen to implement a thermal leak reduction for the cable, thanks to "athermous" Manganin\textsuperscript{TM} wires, an alloy that conducts current but has a low thermal conductivity. \subsubsection{Data handling} As required by the instrument design, the SuperCam Microphone is mostly managed by the SuperCam Mast-Unit. The SuperCam Body-Unit is mostly in charge of implementing the data storage, as well as the interface with the Rover.
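The 8 MB data-product limit quoted earlier fixes the maximum recording durations. A quick check, assuming each 12-bit sample is stored on 2 bytes (a storage assumption, not stated in the text), reproduces the quoted 41 s and 167 s figures:

```python
def max_recording_s(fs_hz, budget_bytes=8 * 1024 * 1024, bytes_per_sample=2):
    """Longest recording fitting in the 8 MB data-product budget."""
    return budget_bytes / bytes_per_sample / fs_hz

t_fast = max_recording_s(100_000)  # ~41.9 s at 100 kHz
t_slow = max_recording_s(25_000)   # ~167.8 s at 25 kHz
```

The factor of four between the two durations is simply the ratio of the sampling frequencies, which is also the on-board decimation factor.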
The Mast-Unit controls various subsystems, such as the laser, autofocus, microphone, IRS, RMI, and SDRAM. The microphone driver derives from the laser housekeeping driver, which drives the high-speed ADC (sampling frequency up to 100 kHz) and stores the data. This driver is also called by the laser driver to synchronize the recording with the LIBS shots when needed. The DPU controls the microphone. Due to the SuperCam Body Unit software limitations, the recording is limited to an 8 MB data product size. \subsubsection{Operation Modes} \label{sec:operation} The microphone has three modes of operation (see also Figure \ref{fig:MIC-rec-time}): \begin{enumerate} \item \textbf{MIC standalone}: this mode is mostly dedicated to the study of natural (wind) or artificial (rover, helicopter, ...) sounds. It can also be used in coordination with other payloads such as Mastcam-Z or MOXIE. \item \textbf{MIC + LIBS continuous mode}: used to record sound continuously during a LIBS burst. \item \textbf{MIC + LIBS pulsed mode}: used to sample specifically the LIBS burst. The sound recording starts less than 1 ms before the laser pulse is emitted and runs for 60 ms for each shot (except the last one, kept for laser data). It allows keeping only the data related to the study of the LIBS shots. The timing for the pulsed mode is shown in Figure \ref{fig:PulsedModeTiming}. \end{enumerate} For these 3 modes, we can use the four possible gains (numbered 0 to 3). Due to the storage capability, the recording duration is, at most, 167 s for a sampling frequency of 25 kHz, and 41 s for a sampling frequency of 100 kHz. An optional decimation algorithm is also implemented in the Body-Unit to downsample the data from 100 kHz to 25 kHz. We anticipate that the gain settings will depend mostly on the environment and the target distance. \section{Performance Model} \subsection{Microphone model} The frequency response of the microphone is established with respect to the manufacturer datasheet.
However, to perform an extended analysis of the instrument, it is necessary to consider a larger frequency band (at least from 10 Hz up to 100 kHz). The microphone was therefore modeled with a first-order high-pass filter $(f_{HP, M}=30 \mathrm{~Hz})$ and a second-order low-pass filter $(f_{LP, M}=15 \mathrm{~kHz}$, $h_M = 0.2)$, adjusted in amplitude (microphone sensitivity $S_M$ = 22.4 mV/Pa). The resulting transfer function is described by Equation \ref{transfer-function}: \begin{equation} H_{M}(f)=S_{M} \frac{j\left(\frac{f}{f_{HP, M}}\right)}{1+j\left(\frac{f}{f_{HP, M}}\right)} \frac{1}{1+2jh_M\left(\frac{f}{f_{LP, M}}\right)-\left(\frac{f}{f_{LP, M}}\right)^2} \label{transfer-function} \end{equation} \noindent and compared to the datasheet values. The model is justified at low and high frequencies by the typical sensitivity of similar products of Knowles Electronics. However, without the real damping factor and cutoff frequency, only plausible assumptions have been made so far. The resonance above 10 kHz might have a more complex description than a second-order response, due to the microphone inlet. This is particularly important for the phase modelling and filtering considerations. \subsection{FEE Transfer function models} The electronic transfer function is the response of two first-order band-pass filters, plus the output stage representing the behavior of the level shifter (+2.5 V reference and associated passive components) and a high-impedance input (buffer or oscilloscope). It is modelled by including the effect of the non-perfect operational amplifiers. The comparisons of the theoretical results with the measurements are presented in Figure \ref{fig:FEEtransfer-function-models}. The model is able to reproduce the transfer function amplitude with a precision better than 1 dB. \subsection{Overall System Noise} The electronic noise is measured with an oscilloscope as the root mean square value of the signal when no signal is applied at the input of the circuit.
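The microphone model of Equation \ref{transfer-function} above is easy to evaluate numerically. A sketch using the quoted parameter values (30 Hz high-pass, 15 kHz low-pass with damping 0.2, 22.4 mV/Pa sensitivity):

```python
S_M = 22.4e-3   # microphone sensitivity [V/Pa]
F_HP = 30.0     # first-order high-pass cutoff [Hz]
F_LP = 15e3     # second-order low-pass cutoff [Hz]
H_DAMP = 0.2    # damping factor h_M of the low-pass stage

def mic_response(f):
    """Model transfer function: first-order high-pass times second-order
    low-pass, scaled by the nominal sensitivity."""
    x_hp = 1j * f / F_HP
    x_lp = f / F_LP
    return S_M * (x_hp / (1 + x_hp)) / (1 + 2j * H_DAMP * x_lp - x_lp ** 2)

s_1k = abs(mic_response(1e3))  # mid-band: close to the nominal sensitivity
```

In the mid-band both filter factors are near unity, so the magnitude reduces to the nominal sensitivity; below 30 Hz the response rolls off at first order, and the low damping factor produces the resonance peak near the 15 kHz cutoff.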
Theoretical values computed with the model and measurements are presented in Figure \ref{fig:FEE-Noise}. The order of magnitude of the noise is well reproduced by the model. The remaining small discrepancies are due to the simplifications made in our model. For comparison, an ADC LSB with an input voltage range of 5 V and 12 bits of resolution is about 1.2 mV. Therefore, the lowest gains (Gains 1 and 2, see Table \ref{tab:MIC-GAINS}) can be used to improve the maximum input range without saturating the ADC, whereas the highest gains (Gains 3 and 4, see Table \ref{tab:MIC-GAINS}) are more relevant for the recording of low-amplitude signals and the characterization of background noise. \subsection{Theoretical SNR for LIBS} The complete transfer function amplitudes (sensitivities), including the microphone plus the front-end electronics, are presented in Figure \ref{fig:MIC-SNR}. Each curve corresponds to a defined gain of the second stage of amplification. The LIBS signal amplitude shape used was derived from recordings made in the lab with calibrated microphones. \section{Microphone Verification Process} \label{Test_Campaigns} \subsection{Microphone part verification} Given that the microphone is a commercial off-the-shelf (COTS) component, its integration into a space instrument development required some additional qualifications. A batch of a hundred components was procured from the same manufacturer sub-lot, in order to perform all the necessary tests and integration processes. First, an incoming visual inspection was performed to verify the lot and sub-lot numbers of each part. Second, a serial number marking operation was performed to provide unambiguous traceability of each part. Then an initial performance test was performed, so that potential degradation during subsequent operations could be tracked by comparison. The test was based on noise, bias, and sensitivity measurements in an enclosed acoustic bench dedicated to the whole process. 
The main group of components (71 parts) underwent vibration, pin test, and re-tinning operations, whereas two other groups were dedicated respectively to radiation testing and destructive physical analysis (DPA). After verification of the main group's performance, five parts were extracted for internal X-ray inspection and surface electron spectroscopy of the solder pads. No physical or performance degradation was observed at this step. The 20 best components in terms of acoustic sensitivity were then selected to be integrated in the microphone assembly. Six were allocated to PQV (package qualification and verification) testing and further inspections; the five best assemblies were used for the final qualification thermal test of the models eligible for flight. Contrary to the rest of the SuperCam Mast Unit instrument, the microphone is directly exposed to the Martian environment. As a consequence, six microphone assemblies have undergone a package qualification and verification (PQV) process that consists of 1400 thermal summer cycles (+40 to -105 °C) and 600 winter cycles (+15 to -130 °C). No major potting lift-off was observed that would endanger the cleanliness or the sealing of the RWEB, or the microphone integrity itself. No degradation of the acoustic performance was observed during the whole campaign. The PQV was performed at CNES in a thermal chamber supplied with liquid nitrogen. A sub-lot of five microphone assembly parts eligible for flight was tested for cryogenic temperature qualification at ISAE-SUPAERO in a dedicated thermal vacuum chamber operating with a liquid nitrogen thermal exchanger. The parts were mounted on a copper mechanical interface, as shown in Figure \ref{fig:MIC-CRYO}, and the temperature of each assembly was monitored via its internal PT100 sensor. 
Four cycles were performed from +60°C to -135°C: two of them under cruise pressure conditions (10$^{-5}$ mbar) with the microphone off, and two under Martian conditions (5 to 10 mbar) with a functional test on the dwells. The performance of each assembly was compared to its pre-qualification characteristics with the acoustic test bench. No measurable differences were observed, and the five microphone assemblies were therefore declared qualified for the mission thermal environment. The flight model and its spare were picked from this sub-lot. \subsection{Radiation Susceptibility} Screening and lot qualification were performed on 100 microphone parts. The qualification path included detailed component analyses and radiation sensitivity evaluation, with increasing ionizing doses up to 1750 rad (Si). This maximum dose results in a loss of 10 dB of sensitivity for the full mission (Radiation Design Factor of 2). This loss is acceptable for the nominal mission, but we can expect the SNR performance to slightly decrease with time once on Mars due to the radiation environment. A group of ten microphone parts was dedicated to the total ionizing dose degradation analysis. As the microphone membrane is made of a permanently charged polymer, it is sensitive to radiation degradation. This degradation is probably due to the release of charge carriers in the medium during energy deposition. The radiation test was performed with five components powered at their nominal mission voltage of 3.3 V and five other components not powered. Components were irradiated with gamma rays from a $^{60}$Co source, and removed for testing at different Total Ionizing Dose (TID) values. The sensitivity comparison was established by comparing the components' response to an acoustic sweep signal with the response of a component not exposed to radiation. The excitation signal was generated by a speaker inside the test bench. 
The results of the relative sensitivity decay as a function of TID are presented in Figure \ref{fig:MIC-RAD}. The full mission equivalent dose is presented for radiation design factors (RDF) of 1 and 2. According to the JPL M2020 rover team analyses, the microphone will undergo most of the dose exposure during the cruise to Mars, reaching a worst-case end-of-life loss of sensitivity of -10 dB after 2 years, which is still compliant with the performance requirements. \subsection{Microphone directivity} In order to assess the directivity of the microphone measurement, we have also studied the influence of the Mast Unit orientation on the microphone recording \citep{chide2020premier}. This measurement was performed in an anechoic chamber at the ISAE-SUPAERO/DEOS Department (see the picture of the setup in Figure \ref{fig:MIC-Anechoic}). The directional sensitivity of the microphone has been studied, as well as the impact of the Mast Unit on this sensitivity. Full results are described in \citep{chide2020premier} and are summarized in Figure \ref{fig:fig_Directivity_MIC}. On the left of Figure \ref{fig:fig_Directivity_MIC}, we see that the microphone is omni-directional, as specified in the microphone part datasheet. However, when the microphone is accommodated on the SuperCam Mast Unit, there is a directionality clearly linked to the presence of the Mast Unit. In the forward-facing direction, the sensitivity remains good. \subsection{End-to-end noise characterization and signal validation} Since clean rooms and thermal vacuum chambers are generally noisy environments in terms of acoustic measurement, the opportunities to obtain a complete system noise characterization are rare or non-existent. However, it proved possible in the LESIA facilities during the Infra-Red Spectroscope (IRS) qualification and characterization campaign. 
Figure \ref{fig:FEE-noise-STT} presents the comparison of the noise measurements of the microphone, integrated to the SuperCam Mast Unit, with the theoretical models for all gains, under 6 mbar of nitrogen. The model is a combination of the FEE noise model, the ADC quantification error model, and a microphone component noise model fitted with the data acquired during the previous Martian wind tunnel test campaigns. The close match between model and data validates the complete noise model of the SuperCam Microphone, which will be used to distinguish the intrinsic noise from real acoustic signals in further scientific analyses. During the system thermal test of the rover Perseverance at JPL, a raster of the LIBS laser shooting at the titanium calibration target was recorded with the microphone under a pressure of 6 mbar at -55 °C. The microphone was set to its minimum gain and successfully acquired 30 laser acoustic waveforms. A close-up of one waveform can be seen in Figure \ref{fig:LIBS-shot-STT}. The maximum amplitude is 0.7 V and the root mean square noise level is 10 mV, leading to an operation re-calibration SNR close to 60 dB, which is substantial compared to the expected signal levels from distant Martian rocks (20 to 30 dB). This calibration recording, and those that will be acquired on Mars, will be used to optimize the data processing pipeline currently under development, which aims at automatically extracting the relevant information from the waveforms while removing most of the noise components. This acquisition, just before the rover integration into the interplanetary probe, was the last signal acquired by the microphone on Earth, and it validates the complete system chain of the flight model, from the calibration target up to the rover and the ground system. 
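The noise-model combination mentioned above (FEE noise, ADC quantization error, microphone self-noise) can be sketched as a root-sum-square of independent contributions. This is a sketch, not the flight model: the FEE and microphone values below are illustrative placeholders, and only the ADC term follows from the 5 V / 12-bit figures quoted earlier.

```python
import math

ADC_RANGE_V = 5.0
ADC_BITS = 12
LSB = ADC_RANGE_V / 2**ADC_BITS        # ~1.2 mV, as quoted in the text
adc_noise = LSB / math.sqrt(12)        # ideal quantization noise (RMS)

fee_noise = 0.5e-3    # hypothetical FEE output noise, V RMS (placeholder)
mic_noise = 1.0e-3    # hypothetical microphone self-noise, V RMS (placeholder)

# Independent noise sources add in quadrature (root-sum-square):
total_noise = math.sqrt(fee_noise**2 + mic_noise**2 + adc_noise**2)
print(f"LSB = {LSB * 1e3:.2f} mV, total = {total_noise * 1e3:.2f} mV RMS")
```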
\subsection{Mars-like environment performance validation} In order to validate the end-to-end performance of the instrument, full tests were performed using the Aarhus Wind Tunnel Simulator II (AWTSII) \citep{holstein2014environmental} in Denmark in July 2017. The full details of these tests are reported in \citep{Murdoch2019}. The AWTSII facility is a climatic chamber with a wind tunnel; it has a cylindrical shape, with a 2.1 m inner diameter and a 10 m length. The tests were performed at 6 mbar of $100\%$ $CO_2$. A suite of environment sensors (temperature, pressure, humidity), in addition to an in-situ webcam, was used to monitor the environment. These tests were the final validation that the acoustic signal of a LIBS blast could be recorded by the SuperCam Microphone in a Martian atmosphere environment at a 4 m distance (this was the science requirement). A peak SNR of 21 dB was measured, largely above the instrument SNR requirement of 10 dB. From this experiment we also learned two important points: \begin{enumerate} \item These experiments demonstrated that the wind signal is also recorded, as it adds a high-amplitude, low-frequency component to the acoustic signal. However, this wind signal can be removed with relatively simple filtering, enabling LIBS recording even in windy conditions. \item As a result, the microphone recording provides a good proxy for the wind speed, as already discussed in Section \ref{wind_speed}. \end{enumerate} \section{SuperCam Microphone Operations} \subsection{Operational limitations} There are several limitations in the use of the microphone that shall be taken into account during the ATLO process. \begin{enumerate} \item The maximum temperature that the microphone membrane can sustain without permanent degradation is +63 °C. Beyond that limit, the membrane polymer changes state, and the permanent electric dipole starts decreasing. 
\item It has been observed that at pressures lower than 1 mbar, the microphone membrane resonance is not sufficiently damped, which creates a very strong oscillation when interacting with the electrostatic feedback force. Even though no degradation of performance has been detected after vacuum testing, it is not recommended to power on the microphone under those conditions. \item Membrane protection: in order to optimize its sensitivity, no filter has been implemented to protect the membrane from external objects. \end{enumerate} \subsection{In situ calibration} In order to obtain the best possible performance on Mars, we have to perform in-situ calibrations, which will enable us to estimate the best microphone gain tuning, the microphone sensitivity to wind, as well as the graceful degradation of performance over the mission. \subsubsection{Gain calibration} In order to determine the best gain setting strategy for atmospheric measurements, the MIC-only sequence shall be played at various Mast Unit orientations with respect to the wind. Four sequences (gains 0, 1, 2, 3) shall be played at 100 kHz. For the LIBS recording, LIBS continuous or LIBS pulsed mode shall be played during shots on the calibration targets. Four sequences (gains 0, 1, 2, 3) shall be played at 100 kHz. If possible, LIBS pulsed recording shall be played during every shot on the calibration targets. This will help to monitor any microphone sensitivity change. \subsubsection{Sensor cross validation} In order to evaluate the microphone sensitivity to the wind, we propose to record a stand-alone continuous sequence with Mast rotation in azimuth together with MEDA (one recording for each Mast Unit position). 
\begin{enumerate} \item 15° or 30° steps in azimuth, elevation at 0°, 24 or 12 measurements in total \item One 30 s microphone recording (25 kHz) per step in azimuth \item Microphone measurements in parallel with MEDA to obtain wind speed and orientation \item Ideally, this sequence should not be played for a wind coming from the rear of the rover, as it would be subject to too much interaction with the body of the rover and the RTG. \end{enumerate} \subsubsection{Regular calibration} Each time a target calibration is performed, LIBS continuous recording shall be played during shots on the calibration targets. Four sequences (gains 0, 1, 2, 3) shall be played at 100 kHz. In particular, LIBS+MIC shall be played each time the SCCT titanium target is targeted: it will be used as the in-situ reference sound, because the acoustic LIBS signal on titanium is loud and the ablation rate on titanium is small. The acoustic signal therefore remains constant with the number of shots. This allows the graceful degradation of the microphone gain to be determined as a function of time. MIC-only recording shall be played routinely in parallel with MEDA (one pointing only) to calibrate the microphone RMS pressure level with regard to wind speed. These measurements will also have to take into account the variation of the LIBS signal as a function of the external pressure (which varies with the seasons). \section{Conclusions and discussion} The SuperCam instrument suite onboard the Mars 2020 rover includes the Mars Microphone (provided by ISAE-SUPAERO in France). The SuperCam Microphone has been the first microphone to record sounds from the surface of Mars. In order to record LIBS shock waves and atmospheric phenomena, the Mars Microphone is able to record audio signals from 100 Hz to 10 kHz on the surface of Mars, with a sensitivity sufficient to monitor a LIBS shock wave at distances of up to 4 m. 
It will help characterize the rocks shot by SuperCam, but it also opens a new window for atmospheric measurement at the Martian surface, providing high-frequency insight into wind and turbulence. We are confident that it has paved the way, and that Mars missions in the coming years will implement a microphone as part of their atmospheric payloads. \section{Acronyms} The following acronyms are used in this publication: \begin{table}[h!] \begin{tabular}{ | m{2.5cm} | m{4cm}| m{4cm} | } \hline \textbf{Acronym} & \textbf{Signification} & \textbf{Comment} \\ \hline \textbf{ADC} & Analog-to-Digital Converter & \\ \hline \textbf{BU} & Body Unit & SuperCam subsystem \\ \hline \textbf{CMOS} & Complementary Metal-Oxide-Semiconductor & \\ \hline \textbf{CNES} & Centre National d'Etudes Spatiales & French Space Agency \\ \hline \textbf{COTS} & Commercial Off-The-Shelf & \\ \hline \textbf{DPU} & Digital Processing Unit & SuperCam subsystem \\ \hline \textbf{DPA} & Destructive Physical Analysis & \\ \hline \textbf{DREAMS} & Dust Characterisation, Risk Assessment, and Environment Analyser on the Martian Surface & ExoMars payload \\ \hline \textbf{FEE} & Front End Electronics & SuperCam subsystem \\ \hline \textbf{HK} & House Keeping & \\ \hline \textbf{IRAP} & Institut de Recherche en Astrophysique et Planétologie & Consortium member \\ \hline \textbf{IRS} & Infra-Red Spectroscope & SuperCam subsystem \\ \hline \textbf{ISAE} & Institut Supérieur de l'Aéronautique et de l'Espace & Consortium member \\ \hline \textbf{JPL} & Jet Propulsion Laboratory & Lead consortium member \\ \hline \textbf{LANL} & Los Alamos National Laboratory & Consortium member \\ \hline \textbf{LIBS} & Laser-Induced Breakdown Spectroscopy & \\ \hline \textbf{LVPS} & Low Voltage Power Supply & SuperCam subsystem \\ \hline \textbf{LSB} & Least Significant Bit & \\ \hline \textbf{MARDI} & Mars Descent Imager & Phoenix Mission Instrument \\ \hline \textbf{MPL} & Mars Polar 
Lander & \\ \hline \textbf{MSL} & Mars Science Laboratory & \\ \hline \textbf{MU} & Mast Unit & \\ \hline \textbf{MEDA} & Mars Environmental Dynamics Analyzer & Perseverance instrument \\ \hline \textbf{MOXIE} & Mars Oxygen In-Situ Resource Utilization Experiment & Perseverance instrument \\ \hline \textbf{Mastcam-Z} & Mast-Mounted Camera System & Perseverance instrument \\ \hline \textbf{PQV} & Package Qualification and Verification & \\ \hline \textbf{RMI} & Remote Micro-Imager & SuperCam subsystem \\ \hline \textbf{OBOX} & Optical Box & SuperCam subsystem \\ \hline \textbf{SET} & Single Event Transient & \\ \hline \textbf{SNR} & Signal-to-Noise Ratio & \\ \hline \textbf{RWEB} & Remote Warm Electronic Box & \\ \hline \textbf{TID} & Total Ionizing Dose & \\ \hline \end{tabular} \end{table} \section{Acknowledgements} We gratefully acknowledge funding from the French space agency (CNES), from ISAE-SUPAERO, and from Région Occitanie. \bibliography{micro}
Title: Gaussian phase autocorrelation as an accurate compensator for FFT-based atmospheric phase screen simulations
Abstract: Accurately simulating atmospheric turbulence behaviour is always challenging. The well-known FFT-based method falls short in correctly predicting both the low- and high-frequency behaviours. Sub-harmonic compensation aids in low-frequency correction but does not solve the problem for all screen size to outer scale parameter ratios (G/$L_0$); FFT-based simulation gives accurate results only for relatively large G/$L_0$. In this work, we introduce a Gaussian phase autocorrelation matrix to compensate for any residual errors remaining after a modified subharmonic compensation is applied. With this, we solve problems such as undersampling at the high-frequency range, unequal sampling/weights for subharmonic addition at the low-frequency range, and the patch normalization factor. Our approach reduces the maximum error in the phase structure function of the simulation, with respect to the theoretical prediction, to within 1.8\% for G/$L_0$ = 1/1000.
https://export.arxiv.org/pdf/2208.06060
\keywords{Phase Screen, Fast Fourier Transform, Subharmonic, Autocorrelation, Phase structure function} \section{Introduction} \label{sec:intro} For a variety of purposes, such as the design and development of adaptive optics systems, speckle imaging techniques, atmospheric propagation studies, etc., it is essential to simulate a good atmospheric phase screen model. Methods based on Zernike polynomial expansions\cite{Roddier90}, FFT-based methods \cite{Herman90,Johansson94,Sedmak98,McGlamery76,Sedmak04,Xiang14,Xiang12}, optimization methods \cite{Zhang19}, etc., have been in use for this purpose. The Zernike polynomial method, which is widely used, is limited by the maximum number of coefficients needed for accurate compensation. The optimization method compensates accurately for the low-frequency part of the spectrum by using unequal sampling and unequal weights in the low-frequency region, but does not address the high-frequency deficiencies. Among these, FFT-based methods are economical in computer memory and widely accepted. However, FFT operators assume uniform sampling of the non-uniformly distributed phase power spectrum, which leads to undersampling in the low- and high-frequency regions and limits the true recreation of the phase power spectrum. To compensate for the low-frequency components, Johansson and Gavel \cite{Johansson94} suggested the modified subharmonics equation (originally given by Lane et al. \cite{Lane92}). Giorgio Sedmak \cite{Sedmak04} later proposed a performance analysis of this method by actually calculating the phase structure function from the simulated screen, after compensating for the high-frequency components too. Results from his analysis show that FFT-based simulations are accurate only for a large ratio of screen size $G$ to outer scale parameter $L_0$. For a screen size of $G=200$ m and an outer scale of $L_0 = 25$ m, the maximum relative error approaches 1\%. 
From Fig.~\ref{fig:power} we can see \cite{sedmak_private} that the simulation band $\big(\frac{1}{G}-\frac{1}{\Delta}\big)$ is actually smaller than the full band $\big(\frac{1}{L_0}-\frac{1}{l_0}\big)$. The larger the simulation-band to full-band ratio, the more accurate the simulated results will be. Our simulations also demonstrate that the errors from the low-frequency components start to increase rapidly once we move to smaller $G/L_0$ ratios, even after compensating with modified subharmonics. For simulations of imaging with apertures that are small relative to the outer scale, we need a screen of small size, but cutting out small screens from a larger screen is not the right solution to this problem. \\ In this paper, we present a method to deal with small-$G/L_0$ phase screen simulation using the FFT-based method, inspired by Jingsong Xiang's \cite{Xiang14} work on phase screen simulation. Section~\ref{sec:autocorr} covers obtaining the phase autocorrelation matrix from the phase power spectrum, Section~\ref{sec:comp} covers compensation for the residual error in the phase autocorrelation matrix, Section~\ref{sec:phase_screen} covers phase screen simulation from the autocorrelation matrix, Section~\ref{sec:val} covers validation via the phase structure function calculation, Section~\ref{sec:results} covers the result analysis, and Section~\ref{sec:conclusion} concludes. \section{Obtaining Phase Autocorrelation Matrix using Phase Power Spectrum}\label{sec:autocorr} The 2D phase structure function $D_{\phi}(m,n)$ and the phase autocorrelation matrix $B_{\phi}(m,n)$ are related as follows: \begin{equation}\label{eqn:sf_ac} D_{\phi}(m,n) = 2(B_{\phi}(0,0)-B_{\phi}(m,n)), \end{equation} where $B_{\phi}(m,n)$ is the phase autocorrelation matrix and $(m,n)$ are the coordinates along the x- and y-axes. The 2D phase autocorrelation matrices for the FFT-based phase screen and the modified subharmonic method of Johansson and Gavel \cite{Johansson94} are represented as follows. 
\begin{equation} \label{eqn:ac_FFT} B_{\phi}^{FFT}(m, n) =\sum_{m^{\prime}=-N_{x} / 2}^{N_{x} / 2-1} \sum_{n^{\prime}=-N_{y} / 2}^{N_{y} / 2-1} f^{2}_{FFT}\left(m^{\prime}, n^{\prime}\right) e^{i 2 \pi\left(\frac{m^{\prime} m}{N_{x}}+\frac{n^{\prime} n}{N_{y}}\right)} \end{equation} \begin{equation} \label{eqn:ac_SH} B_{\phi}^{SUB}(m, n)=\sum_{p=1}^{N_{p}} \sum_{m^{\prime}=-3}^{2} \sum_{n^{\prime}=-3}^{2} f_{SUB}^{2}\left(m^{\prime}, n^{\prime}\right) e^{i 2 \pi 3^{-p}\left(\frac{\left(m^{\prime}+0.5\right) m}{N_{x}}+\frac{\left(n^{\prime}+0.5\right) n}{N_{y}}\right)}, \end{equation} where $f^{2}_{FFT}\left(m^{\prime}, n^{\prime}\right)$ and $f_{SUB}^{2}\left(m^{\prime}, n^{\prime}\right)$ are the \vk\ power spectrum and the subharmonic power spectrum of Johansson and Gavel \cite{Johansson94}, $(N_x,N_y)$ are the numbers of sample points, $p$ denotes the $p^{th}$ subharmonic, and $N_p$ is the total number of subharmonics. During subharmonic addition, we are not concerned about the leakage of energy from subharmonics to harmonics; thus, the patch normalization factor, which compensates for this leakage, is set to 1. The leakage compensation is dealt with in Section~\ref{sec:comp}. The 2D phase autocorrelation matrix after compensating with subharmonics is represented as \begin{equation}\label{eqn:net_ac} B_{\phi}(m,n) = B^{FFT}_{\phi}(m,n) +B^{SUB}_{\phi}(m,n). \end{equation} We can determine the phase structure function of the simulated screen by substituting Eqn. \ref{eqn:net_ac} in Eqn. \ref{eqn:sf_ac}. \section{Compensation for residual error in phase autocorrelation matrix}\label{sec:comp} It is clear from the work of Zhang et al. \cite{Zhang19} on optimum frequency sampling that unequal sampling and unequal weights are the most optimized solution to compensate for the under-sampling problem in the low-frequency region. The subharmonic method follows one particular fashion of sampling ($3^{-p}$), as seen from Eqn. \ref{eqn:ac_SH}. With the use of Eqn. \ref{eqn:sf_ac} to Eqn. 
\ref{eqn:net_ac}, first $D_{\phi}(m,n)$ is calculated with the assumption that $B_{\phi}^{FFT}(0,0)$ and $B_{\phi}^{SUB}(0,0)$ are zero, because we are not concerned with the piston component for now. The remaining error in $D_{\phi}(m,n)$ with respect to $D_{theory}$ can be calculated as follows: \begin{equation}\label{eqn:error_D} D_{error}(m,n) = D_{theory}(m,n) - D_{\phi}(m,n), \end{equation} where $D_{theory}(m,n)$ is the theoretical phase structure matrix taken from Johansson and Gavel \cite{Johansson94}. The error matrix cannot simply be added to the $D_{\phi}$ matrix, for the following reason: any curve represented by a polynomial equation has higher-order moments, and the Fourier transform of this polynomial is a completely different curve with a different order of moments, akin to the Gibbs phenomenon. This would introduce unwanted error into the final result. We need a smoothing operator, such as a Gaussian function, to compensate for the remaining error. Using MATLAB, we find the best fit of $D_{error}(m,n)$ using cftool to obtain the coefficients of the required Gaussian function (with 95\% confidence bounds)\footnote{With cftool, the fitting was first done for the 1D data of the error matrix; the 1D data were then turned into 2D data by replacing m/n with $r$, $r=\sqrt{(m\Delta)^2 + (n\Delta)^2 }$}. After obtaining the best fit, the Gaussian phase autocorrelation matrix $B_{gauss}(m,n)$ can be obtained with the help of Eqn. \ref{eqn:sf_ac} by equating the zeroth component of the autocorrelation matrix to zero. The autocorrelation matrix after the Gaussian compensation is given as \begin{equation} B_{tot}(m,n) = B_{\phi}(m,n) + B_{gauss}(m,n). 
\end{equation} \section{Phase screen simulation using $B_{\lowercase{tot}}(\lowercase{m},\lowercase{n})$ matrix}\label{sec:phase_screen} Obtaining the power spectrum from $B_{tot}(m,n)$ leads to negative terms in the power spectrum \cite{Xiang14}. Directly setting those frequency terms to zero leads to a loss in the energy spectrum. From Fig~\ref{fig:power_spectrum}, for the case of $G/L_0<1$, we see that there is a large number of negative frequency components over a small separation. Hence the $B_{tot}(m,n)$ matrix needs to be preprocessed to eliminate most of these negative values in the power spectrum. For this, we extract the piston and tip/tilt components from the phase autocorrelation matrix $B_{tot}$. The tip/tilt component of the phase autocorrelation matrix is given as \begin{equation} B_{tilt}(r) = B_{tilt}(0) - r^2\sigma_{tilt}^2/2, \end{equation} where $\sigma^2_{tilt}$ is the variance of the random tilt angle in the x or y directions, given as follows \cite{Xiang14}: \begin{equation} \sigma^2_{tilt} = \frac{B_{tot}(G/2+\Delta)-B_{tot}(G/2)}{\Delta(G-\Delta)/2}. \end{equation} The remaining phase autocorrelation matrix of higher-order terms is given as follows: \begin{equation} B_{high}(r) = B_{tot}(r) - B_{tilt}(r). \end{equation} The standard fast Fourier transform relation is then used to obtain the power spectra of $B_{high}(r)$ and $B_{tilt}(r)$ as $f_{high}^2$ and $f_{tilt}^2$, respectively. Setting the negative values in $f_{high}^2$ and $f_{tilt}^2$ to zero leads to a residual error that can be further reduced by using a smoothing Gaussian operator, as done by Xiang, with updated parameters $A=3.1$ and width $W = G/1.5$. We obtained these numbers by performing an error analysis for different values of $A$ and $W$ over phase screens of sampling size $N = 128$ and finding the minimum relative error in the structure function calculation. 
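The clipping step described above can be sketched numerically. The damped cosine used for $B(r)$ below is a synthetic stand-in for the paper's autocorrelation matrices, chosen only because its spectrum may contain negative values; it is not the von Kármán $B_{tot}$.

```python
import numpy as np

# Sketch: autocorrelation -> power spectrum, zero the negative terms,
# and report the energy discarded by the clipping.
N = 128
x = np.fft.fftfreq(N, d=1.0 / N)             # sample offsets 0..63, -64..-1
r = np.sqrt(x[:, None]**2 + x[None, :]**2)   # separation grid
B = np.exp(-r / 10.0) * np.cos(r / 3.0)      # synthetic stand-in for B(r)

f2 = np.real(np.fft.fft2(B)) / N**2          # power spectrum (real since B is even)
lost = -f2[f2 < 0].sum()                     # energy removed by clipping
f2_clipped = np.clip(f2, 0.0, None)          # negative terms set to zero

print(f"fraction of spectral energy clipped: {lost / np.abs(f2).sum():.3%}")
```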
To obtain the phase screen from the power spectrum, the following relation is used\cite{Xiang14}: \begin{equation} \begin{aligned} \phi(m \Delta, n \Delta)= \sum_{m^{\prime}=-N / 2}^{N / 2-1} \sum_{n^{\prime}=-N / 2}^{N / 2-1}\left[R_{a}\left(m^{\prime}, n^{\prime}\right)+\mathrm{i} R_{b}\left(m^{\prime}, n^{\prime}\right)\right] f\left(m^{\prime} \Delta^{\prime}, n^{\prime} \Delta^{\prime}\right) \exp \left[\mathrm{i} 2 \pi\left(m^{\prime} m+n^{\prime} n\right) / N\right], \end{aligned} \end{equation} where $R_a(m^{'},n^{'})$ and $R_b(m^{'},n^{'})$ are zero-mean, unit-variance Gaussian random numbers, and $f$ stands for $f_{high}$ or $f_{tilt}$, for the higher-order and tip/tilt terms, respectively. \section{Validation via phase structure function calculation}\label{sec:val} Consider the parameters: $G = 1$ m, $L_0 = 100/1000$ m, $N = 128$, $r_0 = 0.2$ m, $A = 3.1$, $W = G/1.5$, $N_{p}=5/3$; the results have been averaged over 50,000 frames. Fig~\ref{fig:phasescreen} shows the corresponding phase screen plot for $L_0 = 100$ m. The relative error in the phase structure function calculation is computed as follows: \begin{equation} err(r)= \frac{D_{\phi}^{obs}(r)-D_{theory}(r)}{D_{theory}(r)}, \end{equation} where $D_{\phi}^{obs}(r)$ is the observed phase structure function from the simulated phase screen. The $max[err(r)]$ turns out to be $<1.8\%$ for $r<G/2$, as shown in Fig~\ref{fig:error}, and the corresponding phase structure function plot is shown in Fig~\ref{fig:psf}. \section{Result Analysis}\label{sec:results} Our errors rise slightly above 1\% because we have not set the phase autocorrelation matrix to zero for $r>G/2$. The reason is the following: after extracting piston and tilt from $B_{tot}(r)$ (which has an inherent residual error close to 0.5\% from curve fitting), $B_{high}(r)$ does not fall to zero immediately, which produces a sudden jump. 
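The synthesis relation above maps directly onto an inverse FFT: the double sum equals $N^2$ times numpy's ifft2, up to the frequency reindexing. A minimal sketch, with an isotropic power-law amplitude spectrum standing in for the clipped $f_{high}$/$f_{tilt}$ of the text:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 128
fx = np.fft.fftfreq(N)                       # normalized spatial frequencies
kappa2 = fx[:, None]**2 + fx[None, :]**2
f = (kappa2 + 1e-6)**(-11.0 / 12.0)          # von-Karman-like amplitude (stand-in)
f[0, 0] = 0.0                                # no piston power

Ra = rng.standard_normal((N, N))             # zero-mean, unit-variance randoms
Rb = rng.standard_normal((N, N))
# The double sum over m', n' equals N^2 * ifft2 of the weighted randoms:
phi = np.real(N**2 * np.fft.ifft2((Ra + 1j * Rb) * f))

print(phi.shape)  # one (N, N) phase-screen realization
```

Because the piston term is zeroed, each realization has (numerically) zero mean.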
A further improvement would be to find a suitable smoothing operator, such as a Gaussian, to multiply with $B_{high}$ so that it falls to zero progressively rather than sharply. This could provide a further significant improvement in the final result. \section{Conclusion}\label{sec:conclusion} In this paper, we put forward a new method to compensate for the residual error in the low- and/or high-frequency regions of FFT-simulated phase screens after compensation with the modified subharmonic method. This method provides very accurate phase screen structure even for $G/L_0$ ratios as small as 1/1000. No patch normalization factor is needed, nor is there any need to calculate the subharmonic weight coefficients \cite{Lane92} or the weights compensating for high-frequency components, as done by Sedmak. Currently, we have demonstrated this technique for only circular screens. We used MATLAB R2018b on a MacBook Pro 2018 with 8 GB RAM to calculate the coefficients of the Gaussian matrix, which runs quickly. Finally, the accuracy of this method from the low-frequency to the high-frequency range is better than 1.8\% for $G/L_0$ as low as 1/1000. \acknowledgments We would like to thank Sedmak for supporting us through private communication and providing in-depth knowledge of the atmospheric power spectrum. We also thank Xiang for sharing his MATLAB code, which calculates the phase structure function quickly for a large number of phase screens. \iffalse \begin{equation}\label{eq1} f_{FFT}\left(m^{\prime}, n^{\prime}\right)=\frac{2 \pi}{\sqrt{G_{x} G_{y}}} \sqrt{0.00058} r_{0}^{-5 / 6}\left\bigg[\bigg(\frac{m^'}{G_x}\bigg)^{2}+\bigg(\frac{n^'}{G_y}\bigg)^{2}\right\bigg]^{-11 / 12} \end{equation} is the square root of \vk\ spectrum. 
where \begin{equation}\label{eq:fsub} f_{SUB}(m^{\prime},n^{\prime})=\frac{2 \pi}{\sqrt{G_{x} G_{y}}}\sqrt{0.00058}\, r_{0}^{-5 / 6}\, 3^{-p} \left\{\left[3^{-p}\left(m^{\prime}+0.5\right) / G_{x}\right]^{2} +\left[3^{-p}\left(n^{\prime}+0.5\right) / G_{y}\right]^{2}+1 / L_{0}^{2}\right\}^{-11 / 12} \end{equation} where \begin{equation}\label{eq:btheory} \resizebox{1.0\hsize}{!}{$B_{\text {theory }}(r)=\left(L_{0} / r_{0}\right)^{5 / 3} 2^{-5 / 6} \pi^{-8 / 3} \Gamma(11 / 6) \cdot\left[\left(\frac{24}{5}\right) \Gamma\left(\frac{6}{5}\right)\right]^{5 / 6}\left(\frac{2 \pi r}{L_{0}}\right)^{5 / 6} K_{5 / 6}\left(\frac{2 \pi r}{L_{0}}\right), \text { for } r>0 , r=\sqrt{(m\Delta)^2+(n\Delta)^2}$} \end{equation} \begin{equation}\label{eq:btheory0} B_{\text {theory }}(0)=\left(L_{0} / r_{0}\right)^{5 / 3} 2^{-5 / 6} \pi^{-8 / 3} \Gamma(11 / 6)\left[\left(\frac{24}{5}\right) \Gamma\left(\frac{6}{5}\right)\right]^{5 / 6} 2^{-1 / 6} \Gamma(5 / 6), \text { for } r=0 \end{equation} \begin{equation} B_{tot}(m,n) = B_{\phi}(m,n) + a_1 \exp\left(-\frac{(r-b_1)^{2}}{c_1^{2}}\right)+a_2 \exp\left(-\frac{(r-b_2)^{2}}{c_2^{2}}\right)+a_3 \exp\left(-\frac{(r-b_3)^{2}}{c_3^{2}}\right) + \cdots \end{equation} where $a_1, b_1, c_1, \ldots$ are given by the curve fitting function in MATLAB. Reason for the negative components: the autocorrelation matrix does not fall to zero for $r>G/2$ in the case $G/L_0<1$; however, this is not true for $G/L_0\gg 1$. So there is always a loss of information when taking the Fourier transform of that matrix (figure~2). \fi
\bibliography{report}
% \bibliographystyle{spiebib} %
\begin{comment}
\vspace{2ex}\noindent\textbf{Sorabh Chhabra} is a PhD student at the Inter-University Centre for Astronomy and Astrophysics (IUCAA), Pune. He received his B.Tech.\ degree in Electronics and Communication from Delhi Technological University in 2016 and joined the Instrumentation Department at IUCAA for his PhD in the same year.
\end{comment}
Title: MSW effect with quark matter: Neutron Star as a case study
Abstract: With recent findings from various astrophysical results hinting towards the possible existence of strange quark matter, along with baryonic resonances such as $\Lambda^0, \Sigma^0, \Xi, \Omega$, in the core of neutron stars, we investigate the MSW effect in quark matter. We find that the resonance condition for the complete conversion of a down quark to a strange quark requires an extremely large matter density ($\rho_u \simeq 10^{5}\,\mbox{fm}^{-3}$). Nonetheless, neutron stars provide the best conditions for the conversion to be statistically significant, at a level of the same order as expected from imposing the charge neutrality condition. This offers a possibility of resolving the hyperon puzzle as well as constraining the equation of state for dense baryonic matter.
https://export.arxiv.org/pdf/2208.08278
\title{MSW effect with quark matter: Neutron Star as a case study} \author{Hiranmaya \surname{Mishra}} \email{hm@prl.res.in} \affiliation{Physical Research Laboratory, Navrangpura, Ahmedabad, India} \affiliation{School of Physical Sciences, National Institute of Science Education and Research Bhubaneswar, HBNI, Jatni 752050, Odisha, India} \author{Prasanta K. \surname{Panigrahi}} \email{panigrahi.iiser@gmail.com} \affiliation{Indian Institute of Science Education and Research Kolkata, India} \author{Sudhanwa \surname{Patra}} \email{sudhanwa@iitbhilai.ac.in} \affiliation{Department of Physics, Indian Institute of Technology Bhilai, Raipur, India} \author{Utpal \surname{Sarkar}} \email{utpal.sarkar.prl@gmail.com} \affiliation{Indian Institute of Science Education and Research Kolkata, India} \noindent The landmark discovery of neutrino oscillations, confirming that neutrinos have non-zero mass and change identity while propagating, profoundly altered our view of the universe~\cite{Super-Kamiokande:2005wtt,SNO:2002tuh} and of physics beyond the Standard Model (SM) of particle physics. Over time, the Mikheyev--Smirnov--Wolfenstein (MSW) mechanism~\cite{Wolfenstein:1977ue,Mikheyev:1985zog,Bethe:1986ej,Smirnov:2004zv} has become widely used owing to its ability to explain the flavor conversion of solar neutrinos during their propagation in solar matter, even with a small vacuum mixing angle. The basic idea of the MSW effect is that neutrinos propagating in matter are subject to a potential arising from coherent elastic scattering off the particles (protons, neutrons and electrons) through the charged-current interaction. Even for a small vacuum mixing angle, the matter potential, which acts as an index of refraction, modifies the in-medium mixing angle and can play an important role in the flavor conversion of neutrinos.
It is well known that for solar neutrinos with energy around an MeV, the estimated mean free path in normal matter ($\rho \sim 10^{-15}\, \mbox{fm}^{-3}$) is about $10^{14}$\,km, while at higher density ($\rho \sim 0.001\, \mbox{fm}^{-3}$) it can be as small as about $1$\,km~\cite{Giunti:2007ry}. Such extremely high matter densities are achieved in neutron stars and supernova cores, which have diameters of a few km. Here, we investigate, for the first time, the MSW effect in dense quark matter, with the possible conversion of down quarks to strange quarks through flavor oscillation. Our motivation for exploring such a proposal is driven by recent indications, from various astrophysical calculations, of the possible existence of strange quark matter~\cite{Annala:2019puf} or of hyperons in the core of neutron stars~\cite{Vidana:2018bdi}. Indeed, it has been argued that the interior core of a neutron star exhibits characteristics of a deconfined phase, which can be interpreted as evidence of strange quark matter cores~\cite{Lattimer:2004pg} and (or) the existence of strange baryons in neutron stars~\cite{Schenke:2021nod,Tolos:2021lta}. Our focus in this letter is to put forward the MSW mechanism in quark matter. The main theme of the proposal is the oscillation of quark flavor including medium effects, and the exploration of the possibility of resonant oscillations of quark flavors in dense quark matter. \noindent We consider quark flavor oscillation for two generations using down ($d$) and strange ($s$) quarks. Within the Standard Model (SM) of particle physics, the quarks are classified in three generations, where the left-handed ones are structured as isospin doublets \begin{eqnarray} \begin{pmatrix} u \\ d \end{pmatrix}_L\, , \begin{pmatrix} c \\ s \end{pmatrix}_L\, , \begin{pmatrix} t \\ b \end{pmatrix}_L\, \end{eqnarray} while the right-handed ones are isospin singlet fields. In general, the down quark mixes with the strange and bottom quarks within the three-generation picture.
As we are limiting our discussion to two generations, the Cabibbo proposal~\citep{Cabibbo:1963yz} of mixing among quarks is given as $$\begin{pmatrix} u^\prime \\ d^\prime \end{pmatrix} = \begin{pmatrix} u \\ d \cos\theta_C + s \sin\theta_C \end{pmatrix}\, , $$ where $\theta_C$ is the Cabibbo mixing angle in vacuum. Here primed quantities denote the weak eigenstates and unprimed quantities correspond to the mass eigenstates. The basic idea of quark flavor oscillation is that quarks are produced as flavor eigenstates. Since flavor states are not propagation eigenstates, they are expressed in the mass basis through a unitary transformation. The vacuum effect can be understood by relating the mass eigenstates $(d,s)$ to their weak eigenstates $(d^\prime, s^\prime)$ as \begin{eqnarray} \begin{pmatrix} d^\prime \\ s^\prime \end{pmatrix} &=& \begin{pmatrix} \cos\theta_C & \sin\theta_C \\ -\sin\theta_C & \cos\theta_C \end{pmatrix} \begin{pmatrix} d \\ s \end{pmatrix} \end{eqnarray} where $\theta_C$ is the quark mixing angle (Cabibbo angle) and $d^\prime$, $s^\prime$ are the flavour (weak) eigenstates. The evolution of the $d$ and $s$ mass eigenstates with masses $m_d$ and $m_s$ is governed by the Schr\"odinger equation, \begin{equation} i\frac{\partial}{\partial t}|\psi(t)\rangle=E|\psi(t)\rangle \,, \quad |\psi (t) \rangle =\left[ \begin{array}{c} d \\ s \end{array} \right] \end{equation} Here, $|\psi(t)\rangle$ denotes the two-component down-type quark state vector in the basis of the down-quark and strange-quark mass eigenstates.
In the limit of the relativistic energy approximation $E_i \simeq p+m_i^2/2p \approx E + m_i^2/2E$, we have \begin{equation} i\frac{\partial }{\partial t}\left[ \begin{array}{c} d \\ s \end{array} \right]=\left[\left( \begin{array}{cc} E & 0 \\ 0 & E \end{array} \right)+\frac{1}{2E}\left( \begin{array}{cc} m^2_d & 0 \\ 0 & m^2_s \end{array} \right)\right]\left[ \begin{array}{c} d \\ s \end{array} \right] \label{eq:schrod2} \end{equation} The term proportional to the identity has no effect on $d$--$s$ quark oscillation and hence can be dropped from the analysis. The mass eigenstates are related to the corresponding flavor eigenstates by the unitary mixing matrix above. The propagation and time evolution of the down-type quarks (parts of $SU(2)$ doublets in the weak eigenbasis) can be expressed in terms of the weak eigenstates $d^\prime, s^\prime$ involving the Cabibbo mixing angle as \begin{equation} i\frac{\partial }{\partial t}\left[ \begin{array}{c} d^\prime \\ s^\prime \end{array} \right]=H_{\rm vac}\left[ \begin{array}{c} d^\prime \\ s^\prime \end{array} \right] \end{equation} with the vacuum part of the effective Hamiltonian given by \begin{eqnarray} \hspace*{-0.5cm} H_{\rm vac} \hspace*{-0.2cm}&=& \hspace*{-0.2cm}\frac{1}{2 E} \left[ \begin{array}{cc} m^2_d \cos^2\theta_C+m^2_s \sin^2\theta_C & \cos\theta_C \sin\theta_C \Delta m^2_{ds} \\ \cos\theta_C \sin\theta_C \Delta m^2_{ds} & m^2_d \sin^2\theta_C+m^2_s \cos^2\theta_C \end{array} \right] \nonumber \\ &=&\frac{\Delta m^2_{ds}}{4E} \begin{pmatrix} - \cos2\theta_C & \sin2\theta_C \\ \sin2\theta_C & \cos2\theta_C \\ \end{pmatrix} \label{eq:H-vac} \end{eqnarray} where a term proportional to the identity has been dropped in the second line, and $\Delta m^2_{ds} = m^2_s-m^2_d$ is the mass-squared difference between the strange and down quarks, a positive definite quantity.
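The structure of the vacuum Hamiltonian is easy to verify numerically. The sketch below (a cross-check, not part of the paper's analysis; the quark masses and angle are illustrative inputs) builds the flavor-basis Hamiltonian by rotating the mass-basis one with the Cabibbo mixing matrix and confirms that its traceless part has the $(\Delta m^2_{ds}/4E)$ form quoted above:

```python
import numpy as np

def h_vac(m_d, m_s, theta, E):
    """Vacuum flavor-basis Hamiltonian: U diag(m_d^2, m_s^2)/(2E) U^T."""
    U = np.array([[np.cos(theta), np.sin(theta)],
                  [-np.sin(theta), np.cos(theta)]])
    return U @ np.diag([m_d ** 2, m_s ** 2]) @ U.T / (2.0 * E)

m_d, m_s, theta, E = 5.0, 95.0, 0.227, 100.0   # MeV and radians (illustrative)
H = h_vac(m_d, m_s, theta, E)

# subtract the trace part, which does not affect oscillations
H_traceless = H - 0.5 * np.trace(H) * np.eye(2)

dm2 = m_s ** 2 - m_d ** 2
H_expected = dm2 / (4.0 * E) * np.array(
    [[-np.cos(2 * theta), np.sin(2 * theta)],
     [np.sin(2 * theta), np.cos(2 * theta)]])
```

By construction the eigenvalues of `H` are $m_d^2/2E$ and $m_s^2/2E$, and `H_traceless` matches `H_expected` to machine precision.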
For relativistic down-type quarks $(d,s)$ with $p \simeq E \gg m_{k}$ ($k=d,s$) and $t\approx L$, a typical distance travelled by the quark mass eigenstates, the transition probability for the conversion of the down-quark flavor to the strange-quark flavor is given by~\citep{Sarkar:2008xir} $$P^{\rm vac}\left(d \xrightarrow{}s\right)=\sin^2{(2\theta_C)}\sin^2{\left(\frac{\Delta m^2_{ds} L}{4E}\right)}$$ Oscillation among quark flavors is possible in vacuum provided the following two conditions are satisfied: \begin{itemize} \item The vacuum mixing angle $\theta_C$ is not equal to $0$, $n\pi$ or $\frac{n \pi}{2}$. The oscillation amplitude is determined by the Cabibbo mixing angle $\theta_C$. \item The mass-squared difference $\Delta m^2_{ds} =m^2_s-m^2_d \neq 0$. The frequency of the quark flavor oscillation is controlled by this parameter and is large for large $\Delta m^2_{ds}$. \end{itemize} Let us estimate the conversion probability in vacuum. With $\Delta m^2_{ds} \simeq 10^{4}\,\mbox{MeV}^2$ and $E\simeq 100$~MeV, we obtain $L_{\rm osc}/2 \simeq 4 \pi$~fermi, at which $P^{\rm vac}\left(d \xrightarrow{}s\right)=\sin^2(2\theta_C)\simeq 0.184$. This implies that after travelling a distance $$L_{\rm osc}/2 = \frac{\big(2 \pi \big) E}{\Delta m^2_{ds}}$$ of $4 \pi$ fermi, the probability of finding the quark in the strange flavor state $s$ is maximal, while after traversing the full length $L_{\rm osc}$ the system is back to its initial state. Thus, e.g., for a down-quark energy of the order of 100 MeV, the probability of conversion to a strange quark can be as large as 18 percent after travelling a distance of a few tens of fermi, even in vacuum, as can be seen from Fig.\ref{fig:quark_osciln_vac}. Let us now consider the medium effects. A quark flavor, while passing through a medium of quark matter or hadronic matter, can interact through the weak charged-current interaction with the medium quarks.
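The vacuum estimate above is easy to reproduce numerically. The Python sketch below uses $\hbar c \approx 200$\,MeV\,fm (the rounding implicit in the text's estimates) and an illustrative Cabibbo angle of $0.227$\,rad:

```python
import numpy as np

HBARC = 200.0  # MeV fm (rounded, matching the text's estimates)

def p_vac(L_fm, E=100.0, dm2=1.0e4, theta_C=0.227):
    """Vacuum d -> s conversion probability; L in fm, E in MeV, dm2 in MeV^2."""
    phase = dm2 * (L_fm / HBARC) / (4.0 * E)  # L converted to MeV^-1
    return np.sin(2.0 * theta_C) ** 2 * np.sin(phase) ** 2

# half oscillation length L_osc/2 = 2*pi*E / dm2, converted to fm
L_half = 2.0 * np.pi * 100.0 / 1.0e4 * HBARC
```

With these inputs `L_half` comes out to exactly $4\pi$ fm and `p_vac(L_half)` equals the full amplitude $\sin^2 2\theta_C \approx 0.19$, consistent with the $\simeq 0.184$ quoted in the text.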
This changes the effective mass eigenstates of the down and strange quarks, eventually resulting in the conversion of down quarks to strange quarks. The charged-current effective Hamiltonian in terms of up and down quarks is given by \begin{eqnarray} H_{{\rm eff}} &=&\frac{G_F}{\sqrt{2}} \big[\bar{d} \gamma_\mu (1 - \gamma_5) u \big] \big[\bar{u} \gamma^\mu (1-\gamma_5) d \big], \end{eqnarray} with $G_F$ the Fermi constant. Applying a Fierz transformation~\citep{Giunti:2007ry} and averaging over the up-quark background in the non-relativistic limit, i.e., $\langle \overline{u} \gamma^\mu u \rangle \simeq \langle u^\dagger u \rangle \equiv \rho_u$, we get \begin{eqnarray} \bar{H}_{{\rm eff}} &=& \sqrt{2} G_F \rho_u \bar{d}_{L} \gamma^0 d_{L} = V^u_{cc} j^0_d, \end{eqnarray} where $d_{L} = \frac{1 - \gamma_5}{2} d$, $j^0_d = \bar{d}_{L} \gamma^0 d_{L}$ and $V^u_{cc}$ is the matter interaction potential given by (in the non-relativistic limit, leading to the number density \cite{Pal:1991pm,Kuo:1989qe}) \begin{eqnarray} V^u_{cc} = \sqrt{2} G_F \rho_u. \end{eqnarray} In the presence of the medium, the evolution equation for the down and strange quarks takes the same form as eq.(5), \begin{eqnarray} i \frac{\partial}{\partial t} \begin{pmatrix} d^\prime \\ s^\prime \\ \end{pmatrix} = H_{\rm matt} \begin{pmatrix} d^\prime \\ s^\prime \\ \end{pmatrix}, \end{eqnarray} where the effective Hamiltonian in the medium is given by \begin{eqnarray} H_{\rm matt} &=& H_{\rm vac} + \begin{pmatrix} V^{u}_{cc} & 0 \\ 0 & 0 \end{pmatrix} \nonumber \\ &=& \begin{pmatrix} - \frac{\Delta m^2_{ds}}{4E} \cos2\theta_C + \sqrt{2} G_F \rho_u & \frac{\Delta m^2_{ds}}{4E} \sin2\theta_C \\ \frac{\Delta m^2_{ds}}{4E} \sin2\theta_C & \frac{\Delta m^2_{ds}}{4E} \cos2\theta_C \\ \end{pmatrix}. \nonumber \label{matter} \end{eqnarray} Let us introduce a notation characterising the matter effects, \begin{eqnarray} A = 2\sqrt{2} G_F \rho_u E.
\end{eqnarray} Using eq.(\ref{matter}) and dropping a term proportional to the identity, the modified Hamiltonian reads \begin{eqnarray} H_{\rm matt} = \frac{1}{4E} \begin{pmatrix} A- \Delta m^2_{ds} \cos2\theta_C & \Delta m^2_{ds} \sin2\theta_C \\ \Delta m^2_{ds} \sin2\theta_C & -A +\Delta m^2_{ds} \cos2\theta_C \\ \end{pmatrix}. \end{eqnarray} The resulting energy eigenvalues of the full Hamiltonian $H_{\rm vac} + \mathrm{diag}(V^u_{cc},0)$ are {\small \begin{eqnarray} E^M_{d,s} = \frac{1}{4E}\left[A \mp \sqrt{(-A + \Delta m^2_{ds} \cos2\theta_C)^2 +\bigg(\Delta m^2_{ds} \sin2\theta_C\bigg)^2}\right] \end{eqnarray} } With the medium effects, the Cabibbo mixing angle $\theta_C$ is modified as \begin{eqnarray} \tan2\theta^M_C = \frac{\Delta m^2_{ds} \sin2\theta_C}{-A + \Delta m^2_{ds} \cos2\theta_C}, \label{new_mixing} \end{eqnarray} After these simplifications, the conversion probability becomes \begin{eqnarray} P^{\rm mat}\left(d \xrightarrow{}s\right) = \sin^22\theta^M_C \sin^2{\left(\frac{\Delta^M_{ds} L}{4E}\right)} \end{eqnarray} Using the fact that $E_s - E_d \approx (m^2_s - m^2_d)/2E$, we obtain the modified mass-squared difference in the presence of matter as \begin{eqnarray} \Delta^M_{ds} = \sqrt{(-A + \Delta m^2_{ds} \cos2\theta_C)^2 +(\Delta m^2_{ds} \sin2\theta_C)^2}. \label{new_mass} \end{eqnarray} Note that vacuum oscillation of $d$--$s$ quarks is not sensitive to the sign of $\Delta m^2_{ds}$ or the octant of $\theta_C$; matter effects, however, are sensitive to both. The resonance condition in the presence of medium effects is derived to be \begin{eqnarray} \Delta m^2_{ds} \cos2\theta_C &&= A \equiv 2\sqrt{2} G_F \rho_u E \nonumber \\ &&\hspace*{-1.2cm}= 2.65 \times 10^{-4} \bigg(\frac{\rho_u}{\mbox{fm}^{-3}}\bigg)\, \bigg(\frac{E}{\mbox{MeV}}\bigg) \, \mbox{MeV}^2 \nonumber \,. \end{eqnarray} where $\rho_u$ is the number density of the up-quark background through which the down quark propagates, leading to significant medium effects.
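A small numerical sketch of the matter-modified mixing angle and the resonance condition above ($G_F$ and the rounded $\hbar c$ are standard inputs; the mixing angle is illustrative):

```python
import numpy as np

G_F = 1.166e-11   # Fermi constant in MeV^-2
HBARC = 200.0     # MeV fm (rounded, as in the text's 2.65e-4 prefactor)

def sin2_2theta_M(A, dm2=1.0e4, theta_C=0.227):
    """Matter-modified oscillation amplitude sin^2(2 theta_C^M)."""
    t2 = dm2 * np.sin(2 * theta_C) / (dm2 * np.cos(2 * theta_C) - A)
    return t2 ** 2 / (1.0 + t2 ** 2)

def rho_resonance(E, dm2=1.0e4, theta_C=0.227):
    """Up-quark density [fm^-3] where A = dm2 * cos(2 theta_C)."""
    A_per_rho = 2.0 * np.sqrt(2.0) * G_F * HBARC ** 3 * E  # MeV^2 per fm^-3
    return dm2 * np.cos(2 * theta_C) / A_per_rho
```

For $E = 100$\,MeV, `rho_resonance(100.0)` gives a few $\times 10^{5}\,\mathrm{fm}^{-3}$, the $\mathcal{O}(10^{5})$ figure quoted in the abstract, and the amplitude approaches unity as $A$ approaches $\Delta m^2_{ds}\cos 2\theta_C$.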
The resonance condition for the amplification of the mixing angle $\sin^2 2 \theta_M$ due to medium effects is displayed in Fig.\ref{fig:quark_osciln_resonance}. The variation of the mass eigenstates with eigenvalues $E^{M}_{d,s}$, demonstrating the conversion of a down quark to a strange quark, is presented in Fig.\ref{fig:quark_osciln_Ens}. The conversion probability is presented in Fig.\ref{fig:prob-vac-mat} as an illustration of medium effects in quark matter oscillation. In the limit $\sin \theta_C \to 0$, the off-diagonal terms can be neglected in comparison to the diagonal terms in the effective Hamiltonian in the presence of matter. This implies that the resulting energy eigenvalues $E^M_d$, $E^M_s$ and the corresponding eigenstates $(d^\prime, s^\prime)$ coincide with the mass eigenstates $(d,s)$. However, for large medium effects, i.e., large $\rho_u$, the eigenvalue of the down quark $E^M_d$ can become larger than $E^M_s$, causing the conversion of the down-quark flavor to the strange-quark flavor. This cross-over occurs at a critical density of $u$-quarks, \begin{equation} \rho^c_u=\frac{\Delta m^2_{ds} \cos2\theta_C}{2\sqrt{2}G_F E} , \end{equation} \noindent We next examine the effect of quark flavor oscillation in a neutron star. The key parameters relevant for $d$--$s$ quark oscillation with matter effects inside a neutron star are $$\Delta m^2_{ds}, \theta_C, E_d, \rho_u \,.$$ Of these four parameters, the mass-squared difference between the strange and down quarks ($\Delta m^2_{ds}$) and the Cabibbo mixing angle $\theta_C$ are precisely known, while the other two, $E_d$ and $\rho_u$, have to be estimated for the neutron star medium. The number density of up quarks inside a neutron star is written as $\rho_u = Y\, \rho_0$, where $\rho_0 = 0.16\ \mbox{fm}^{-3}$ is the saturation density of nuclear matter and $Y$ is the parameter expressing the up-quark density in units of the nuclear matter number density.
Note that the up-quark number density $\rho_u \approx \rho$, the baryon number density at the core of the neutron star, as there is one up quark per neutron. We can take $\rho = 5\,\rho_0 = \rho_u$, so that $Y$ is of the order of $5$. The energy of the down quark ($E_d$) can be taken as a fraction of its Fermi momentum $k^F_d$, $$E_d = X\, k^F_d \approx X\, \big(3\,\pi^2\,\rho_d \big)^{1/3} = X\, \big(\frac{3}{2}\,\pi^2\,\rho \big)^{1/3}\,.$$ The expression for the parameter $A$ can then be written in terms of $X$ and $Y$ instead of $\rho_u$ and $E_d$ as \begin{eqnarray} A &=& \big(2 \sqrt{2}\big) \, G_F\, \big(\frac{3}{2}\,\pi^2\,\big)^{1/3}\, X\,Y^{4/3}\rho^{4/3}_0\, \end{eqnarray} This matter potential can be inserted into the in-medium mixing angle $\sin^2(2\theta^M_C)$, which is displayed in Fig.\ref{fig:resonance_XY}. For representative values $X\simeq 0.5$, $Y\simeq 5$ and $L\simeq 10$~km, the typical radius of a neutron star, the estimated conversion probability to the strange-quark flavor is $P^M\left(d \xrightarrow{}s\right) \simeq \frac{1}{2} \sin^2{2\theta^M_{C}} \simeq 0.02$. Here we have used the fact that $L$ is much greater than the typical oscillation length, so that the frequency part of the probability, $\sin^2{\left(\frac{\Delta^M_{ds} L}{4E}\right)}$, averages to $1/2$. This means that about 2 percent of down quarks are converted to strange quarks inside the neutron star, i.e., $\rho_s \approx 0.02\, \rho_d$, while travelling from the core to the surface of the star. This result, however, rests on the adiabatic approximation, i.e., a uniform density throughout the star; non-adiabatic effects may change it. Let us next compare with the strange-quark number density arising from the beta equilibrium condition for neutron star matter. In terms of Fermi momenta and number densities, charge neutrality gives $$n_{p^+} = n_{e^-}\, \quad \mbox{or,} \quad k_{F,p^+} = k_{F,e^-}\,.
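The $X$, $Y$ parametrization above can be cross-checked numerically; the sketch below (with $G_F$, $\hbar c$ and $\rho_0$ as standard inputs) confirms that the compact expression for $A$ reproduces $2\sqrt{2}\,G_F\,\rho_u\,E_d$ computed directly:

```python
import numpy as np

G_F = 1.166e-11   # MeV^-2
HBARC = 200.0     # MeV fm (rounded)
RHO0 = 0.16       # nuclear saturation density, fm^-3

def A_direct(X, Y):
    """A = 2*sqrt(2)*G_F*rho_u*E_d, with rho_u = Y*rho0 and
    E_d = X*((3/2)*pi^2*rho)^(1/3), everything converted to MeV units."""
    rho = Y * RHO0                                             # fm^-3
    E_d = X * (1.5 * np.pi ** 2 * rho) ** (1.0 / 3.0) * HBARC  # MeV
    rho_MeV3 = rho * HBARC ** 3                                # MeV^3
    return 2.0 * np.sqrt(2.0) * G_F * rho_MeV3 * E_d           # MeV^2

def A_compact(X, Y):
    """The text's compact form 2*sqrt(2)*G_F*((3/2)pi^2)^(1/3) X Y^(4/3) rho0^(4/3)."""
    rho0_43 = RHO0 ** (4.0 / 3.0) * HBARC ** 4                 # rho0^(4/3) in MeV^4
    return (2.0 * np.sqrt(2.0) * G_F * (1.5 * np.pi ** 2) ** (1.0 / 3.0)
            * X * Y ** (4.0 / 3.0) * rho0_43)
```

Both forms agree to machine precision for the representative values $X \simeq 0.5$, $Y \simeq 5$.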
$$ On the other hand, chemical equilibrium yields $\mu_{n} = \mu_{p^+} + \mu_{e}$. The chemical potential is related to the number density as $\rho_i = \gamma k^3_{F,i}/(6 \pi^2)$, with $k_{F,i} = \sqrt{\mu^2_i-m^2_i}$ for $i=d,s$ and $\gamma$ the degeneracy factor accounting for color and spin. Using these basic relations, the number densities of the down and strange quarks inside the neutron star are related as \begin{eqnarray} \frac{\rho_d}{\rho_s} = \frac{k^3_{F,d}}{k^3_{F,s}} = \frac{\big( \mu^2_d - m^2_d \big)^{3/2}}{\big( \mu^2_s - m^2_s \big)^{3/2}} \approx 1+ \frac{3}{2}\frac{\Delta m^2_{ds}}{\mu^2_q} \end{eqnarray} Using the current quark masses $m_d \simeq 5$~MeV and $m_s \simeq 95$~MeV, and $\mu_q \simeq 500$~MeV, we get $$\frac{\rho_d}{\rho_s} \simeq 1+0.05\,.$$ Thus, the beta equilibrium condition leads to a strange-quark number density about five percent below the down-quark number density. \noindent To summarise, we have presented quark oscillations in vacuum for two generations of quarks, in analogy with neutrino oscillations. It turns out that for a down quark of energy 100~MeV, the oscillation length is a few fermi. However, in vacuum such oscillation is not observable, as the strong interaction becomes dominant. On the other hand, it is possible to have such flavor oscillation in dense matter. Indeed, following the MSW mechanism, it turns out that resonance oscillation takes place for a very large density of the up-quark background, of the order of $\rho_u \simeq 10^{5}\,\mbox{fm}^{-3}$. We finally estimated the conversion probability of down quarks to strange quarks for a typical set of neutron star parameters, and found that about 2 percent of down quarks can be converted to strange quarks inside the core of the neutron star. This is of the same order as one expects from the condition of beta equilibrium inside the neutron star.
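The five-percent estimate follows directly from the ratio above; a quick Python check, using the quark masses and chemical potential quoted in the text:

```python
mu_q, m_d, m_s = 500.0, 5.0, 95.0   # MeV, values quoted in the text

# exact ratio rho_d / rho_s = [(mu^2 - m_d^2) / (mu^2 - m_s^2)]^(3/2)
exact = ((mu_q ** 2 - m_d ** 2) / (mu_q ** 2 - m_s ** 2)) ** 1.5

# leading-order expansion 1 + (3/2) * dm2 / mu^2
dm2 = m_s ** 2 - m_d ** 2
approx = 1.0 + 1.5 * dm2 / mu_q ** 2
```

Both the exact ratio and the expansion give $\rho_d/\rho_s \approx 1.05$, i.e., a strange-quark density about five percent below the down-quark density.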
Such an extra possible source of strange quarks from flavor oscillation can have interesting consequences for the equation of state at the high densities relevant for neutron star phenomenology. Such an alternative source of strange quarks could possibly lead to higher densities of strange baryons inside neutron stars. It may also have consequences for the ``hyperon puzzle'' in the context of the observation of high-mass ($\sim 2\, M_{\odot}$) neutron stars~\cite{Vidana:2018bdi}. \bibliographystyle{utcaps_mod} \bibliography{msw.bib}
Title: Mitigating the effects of instrumental artifacts on source localizations
Abstract: Instrumental artifacts in gravitational-wave strain data can overlap with gravitational-wave detections and significantly impair the accuracy of the measured source localizations. These biases can prevent the detection of any electromagnetic counterparts to the detected gravitational wave. We present a method to mitigate the effect of instrumental artifacts on the measured source localization. This method uses inpainting techniques to remove data containing the instrumental artifact and then corrects for the data removal in the subsequent analysis of the data. We present a series of simulations of this method, using a variety of signal classes and inpainting parameters, that test its effectiveness and identify potential limitations. We show that in the vast majority of scenarios, this method can robustly localize gravitational-wave signals even after removing portions of the data. We also demonstrate how an instrumental artifact can bias the measured source location and how this method can be used to mitigate this bias.
https://export.arxiv.org/pdf/2208.13844
\preprint{APS/123-QED} \title{Mitigating the effects of instrumental artifacts on source localizations} \author{Maggie C. Huber$^{1,2}$ and Derek Davis$^3$ } \affiliation{$^1$University of Michigan, Ann Arbor, MI 48104, USA\\ $^2$University of Colorado Boulder, Boulder, CO 80309, USA\\ $^3$LIGO, California Institute of Technology, Pasadena, CA 91125, USA} \date{\today} \section{Introduction} Modern gravitational-wave (GW) detectors including Advanced LIGO~\cite{LIGOScientific:2014pky}, Advanced Virgo~\cite{TheVirgo:2014hva}, and KAGRA~\cite{KAGRA:2018plz} are increasingly sensitive to compact-binary coalescences (CBCs)~\cite{GWTC-1,GWTC-2,GWTC-3,GWTC-2.1}. Binary neutron star (BNS) and neutron star-black hole (NSBH) mergers are of particular interest~\cite{PhysRevLett.119.161101,2020ApJ...892L...3A,2021ApJ...915L...5A} as they are known to produce a variety of transient electromagnetic (EM) signals across multiple wavebands. Relativistic jets from BNS mergers can be progenitors to short gamma-ray bursts (GRBs)~\cite{2017ApJ...848L..13A,2017ApJ...848L..14G} which in turn cause afterglow radiation in X-ray, optical, and radio bands lasting from hours to years~\cite{doi:10.1126/science.aap9855,2017ApJ...848L..21A,2017Natur.551...71T,2018NatAs...2..751L, Balasubramanian:2021kny}. Ejected material from BNS mergers also produces kilonovae observable in the optical and near-infrared band for days to weeks~\cite{Metzger:2019zeh,2017ApJ...848L..19C,2017Sci...358.1559K}. EM followup campaigns are crucial for advancing the field of multi-messenger astrophysics~\cite{LIGOScientific:2017ync}. These events have allowed us to test general relativity~\cite{PhysRevD.103.122002}, characterize the nuclear equation of state~\cite{LIGOScientific:2018cki} and the black hole population~\cite{2021arXiv211103634T}, and discover objects that challenge existing models for stellar evolution~\cite{abbott2020gw190521,2020ApJ...896L..44A}. 
In order for telescopes to observe EM radiation from BNS mergers, the GW detector network must triangulate and localize the GW source to produce a skymap~\cite{2011CQGra..28j5021F,2016ApJ...829L..15S}. These skymaps tend to cover a large portion of the sky compared to most telescopes' fields of view~\cite{2018LRR....21....3A}. This can make sources difficult to rapidly localize and make EM followup challenging to optimize~\cite{Coughlin:2018lta,Coughlin:2019qkn,PhysRevD.81.082001,2013ApJ...767..124N,PhysRevD.89.084060}. Generating rapid and accurate skymaps is important for the success of EM followup campaigns and helps reveal useful astrophysical information about a GW event. Skymap accuracy and other estimates of GW source parameters can be impaired by the presence of noise from instrumental artifacts in GW detectors~\cite{Macas:2022afm,powell2018parameter,Cabero:2020eik,Vitale:2016jlv}. One main class of instrumental artifact is ``glitches,'' which manifest as bursts of excess power~\cite{LIGO:2021ppb,aasi2012characterization,KAGRA:2020agh}. Often, what causes these glitches is difficult to determine. They can be the result of either external environmental or internal instrumental interactions that alter the actual GW strain~\cite{nguyen2021environmental,2018RSPTA.37670286N,davis2021ligo,2018JPhCS.957a2004B}. Glitches are particularly troublesome for multi-messenger observations, as they are more likely to overlap with longer-duration GW events, such as BNS and NSBH mergers. This has already been seen in all previously detected confident BNS and NSBH mergers~\cite{PhysRevLett.119.161101,2020ApJ...892L...3A,2021ApJ...915L...5A}. As detection of GW events from CBC mergers becomes increasingly frequent, we expect to see more instances of glitches overlapping with GW signals. There are multiple approaches one can use to address a glitch which overlaps a GW signal.
In the case of GW170817, the effects of a glitch overlapping the signal were mitigated by applying a window function to remove the data containing the glitch~\cite{Pankow:2018qpo,harris1978use}. The glitch waveform was later reconstructed using wavelets~\cite{Cornish:2020dwh,cornish2015bayeswave,cornish2021bayeswave} that could be subtracted from the data~\cite{PhysRevLett.119.161101}. Although modelling glitches has been used successfully in previous GW analyses~\cite{PhysRevLett.119.161101,GWTC-2, GWTC-3, GWTC-2.1}, this method can be slow and ad hoc in nature, as we cannot always obtain an accurate glitch model. Different approaches are necessary to find a generalized solution that works rapidly for various types of glitches. There have already been efforts to address this issue using machine learning techniques~\cite{Mogushi:2021cpw,cuoco2020enhancing}. However, these algorithms must be tuned to specific waveforms and have only been tested for very short glitch durations. When editing the recorded strain data to mitigate a glitch, care must be taken to minimize the potential biases the mitigation procedure introduces. Window functions, such as the one used for GW170817, add a taper to the edges of the removed data to avoid introducing discontinuities. However, they can introduce excess power leakage from the spectral lines in the power spectral density (PSD) of the detector. An alternative to window functions is inpainting~\cite{2019arXiv190805644Z}, where the effects of discontinuities are calculated and subtracted. The end result is a gate that masks only the bad seconds of data and should have no effect on the data surrounding the inpainted hole. Inpainting, however, is not without its limitations. When we inpaint a hole in GW data, we lose information about the amplitude and phase of a signal. This in turn biases the sky localization, although the effect is less noticeable when the inpainted duration is significantly shorter than the total signal duration.
For larger inpainting widths, this can add a significant bias and make skymaps less useful for localizing sources. To ensure a skymap that is accurate and precise, we must correct for the effect of gating a portion of the signal. To effectively mitigate glitches and reduce bias in our skymaps, we developed an algorithm to inpaint, reweight, and normalize the signal-to-noise ratio (SNR) time series of the signal. In this paper we explain the functionality of this new glitch mitigation method, the setup of our Python package, Python-based Skymap Localization with Inpainted Data Editor (pySLIDE), and the metrics we used to show how well it performs. Our work combines techniques to manage glitches into a simple and computationally efficient package, and rigorously tests how our methods perform. The goal is to recover the correct error and accuracy so that the skymap includes the signal and guides EM telescopes in the right direction. We determine that our method is able to recover more accurate skymaps for simulated signals both with and without a glitch, for a variety of GW source parameters. Additionally, the package is computationally efficient and has a deterministic underlying formula with flexible input parameters. In Section~\ref{sec:BAYESTAR}, we explain the relevant components of BAYESTAR sky localization. In Section~\ref{sec:Signal}, we go into the processes for calculating the SNR time series and obtaining results. In Section~\ref{sec:Reweighting}, we discuss the reweighting and normalizing technique we developed for this method. In Section~\ref{sec:PySLIDE Workflow}, we touch on the setup of pySLIDE. In Section~\ref{sec:Testing simulated signals}, we describe our tests of the method. In Sections~\ref{sec:validation} and~\ref{sec:glitch}, we present our results, and in Section~\ref{sec:Conclusions} we discuss important takeaways from this research.
\section{Skymap computation} \subsection{BAYESTAR} \label{sec:BAYESTAR} We used the BAYESTAR~\cite{Singer_2016} algorithm and the PyCBC search pipeline~\cite{Usman:2015kfa,nitz2018rapid} to create our sky localizations. BAYESTAR localizes GW sources using Bayesian inference instead of Markov chain Monte Carlo (MCMC) methods. It takes a likelihood function and a well-defined parameter space to rapidly infer the location of GW signals. Specifically, BAYESTAR requires information about the amplitude, the phase, and the relative time of arrival of the signal in all detectors. The BAYESTAR algorithm is used to rapidly localize gravitational-wave signals with the information that is available in low latency. To calculate the sky localization posterior, BAYESTAR uses the maximum likelihood values of the intrinsic source parameters, masses and spins, from a low-latency matched-filter search pipeline. The BAYESTAR likelihood for each point in the sky is marginalized over additional extrinsic parameters: coalescence phase, absolute time of arrival, distance, inclination angle, and polarization~\cite{Singer_2016}: \begin{equation} f(\text{RA}, \text{Dec}) \propto \idotsint \mathcal{L}_{\text{BS}} \,r^3 \,d\phi_c\,dr\,dt\,d\cos{i}\,d\psi \end{equation} The BAYESTAR likelihood can be further segmented into a term based only on the SNR of the signal in each detector and a term based on the relative time of arrival (TOA) in each detector~\cite{Singer_2016}, \begin{equation} \begin{split} \mathcal{L}_{\text{BS}} & \propto \mathcal{L}_{\text{SNR}} \times \mathcal{L}_{\text{TOA}} \\ & \propto \exp{\left[-\frac{1}{2}\sum_i \rho_i^2 + \sum_i \rho_i \mathbb{R}\{e^{-i\theta_i} z_i^*(\tau_i)\}\right]} \end{split} \end{equation} where $\rho_i$ is the SNR, $\theta_i$ is the phase, and $z_i(\tau_i)$ is the SNR time series of the maximum likelihood template for a given detector, $i$.
The values for $\rho_i$ and $\theta_i$ are extracted from the SNR time series at the time of maximum SNR in each detector, $\tau_{i,max}$, by \begin{equation} \begin{split} \rho_i & = \left| z_i(\tau_{i,max}) \right| \\ \theta_i & = \arg \left\{ z_i(\tau_{i,max}) \right\} \end{split} \end{equation} An additional normalization term is also included in the likelihood, which is dependent on the PSD of each detector. The BAYESTAR likelihood, and hence the sky localization posterior, is entirely dependent on the provided SNR time series and the PSD. It is therefore these two products that we wish to modify as part of our glitch mitigation method. \subsection{Signal parameters} \label{sec:Signal} To evaluate these parameters for a gravitational-wave signal, we first use GWpy~\cite{gwpy} to generate a PSD and then use the matched filter from PyCBC~\cite{Usman:2015kfa} to run the waveform template through the noisy data and calculate the SNR. The matched filter function for the SNR time series $z(t)$ is given by a weighted inner product of the detector data $s$ and template $h(t)$. The SNR time series, $z(t)$, is written as~\cite{Allen:2005fk} \begin{equation} z(t) = \frac{(s|h)}{(h|h)^{1/2}} \text{ .} \end{equation} The inner product $(s|h)$ is given by \begin{equation} (s|h)(t) = 4\int_{f_{low}}^{f_{high}}\frac{\tilde{s}(f)\tilde{h}^*(f)}{S_{n}(f)}\textup{e}^{2\pi ift}\textup{d}f \end{equation} with $S_{n}(f)$ as the PSD. This is similar to convolving the template against the overwhitened data. The real part of this SNR time series corresponds to a template that is lined up along the data and the imaginary part corresponds to a template that is 90 degrees out of phase. The amplitude and phase of the signal can then be extracted from the amplitude and phase of the corresponding complex SNR time series. As described in the previous section, the amplitude of the SNR time series given by the matched filtering process is a key input into a sky localization for a GW signal.
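As an illustration, the matched-filter SNR time series and the peak amplitude and phase extraction described above can be sketched with a toy, numpy-only implementation. This is a sketch under simplifying assumptions (circular FFT correlation, a two-sided PSD array, no windowing or highpassing), not PyCBC's actual conventions:

```python
import numpy as np

def snr_time_series(s, h, psd, dt):
    """Toy matched filter: z(t) = (s|h)(t) / (h|h)^{1/2}, computed as a
    circular FFT correlation weighted by a two-sided PSD array.
    Normalization conventions are simplified relative to PyCBC."""
    n = len(s)
    df = 1.0 / (n * dt)
    sf = np.fft.fft(s) * dt                  # discrete stand-in for s~(f)
    hf = np.fft.fft(h) * dt
    # weighted correlation of data with template, as a function of lag
    corr = 2.0 * n * df * np.fft.ifft(sf * np.conj(hf) / psd)
    # template norm sigma = (h|h)^{1/2} in the same convention
    sigma = np.sqrt((2.0 * df * np.sum(np.abs(hf) ** 2 / psd)).real)
    return corr / sigma, sigma

# SNR and phase at the loudest time, as used by BAYESTAR:
# i_max = np.argmax(np.abs(z)); rho = np.abs(z[i_max]); theta = np.angle(z[i_max])
```

For a template injected into the data at a known time shift, the peak of $|z|$ recovers that shift, and the peak value equals the optimal SNR $\sqrt{(s|s)}$.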
The other two parameters needed for a skymap are the time delay between each detector and the phase of the signal. We do not expect the presence of a glitch to bias the time delay measurements or the phase, so pySLIDE focuses on correcting the amplitude of the SNR time series. Once the localization parameters are input into BAYESTAR, we can extract important information from the skymap it returns. Here we measure the credible regions and searched area to evaluate the performance of BAYESTAR. A credible region is the smallest region of the skymap containing a given fraction of the posterior probability. We are particularly interested in the 90 percent credible region, which contains 90 percent of the posterior probability in the skymap. The smaller the area of this region, the more precise our skymap is. We also measure the total searched area, which is the smallest credible region containing the location of the source. A smaller total searched area corresponds to better accuracy. \subsection{Inpainting and reweighting} \label{sec:Reweighting} If a glitch is present in the data, the SNR time series described in the previous section may deviate from that expected in Gaussian noise. In order to correct for this potential bias, we remove the data containing a glitch using a technique called inpainting~\cite{2019arXiv190805644Z} and recalculate the SNR time series. The inpainting filter is designed such that the overwhitened, inpainted data is zeroed inside a time window of interest. An example of how inpainting affects both whitened and overwhitened data is shown in Figure~\ref{fig:inpaint}. The removal of data ensures that any data quality issues present in the data do not bias the SNR time series. However, this data removal introduces its own bias. The benefit of inpainting lies in the fact that this bias is well known and can be corrected for in our analysis. To correct for the bias of inpainting, we first calculate the expected loss in SNR from the inpainting procedure.
The fraction of the SNR remaining after inpainting is given by~\cite{2019arXiv190805644Z} \begin{equation} \label{eq:1} \lambda_{hole}(t_{0},h)\approx \frac{(\left |h_{w}\right |^{2}\circledast\mathbbm{1}_{valid})(t_0)} {\sum_{t}\left |h_{w}(t)\right |^{2}} \text{ ,} \end{equation} where $t_{0}$ is the merger time, $h_{w}$ is the whitened waveform, and $\mathbbm{1}_{valid}$ returns zero for a data point in the inpainted hole and one otherwise. The equation convolves $h_{w}$ with $\mathbbm{1}_{valid}$, which we compute with a fast Fourier transform (FFT) as allowed by the convolution theorem. Using Eq.~\eqref{eq:1} to calculate the normalization at each time, we find the peak time $t_0$ of the renormalized SNR time series as the time of maximum renormalized SNR. After we inpaint and apply Equation~\ref{eq:1}, we apply a normalization factor to the SNR time series and PSD using \begin{equation} \label{eq:3} \tilde{z}(t) = z(t)\frac{\lambda(\tau_{i,max})}{\lambda(t)} \end{equation} \begin{equation} \tilde{S}_n(f) = \frac{S_n(f)}{\lambda(\tau_{i,max})} \text{ .} \end{equation} These new values are then used in BAYESTAR to recompute the skymap and obtain the credible region and searched areas for the renormalized time series. When a portion of a signal is inpainted, we effectively decrease the sensitivity of our measurement. Renormalizing the PSD corrects the error for our BAYESTAR localization, and as a result we can avoid overestimating how much we know about the signal. If we inpaint a hole, we are losing information and the error should increase to account for that if we want the true source location to be included in the credible region of the skymap. This is different from restoring the SNR to where it was before removing data, which would bias our results in the other direction. There are multiple advantages of using the reweighting method to renormalize the PSD.
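A minimal numerical sketch of this correction, covering the SNR-fraction convolution and the subsequent reweighting of the SNR series and PSD, is given below. This is our own plain-numpy illustration, not the pySLIDE internals, and the alignment convention of the template within the convolution is a simplification:

```python
import numpy as np

def snr_fraction(h_w, valid):
    """Fraction of the whitened-template power |h_w|^2 that falls on
    non-inpainted samples, as a function of alignment time: the
    convolution in the lambda_hole expression, via np.convolve."""
    hw2 = np.abs(h_w) ** 2
    # 'valid' mode keeps only alignments where the template fully overlaps.
    return np.convolve(valid.astype(float), hw2, mode="valid") / hw2.sum()

def reweight(z, psd, lam, i_max):
    """Rescale the SNR series by lam(tau_max)/lam(t) and the PSD by
    1/lam(tau_max); in practice psd lives in the frequency domain."""
    return z * lam[i_max] / lam, psd / lam[i_max]
```

For a flat 8-sample template and a 4-sample hole, $\lambda$ drops to $0.5$ at the worst alignment (half the template power removed) and stays at $1$ where the template avoids the hole, so the reweighted SNR is boosted by up to a factor of two at the hole.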
The algorithm can be used regardless of how much time is removed with inpainting and which waveform template we use, and it allows the user to configure these settings as inputs. Reweighting is also deterministic: the calculation is reproducible for any choice of input parameters. It is also efficient to compute, typically taking less than a second. These benefits render this method conducive to rapid and accurate sky localization of GW events in real time, even in the presence of glitches. \subsection{PySLIDE Workflow} \label{sec:PySLIDE Workflow} pySLIDE can apply and test the reweighting method on any number of signal injections, though there are diminishing returns beyond about 500 injections. It creates a PyCondor~\cite{pycondor} workflow that separates each task into jobs, which are then collected into a ``Dag''~\cite{pycondor} object to submit to a computing cluster. There are several components of the package that create results and plot them. The first script applies the reweighting method to the SNR time series using injected signal parameters and a PSD. It creates a time series with the original data, inpaints a hole for a specified segment, applies the reweighting formula to the SNR and PSD, and corrects for the remaining SNR. The script outputs the SNR time series as an XML file to be fed into the BAYESTAR algorithm. BAYESTAR returns a localization as a FITS file. Using this file, we run scripts to calculate the credible region of the true source location, the total searched area, the area of the 90 percent credible region, and the overlap of the reweighted and inpainted skymaps with the original skymap~\cite{skymap-overlap}. pySLIDE combines the results from all injection runs into one file, which is then used by four different plotting scripts to visualize the final results. This process is computationally efficient and takes less time to run than similar methods.
BAYESTAR localization contributes the majority of the computation time. The Python package we developed can also be used to evaluate sky localization of candidates that are identified in real-time processing of gravitational wave data. Either automated tools or visual inspection of the data can quickly be used to identify any glitches that overlap identified signals. These glitches can then be mitigated with the inpainting methods discussed in this work and the skymap can be recomputed using the same waveform template that was used to identify the signal. If the new skymap is different from the skymap produced before any glitches were mitigated, this could indicate a bias in the original skymap. \section{Tests with simulated signals} \label{sec:Results} \subsection{Description of tests} \label{sec:Testing simulated signals} To assess the performance of our reweighting algorithm, we simulated various CBC signals for testing. For the waveform template, we selected SEOBNRv4\_ROM~\cite{Bohe:2016gbl} and filtered our simulated parameter list to include what we would expect to detect by applying an SNR threshold of 8. We varied the duration and offset of the gate from the merger time. We also ran our tests over different component masses corresponding to BNS, NSBH, and binary black hole (BBH) mergers. Table~\ref{tab:config} lists the combinations of mass, inpainting window, and offset we tested in this work. Throughout the rest of this work, we refer to a specific combination of inpainting window and offset as ``(inpainting window in seconds, offset in seconds).'' To run our tests, we inject the simulated signals into Gaussian noise colored to representative PSDs from the LIGO Hanford (H1) and Livingston (L1) detectors and get the original SNR time series. We then calculate the SNR time series when using the inpainting function alone, then when inpainting and reweighting. We create an XML file which is put into BAYESTAR to localize all three cases.
To see how the method performs, we calculate the credible regions, searched area, the area of the 90 percent credible region, and the overlap for all injected signals. \begin{table} \begin{tabular}{llll} \multicolumn{1}{r}{Masses $(M_\odot)$} & \begin{tabular}[c]{@{}l@{}}Mass 1 = 1.4 \\ Mass 2 = 1.4\end{tabular} & \begin{tabular}[c]{@{}l@{}}Mass 1 = 30 \\ Mass 2 = 1.4\end{tabular} & \begin{tabular}[c]{@{}l@{}}Mass 1 = 30 \\ Mass 2 = 30\end{tabular} \\ \hline \multicolumn{1}{l|}{Duration (s)} & 0.5, 1, 4 & 0.125, 0.5, 1 & 0.0625, 0.125, 0.5 \\ \multicolumn{1}{l|}{Offset (s)} & 0, -1, -32 & 0, -1, -4 & 0, -0.1, -0.25 \end{tabular} \caption{Summary of the tests we ran for various GW signals. We tried combinations of three different durations and offsets specific to each set of component masses. This corresponds to 9 total combinations per class of signal.} \label{tab:config} \end{table} \subsection{Validation test results} \label{sec:validation} To understand if the previously described method is accurately and precisely recovering the sky location of our injected signal, we focus on three metrics: the fraction of signals recovered within each credible region, the searched area, and the 90 percent credible area. Descriptions and results for each metric are given later in this section. In this section, we focus on three example configurations of inpainting window, offset, and signal class. The complete results for all tested configuration combinations are presented in Appendix~\ref{app:all_results}. The first metric we investigate is whether the expected fraction of injections are recovered within a specific credible region. This metric is often referred to as a ``probability-probability'' (P-P) plot, and shows the credible region of the true source location vs.\ the fraction of total simulated signals that fall there (Figure~\ref{fig2}). These should match up, i.e., 90 percent of the signals we create should end up in the 90 percent credible region of the skymap.
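The credible-region bookkeeping behind these metrics (the credible level of the true location, the searched area, and the 90 percent area) can be sketched with a simple greedy sort over per-pixel posterior probabilities. This is a toy stand-in, with illustrative names, for what the production skymap tools compute:

```python
import numpy as np

def skymap_metrics(prob, true_idx, pixel_area=1.0):
    """Toy credible-region metrics for a flat array of pixel probabilities
    summing to one. Illustrative only, not the ligo.skymap API."""
    order = np.argsort(prob)[::-1]            # most probable pixels first
    cum = np.cumsum(prob[order])
    rank = int(np.where(order == true_idx)[0][0])
    searched_prob = cum[rank]                 # credible level containing the source
    searched_area = (rank + 1) * pixel_area   # pixels examined before the source
    # smallest region containing 90 percent of the posterior probability
    area_90 = (int(np.searchsorted(cum, 0.9)) + 1) * pixel_area
    return searched_prob, searched_area, area_90
```

A smaller searched area indicates a more accurate map, while a smaller 90 percent area indicates a more precise one, matching the definitions used above.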
On a P-P plot, this distribution is linear with a slope of one. Due to some complications, including an internal fudge factor in BAYESTAR, our original data without the glitch lies above the diagonal when we use PyCBC. Additionally, whether the P-P plot lies on the one-to-one line depends on the component masses of our signal. We account for these complications by normalizing the P-P plots by the original data for the simulated signals without a glitch. We only compare our results to the original data, and the ideal result is a one-to-one line. Credible regions that lie above the line are overestimating error in the skymap, and regions under the line are underestimating error. From Figure~\ref{fig2}, we see that inpainting a hole in the data will bias the skymap and report an incorrect error. When we reweight the SNR time series, correctly sized credible regions are recovered and the skymap is more likely to return a localization that contains the source. How well reweighting corrects inpainting bias depends on the duration, offset, and signal type. For the BBH signals, Figure~\ref{fig2} shows an extreme case where the inpainted hole was centered on the merger and was a large portion of the signal length. This indicates that when we lose almost all information about the signal, the error increases accordingly to reflect that there is limited information we can extract about the signal. For the BNS case, when the inpainted hole is offset from the merger, reweighting is able to bring the credible regions closer to the original data while slightly overestimating the error. For the NSBH component masses, we are also able to recover the correct credible regions while slightly underestimating the error. Despite the range of masses and inpainting configurations, we are able to approximately recover the correctly sized credible regions in all cases presented in this section.
To check if the skymap shows an accurate credible region, we create a histogram of the total searched area, as shown in Figure~\ref{fig3}. The searched area is the area of the smallest credible region containing the true source location. Ideally, most signals are recovered with a small searched area, so the cumulative fraction rises quickly. This demonstrates that the resulting skymap predicts the source location to be in a lower credible region. For each signal and gate type, the amount of accuracy recovered after reweighting varies. Figure~\ref{fig3} shows that for these various signals and gates, the searched area is closer to the original and accuracy does improve after reweighting. Although we would ideally recover the original searched area distribution after both inpainting and reweighting, it is expected that we would still have a larger searched area due to the loss of information that inpainting introduces. This information loss naturally leads to less accurate skymaps. However, the inpainted and reweighted maps are more accurate than when only inpainting. To evaluate the precision of the skymaps produced with each method, we consider the area of the 90 percent credible region, as shown in Figure~\ref{fig4}. The 90 percent credible region is the area of the smallest credible region containing 90 percent of the posterior probability. It is common for this credible region to be used to set the maximum area that is surveyed in electromagnetic follow-up studies (see e.g.,~\cite{Coughlin:2020fwx} and references therein), making this a particularly important quantity. For the cases plotted in Figure~\ref{fig4}, the reweighting process generally causes the 90 percent area to increase. For the BBH signals, the 90 percent area blows up, indicating that we cannot recover a precise location when so much information is lost about the signal. For the BNS signals, the 90 percent area increases, but not by a significant amount.
For NSBH signals, the 90 percent area remains the same after the reweighting process. The increase in the 90 percent area can be understood as accounting for the loss of information due to inpainting and is therefore expected. \subsection{Data with a glitch} \label{sec:glitch} In the previous section, all tests were conducted using colored Gaussian noise. Data that is consistent with Gaussian noise would not benefit from the application of this method, which is why these tests show less accurate and less precise results after inpainting and reweighting. In order to demonstrate how this method can lead to improved estimates of the sky localization of signals, we introduce a simulated glitch on top of our signals and repeat the same studies for one choice of inpainting configuration. We simulate a BBH merger with 10 M$_\odot$ component masses and add a sine-Gaussian burst with a frequency of 80 Hz on top of the injected signal. We then excise the glitch using the same inpainting and reweighting methods. We also calculate the same statistics when no glitch is included for comparison purposes. The results of this test are shown in Figure~\ref{fig6}. When including a glitch, we find that inpainting and reweighting is both more precise and accurate than inpainting alone and leads to greatly improved sky localizations compared to no mitigation at all. For the P-P panel (left) in Figure~\ref{fig6}, we see that the glitch significantly biases the error estimate in BAYESTAR. Inpainting corrects for some of the error, and reweighting leads to an even more accurate estimate than inpainting. The searched area panel (center) displays a similar behavior. We see that a glitch biases the accuracy of the skymap in the original data, and it is recovered best by reweighting. After inpainting and reweighting, the searched area distribution is consistent with the result for no glitch and no mitigation.
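The simulated glitch described above is a sine-Gaussian burst: a sinusoid under a Gaussian envelope whose width is set by a quality factor. A minimal generator (parameter names are our own, not a LIGO tool's API) looks like:

```python
import numpy as np

def sine_gaussian(t, t0, f0, q, amp=1.0):
    """Sine-Gaussian burst centered at t0 with central frequency f0 (Hz)
    and quality factor q; a standard toy model for short glitches."""
    tau = q / (2.0 * np.pi * f0)                      # envelope width from q
    envelope = np.exp(-((t - t0) ** 2) / (2.0 * tau ** 2))
    return amp * envelope * np.sin(2.0 * np.pi * f0 * (t - t0))
```

Adding `sine_gaussian(t, t0, 80.0, q)` to the injected strain produces a burst whose power is concentrated near 80 Hz and near $t_0$, which is then excised with the inpainting window.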
The area of the 90 percent credible region (right) shows the inpainted and reweighted histograms are quite close, but larger than the original data with or without the glitch included. As previously mentioned, this behavior is expected due to the information loss from inpainting. \section{Conclusions} \label{sec:Conclusions} As the presence of noise transients may bias parameter estimation for GW signals, robust mitigation methods are needed to address any identified data quality issues. This is particularly important for low latency applications, where trustworthy information is needed as quickly as possible. In this work we have presented a solution to this challenge for localizing the source of an identified gravitational-wave signal. Although it is difficult to know if a specific glitch leads to a bias in the localization of a signal, we have shown that our method can quickly produce skymaps that are known to be unbiased in a variety of scenarios. The reweighting method discussed in this work is shown to be particularly helpful when significant portions of the signal need to be inpainted due to a glitch, as removing data via the inpainting process (or other similar methods) contributes its own bias. This was a noticeable effect even with the 64 ms (less than a tenth of a second) gate that we used in the tests shown previously. Other tests we conducted show the same bias for larger gate widths that could be necessary for certain types of observed glitches such as slow scattering~\cite{LIGO:2020zwl}. We therefore expect pySLIDE to lead to significant improvements in the sky localization in these scenarios as compared to previously utilized methods. Although useful in a wide variety of situations, we did identify a number of limitations of this method.
In cases where the excised data includes the time of merger, we find that the produced skymaps are biased, likely due to some of the assumptions made about the relationship between the SNR and the sky localization accuracy. This may be possible to mitigate by accounting for the change in signal bandwidth after inpainting; we reserve this investigation for future work. We also note that in cases where large portions of the signal are removed by inpainting, there are fewer benefits to using this method as opposed to simply excluding the impacted detector from the analysis entirely. While this method is still applicable, excluding a detector is both more straightforward and guaranteed not to introduce biases in the sky localization. In future observing runs we expect the sensitivity of the detectors to increase and to detect more events. If the data quality in these runs is similar to past runs, this means that it is more likely there will be instances of glitches overlapping signals from BNS mergers. The method presented here can easily handle these types of noise transients, and is particularly useful compared to other methods when long gates are necessary. Future work will involve packaging this method into a tool that can be used in low latency, and exploring how we can investigate the source of the previously outlined limitations of the reweighting method. For upcoming observing runs, it is imperative that we have a way to mitigate instrumental artifacts in the detector instantaneously. Quick and reliable sky localization of GW signals will help facilitate additional multi-messenger detections. Our method provides a way to maximize the efficiency of these complex searches and strive towards reliable and useful information even in the presence of noise artifacts. \section*{Acknowledgements} The authors thank the other participants and mentors in the LIGO Caltech REU program for productive discussions and support.
We thank Ronaldas Macas for their comments during internal review of this manuscript. This work is supported by National Science Foundation grant PHY-1852081 as part of the LIGO Caltech REU program. This material is based upon work supported by NSF's LIGO Laboratory which is a major facility fully funded by the National Science Foundation. LIGO was constructed by the California Institute of Technology and Massachusetts Institute of Technology with funding from the National Science Foundation, and operates under cooperative agreement PHY-1764464. Advanced LIGO was built under award PHY-0823459. The authors are grateful for computational resources provided by the LIGO Laboratory and supported by National Science Foundation Grants PHY-0757058 and PHY-0823459. This work carries LIGO document number P2200195. \appendix \section{Complete results} \label{app:all_results} For each choice of masses, we completed 9 sets of 200 injections with varying inpainting window sizes and offsets from the time of merger. In this appendix, we present results for all of these analyses and discuss the effectiveness and limitations of this method. The specific masses, windows, and offsets for these analyses are listed in Table~\ref{tab:config}. The cumulative fraction of events identified at each credible interval for all tests is shown in Figure~\ref{fig:combined_pp}. As is done in the body of this paper, we normalize the plotted distribution based on the value with no mitigation to account for any biases inherent to BAYESTAR. The majority of configurations have results consistent with the expected one-to-one line. However, there are both specific configurations that depart from this line, as well as general trends for each signal class. The cases with the largest departures are when the inpainting window directly overlaps the signal's time of merger. These differences are especially large in the NSBH case.
The BNS results also show a systematic overestimation of the error in cases when the inpainting has a non-trivial impact on the result. However, these differences are small compared to the ``merger overlap'' scenario. Our results for how the searched area is affected by this method are shown in Figure~\ref{fig:combined_search}. We find that, in general, we are able to recover the same level of accuracy with our inpainted and reweighted results as the original analysis. However, there are notable exceptions for all three signal classes. Similar to the previous figure, these exceptions correspond to cases when large amounts of data are inpainted directly overlapping the signal's time of merger. Our results for how the 90 percent credible region is affected by this method are shown in Figure~\ref{fig:combined_90}. For this statistic, the results show a continuum from almost perfect agreement with the original analysis to order-of-magnitude increases in the 90 percent credible area. This behavior is expected from this method; when the data is inpainted, the loss of information should increase the error region. This increase in area of the 90 percent region is, in general, correctly representing the amount of information loss. This figure should be cross-referenced with Figure~\ref{fig:combined_pp} to understand if the reported errors are logically consistent. Overall, we find that in cases when the inpainting window does not directly overlap the signal's time of merger, the method presented in this work generates skymaps that are bias-free. Cases where the inpainting window overlaps the time of merger do show some biases. This is likely due to this method assuming that the information contained in each data segment relevant for sky localization is proportional to the SNR of the signal in that time window. However, the bandwidth of the signal also plays a subdominant role.
This assumption is less valid for time periods overlapping the time of merger, where the signal rapidly increases in frequency. Despite this limitation, this method may still be beneficial in these cases, as it is also known that the presence of a glitch at this time introduces the largest potential biases in the sky localization and other parameters~\cite{Macas:2022afm, powell2018parameter}. An additional consideration is whether the data is informative after inpainting. Although we show that the skymaps produced by this method are logically consistent after removing large portions of the signal, this scenario is similar to simply excluding the affected detector from the analysis. Excluding a detector is simpler than this method and should not introduce biases. We therefore recommend excluding the detector in cases when the majority of the signal SNR would be removed by inpainting. \bibliography{main}
Title: Exponentially amplified magnetic field eliminates disk fragmentation around the Population III protostar
Abstract: One critical remaining issue in clarifying the initial mass function of the first (Population III) stars is the final fate of secondary protostars formed in the accretion disk, specifically whether they merge or survive. We focus on the magnetic effects on the first star formation under the cosmological magnetic field. We perform a suite of ideal magnetohydrodynamic simulations until 1000 years after the first protostar formation. Instead of the sink particle technique, we employ a stiff equation of state approach to represent the magnetic field structure connecting to protostars. Ten years after the first protostar formation in the cloud initialized with $B_0 = 10^{-20}$ G at $n_0 = 10^4\,{\rm cm^{-3}}$, the magnetic field strength around protostars is amplified from pico- to kilo-gauss, the same strength as in present-day stars. The magnetic field rapidly winds up since the gas in the vicinity of the protostar ($\leq\!10$ au) has undergone several tens of orbital rotations in the first decade after protostar formation. As the mass accretion progresses, the vital magnetic field region extends outward, and the magnetic braking eliminates fragmentation of the disk that would form in the unmagnetized model. On the other hand, assuming a gas cloud with small angular momentum, this amplification might not work because the rotation would be slower. However, disk fragmentation would not occur in that case. We conclude that the exponential amplification of the cosmological magnetic field strength, about $10^{-18}$ G, eliminates disk fragmentation around the Population III protostars.
https://export.arxiv.org/pdf/2208.01216
\title{Exponentially amplified magnetic field eliminates disk fragmentation \\around the Population III protostar} \correspondingauthor{Shingo Hirano} \email{hirano@astron.s.u-tokyo.ac.jp} \author[0000-0002-4317-767X]{Shingo Hirano} \affiliation{Department of Astronomy, School of Science, University of Tokyo, Tokyo 113-0033, Japan} \affiliation{Department of Earth and Planetary Sciences, Faculty of Science, Kyushu University, Fukuoka 819-0395, Japan} \author[0000-0002-0963-0872]{Masahiro N. Machida} \affiliation{Department of Earth and Planetary Sciences, Faculty of Science, Kyushu University, Fukuoka 819-0395, Japan} \keywords{% Magnetohydrodynamical simulations (1966) --- Primordial magnetic fields (1294) --- Population III stars (1285) --- Star formation (1569) --- Stellar accretion disks (1579) --- Protostars (1302) } \section{Introduction} \label{sec:intro} One of the significant challenges in modern cosmology is the formation process of the first generation of stars, the so-called Population III (Pop III) stars. They influence all subsequent star and galaxy evolution in the early Universe through their input of ionizing radiation and heavy chemical elements, depending on the final fates of Pop III stars \citep{yoon12}. There have been no direct observations yet, but the nature of the first stars has been elucidated by theoretical studies, in particular with numerical simulations of increasing physical realism \citep[for recent reviews,][]{greif15}. Furthermore, several indirect constraints exhibit the imprint of the first stars: $<\!0.8\,\msun$ low-mass stars capable of surviving to the present day \citep[e.g.,][]{magg18}, massive binaries of about $100\,\msun$ which are promising progenitors of black hole (BH-BH) mergers like the observed gravitational-wave sources \citep[e.g.,][]{kinugawa14}, and $\sim\!10^5\,\msun$ supermassive stars which can leave massive seed BHs of the high-$z$ supermassive black holes \citep[SMBHs; e.g.,][]{inayoshi20}.
There is a need to update the theoretical model for the formation and evolution of the first stars to predict their observational signature in light of the upcoming suite of next-generation telescopes. One of the key unresolved issues in the first star formation theory is the efficiency of magnetic effects (e.g., magnetic braking). Previous studies have identified several effects assuming primordial star-forming gas clouds have strong magnetic fields: delaying the gas contraction in the host dark matter (DM) minihalo and the first star formation \citep[e.g.,][]{koh21}, preventing disk fragmentation with efficient angular momentum transport due to the magnetic field \citep{machida13,sadanari21}, and reducing the protostellar rotation degree, which can also control the final fate of Pop III stars \citep{hirano18}. However, it is known that the primordial magnetic field of the universe \citep[$10^{-18}$\,G;][]{ichiki06} is extremely weak compared to the magnetic field of nearby star-forming regions ($\sim\!10^{-6}$\,G). The magnetic field amplification by the flux freezing during the cloud collapse phase, $B \propto n^{2/3}$, is insufficient to provide the magnetic field strength to affect the first star formation. The small-scale turbulent dynamo can lead to further amplification \citep[summarized in][]{mckee20}. Cosmological magnetohydrodynamical (MHD) simulations provide power-law fits to their results, to be compared to the flux-freezing expression, specifically $B \propto n^{0.83}$ \citep{federrath11} and $B \propto n^{0.89}$ \citep{turk12}. However, the amplification level via the small-scale turbulent dynamo depends on the numerical resolution \citep{sur10,sur12}. A recent MHD simulation \citep{stacy22} showed that the small-scale dynamo could contribute only one or two orders of magnitude to the magnetic field amplification during gas cloud contraction. Hence, most of the amplification comes from compressional flux-freezing.
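To make these scalings concrete, the following sketch compares the compressional flux-freezing amplification with the steeper simulation-fit power laws over an illustrative density range. The final density here is an arbitrary choice for illustration, not a value from the simulations:

```python
# Seed field B0 at density n0, amplified to density n under B ∝ n^alpha.
def amplified_field(b0, n0, n, alpha):
    return b0 * (n / n0) ** alpha

b0, n0, n = 1e-18, 1e4, 1e16   # G and cm^-3; cosmological seed, illustrative densities
b_freeze = amplified_field(b0, n0, n, 2.0 / 3.0)   # flux freezing: 1e-10 G
b_fit83 = amplified_field(b0, n0, n, 0.83)          # federrath11 fit
b_fit89 = amplified_field(b0, n0, n, 0.89)          # turk12 fit
```

Even the steeper fits leave the field many orders of magnitude below the kilogauss strengths reached by the exponential amplification mechanism discussed in this letter, illustrating why compression alone is insufficient.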
Recently, \cite{hirano21} reported a new mechanism of magnetic field amplification during the protostellar accretion phase in primordial atomic-hydrogen (H) cooling gas clouds. Many fragments of a gravitationally unstable gas cloud amplify the magnetic field due to their rotational motion and form a vital magnetic field region around the protostar. The simulations in \cite{hirano21} adopted the stiff equation of state (EOS) technique, which allowed them to calculate the coupling between the high-density region and the magnetic field, which is essential in reproducing this amplification mechanism. However, other MHD simulations of the first star formation replaced the dense region by a sink particle \citep[$n_{\rm sink} \sim 10^{13}\,\cc$;][]{sharda20,sharda21,stacy22}, which results in the loss of the magnetic field connection to the dense region. Therefore, it remains unclear whether a similar amplification occurs in the primordial molecular hydrogen (H$_2$) cooling gas clouds in such simulations. Our previous simulations \citep{machida13} assumed a relatively strong magnetic field ($10^{-10}$--$10^{-5}$\,G at $n = 10^4\,\cc$) and did not consider a cosmological weak magnetic field as an initial condition. We performed three-dimensional ideal MHD simulations of the primordial star formation using the stiff-EOS technique. We find that exponential magnetic field amplification occurs in the vicinity of the protostar in the first three years after the first Pop III protostar formation and that the field-amplified region completely suppresses disk fragmentation. This letter introduces this new exponential magnetic field amplification mechanism and the substantial expansion of the amplified region. In the following Paper II, we will discuss the effects of model parameters on the first star formation in detail. \section{Methods} \label{sec:methods} \subsection{Numerical methodology} We solve the ideal MHD equations with the barotropic equation of state (EOS).
Note that non-ideal MHD effects are not effective in primordial gas clouds \citep[e.g.,][]{higuchi18}. To represent the thermal evolution in the zero-metallicity cloud, we adopt an EOS table based on a chemical reaction calculation \citep{omukai08}. Instead of the sink particle technique, we employ the stiff-EOS approach, established in our past works \citep{machida13,machida14,hirano17,hirano21,susa19}, to represent the magnetic field structure connecting to the dense gas region. Figure~\ref{f1} shows the resultant EOS tables. We adopt two threshold densities $n_{\rm th}$ to study the early amplification and later expansion of the magnetic field, respectively: (1) simulations until $t_{\rm ps} = 100$\,yr with $n_{\rm th} = 10^{19}\,\cc$, which reproduce hydrostatic cores whose radii are consistent with the mass-radius relation of an accreting primordial protostar \citep{hosokawa10}, and (2) simulations until $t_{\rm ps} = 1000$\,yr with $n_{\rm th} = 10^{16}\,\cc$. We define the epoch of the first protostar formation ($t_{\rm ps} = 0$\,yr) as the time when the gas number density first reaches the threshold density ($n_{\rm max} = n_{\rm th}$). We use the nested grid code \citep{machida15}, in which rectangular grids of ($n_{\rm x}$, $n_{\rm y}$, $n_{\rm z}$) = ($256$, $256$, $32$) are superimposed. The base grid has a box size $L(0) = 9.83 \times 10^5$\,au and a cell size $h(0) = 3.84 \times 10^3$\,au. A new finer grid is generated to resolve the Jeans wavelength with at least 32 cells. The maximum grid level and the finest cell size are $l = 17$ and $h(17) = 0.0293$\,au for runs with $n_{\rm th} = 10^{19}\,\cc$, and $l = 14$ and $h(14) = 0.234$\,au for $n_{\rm th} = 10^{16}\,\cc$, respectively. \subsection{Initial Condition} \label{sec:initial} The initial cloud has an enhanced Bonnor-Ebert (BE) density profile $n(r) = f \cdot n_{\rm BE}(r)$ with an enhancement factor $f = 1.6$ to promote the cloud contraction. 
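The quoted grid spacings follow from successive halving of the base cell; a quick arithmetic check (a sketch, not the simulation code):

```python
# Each nested-grid refinement level halves the cell size: h(l) = h(0) / 2**l.
h0 = 3.84e3  # base cell size [au]

def cell_size(level):
    """Cell size at a given nested-grid level, in au."""
    return h0 / 2**level

print(f"h(17) = {cell_size(17):.4f} au")  # finest grid for n_th = 1e19 cm^-3
print(f"h(14) = {cell_size(14):.3f} au")  # finest grid for n_th = 1e16 cm^-3
```

This reproduces the quoted $h(17) = 0.0293$\,au and $h(14) = 0.234$\,au.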
The initial central density is $f \cdot n_{\rm BE}(r = 0) = f \cdot 10^4\,\cc$. The mass and size of the initial cloud are $M_{\rm cl,0} = 4.83 \times 10^{3}\,\msun$ and $R_{\rm cl,0} = 2.38$\,pc, respectively. A rigid rotation of $\Omega_0 = 1.31 \times 10^{-14}\,{\rm s^{-1}}$ is imposed. With these settings, the ratios of the thermal and rotational energies to the gravitational energy of the initial cloud are $\alpha_0 = 0.533$ and $\beta_0 = 0.0209$, respectively. We do not include turbulence and do not consider a small-scale dynamo \citep[e.g.,][]{sur10,mckee20}, because we consider only very weak fields, which are significantly amplified by the rotational motion of protostars (see \S\ref{sec:dis} for details). \subsection{Model Parameter} We impose a uniform magnetic field $B_0$ with the same direction as the initial cloud's rotation axis over the whole computational domain. We examine the dependence of the first star formation on $B_0 = 0$, $10^{-20}$, $10^{-15}$, and $10^{-10}$\,G (labeled B00, B20, B15, and B10). We adopt B20 as the fiducial model in this study because $B_0 = 10^{-20}$\,G is lower than the cosmological value of $\sim\!10^{-18}$\,G \citep{ichiki06}. If we can show that magnetic fields affect first star formation even in this model, then the first stars cannot avoid the effects of magnetic fields. By comparing B20 with B00, we examine the magnetic effect on the first star formation. By comparing B20 with B15 and B10, we study the necessity of early amplification of the magnetic field strength before the star-forming cloud forms. \section{Results} \label{sec:res} This section shows the results of MHD simulations with different initial magnetic field strengths (B00, B20, B15, and B10). We run the four models with the two threshold densities until $t_{\rm ps} = 100$\,yr ($n_{\rm th} = 10^{19}\,\cc$) and $1000$\,yr ($n_{\rm th} = 10^{16}\,\cc$). 
Since the results of models with different resolutions converge outside the resolution limit\footnote{We find no resolution-dependent effects, such as the magnetorotational instability, in our simulations.}, this section shows the combined results of the first $100$ years with $n_{\rm th} = 10^{19}\,\cc$ and the following $900$ years with $n_{\rm th} = 10^{16}\,\cc$. \subsection{Fiducial model} Figure~\ref{f2} compares the simulation results of the fiducial model (B20) and the unmagnetized model (B00) during the first 1000 years of the protostellar accretion phase. At the birth of the first protostar (left panels), the density structure around the protostar is identical in the two models because the magnetic field strength in the vicinity of the protostar is too weak (pico-gauss $= 10^{-12}$\,G at most) to affect the collapsing gas cloud. However, after a decade (middle column of Figure~\ref{f2}), the magnetic field strength on the primary protostar surface ($\sim\!30\,\rsun$) is amplified to kilo-gauss \citep[similar to Pop I protostars;][]{johns-krull07}, and this strong ``seed'' field amplifies the surrounding field within a radius of $10$\,au. Within this strongly magnetized region, the density and velocity structure of the accreting gas are affected by the magnetic field. After 1000 years (right column of Figure~\ref{f2}), the amplified magnetic field region extends to a radius of about $500$\,au, and the multiple protostars that appear in the unmagnetized model are absent in the fiducial model. A global spiral structure of gas appears inside the amplified region owing to the angular momentum transport by magnetic braking, which allows accretion to proceed efficiently. The origin of this exponential magnetic field amplification from $10^{-12}$\,G to $10^{3}$\,G near the protostars is the rapid rotational motion, which winds up the magnetic field. 
The magnetic field strength in the vicinity of the protostar is sufficiently amplified within the first three years after protostar formation (Figure~\ref{f3}a). In the region $\le\,10$\,au, the number of orbital rotations exceeds one at $t_{\rm ps} = 0$\,yr and reaches several dozen at $t_{\rm ps} = 3$\,yr (Figure~\ref{f3}b). The magnetic field amplification region then widens over time, and its outer radius tracks the radius at which the number of orbital rotations exceeds unity, $N_{\rm rot} = 1$. Figure~\ref{f3}(c) shows the negative radial velocity $-v_{\rm rad}$ and indicates that the amplified field cannot significantly impede the gas accretion onto the central region. Figure~\ref{f3}(d) plots the ratio of the radial velocity to the rotational (or azimuthal) velocity; the gas falls toward the center while rotating. These figures indicate that gravitational energy is efficiently converted into magnetic energy through kinetic (rotational) energy after the first protostar formation. We expect that the amplified magnetic field region could spread to about $10^4$\,au ($\sim\!0.05$\,pc), inside which the total gas mass is about $500\,\msun$ in this model, by the end of the accretion phase of the first stars (about $10^5$\,yr; dotted line). How does this exponentially amplifying magnetic field affect the formation process of the first stars? The amplified magnetic field eliminates fragmentation of the gravitationally unstable accretion disk, and a single protostar forms at the center of the cloud (Figure~\ref{f4}a). On the other hand, the stellar masses do not differ between the magnetized and unmagnetized models (Figure~\ref{f4}b); in either case, the fragments born in the vicinity of the protostar merge with it immediately. The rotation velocities, the second important parameter of stellar evolution theory, are nearly constant at $\sim\!0.05$ times the Keplerian velocity, regardless of model and evolution time (Figure~\ref{f4}c). 
Since the rotational degree is low, it seems reasonable to adopt a non-rotating model for the stellar evolution \citep[e.g.,][]{yoon12}. \subsection{Dependence on the initial B-field strength} We simulated two comparison models (B15 and B10) with higher initial magnetic field strengths than the fiducial model to account for the effects of other amplification mechanisms, which do not appear in our simulations. The evolution of the collapsing gas cloud is almost identical among the three models until $t_{\rm ps} = 0$\,yr except for the magnetic field strength distribution. Figure~\ref{f5} shows that, in both cases, exponential magnetic field amplification occurs immediately after $t_{\rm ps} = 0$\,yr, as in the fiducial model. Because the expansion of the amplified magnetic field region completely prevents disk fragmentation, all magnetized models show the same result: the formation of a single first star (Figure~\ref{f4}a). \section{Discussion} \label{sec:dis} The magnetic field amplification after protostar formation proceeds in the following steps: (1) the rotational motion of the protostars amplifies the small magnetic field around them; (2) the amplification rate of the magnetic field in the surrounding region increases according to the induction equation, $\partial {\bf B} / \partial t = \nabla \times ( {\bf v \times B})$; (3) the rotation-dominated region gradually extends outward, where the magnetic field is again amplified by mechanism (1). MHD simulations adopting the sink particle technique cannot reproduce the ``seed'' magnetic field amplification around the protostar and its subsequent outward propagation because the rotation of the high-density region does not couple with the magnetic field. The high accretion rate in the atomic hydrogen (H) cooling halo causes many fragments, which amplify the magnetic field through their rotation \citep{hirano21}. 
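A textbook winding estimate (not derived explicitly in this work) clarifies step (2): for an axisymmetric rotation ${\bf v} = r\Omega(r)\,\hat{\bf e}_{\phi}$ threading a radial field $B_r$, the azimuthal component of the induction equation reduces to
\begin{equation}
\frac{\partial B_{\phi}}{\partial t} \simeq r B_r \frac{\partial \Omega}{\partial r} \sim \Omega B_r ,
\end{equation}
so the toroidal field after $N_{\rm rot}$ orbits is of order $B_{\phi} \sim 2\pi N_{\rm rot} B_r$. The field thus grows with the number of orbital rotations, consistent with the amplified region tracking the $N_{\rm rot} = 1$ radius in Figure~\ref{f3}.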
In the molecular hydrogen (H$_2$) cooling halo investigated in this study, disk fragments appear only at the initial stage and soon merge into the primary protostar. The orbital rotation around the protostar alone can amplify the magnetic field without further fragmentation. This amplification mechanism is unique to star formation in the early Universe: it does not occur in nearby star-forming regions, where the magnetic field saturates before the protostellar accretion phase and magnetic feedback therefore cannot be ignored. This magnetic field amplification prohibits fragmentation of the accretion disk. If the star-forming gas cloud has a sufficient degree of rotation, a gravitationally unstable accretion disk forms and fragments, but the amplified magnetic field immediately suppresses disk fragmentation. Conversely, if the rotation of the gas cloud is weak, the amplification of the magnetic field by rotation is less efficient; in this case, a gravitationally unstable accretion disk cannot form and disk fragmentation does not occur. In either case, a single first star remains. We note some caveats regarding the rotational amplification of the magnetic field. Recent studies have suggested turbulence as an amplification mechanism of the magnetic field and have shown that disk fragmentation is not significantly suppressed in turbulent environments \citep[e.g.,][]{sharda21,Prole22}. We did not consider turbulence in this study (\S\ref{sec:initial}). The rotational amplification mechanism may not be effective in highly turbulent environments because turbulent reconnection (or reconnection diffusion) breaks the coupling between the magnetic field and the gas (or fluid motion) even in ideal MHD calculations \citep[e.g.,][]{lazarian99,lazarian20}. Thus, the existence of (strong) turbulence may significantly change our results, which we will investigate in a future paper. 
In addition, the rotational amplification would not be efficient when magnetic dissipation, such as ambipolar diffusion and Ohmic dissipation, is effective and the amplified field significantly dissipates. As shown in \cite{higuchi18}, ambipolar diffusion becomes effective in the high-density region ($n\gtrsim 10^{12}\,\cc$) when the magnetic field strength exceeds $B \gtrsim 0.1 - 1$\,kG. Thus, we will need to consider ambipolar diffusion at later evolutionary stages. Next, we discuss the amplification of the magnetic field and the treatment of protostars. We have used the stiff-EOS technique instead of the sink particle technique, as in our previous studies \citep[e.g.,][]{machida13,machida15,hirano21}, because some physical quantities related to the amplification or accumulation of the magnetic field (or flux), such as the mass-to-flux ratio and the kinetic energy, can then be conserved. The sink particle technique removes only the gas around the sink particles without removing magnetic flux. Thus, the mass-to-flux ratio is not conserved and would decrease with time. With a small mass-to-flux ratio, the magnetic flux leaks out of the region around the sink particle, for example through the interchange instability \citep{Zhao11,Machida20}. We also showed that rotation around protostars amplifies the magnetic field. However, the rotational energy, which is proportional to the mass, is substantially removed with the sink particle technique. In addition, the high-density gas is no longer coupled with the magnetic field after the gas is removed. Thus, the rotational amplification of the magnetic field should be underestimated with the sink particle technique. For these reasons, we used the stiff-EOS technique. We expect that the difference in the results among recent studies (rare or frequent fragmentation) is attributable to the treatment of the protostar (sink particle or stiff-EOS technique) and the inclusion of turbulence. 
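For reference, the mass-to-flux ratio discussed here is conventionally normalized by its critical value (a standard definition; the precise numerical coefficient varies slightly between studies):
\begin{equation}
\lambda \equiv \frac{M/\Phi}{\left(M/\Phi\right)_{\rm crit}}, \qquad \left(M/\Phi\right)_{\rm crit} \simeq \frac{1}{2\pi\sqrt{G}},
\end{equation}
where $\lambda > 1$ (supercritical) clouds can collapse against magnetic support. Removing gas but not flux, as a sink particle does, artificially drives $\lambda$ downward.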
Finally, we comment on observational constraints on the first stars. Our result is consistent with the observational constraint that no low-mass ($<\!0.8\,\msun$) surviving first stars have been found in the Galaxy. Although the amplified magnetic field prohibits low-mass disk fragmentation, it is unclear whether the magnetic field affects wide binary/multiple formation \citep[separation $>10^3$\,au;][]{sugimura20} and the chemo-thermal instability at the Jeans scale \citep{hirano18sv}, which produce more massive fragments. If a massive first-star binary forms through these channels, it could leave a massive compact binary, a promising progenitor of BH-BH mergers such as the observed gravitational wave sources \citep{kinugawa14}. We are interested in the contribution of the amplified magnetic field to the contraction of the binary orbit, but this is outside the scope of this study. \section{Conclusion} \label{sec:con} We introduce a new exponential amplification mechanism of the magnetic field during the accretion phase of the first star formation. Even if the star-forming gas cloud has only the cosmological magnetic field strength, the orbital rotation around the protostar amplifies the tiny magnetic seed to kilo-gauss, comparable to present-day protostars, in less than ten years after protostar formation. The amplified magnetic field region expands during the accretion phase and reaches $\sim\!10^4$\,au (enclosing $\sim\!500\,\msun$) at $t_{\rm ps} \sim 10^5$\,yr, when the protostar reaches the zero-age main sequence. Since the strong magnetic field completely prevents disk fragmentation, only one protostar forms in each accretion disk. We conclude that the first star formation is inevitably affected by magnetic fields even if the initial magnetic field strength is the cosmological value of about $10^{-18}$\,G. \cite{hirano21} showed the magnetic field amplification in atomic hydrogen (H) cooling gas clouds. 
This letter shows that the same amplification also occurs in molecular hydrogen (H$_2$) cooling gas clouds, which have lower accretion rates and a limited number of fragments. The following Paper II will discuss in detail how the effects on the gas cloud evolution depend on the initial magnetic field strength. We note that the magnetic field amplification shown in this study would not operate in contemporary star formation because the magnetic field significantly dissipates within the disk; for this reason, the mechanism has been overlooked until now. In the future, we will perform a parameter survey of MHD simulations over the parameter ranges of primordial star-forming gas clouds obtained from cosmological simulations \citep{hirano14,hirano15}. Although disk fragmentation is wholly eliminated in this study, we will check whether gas clouds with different physical parameters, such as the accretion rate and the degree of rotation, lead to the same outcome. In addition, the current simulations end at $t_{\rm ps} = 1000$\,yr, and additional calculations are needed to determine the final stellar mass at $t_{\rm ps} \sim 10^5$\,yr, when the first star reaches the zero-age main-sequence stage. In the future, we will fully update the first star formation theory to incorporate MHD effects and determine the formation rates of observational counterparts, such as low-mass surviving stars and massive BH binaries. \begin{acknowledgments} This work used the computational resources of the HPCI system provided by the supercomputer system SX-Aurora TSUBASA at the Cyberscience Center, Tohoku University, and the Cybermedia Center, Osaka University, through the HPCI System Research Project (Project IDs: hp210004 and hp220003), and the Earth Simulator at JAMSTEC provided by the 2021 and 2022 Koubo Kadai. S.H. was supported by JSPS KAKENHI Grant Numbers JP21K13960 and JP21H01123 and Qdai-jump Research Program 02217. M.N.M. 
was supported by JSPS KAKENHI Grant Numbers JP17K05387, JP17KK0096, JP21K03617, and JP21H00046 and University Research Support Grant 2019 from the National Astronomical Observatory of Japan (NAOJ). \end{acknowledgments} \bibliography{ms}{} \bibliographystyle{aasjournal}
Title: Fast neutron background characterization of the future Ricochet experiment at the ILL research nuclear reactor
Abstract: The future Ricochet experiment aims at searching for new physics in the electroweak sector by providing a high precision measurement of the Coherent Elastic Neutrino-Nucleus Scattering (CENNS) process down to the sub-100 eV nuclear recoil energy range. The experiment will deploy a kg-scale low-energy-threshold detector array combining Ge and Zn target crystals 8.8 meters away from the 58 MW research nuclear reactor core of the Institut Laue Langevin (ILL) in Grenoble, France. Currently, the Ricochet collaboration is characterizing the backgrounds at its future experimental site in order to optimize the experiment's shielding design. The most threatening background component, which cannot be actively rejected by particle identification, consists of keV-scale neutron-induced nuclear recoils. These initial fast neutrons are generated by the reactor core and surrounding experiments (reactogenics), and by cosmic rays producing primary neutrons and muon-induced neutrons in the surrounding materials. In this paper, we present the Ricochet neutron background characterization using $^3$He proportional counters which exhibit a high sensitivity to thermal, epithermal and fast neutrons. We compare these measurements to the Ricochet Geant4 simulations to validate our reactogenic and cosmogenic neutron background estimations. Finally, we present our estimated neutron background for the future Ricochet experiment and the resulting CENNS detection significance.
https://export.arxiv.org/pdf/2208.01760
\title{Fast neutron background characterization of the future \Ricochet{} experiment at the ILL research nuclear reactor}% \input{authorlist} \section{Introduction} \label{sec:ricochet-intro} Coherent elastic neutrino-nucleus scattering (CENNS) was predicted in 1974~\cite{Freedman:1973yd} and observed experimentally for the first time in 2017~\cite{Akimov:2017ade}. This elastic scattering process, inducing nuclear recoils of a few keV at most, proceeds via the neutral weak current and benefits from a coherent enhancement proportional to the square of the number of neutrons~\cite{Freedman:1973yd}, suggesting that even a kg-scale experiment, located in the proximity of a research or commercial nuclear reactor, can observe a sizable neutrino signal. The search for physics beyond the Standard Model with CENNS requires measuring the sub-100~eV energy range of the induced nuclear recoils with the highest level of precision, as most new physics signatures induce spectral distortions in this energy region~\cite{Billard:2018jnl}. These include, for instance, the existence of sterile neutrinos and of new mediators that could be related to the long-standing Dark Matter problem, and the possibility of Non Standard Interactions that would dramatically affect our understanding of the electroweak sector. Thanks to its exceptionally rich science program, CENNS has led to significant worldwide experimental efforts over the last decades, with several ongoing and planned dedicated experiments based on a host of techniques. Most of these experiments are, or will be, located at nuclear reactor sites producing low-energy neutrinos with mean energies of about 3~MeV: CONNIE using Si-based CCDs~\cite{Aguilar-Arevalo:2016khx}; TEXONO~\cite{Kerman:2016jqp}, NuGEN~\cite{Belov:2015ufh}, and CONUS~\cite{Bonet:2020awv} using ionization-based Ge semiconductors; and MINER~\cite{Agnolet:2016zir}, NuCLEUS~\cite{Strauss:2017cuu}, and \Ricochet{}~\cite{Ricochet:2021rjo} using cryogenic detectors. 
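The "few keV at most" statement follows from two-body elastic kinematics, which give a maximum recoil energy $T_{\max} = 2E_\nu^2/(M + 2E_\nu)$; a short sketch (the mean Ge mass number used here is an assumption, not a number from this work):

```python
# Maximum CENNS nuclear recoil energy from elastic kinematics (MeV units).
A_GE = 72.6                 # mean Ge mass number (assumed)
M_GE = A_GE * 931.494       # nuclear mass [MeV]

def t_max(e_nu, m_nucleus=M_GE):
    """Maximum recoil energy [MeV] for a neutrino of energy e_nu [MeV]."""
    return 2 * e_nu**2 / (m_nucleus + 2 * e_nu)

print(f"T_max(3 MeV) = {t_max(3.0)*1e3:.2f} keV")  # typical reactor neutrino
print(f"T_max(8 MeV) = {t_max(8.0)*1e3:.2f} keV")  # near the spectrum endpoint
```

The sub-keV recoils produced by typical 3~MeV reactor neutrinos underline why sub-100~eV thresholds are required.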
Only the COHERENT experiment~\cite{Akimov:2017ade,Akimov:2020pdx} is looking at higher neutrino energies, of about 30~MeV on average, produced by the Spallation Neutron Source (SNS) in Oak Ridge, and experiments are planned at the European Spallation Source (ESS) in Lund~\cite{Baxter:2019mcx}. The \Ricochet{} experiment seeks to utilize a kg-scale cryogenic detector payload combining Zn and Ge target crystals with a sub-100~eV energy threshold and particle identification capabilities down to that threshold to reject the dominant gamma-induced electronic recoil background. Such identification will be achieved thanks to the double heat-and-ionization measurement in the semiconducting Ge targets, and to pulse shape discrimination in the superconducting Zn crystals. In this context, neutron-induced nuclear recoils are expected to be the limiting background for the future \Ricochet{} experiment, which will be located near the nuclear reactor of the Institut Laue Langevin (ILL). The close proximity to the reactor core comes at the cost of an additional reactor-correlated fast neutron background, called reactogenic neutrons, which could mimic a CENNS signal in the Ge and Zn target detectors, hence limiting the expected \Ricochet{} CENNS sensitivity at ILL. In this paper we present our fast neutron background characterization of the ILL-H7 site, where \Ricochet{} will be installed, and its implication for the expected background levels of the future \Ricochet{} experiment. To do so, we compare data taken with a $^3$He proportional counter, sensitive to both thermal and fast neutrons, with Geant4 simulations. Additionally, to further assess the robustness of the presented method, we also characterized the cosmogenic neutron background at the {\it Institut de Physique des 2 Infinis de Lyon} (IP2I) cryogenic test facility, where Ge bolometers with particle identification capabilities have been operated~\cite{misiak:tel-03328713,billard:tel-03259707}. 
We show that this low-radioactivity $^3$He proportional counter is well-suited to constrain the fast neutron background at the future \Ricochet{} experiment. In light of these results, we conclude with the \Ricochet{} shielding optimization and the anticipated nuclear recoil background induced by reactogenic and cosmogenic neutrons. \section{The Ricochet experiment} \label{sec:Ricochet} The future \Ricochet{} experiment will be deployed at the ILL-H7 site (see Figure~\ref{fig:RicochetSetup}). The H7 site starts at about 7~m from the ILL reactor core, which provides a nominal thermal power of 58.3~MW, leading to a neutrino flux at the \Ricochet{} detectors, 8.8~m from the reactor core, of about $1.1\times 10^{12}$~cm$^{-2}$s$^{-1}$. This corresponds to a CENNS event rate of approximately 12.8 and 11.2~events/kg/day with a 50~eV energy threshold for the Ge and Zn target crystals, respectively. The reactor is operated in cycles of typically 50 days duration, with reactor-off periods sufficiently long to measure reactor-independent backgrounds with high statistics, including internal radioactivity and cosmogenic-induced backgrounds. The ILL-H7 experimental site is about 3~m wide, 6~m long and 3.5~m high. It is located below a water channel providing about 15~m.w.e. of shielding against cosmic radiation. It is not fed by a neutron beam and is well-shielded against irradiation from the reactor and neighboring instruments (IN20 and D19). Lastly, the STEREO neutrino experiment successfully operated at this site from 2016 to 2020~\cite{Allemandou:2018vwb}. The \Ricochet{} shielding will be divided into two parts: a 300~K outer shielding and a cryogenic inner one. The outer shielding will be composed of a 35~cm thick layer of 3\%-borated polyethylene, to thermalize and capture fast neutrons, surrounded by a 20~cm thick layer of lead to mitigate the gamma flux. 
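The quoted neutrino flux can be cross-checked with a back-of-the-envelope estimate; the energy released per fission ($\sim$200~MeV) and the $\sim$6 antineutrinos emitted per fission are standard assumed values, not numbers from this work:

```python
import math

P_TH = 58.3e6                     # reactor thermal power [W]
E_FISSION = 200e6 * 1.602e-19     # ~200 MeV released per fission [J] (assumed)
NU_PER_FISSION = 6.0              # antineutrinos per fission (assumed)
R = 880.0                         # detector distance [cm]

fissions_per_s = P_TH / E_FISSION
flux = fissions_per_s * NU_PER_FISSION / (4 * math.pi * R**2)
print(f"neutrino flux ~ {flux:.2e} cm^-2 s^-1")
```

This reproduces the quoted $1.1\times 10^{12}$~cm$^{-2}$s$^{-1}$ to within rounding.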
Additionally, another 35~cm thick layer of polyethylene will be positioned on top to further reduce the cosmogenic fast neutron flux. The whole setup will be surrounded with 0.5~cm thick soft iron to reduce the magnetic stray field originating from neighboring experiments. This outer shielding will be divided into three sections installed on rails to allow easy access to the cryostat. Lastly, muon-induced gamma and neutron backgrounds will be further reduced thanks to a surrounding muon veto, made of two layers of 3~cm thick plastic scintillator, to reject events in temporal coincidence with detected muons. The cryogenic inner shielding, installed inside the cryostat above the detectors and composed of a 8.5~cm thick layer of lead and a 21~cm thick layer of polyethylene, with interleaved 1~cm thick copper layers, will ensure a closed shielding. Additionally, 8~mm thick polyethylene layers mounted on each thermal screen will further improve the shielding tightness. Finally, up to two 1~mm thick layers of mumetal will also be added between thermal screens to further reduce the residual magnetic field from adjacent experiments. Note that the muon veto will also include a cryogenic portion at 50~K to avoid a significant gap in veto coverage where it crosses the cryostat. According to our cosmogenic simulations, such a muon veto should exhibit a muon-induced trigger rate of about 400~Hz, which will be manageable with our $\sim$100~$\mu$s timing resolution bolometers at a reasonable livetime loss of less than 30\%~\cite{billard:tel-03259707}. \section{Thermal and fast neutron detection with a low-radioactivity \texorpdfstring{$^3$He}{3He} proportional counter} \label{sec:He3counter} To characterize the neutron background at the ILL-H7 site, we used a proportional counter tube filled with $^3$He gas. 
Thermal and fast neutrons are detected via the following capture reaction: \begin{equation} n \ + \ ^3\text{He} \longrightarrow p \ + \ t \ \ \ (764~\text{keV} \ + \ E_{n}) \end{equation} where $E_{n}$ is the neutron kinetic energy. The $^3$He(n,p) cross section for thermal neutrons is $\sigma = 5333 \pm 7$~b~\cite{BROWN20181} and drops below several barns for neutron energies between 100~keV and 10~MeV, where elastic scattering becomes relevant~\cite{PhysRev.122.1853}. The CHM-57 counter~\cite{Vidyakin} used in this work has an active length of 860~mm and an internal diameter of 31~mm. The counter is filled with 400~kPa of $^3$He and 500~kPa of $^{40}$Ar, where the latter is used as a quencher to stabilize the avalanche process of the proportional chamber following an ionization signal. Intrinsic backgrounds from alpha decays of U and Th progenies in the walls were reduced by covering the detector's inner walls with 50-60~$\mu$m of Teflon and 1~$\mu$m of electrolytic copper~\cite{Vidyakin}. The ionization signal, predominantly driven by the ions drifting to the external cathode, is read out by an attached Cremat CR-110 single channel charge-sensitive preamplifier. The preamplified signal is then analyzed online by a DT5780 digitizer working in pulse height analysis mode\footnote{For more details see https://www.caen.it/products/dt5780/}.\\ A typical thermal neutron calibration spectrum is shown in Fig.~\ref{fig:ExampleSpectra} (left panel). The expected 764~keV peak from thermal neutron captures is clearly visible. A broad plateau at lower energies is also seen, resulting from captures occurring near the wall of the counter, where either the triton (t) or the proton (p) escapes without depositing its full energy. 
In Fig.~\ref{fig:ExampleSpectra} (left panel), two shoulder-like structures due to this so-called wall effect are clearly visible at 191~keV and 573~keV, which respectively correspond to the full collection of only the triton or only the proton recoil. These three characteristic features in the energy spectrum, at 191~keV, 573~keV and 764~keV, have been used to cross-check the energy scale and linearity of the detector response~\cite{Vidyakin}. According to SRIM-based simulations~\cite{SRIM}, and further confirmed with our Geant4 simulations detailed in Sec.~\ref{sec:PCsimulation}, the average proton and triton track lengths following a thermal neutron capture on $^3$He are about 2~mm and 0.7~mm, respectively. In this work, we focus on the high-energy portion of the observed energy spectrum, {\it i.e.} above 1~MeV in detected energy, in order to estimate the fast component of the neutron background at the ILL reactor. For energies beyond 1~MeV, we expect events to be predominantly due to elastic and inelastic (in-flight capture) scatterings of fast neutrons on $^3$He. Note that elastic scatterings on $^{40}$Ar nuclei are expected to have a negligible contribution to the observed energy spectrum beyond 1~MeV, as these would require neutron energies above 20~MeV due to both the kinematics and the 50\%-60\% ionization yield of Ar at a few MeV in recoil energy (see Sec.~\ref{sec:PCsimulation}). Also, thanks to their much lower stopping power, gamma-induced electronic recoils cannot deposit more than a few hundred keV in the detector volume. Ultimately, the only relevant background beyond 1~MeV in detected energy comes from alpha decays with degraded energies arising from residual radioactive contaminants. As mentioned above, this $^3$He proportional counter has been designed to minimize such contamination in order to offer a maximal sensitivity to fast neutron detection. 
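Both the wall-effect energies and the $^{40}$Ar statement follow from two-body kinematics; a short sketch (masses in atomic mass units; the 50\% ionization yield is taken from the range quoted above):

```python
M_P, M_T, M_N, M_AR = 1.0073, 3.0161, 1.0087, 39.962  # masses [amu]

# (1) Thermal capture n + 3He -> p + t: the 764 keV Q-value is shared in
# inverse proportion to the masses (the two products carry equal momenta).
Q = 764.0                                  # [keV]
e_p = Q * M_T / (M_P + M_T)                # proton energy, ~573 keV
e_t = Q * M_P / (M_P + M_T)                # triton energy, ~191 keV
print(f"E_p = {e_p:.0f} keV, E_t = {e_t:.0f} keV")

# (2) Elastic n-Ar scattering: maximum fractional energy transfer.
f_max = 4 * M_N * M_AR / (M_N + M_AR)**2   # ~0.096
# With a ~50% ionization yield, a 1 MeV *detected* Ar recoil requires
e_n_min = 1.0 / (0.5 * f_max)              # neutron energy [MeV], ~21 MeV
print(f"f_max = {f_max:.3f}, E_n_min ~ {e_n_min:.0f} MeV")
```

The capture kinematics reproduce the shoulder energies, and the elastic kinematics confirm that Ar recoils need neutrons above roughly 20~MeV to contribute beyond 1~MeV in detected energy.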
Figure~\ref{fig:ExampleSpectra} (right panel) shows two energy spectra obtained with this counter: one recorded while it was irradiated by an AmBe fast neutron source (red) and one while it was operated in the low-background Modane underground laboratory (LSM)~\cite{Lemrani:2006dq} (blue). Focusing on the energy range above 1~MeV, where we expect to detect fast neutrons, we see a clear excess of events during the AmBe calibration with respect to the low-background measurement performed at LSM. Based on previous neutron measurements done at LSM~\cite{Rozov:2010bk}, the events observed beyond 1~MeV are attributed to residual radon contamination, resulting in a flat background of 2~events/day/MeV that will ultimately limit our neutron detection sensitivity, see Sec.~\ref{sec:ValidationIP2I}.\\ In order to validate our approach of using the observed energy spectrum above 1~MeV to estimate the fast neutron background, we performed two additional cross-checks of the detector response, dedicated to the linearity of the energy scale and to the sensitivity to the incoming neutron direction. As the deposited energy increases, one can expect to observe so-called space charge effects, corresponding to a degradation of the amplification gain due to charge screening~\cite{Leder:2017lva}. This effect is directly related to the amplification gain, such that a larger gain leads to stronger charge screening due to the larger number of electrons produced in the avalanche process. Figure~\ref{fig:DubnaTests} (left panel) shows two measurements in which the voltage was varied from 1200~V to 1800~V, corresponding to a variation in amplification gain of about a factor of 6. Because we observe no statistically significant change in the spectrum under this widely varied gain, the 1650~V operating voltage is taken to be in the linear regime, at least for our region of interest up to 10~MeV. 
Note that variations of the ionization yield as a function of the recoil energy of $^3$He, proton, and triton could also lead to non-linearity in the energy scale. However, SRIM simulations of all three nuclei from 500~keV up to 10~MeV of recoil energy, in 400~kPa of $^3$He and 500~kPa of $^{40}$Ar gas, predict an ionization yield between 98.3\% and 100\% with negligible energy dependence (see Sec.~\ref{sec:PCsimulation}). Additionally, these simulated ionization yields are supported by the experimental observations of \cite{PhysRev.122.1853}, where a similar $^3$He-based proportional counter and mono-energetic neutrons with energies up to 17.5~MeV were used, and no significant variations in the ionization yields of p, t, and $^3$He were found. The fast neutron flux is expected to be anisotropic at the ILL-H7 reactor site, and several localised sources have been identified in previous measurements done by the STEREO collaboration~\cite{Allemandou:2018vwb}. Therefore, we investigated the response of our detector to a neutron calibration source in two extreme orientations: centered along its z axis in a radial orientation, and positioned at the bottom endcap of the detector in an axial orientation. The resulting energy spectra are shown in Fig.~\ref{fig:DubnaTests} (right panel) for the radial (blue) and axial (red) irradiation orientations. From the comparison of these two extreme cases, we only observe a marginal difference at the highest energies, {\it i.e.} above 6~MeV. This is explained by the improved full-collection efficiency of the recoiling nuclei when their tracks are aligned with the detector axis. Based on these results, we conclude that our $^3$He proportional counter is well-suited to measure and characterize the fast neutron component at the ILL-H7 reactor site where the future \Ricochet{} experiment will be deployed.
\section{Geant4 simulations} The goal of this work is to compare our observed energy spectra to simulated ones, in various conditions and at different sites, both in terms of shape and rate. Therefore, in the following section we discuss the details of our simulations. These include both the simulation of the $^3$He counter response and that of the different cosmic and reactor neutron sources. All of the following simulations have been done within the Geant4 10.06.p02 software, considering the ``Shielding'' physics list~\cite{Allison:2016lfl}. \subsection{\texorpdfstring{$^3$He}{3He} proportional counter simulation} \label{sec:PCsimulation} The $^3$He proportional counter is simulated according to its geometry and gas composition as described previously. Based on our observed $\sim$30~keV energy resolution (RMS), far smaller than the 250~keV bin width used when comparing to our measured spectra, and on the negligible space-charge effect, we did not include these finite detector-response effects in our simulations. Note, however, that the physics list incorporates the ``G4ScreenedNuclearRecoil'' module, which models screened electromagnetic nuclear elastic scattering, as required for an accurate simulation of the propagation of the proton and triton after a neutron capture on $^3$He or following any elastic scattering happening in the proportional counter~\cite{Mendenhall_2005}. Lastly, using SRIM-based recoil simulations of the proton, triton, $^3$He, and Ar from 500~keV up to 10~MeV, we found the ionization yield of the three lighter nuclides to be greater than 98.3\% (at 500~keV), rising to almost 100\% at 10~MeV. For the Ar recoils, however, we found the ionization yield to be 48.5\% at 500~keV, rising steadily to 93.5\% at 10~MeV~\cite{SRIM}.
Taking into account recoil kinematics and a 1~MeV threshold in detected energy, we expect to be sensitive only to p, t, and $^3$He recoils, for which we can assume that the ionization yield is constant and that the detected energy is equivalent to the kinetic recoil energy. Geant4 simulations were performed in which monoenergetic neutron fluxes were isotropically incident on the $^3$He proportional counter. Figure~\ref{fig:SimulatedHe3} presents the resulting energy spectra for incident neutron energies of 1~MeV (left panel) and 3~MeV (right panel). Both panels present the results for two gas compositions: pure $^3$He gas at 400~kPa (blue) and the actual gas mixture of our detector, made of 400~kPa of $^3$He and 500~kPa of $^{40}$Ar (red). In both panels we see four characteristic features: 1) a line at $E_n + 764$~keV corresponding to on-flight neutron captures fully collected in the detector volume, 2) on-flight neutron captures happening near the wall of the detector, with lowered energies deposited inside the gas, 3) a rather flat $^3$He recoil energy spectrum with its corresponding endpoint at $\frac{3}{4}E_n$, and 4) a low-energy $^{40}$Ar recoil energy spectrum with its expected endpoint at $0.1\times E_n$ (when $^{40}$Ar gas is added to the mixture). Interestingly, we see that the addition of the 500~kPa of $^{40}$Ar gas has very little effect on the observed energy spectrum of the $^3$He recoils, but has the benefit of increasing the peak-to-continuum ratio of on-flight neutron captures, hence improving the spectroscopic ability of the detector. This is explained by the fact that this additional gas component increases the fraction of fully collected proton + triton tracks: the higher pressure reduces the recoiling nuclei's track lengths.
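The $\frac{3}{4}E_n$ and $0.1\times E_n$ endpoints quoted above are the maximum elastic energy transfers from a neutron to $^3$He and $^{40}$Ar, respectively; a quick Python check (our own illustration, treating the neutron as mass number 1):

```python
# Maximum fraction of the neutron energy transferred to a nucleus of
# mass number A in a single elastic collision: f_max = 4A / (1 + A)^2.
def f_max(A):
    return 4.0 * A / (1.0 + A) ** 2

print(f_max(3))             # 0.75  -> 3He recoil endpoint at (3/4) E_n
print(round(f_max(40), 3))  # 0.095 -> 40Ar recoil endpoint at ~0.1 E_n
```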
From these simulations, we conclude that our proportional counter should exhibit some neutron spectroscopic capability ({\it i.e.} direct neutron energy measurement), even though it is diluted by the $^3$He recoil contributions from neutron elastic scatterings and by incomplete track collections. In spite of these limitations, Fig.~\ref{fig:SimulatedHe3} illustrates the capability of our detector to assess the fast neutron flux at the \Ricochet{} experiment, both in energy dependence and in magnitude (see Sec.~\ref{sec:NeutronFluxMeasure}). \subsection{Cosmogenic and reactogenic neutrons} \label{sec:simuneutronflux} The \Ricochet{} experiment will use low-radioactivity materials, such that the internal radioactivity is expected to be sub-dominant with respect to the external cosmogenic and reactogenic neutrons. To simulate the cosmogenic neutrons at the various sites of interest, we used the Cosmic-ray shower library (CRY), which generates correlated cosmic-ray particle shower distributions for use as input to our Geant4 transport and detector simulation codes~\cite{CRY}. We considered the latitudes of Grenoble for the ILL-based simulations and of Lyon for the IP2I-based simulations, as relevant for the geomagnetic cut-off. Additionally, the live-time simulated by CRY with its otherwise default settings has been divided by 1.28 for the ILL site, as suggested by past muon flux measurements at the ILL-H7 site~\cite{Allemandou:2018vwb}. Concerning the reactogenic neutrons at ILL, we used simulations performed by the STEREO collaboration. From the background measurements done in preparation for the STEREO experiment~\cite{Allemandou:2018vwb}, the main identified source of reactogenic background was the IN20 experiment and, more specifically, the corresponding H13 neutron beam and its shutters.
Using an MCNP code, the reactor neutron energy spectrum was propagated through the H13 tube and the IN20 experimental site to estimate the energy spectrum and rates at the STEREO location~\cite{pequignot:tel-01217946}. However, this geometry did not include some shielding walls that have been added since; we therefore expect the energy spectrum to be overestimated and consider it a conservative upper limit. The overall normalization of the flux is 790 neutrons/m$^2$/s at nominal reactor power. Figure~\ref{fig:He3Fluxes} shows the simulated reactogenic and cosmogenic neutron spectra entering the $^3$He proportional counter. The reactogenic spectrum was obtained at 58~MW nominal thermal power for a box-like generation surface of 56~m$^2$ (IN20, red). The cosmogenic spectra are from two different locations: the IP2I surface lab with its averaged overburden of $\sim$3~m.w.e. (see Sec.~\ref{sec:ValidationIP2I}, blue), and the ILL-H7 reactor site with its mean 15~m.w.e. of overburden (green)~\cite{Allemandou:2018vwb}. In the IP2I surface lab case, we can clearly identify the four usual cosmic neutron populations: thermal ($E_n < 0.5$~eV), epithermal ($0.5~\text{eV} < E_n < 0.1~\text{MeV}$), evaporation ($0.1~\text{MeV} < E_n < 20~\text{MeV}$), and cascade ($E_n > 20~\text{MeV}$). However, when considering the ILL-H7 site and its averaged artificial overburden of 15~m.w.e. (see Sec.~\ref{sec:ILLneutronMeasure}), we see that most thermal and cascade neutrons are cut out and that the evaporation neutron population has shifted to lower energies, with its peak at around 1~MeV. Though significantly reduced with respect to an unshielded surface lab, we still observe some high-energy neutrons up to 200~MeV that can affect the future \Ricochet{} experiment sensitivity.
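The population boundaries used above can be encoded as a simple classifier; a sketch (our own helper, with the energy boundaries exactly as quoted in the text):

```python
# Classify a neutron by kinetic energy (in eV) into the four usual
# cosmic-ray populations, using the boundaries quoted in the text.
def neutron_population(E_eV):
    if E_eV < 0.5:
        return "thermal"
    if E_eV < 0.1e6:
        return "epithermal"
    if E_eV < 20e6:
        return "evaporation"
    return "cascade"

print(neutron_population(0.025))  # thermal (room-temperature neutron)
print(neutron_population(1e6))    # evaporation (peak near 1 MeV at ILL-H7)
print(neutron_population(2e8))    # cascade (up to ~200 MeV under 15 m.w.e.)
```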
Regarding the reactogenic IN20 model (red histogram), we see that its MeV-scale neutron flux is more than one order of magnitude larger than its cosmogenic counterpart (green histogram), but it also exhibits a much lower energy endpoint of 6~MeV, suggesting that it should be better attenuated by the \Ricochet{} shielding. \section{Fast neutron flux characterizations} \label{sec:NeutronFluxMeasure} This section is the core of our work, as it discusses how our simulated neutron backgrounds compare with our experimental observations using the $^3$He proportional counter presented in Sec.~\ref{sec:He3counter}. It is worth emphasizing that no parameter of the reactogenic and cosmogenic neutron flux models was tuned to better reproduce the observed spectra. Therefore, both neutron flux models have been used as-is to compute our expected \Ricochet{} neutron background presented in Sec.~\ref{sec:RicochetBackground}. \subsection{Validation of the method: the IP2I fast neutron background} \label{sec:ValidationIP2I} As a proof of concept of our proposed neutron background assessment methodology, we first studied the case of the IP2I surface lab. The latter is located in Lyon, at an altitude of 181~meters above sea level and a latitude of 45$^\circ$45'32.616'' North. Our CRY simulations model the cryogenic lab as lying in the basement of a two-story building made of thick concrete walls and floors. We found that the main overburden comes from the floor and ceiling above our experimental area, which amount to 1.2~m of concrete and consequently provide about 2.76~m.w.e. of direct vertical overburden. Additionally, the near proximity of our detectors to a 1.45~m-thick concrete wall provides an additional position-dependent, solid angle-integrated overburden.
In order to properly compare our cosmogenic simulations to our observations with both the $^3$He proportional counter and the Ge bolometers operated in the same lab, about 3~meters away from each other, we first estimated the common overburden using muon flux attenuation measurements. To do so, we used 1~cm-thick, 20~cm-long, and 5~cm-wide plastic scintillator panels arranged in a 4$\times$4 array from the DIAPHANE experiment~\cite{Marteau:2016jcn}. The energy loss from muons going through the panels is converted into scintillation photons, which are guided towards a multi-anode photomultiplier by wavelength-shifting optical fibres. Muons were identified by requiring coincident triggers on all four plastic scintillator planes. In order to confirm the IP2I building geometry used in the simulations for the neutron background assessment, we measured the muon rates at three different locations and derived an averaged overburden. The first position was next to the $^3$He counter but closer to the thick wall (maximizing the effective overburden). The second position was three meters away, against the opposing thin wall next to the windows (minimizing the effective overburden). Lastly, the third position was above the cryostat where the Ge detectors were operated. The latter position is therefore the most relevant, while the first two can be considered upper and lower bounds on the surface lab overburden. Figure~\ref{fig:muon} shows the time evolution of the observed muon trigger rates at these three locations within our cryogenic lab (purple, orange, and brown dots) and from the roof of the building (green dots), the latter providing a zero-overburden reference measurement. Also shown is the time-dependent atmospheric pressure (blue line), which was used in our fitting model (red line) to derive a mean muon trigger rate at each location.
Thanks to the muon trigger rate from the roof, we can derive the muon flux attenuations $a_\mu$ at the three cryogenic lab locations, which were found to be $0.63\pm 0.01$ (position 1), $0.78\pm 0.01$ (position 2), and $0.72\pm 0.01$ (position 3). Following the procedure described in~\cite{Angloher:2019flc}, the corresponding overburdens $m_0$ can be estimated from the observed muon flux attenuation factors $a_\mu$ using the approximation below from~\cite{Theodorsson}: \begin{equation} a_\mu = 10^{-1.32\log d - 0.26 (\log d)^2} \end{equation} where $d = 1 + m_0/10$, and $m_0$ is given in meters water equivalent (m.w.e.). The derived overburden values at these locations were thus found to be $4.05 \pm 0.16$~m.w.e. (position 1), $2.04 \pm 0.11$~m.w.e. (position 2), and $2.76 \pm 0.13$~m.w.e. (position 3), leading to an averaged overburden in our lab, considered hereafter, of $2.95 \pm 0.65$~m.w.e. Interestingly, the attenuations obtained from our CRY simulations of the muon panel setups at positions 1 and 2 of the IP2I lab were found to be 0.65 and 0.78, respectively, which supports the IP2I geometry used hereafter. Figure~\ref{fig:ResultsIP2I} (left panel) shows the comparison between the observed and simulated $^3$He spectra obtained at the IP2I surface lab. The measured energy spectrum (red histogram) has been obtained by subtracting the event rate observed at LSM, in order to remove the internal background of the detector (see Sec.~\ref{sec:He3counter}). The corresponding observed fast neutron rate, with detected energies greater than 1~MeV, is $25.6 \pm 1.5$ events per day. As one can conclude from Fig.~\ref{fig:ResultsIP2I} (left panel), the observed and simulated spectra match almost perfectly over the entire energy range relevant for fast neutron flux measurements ({\it i.e.} for detected energies above 1~MeV).
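For concreteness, the attenuation-to-overburden conversion above can be reproduced and inverted numerically; a Python sketch of the arithmetic (the bisection inversion is our own illustration, not part of the procedure of~\cite{Angloher:2019flc}):

```python
import math

def muon_attenuation(m0):
    """Muon flux attenuation for an overburden m0 (m.w.e.), using the
    parametrization a = 10**(-1.32*log10(d) - 0.26*log10(d)**2)
    with d = 1 + m0/10."""
    d = 1.0 + m0 / 10.0
    x = math.log10(d)
    return 10.0 ** (-1.32 * x - 0.26 * x * x)

def overburden(a, lo=0.0, hi=100.0):
    """Invert muon_attenuation by bisection (a decreases with m0)."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if muon_attenuation(mid) > a:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Cross-checks against the quoted values:
print(round(muon_attenuation(2.76), 2))  # 0.72 -> position 3
print(round(overburden(0.63), 2))        # 4.05 m.w.e. -> position 1
```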
This agreement suggests that both the magnitude and the energy dependence of the fast neutron flux entering the $^3$He proportional counter are well estimated by our simulations up to about 4.5~MeV in detected energy -- a limit set by the $^3$He proportional counter's internal background subtraction, shown as the gray contour. The latter represents the 95\%~C.L. limit on the significance of the neutron detection rate, calculated from the impact of Poisson fluctuations on the internal background subtraction described in Sec.~\ref{sec:He3counter}.\\ In order to further validate this cosmogenic neutron flux model, we propagated it to 38~g Ge cryogenic bolometers operated in a dry dilution cryostat surrounded by a 70\%-coverage, 10~cm-thick cylindrical lead shielding with a 7~cm-thick bottom end-cap. Figure~\ref{fig:ResultsIP2I} (right panel) shows the comparison between the observed recoil energy spectra from our prototype bolometers RED20~\cite{Armengaud:2019kfj} (black solid line) and RED80, which has the ability to discriminate electronic recoils (red) from nuclear ones (blue)~\cite{misiak:tel-03328713}, and the simulated cosmogenic background (filled histograms). Note that our simulations do not take into account internal and external radioactivity from the surrounding materials, which are likely to also contribute to the total background, especially with an incomplete lead shielding as considered here. The cryogenic Ge bolometers were calibrated using a $^{55}$Fe source emitting 5.89 and 6.49~keV x-rays for RED20, and internal $^{71}$Ge electron-capture decays emitting low-energy x-rays of 10.37 and 1.3~keV, following a thermal neutron activation, for RED80. Overall, from 1 to 15~keV, we see that the total observed and simulated recoil spectra agree within a factor of about three\footnote{The steep rise in the energy spectrum below 1~keV, the so-called low-energy excess, is the subject of ongoing intense worldwide investigations.
For more details, see~\cite{Proceedings:2022hmu}. Additionally, note that the sharp rise at 1.5~keV in RED80 is due to the 1.3~keV x-ray line from $^{71}$Ge electron-capture decays}. Thanks to RED80, which benefits from particle identification capability through its double heat-and-ionization readout, we see that this disagreement is about a factor of six for the gammas and a factor of three for the neutrons. It is worth noticing, however, that the simulation reproduces well the different slopes of the observed electronic and nuclear recoil spectra. The gamma discrepancy is most likely explained by an underestimation of the gamma background in our cosmogenic-only simulations, where radiogenic contributions are not taken into account although they are likely to be significant. Indeed, removing the lead shielding around the cryostat increases the electronic recoil rate in the bolometers by a factor of ten, while a more optimized shielding should provide an order of magnitude better protection from gamma rays~\cite{Heusser:2015ifa}. The observed excess of the electronic recoil rate compared to a simulation restricted to cosmogenic gammas is thus not surprising. The factor-of-three discrepancy between the simulations and the observations for the neutron component is, however, still under investigation. Plausible explanations include an oversimplified simulation of our IP2I experimental setup, or a larger-than-expected epithermal neutron population in our cryogenic lab escaping the sensitivity of our $^3$He proportional counter operated three meters away from the IP2I cryostat. Indeed, it is worth noting that in such a configuration the bare $^3$He counter probes almost exclusively the neutron evaporation peak (see Fig.~\ref{fig:He3Fluxes}), while the bolometers are also sensitive to the cascade peak, as these neutrons get down-converted to lower energies by the lead shielding surrounding the cryostat.
Such neutrons would then induce a larger-than-expected keV-scale nuclear recoil rate in our bolometers. We plan to test this hypothesis using lithiated bolometers~\cite{Coron:2012eq,Armengaud:2017hit}, operated in our cryostat at IP2I, and hydrogen recoil proportional counters, which should exhibit epithermal neutron sensitivity complementary to our $^3$He detector. Ultimately, the qualitative agreement between the simulated and observed nuclear recoil backgrounds in the Ge bolometers confirms the reliability of our proposed neutron background assessment approach using a $^3$He proportional counter combined with both muon flux attenuation measurements and CRY-based simulations. Quantitatively, it appears that in the case of our cryogenic lab at IP2I, with a measured $\sim$3~m.w.e. overburden and no polyethylene shielding around the cryostat, we underestimate the neutron background at the Ge bolometers by a factor of about three. For the sake of completeness, we will therefore consider this re-scaling factor as a worst-case scenario in our \Ricochet{} sensitivity study presented in Sec.~\ref{sec:RicochetBackground}. \subsection{\Ricochet{} fast neutron background characterization: cosmogenic and reactogenic neutrons at the ILL-H7 site} \label{sec:ILLneutronMeasure} At the end of 2020, the STEREO experiment was decommissioned. Since then, the ILL-H7 site has been empty and therefore perfectly well-suited for background and on-site characterizations prior to the \Ricochet{} integration. Starting in January 2021, we took almost a hundred days' worth of data, during reactor ON and OFF periods, with the $^3$He proportional counter located at the planned position of the \Ricochet{} cryostat. To properly simulate the ILL site, we used the altitude and latitude of Grenoble, which are 212~m above sea level and 45$^\circ$11'18.704'' North, respectively, and also applied the 1.28 cosmic flux normalization factor from STEREO (see Sec.~\ref{sec:simuneutronflux}).
Figure~\ref{fig:ResultsILL} (left panel) shows the resulting comparison between the cosmogenic simulations (blue) and the observed data (red) of the $^3$He detector at the ILL-H7 site when the reactor is OFF. Similarly to the IP2I case, the red histogram has been obtained after subtraction of the event rate observed in the LSM data, in order to remove the residual internal background. Again, an excellent agreement between the experimental data and the cosmogenic simulations is observed above 1~MeV in detected energy, hence validating the cosmogenic neutron flux model used to estimate the corresponding neutron background for the future \Ricochet{} experiment. Data with our proportional counter were also acquired during reactor ON periods, in order to estimate the reactogenic neutron flux. Figure~\ref{fig:ResultsILL} (right panel) presents the resulting reactogenic neutron data and simulations. The experimental data (red histogram) have been derived by subtracting the reactor-OFF spectrum, removing both the cosmogenic neutron and residual internal background contributions. The simulated spectrum (blue) has been obtained by scaling the IN20 spectrum of Fig.~\ref{fig:He3Fluxes} (red histogram) to the reduced 42~MW thermal power during our measurements. First, it is worth noticing that we observe a fast neutron detection rate about 10 times higher during reactor ON periods ($121.9 \pm 1.9$ per day) than during OFF periods ($11.5 \pm 0.9$ per day), for a reactor power of 42~MW and with IN20 in operation. Taken at face value, this suggests an overall reactogenic fast neutron flux about 15 times higher than the cosmogenic one when the reactor is operated at its full 58~MW nominal thermal power. Note that a higher reactogenic fast neutron flux is also expected from Fig.~\ref{fig:He3Fluxes}.
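The face-value scaling behind this estimate can be sketched from the quoted rates, assuming the reactogenic rate scales linearly with thermal power (our simplifying assumption; the exact scaling used in the text may differ, which is why the result comes out somewhat below the quoted factor of about 15):

```python
# Fast neutron detection rates (> 1 MeV detected energy), per day.
rate_on, rate_off = 121.9, 11.5   # reactor ON (42 MW) and OFF
p_meas, p_nom = 42.0, 58.0        # thermal power in MW

reacto_42 = rate_on - rate_off          # reactogenic component at 42 MW
reacto_58 = reacto_42 * p_nom / p_meas  # linear scaling to nominal power
ratio = reacto_58 / rate_off            # reactogenic / cosmogenic at 58 MW
print(round(ratio, 1))  # ~13, i.e. of the order of the quoted factor
```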
In this reactor-ON comparison, however, we observe a significant departure between the two histograms, suggesting that our simulated reactogenic neutron flux is both too high and harder in energy than what we observe. Similarly, our simulations predict a fast neutron detection rate of about 230 per day above 1~MeV, almost two times higher than the observed one. This difference can be explained by the fact that the IN20 neutron spectrum considered here does not take into account the lead and polyethylene walls surrounding the ILL-H7 site, nor the neutron moderator and shielding of the IN20 instrument. As suggested in Sec.~\ref{sec:simuneutronflux}, it was indeed expected that our neutron background model assumption, using the outgoing IN20 reactogenic neutron flux from the H13 beam, would overestimate the fast neutron flux at the \Ricochet{} location. However, in order to provide conservative estimates of the expected neutron background, we consider hereafter this un-moderated IN20 neutron flux as an input to our \Ricochet{} background simulations.
\section{\Ricochet{} expected neutron background} \label{sec:RicochetBackground} \newcommand{\TitleTabER}{\multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Electronic recoils \\ {[}50\,eV, 1\,keV{]} \\ (evts/day/kg)\end{tabular}}} \newcommand{\TitleTabNR}{\multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Nuclear recoils \\ {[}50\,eV, 1\,keV{]} \\ (evts/day/kg)\end{tabular}}} \newcommand{\CosmoNoShieldER}{$260 \pm 5$} \newcommand{\ReactoNoShieldER}{$4365 \pm 301$} \newcommand{\SumNoShieldER}{$4625 \pm 301$} \newcommand{\CosmoShieldER}{$183 \pm 6$} \newcommand{\ReactoShieldER}{$18\pm 2$} \newcommand{\SumShieldER}{$201 \pm 6$} \newcommand{\CosmoShieldVetoER}{$1.6\pm0.6$} \newcommand{\SumShieldVetoER}{$20\pm2$} \newcommand{\CosmoNoShieldNR}{$1554 \pm 12$} \newcommand{\ReactoNoShieldNR}{$53853 \pm 544$} \newcommand{\SumNoShieldNR}{$55407 \pm 545$} \newcommand{\CosmoShieldNR}{$42\pm 3$} \newcommand{\ReactoShieldNR}{$2.4\pm0.3$} \newcommand{\SumShieldNR}{$44\pm3$} \newcommand{\CosmoShieldVetoNR}{$7\pm2$} \newcommand{\SumShieldVetoNR}{$9\pm2$} \begin{table*}[t] \centering \smallskip \def\arraystretch{1.5} \begin{tabular}{@{}cccccc@{}} \toprule[1pt] \multicolumn{2}{c}{} & Cosmogenic & Reactogenic & Total (MC) & {\bf CENNS (Ge/Zn)} \\ \midrule[0.5pt] \TitleTabER & No Shielding (I) & \CosmoNoShieldER & \ReactoNoShieldER & \SumNoShieldER & -- \\ & Passive Shielding (II) & \CosmoShieldER & \multirow{2}{*}{\ReactoShieldER} & \SumShieldER & -- \\ & Passive + $\mu$-veto (III) & \CosmoShieldVetoER & & \SumShieldVetoER & -- \\ \midrule[0.5pt] \TitleTabNR & No Shielding (I) & \CosmoNoShieldNR & \ReactoNoShieldNR & \SumNoShieldNR & -- \\ & Passive Shielding (II) & \CosmoShieldNR & \multirow{2}{*}{\ReactoShieldNR} & \SumShieldNR & -- \\ & Passive + $\mu$-veto (III) & \CosmoShieldVetoNR & & \SumShieldVetoNR & {\bf 12.8 / 11.2} \\ \bottomrule[1pt] \end{tabular} \caption{\label{tab:SimuRate}Simulated background rates inside the cryogenic detector array installed at the ILL, with the shielding design illustrated in Fig.~\ref{fig:RicochetSetup}, when only one bolometer has triggered.
As the muon veto is still being characterized and optimized, in the case of scenario (III) we assume perfect geometrical and detection efficiencies.} \end{table*} From the cosmogenic and reactogenic neutron components of the expected \Ricochet{} background -- compared against the $^3$He counter data in the previous section (see Sec.~\ref{sec:NeutronFluxMeasure}) -- we can estimate the expected \Ricochet{} neutron background using a Geant4 simulation taking into account the entire shielding and detector geometry introduced in Sec.~\ref{sec:Ricochet}. Table~\ref{tab:SimuRate} presents the resulting expected reactogenic and cosmogenic neutron background rates, integrated over our CENNS region of interest between 50~eV and 1~keV, for various shielding configurations: (I) no shielding, (II) the passive shielding presented in Fig.~\ref{fig:RicochetSetup}, and (III) the addition of an idealized muon veto, assumed to have 100\% geometrical and detection efficiencies, surrounding the \Ricochet{} experimental setup. From the comparison of the first two shielding configurations, I and II, one can derive that the neutron background attenuation factors provided by the passive \Ricochet{} shielding are about 37 for the cosmogenic and of the order of $10^4$ for the reactogenic neutron backgrounds, respectively. The much greater attenuation factor for reactogenic neutrons is explained by both 1) the absence of muon-induced spallation in the shielding, which produces fast neutrons in close proximity to the detectors, and 2) their comparatively low energies, as they enter the \Ricochet{} shielding, with respect to those of primary and spallation neutrons from the cosmogenic contribution. Indeed, most of these reactogenic neutrons have kinetic energies below 6~MeV (see Fig.~\ref{fig:He3Fluxes}), corresponding to a mean free path in polyethylene of about 6~cm, so that they are efficiently moderated by the 35~cm of polyethylene.
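The attenuation factors quoted above follow directly from the nuclear recoil rates of Table~\ref{tab:SimuRate}; a trivial arithmetic check (our own sketch):

```python
# Nuclear recoil rates in [50 eV, 1 keV] (events/day/kg), from the
# simulated configurations without shielding and with passive shielding.
cosmo_no_shield, cosmo_shield = 1554.0, 42.0
reacto_no_shield, reacto_shield = 53853.0, 2.4

print(round(cosmo_no_shield / cosmo_shield))      # 37      (cosmogenic)
print(f"{reacto_no_shield / reacto_shield:.1e}")  # 2.2e+04 (reactogenic)
```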
On the other hand, with energies up to $\sim$200~MeV, cosmogenic neutrons can still reach the \Ricochet{} cryogenic detectors. Therefore, despite their higher expected (and measured) overall fast neutron flux, reactogenic neutrons are not expected to be a dominant background for the future \Ricochet{} experiment, even when considering the extreme case of the un-moderated IN20 simulated neutron flux (see Sec.~\ref{sec:ILLneutronMeasure}). However, note that reaching such high attenuation factors puts strong constraints on the tightness of the passive shielding, hence the additional internal layers between the thermal screens to limit possible neutron leakage to the bolometers from the top (see Sec.~\ref{sec:Ricochet}). The comparison of shielding configurations II and III in Table~\ref{tab:SimuRate} suggests that an idealized muon veto could help reduce the cosmogenic neutron background by an additional factor of 6. As the \Ricochet{} muon veto will not be as efficient as an ideal one (we expect an overall muon veto tagging efficiency of about 90\%), we consider hereafter that our cosmogenic neutron background will lie between $42\pm3$ and $7\pm2$ events per day. Considering solely the expected neutron backgrounds, these two cases respectively lead to signal-to-background ratios of about 0.3 and 1.4. Assuming a 70\% CENNS detection efficiency, arising from estimated livetime losses and the finite efficiencies of the various analysis cuts, these values suggest that the \Ricochet{} experiment could reach a statistical CENNS detection significance\footnote{The significance is defined as $Z = S/\sqrt{S+2B}$, with $S$ and $B$ the numbers of CENNS and background events respectively, assuming equal reactor ON and OFF exposure times.} after only one reactor cycle of between 7.5~$\sigma$ and 13.6~$\sigma$.
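The significance definition from the footnote can be encoded as a small helper; the event counts below are hypothetical (the absolute counts depend on exposure and detector mass, which we do not reproduce here):

```python
import math

def significance(S, B):
    """Z = S / sqrt(S + 2B): statistical CENNS detection significance
    for S signal and B background events, assuming equal reactor ON and
    OFF exposure times (the factor 2 reflects the OFF-period subtraction)."""
    return S / math.sqrt(S + 2.0 * B)

# Hypothetical illustration: at a fixed signal-to-background ratio, the
# significance grows as the square root of the accumulated exposure.
assert abs(significance(400.0, 400.0) - 2.0 * significance(100.0, 100.0)) < 1e-9
print(round(significance(100.0, 100.0), 2))  # 5.77
```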
If we apply a conservative factor of 3 to the neutron background rates, based on the Ge bolometer comparison between our cosmogenic simulations and the observations made at IP2I (see Sec.~\ref{sec:ValidationIP2I}), these significances drop to 4.6~$\sigma$ and 9.2~$\sigma$, respectively, depending on the muon veto efficiency. Lastly, it is worth highlighting that these neutron background based sensitivity estimates assume that there are no additional unexpected backgrounds, and that the gamma background will be both low enough and efficiently rejected thanks to the particle identification capabilities of the \Ricochet{} bolometers. \section{Conclusion} In this paper, we have presented our fast neutron flux characterization with a dedicated low-background $^3$He proportional counter. We first tested our method by comparing simulated and observed energy spectra from the IP2I surface lab, where cryogenic detectors with particle identification capabilities were also operated. This allowed us to cross-check that our cosmogenic simulations properly reproduce both the $^3$He spectra above 1~MeV in detected energy and, to within a factor of about three, the low-energy nuclear recoil spectrum from our Ge bolometers, assuming a sole cosmogenic neutron component. Following this cross-validation, we measured and simulated the neutron fluxes at the ILL-H7 site, where the future CENNS \Ricochet{} experiment will be deployed. First, we found an excellent agreement between our cosmogenic neutron simulations and measurements. Based on these observations, one can conclude that CRY provides reliable estimates of cosmogenic backgrounds for experiments located at shallow sites with depths from 3 to 15~m.w.e. Second, a significant disagreement was found for reactogenic neutrons between our $^3$He simulations and experimental data, suggesting that the IN20 neutron model considered here overestimates the reactor-induced neutron energies and flux at the ILL-H7 site.
Therefore, the IN20 neutron flux model is considered a conservative model to estimate the anticipated reactogenic neutron background for \Ricochet{}. Following these on-site neutron background characterizations, we propagated both our reactogenic and cosmogenic neutron fluxes through our \Ricochet{} shielding simulation to estimate its expected nuclear recoil background level. Interestingly, despite its higher fast neutron flux, we found that the reactogenic neutron background will only contribute about one fourth of the overall \Ricochet{} neutron background, suggesting that the reactor ON/OFF modulation should lead to an increased CENNS sensitivity. Assuming our neutron background model, we found that, depending on the effectiveness of the muon veto, the statistical significance of a CENNS detection with \Ricochet{} after only one reactor cycle should be between 7.5 and 13.6~$\sigma$ when only the expected neutron background is considered. A similar study dedicated to the gamma-induced background, also addressing the particle identification capabilities of our detectors, is ongoing and will be presented in a forthcoming paper. \section*{Acknowledgments} We are grateful to the EDELWEISS collaboration for the use of its electronics and DAQ system in the operation of the RED20 and RED80 cryogenic detectors discussed in Sec.~\ref{sec:NeutronFluxMeasure}. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program under Grant Agreement ERC-StG-CENNS 803079, the French National Research Agency (ANR) within the project ANR-20-CE31-0006, the LabEx Lyon Institute of Origins (ANR-10-LABX-0066) of the Université de Lyon, and the NSF under Grant PHY-2013203. A portion of the work carried out at MIT was supported by DOE QuantISED award DE-SC0020181 and the Heising-Simons Foundation.
This work is also partly supported by the Ministry of science and higher education of the Russian Federation (the contract No. 075-15-2020-778). \bibliographystyle{spphys} \bibliography{Refs}
Title: Increasing the raw contrast of VLT/SPHERE with the dark-hole technique. II. On-sky wavefront correction and coherent differential imaging
Abstract: Context. Direct imaging of exoplanets takes advantage of state-of-the-art adaptive optics (AO) systems, coronagraphy, and post-processing techniques. Coronagraphs attenuate starlight to mitigate the unfavorable flux ratio between an exoplanet and its host star. AO systems provide diffraction-limited images of point sources and minimize optical aberrations that would cause starlight to leak through coronagraphs. Post-processing techniques then estimate and remove residual stellar speckles such as noncommon path aberrations (NCPAs) and diffraction from telescope obscurations. Aims. We aim to demonstrate an efficient method to minimize the speckle intensity due to NCPAs during an observing night on VLT/SPHERE. Methods. We implement an iterative dark-hole (DH) algorithm to remove stellar speckles on-sky before a science observation. It uses a pair-wise probing estimator and a controller based on electric field conjugation. This work presents the first such on-sky minimization of speckles with a DH technique on SPHERE. Results. We show the standard deviation of the normalized intensity in the raw images is reduced by a factor of up to 5 in the corrected region with respect to the current calibration strategy under median conditions for VLT. This level of contrast performance obtained with only 1 min of exposure time reaches median performances on SPHERE that use post-processing methods requiring 1h-long sequences of observations. We also present an alternative calibration method that takes advantage of the starlight coherence and improves the rms post-processed contrast levels by a factor of about 3. Conclusions. This on-sky demonstration represents a decisive milestone for the future design, development, and observing strategy of the next generation of ground-based exoplanet imagers for 10-m to 40-m telescopes.
https://export.arxiv.org/pdf/2208.11244
\title{Increasing the raw contrast of VLT/SPHERE with the dark-hole technique} \subtitle{II. On-sky wavefront correction and coherent differential imaging} \author{A. Potier\inst{1}\thanks{Based on observations collected at the European Southern Observatory, Chile, 108.22NJ} \and J. Mazoyer\inst{2} \and Z. Wahhaj\inst{3} \and P. Baudoz\inst{2} \and G. Chauvin\inst{4} \and R. Galicher\inst{2} \and G. Ruane\inst{1} } \institute{Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA 91109\\ \email{axel.q.potier@jpl.nasa.gov} \and LESIA, Observatoire de Paris, Université PSL, CNRS, Université Paris Cité, Sorbonne Université, 5 place Jules Janssen, 92195 Meudon, France \and European Southern Observatory, Alonso de Cordova 3107, Vitacura, Santiago, Chile \and Laboratoire Lagrange, Université Cote d’Azur, CNRS, Observatoire de la Cote d’Azur, 06304 Nice, France} \date{Received XXX; accepted XXX} \abstract {Direct imaging of exoplanets takes advantage of state-of-the-art adaptive optics (AO) systems, coronagraphy, and post-processing techniques. Coronagraphs attenuate starlight to mitigate the unfavorable flux ratio between an exoplanet and its host star. AO systems provide diffraction-limited images of point sources and minimize optical aberrations that would cause starlight to leak through coronagraphs. 
Post-processing techniques then estimate and remove residual stellar speckles due to hardware limitations, such as noncommon path aberrations (NCPAs) and diffraction from telescope obscurations, and identify potential companions.} {We aim to demonstrate an efficient method to minimize the speckle intensity due to NCPAs and the underlying stellar diffraction pattern during an observing night on the Spectro-Polarimetric High-contrast Exoplanet REsearch (SPHERE) instrument at the Very Large Telescope (VLT) without any hardware modifications.} {We implement an iterative dark-hole (DH) algorithm to remove stellar speckles on-sky before a science observation. It uses a pair-wise probing estimator and a controller based on electric field conjugation, originally developed for space-based applications. This work presents the first such on-sky minimization of speckles with a DH technique on SPHERE.} {We show the standard deviation of the normalized intensity in the raw images is reduced by a factor of up to five in the corrected region with respect to the current calibration strategy under median conditions for VLT. This level of contrast performance obtained with only 1 min of exposure time reaches median performances on SPHERE that use post-processing methods requiring $\sim$1h-long sequences of observations. The resulting raw contrast improvement provides access to potentially fainter and lower-mass exoplanets closer to their host stars. 
We also present an alternative a posteriori calibration method that takes advantage of the starlight coherence and improves the rms post-processed contrast levels by a factor of about three with respect to the raw images.} {This on-sky demonstration represents a decisive milestone for the future design, development, and observing strategy of the next generation of ground-based exoplanet imagers for 10-m to 40-m telescopes.} \keywords{instrumentation: adaptive optics -- instrumentation: high angular resolution -- techniques: high angular resolution} \section{Introduction} The imaging of faint stellar companions, exoplanets, and circumstellar disks with ground-based high-contrast imaging instruments, such as the Spectro-Polarimetric High-contrast Exoplanet REsearch (SPHERE) instrument at the Very Large Telescope (VLT) \citep{Beuzit2019} and the Gemini Planet Imager (GPI) \citep{Macintosh2015a}, is currently limited by wavefront aberrations and diffraction effects. Extreme adaptive optics (XAO) systems control dynamic optical perturbations caused by atmospheric turbulence and compensate for static phase aberrations introduced by manufacturing defects and misalignments of optical components from the telescope primary mirror to the XAO wavefront sensor (WFS). Classical AO systems split the beam and analyze the wavefront aberrations in a dedicated optical channel. This two-channel configuration (one for the measurement of the aberrations and one for the science detector) cannot constrain the aberrations introduced by the optical components in the science channel. These noncommon path aberrations \citep[NCPAs,][]{Fusco2006} induce additional starlight leakage that interferes with the underlying stellar diffraction pattern and takes the form of static and quasi-static speckles on the science detector, currently preventing the detection of sources fainter than one millionth of their host star's flux. 
The mitigation of static speckles caused by NCPAs has been extensively studied by the community. First, post-processing techniques such as angular \citep[ADI,][]{Marois2006}, spectral \citep[SDI,][]{Racine1999}, polarimetric \citep[PDI,][]{Kuhn2001}, and reference-star \citep[RDI,][]{Lafreniere2009} differential imaging are broadly employed to enhance detection levels in the images. Both ADI and SDI, routinely used for the detection of exoplanets, suffer from self-subtraction of the off-axis sources at small angular separations \citep{Esposito2014}. ADI requires additional observing time to increase the total parallactic angle rotation, limiting the total observing efficiency. To enhance the contrast in the images before post-processing, the NCPAs are also currently compensated, prior to the observing night, with phase retrieval techniques \citep{Gonsalves1982, Paxman1992, Sauvage2007}. Methods to compensate for the NCPAs with the XAO high-order deformable mirror (HODM) in parallel with the science observations are an active field of research. Science cameras running at high rates allow for techniques that compare WFS telemetry with focal plane images to characterize the optical system and measure the NCPAs \citep{Rodack2021, Skaf2022}. However, the long read-out times of science detectors prevent the use of such methods with most of the high-contrast imagers. Other techniques based on focal plane WFSs, initially developed for space-based applications, have also been tested in dynamic conditions and implemented on different ground-based instruments. For instance, the self-coherent camera \citep[SCC,][]{Baudoz2006} has been validated on-sky to correct for low-order aberrations \citep{Galicher2019}. Higher-order corrections have been demonstrated on-sky using speckle nulling techniques \citep{Borde2006, Martinache2014, Bottom2016}, as well as the Zernike sensor for Extremely Low-level Differential Aberration \citep[ZELDA, ][]{NDiaye2016, Vigan2019}. 
The coronagraphic phase diversity (COFFEE), as well as pair-wise probing (PWP), associated with the electric field conjugation (EFC) algorithm \citep{GiveOn2007SPIE} were previously only tested in the laboratory with a rotating phase plate simulating turbulence residuals \citep{Potier2019, Herscovici2019} and to calibrate state-of-the-art instruments using their internal sources \citep{Paul2014, Matthews2017, Potier2020b}. The combination of PWP with EFC aims to minimize the speckle intensity in parts of the science detector called dark holes (DHs). PWP estimates the complex focal-plane electric field (E-field) through a temporal modulation of the intensity by introducing phase-shifted surface shapes to the HODM, resulting in an interferometric measurement in the image plane. Once estimated, the static E-field is minimized by the EFC algorithm, which determines the HODM settings that destructively interfere with the measured E-field in the DH region. As shown in \cite{Potier2020b}, the technique applied to half of the field of view (commonly called a half DH) enables the compensation of both the static phase and amplitude aberrations, as well as the underlying stellar diffraction pattern, thus improving the contrast in the raw images (i.e., the intensity of the stellar residuals normalized by the off-axis point spread function of the same star). While this method has been shown to be suitable for correcting aberrations with a stable wavefront at extremely high-contrast ratios \citep{Trauger2007, Seo2019, Potier2020a}, its application is more challenging during on-sky observations of stars with ground-based telescopes due to the varying conditions between the different images required by PWP. This is likely why the loop of the iterative PWP+EFC technique has never been successfully closed on-sky previously. 
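To make the estimator concrete, the pair-wise probing measurement described above reduces to a per-pixel linear inversion: in the linear regime, each probe pair yields an intensity difference $\Delta I_k = 4\,\mathrm{Re}(E\,\psi_k^*)$, where $\psi_k$ is the modeled focal-plane field of probe $k$, so two probes suffice to solve for the real and imaginary parts of $E$. The following is a minimal illustrative sketch (array shapes and the function name are our own, not SPHERE pipeline code):

```python
import numpy as np

def pairwise_probe_estimate(delta_I, probe_fields):
    """
    Estimate the focal-plane E-field per pixel from pair-wise probe images.

    delta_I      : (K, Npix) array of probe-pair image differences I_plus - I_minus
    probe_fields : (K, Npix) complex array, model of each probe's focal-plane field

    Linear model: delta_I_k = 4 * Re(E * conj(psi_k))
                            = 4 * (Re(psi_k) * Re(E) + Im(psi_k) * Im(E))
    """
    K, npix = delta_I.shape
    E = np.zeros(npix, dtype=complex)
    for j in range(npix):
        # Build the K x 2 linear system for (Re(E), Im(E)) at this pixel
        H = 4.0 * np.stack([probe_fields[:, j].real,
                            probe_fields[:, j].imag], axis=1)
        sol, *_ = np.linalg.lstsq(H, delta_I[:, j], rcond=None)
        E[j] = sol[0] + 1j * sol[1]
    return E
```

With more than two probes, the same least-squares inversion averages down measurement noise.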
However, we recently demonstrated in the lab that allowing the dynamic speckles to average into a halo through long-exposure imaging should overcome previous limitations and enable the correction of speckles that evolve more slowly than the servo-loop speed \citep{Singh2019}. In this paper, we describe the on-sky results obtained with SPHERE where we successfully minimized the speckle intensity with PWP+EFC. In Sec.~\ref{sec:onsky_correction}, we describe the direct on-sky compensation of the NCPAs in closed loop. In Sec.~\ref{sec:CDI}, we demonstrate that the PWP estimations can be used to implement an alternative coherent differential imaging (CDI) post-processing method that improves the exoplanet detection limit in the images, even on the uncorrected parts of the science detector. \section{On-sky closed loop of the DH algorithm} \label{sec:onsky_correction} \subsection{Method} \label{subsec:Method_onsky} The observations were performed during the second half of the night of February 15, 2022 starting at 5~a.m. UTC. The seeing ranged from 0.56" to 1.2" at 500~nm according to the Paranal DIMM-Seeing monitor, slowly degrading over time with an average of $\sim$0.8". This value is typical of median conditions at Paranal. The coherence time remained stable, varying between 8 and 10\,ms during the full observing sequence. Wind originated from between the north-east and the north-west, with a speed below 5~m.s$^{-1}$ throughout the observations. These good conditions allowed us to use the small pinhole for spatial filtering in front of the Shack-Hartmann (SH) WFS \citep{Poyneer2004}. We observed HIP~57013 (\mbox{RA=11 41 19.79}, \mbox{$\delta$= -43 05 44.40}, R=5.5, H=5.5), a bright early-type A0V member of the Sco-Cen association and part of the BEAST survey \citep{Janson2021}, using the H3 filter ($\lambda_0 = 1667$~nm, $\Delta\lambda = 54$~nm). 
We attenuated the starlight with an Apodized Pupil Lyot Coronagraph \citep[APLC,][]{Soummer2005}, whose focal plane occultor is 185~mas in diameter. Starting from the original SPHERE wavefront calibration based on phase retrieval, we ran the PWP+EFC focal plane wavefront sensing and control algorithms, following the method described by \cite{Potier2020b}. The pair-wise probes and EFC solutions were applied on the HODM by changing the SH WFS reference slopes while the AO system ran in closed loop. The pair-wise probes consisted of the vertical displacement of one individual actuator located within the Lyot stop opening, poked with a peak-to-valley amplitude of $\pm$400\,nm. We used two probes per iteration, which were distributed vertically to create a top DH as described in \cite{Potier2020b}. We used an exposure time of 64\,s, which averages the intensity variations due to atmospheric turbulence. Assuming the atmospheric turbulence was in steady-state during one iteration and small AO residuals with respect to the wavelength \citep{Singh2019}, the turbulent halo was ignored by PWP when the difference between each pair of diversity images was calculated. The corrected region (DH) is an annulus with inner and outer edges of 5~$\lambda_0/D$ and 15~$\lambda_0/D$, respectively, which corresponds to angular separations ranging from 220~mas to 650~mas. This choice aims to minimize the risk of the loop being perturbed by the starlight leakage due to AO aliasing \citep[Fig.~6 of][]{Cantalloube2019} or by the wind driven halo \citep{Cantalloube2020}. At each iteration, the EFC solution is filtered through a Tikhonov regularization \citep{Pueyo2009}, which suppresses modes higher than the 400th mode out of 1377 total modes controlled by SPHERE. 
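The mode-filtered EFC step described above amounts to a regularized least-squares inversion of the instrument Jacobian. A minimal sketch follows, assuming a real-valued Jacobian with the real and imaginary parts of the dark-hole E-field stacked as rows; names and shapes are illustrative, not the SPHERE implementation, and a hard SVD truncation stands in for the Tikhonov mode filter:

```python
import numpy as np

def efc_command(G, E_hat, n_modes=400):
    """
    One EFC step: find DM commands `a` such that G @ a ~ -E_hat,
    so the DM-injected field destructively interferes with the speckles.

    G       : (2*Npix_DH, Nact) real Jacobian (Re rows stacked over Im rows)
    E_hat   : (Npix_DH,) complex estimated dark-hole E-field (e.g. from PWP)
    n_modes : number of singular modes kept in the pseudo-inverse
    """
    b = -np.concatenate([E_hat.real, E_hat.imag])
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    keep = min(n_modes, s.size)
    # Mode filtering: invert only the first `keep` singular values,
    # zeroing the poorly-sensed high-order modes
    s_inv = np.zeros_like(s)
    s_inv[:keep] = 1.0 / s[:keep]
    return Vt.T @ (s_inv * (U.T @ b))
```

In practice the returned command would be scaled by the servo-loop gain (0.1--0.2 in the observations above) before being applied.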
A low servo-loop integral gain of 0.1 was used to account for biases in the PWP, especially those induced by small unexpected fluctuations in the intensity (mainly originating from the wind driven halo) during the acquisition of the probe images and errors in the model of light propagation. Given the apparent stability of the loop, we increased the gain to 0.2 from the third iteration to expedite the correction. Each probe image acquisition required 80\,s (64\,s exposure and $\sim$16\,s overhead, including readout time) for a total of $\sim$400\,s per iteration (four probe images required for PWP and one unprobed image to measure the current raw contrast). The total time required for the on-sky NCPA calibration could be reduced in the future, either by increasing the servo-loop gain, by decreasing the exposure time for the probed images, or by skipping the unprobed image that we used to verify loop convergence. This optimization will be performed before the official commissioning of this technique on SPHERE. \subsection{Results} \label{subsec:Results_onsky} Figure~\ref{fig:OnSkyDH} shows the on-sky coronagraphic images (left) using the current SPHERE calibration procedure (top) and the closed-loop~DH correction inside the green line after five iterations (bottom). Images are normalized by the maximum of the off-axis point spread function (PSF) measured on the same star (i.e., the no-coronagraph~PSF) to study the results in the ``normalized intensity'' dimensionless metric. Normalized intensity differs from raw contrast in that it does not account for potential spatial variations in the off-axis PSF transmission and morphology. The quasi-static speckles that set the detection limit for current SPHERE observations are effectively removed from the ``DH correction'' image, leaving only the fainter, smooth AO halo. 
The final raw image is similar to the simulated bottom left image of Fig.~4 in \cite{Potier2020b} where the halo of the SPHERE AO system (SAXO) limits the overall performance after compensating for the static speckles in the DH. The raw image in Fig.~\ref{fig:OnSkyDH} is further limited by the fast-evolving low-order aberrations, whose level was underestimated in the former simulation. To reduce the effect of this halo, and highlight the impact of static and quasi-static speckles on the SPHERE performance, the images are high-pass filtered using a Gaussian kernel with a standard deviation of 2 pixels (0.57$\lambda_0/D$, right column of Fig.~\ref{fig:OnSkyDH}). In the DH, the visible static speckles have been corrected, reducing the estimated mean coherent intensity from $\sim6\cdot10^{-6}$ to $\sim1\cdot10^{-6}$. The remaining photo-electrons in the smooth regions in the high-pass-filtered DH after an exposure time of 64~s with the science detector mostly come from the photon noise induced by the turbulent halo. We show the mean modulated and unmodulated intensities in the DH at each iteration in Fig.~\ref{fig:Contrast_Iteration}. The modulated component is the squared modulus of the E-field estimated by PWP, while the unmodulated component represents the difference between the total recorded intensity and the modulated component. The unmodulated component therefore includes: 1)~astrophysical sources located in the field of view; 2)~calibration errors in PWP, such as instrument model inaccuracies; 3)~the field of speckles whose lifetime is shorter than the measurement cadence of PWP ($\sim$5~min), such as the turbulent halo; and 4)~chromatic residuals. Figure~\ref{fig:Contrast_Iteration} demonstrates the convergence of the PWP+EFC closed-loop algorithm because the averaged modulated component in the DH is minimized. 
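The Gaussian high-pass filtering and the modulated/unmodulated decomposition used above can be sketched as follows (a numpy-only illustration with a separable Gaussian kernel; the function names are ours, not the actual reduction code):

```python
import numpy as np

def _gauss_kernel1d(sigma_px):
    """Normalized 1-D Gaussian kernel truncated at 3 sigma."""
    radius = max(1, int(3 * sigma_px))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma_px) ** 2)
    return k / k.sum()

def highpass(img, sigma_px=2.0):
    """High-pass filter: subtract a Gaussian-smoothed copy (sigma = 2 px in the text)."""
    k = _gauss_kernel1d(sigma_px)
    pad = len(k) // 2
    # Separable smoothing: convolve rows, then columns, with edge padding
    smooth = np.apply_along_axis(
        lambda r: np.convolve(np.pad(r, pad, mode="edge"), k, "valid"), 1, img)
    smooth = np.apply_along_axis(
        lambda c: np.convolve(np.pad(c, pad, mode="edge"), k, "valid"), 0, smooth)
    return img - smooth

def split_components(I_total, E_hat):
    """Modulated part = |E|^2 estimated by PWP; unmodulated = the remainder
    (turbulent halo, chromatic residuals, estimation errors, real sources)."""
    modulated = np.abs(E_hat) ** 2
    return modulated, I_total - modulated
```

The filtering removes the smooth AO halo so that residual static speckles stand out, while the decomposition tracks which part of the residual intensity the PWP+EFC loop can actually correct.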
However, it also shows that the averaged level of starlight residual in the total image remains almost constant at $\sim3\cdot10^{-5}$ throughout the process. Indeed, even though the processed high-contrast imaging data are limited by the effects of static and quasi-static aberrations at small separations, the smooth halo of turbulence is often the dominating term in the raw images. This halo is removed in post-processing through the filtering of the low spatial frequency residuals or with differential imaging, leaving behind only its associated photon noise. We also show the radial profile of the filtered images (calculated as the standard deviation in azimuthal rings of size $\lambda/2D$ in the DH regions versus the angular separation) throughout the different PWP+EFC iterations in Fig.~\ref{fig:Contrast_Separation_onsky}. Detection performance in the DH region is improved by a factor of two to five. We obtain the best improvement factor (more than three) between 245 and 500~mas. The normalized intensity reaches a level below $10^{-6}$ between 410 and 635~mas. Although we did not try to minimize the starlight below 220~mas, we noticed an improvement between 150 and 220~mas, which we attribute to changing observing conditions (turbulence, wind driven halo, quasi-statics) as iterations progress. \section{Coherent differential imaging} \label{sec:CDI} \subsection{Method} \label{subsec:Method_CDI} PWP can also be used alone to further suppress stellar speckles in the images in post-processing. Indeed, the stellar E-field estimated by PWP can serve as a reference image to be subtracted from the remaining total intensity, thereby improving the S/N of the astrophysical signal that is not coherent with the starlight. This technique, called coherent differential imaging (CDI), has been demonstrated with many focal plane wavefront sensors \citep{Baudoz2006, Bottom2017, Jovanovic2018}. 
CDI can be used to further remove residual speckles inside the DH. In our case, the loop has converged and no more speckles are to be calibrated in post-processing. In this section, we demonstrate the ability of CDI to enhance the contrast in regions that are not corrected during the PWP+EFC process. We also propose an optimization of the CDI technique to account for the changing atmospheric transmission over time between the recording of the total intensity images and the probe images. This simple algorithm will be investigated further in the near future to quantify potential over-subtraction of the astrophysical signal. At each iteration, the PWP estimation of the focal plane E-field (or reference image) is numerically recentered with respect to the total intensity signal to account for any drift on the science detector during the probing steps. It is then rescaled to minimize the mean intensity of the high-pass-filtered CDI result, in the region symmetrically opposed to the top DH described in Sec.~\ref{sec:onsky_correction}. This area is arbitrarily chosen for the sake of illustration, while any region of the science detector well sensed by PWP could be selected. This optimization assumes that most of the total signal in that area is caused by stellar residuals and is, therefore, coherent. This optimization should not be strongly affected by a planet or other astrophysical object present in the field of view, but the injection of synthetic signals into the data to verify this is beyond the scope of this paper. The estimated mean scaling factor is 1.0, which demonstrates that the calibration of the probes was accurate. The standard deviation of the scaling factor over the iterations is 0.1, which we attribute to varying atmospheric transmissions. The scaling method, reprocessed at each iteration, will be used in the PWP+EFC closed loop in the future for a faster convergence of the DH correction. 
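The a posteriori rescaling of the PWP reference image can be phrased as a one-parameter fit in the optimization region. Below is a minimal sketch using a least-squares scaling, which is our own simplification: the paper minimizes the mean high-pass-filtered CDI intensity, a closely related criterion, and the recentering step is omitted here:

```python
import numpy as np

def cdi_subtract(I_total, ref, opt_mask):
    """
    Coherent differential imaging: subtract a scaled copy of the PWP
    reference image (|E|^2 of the estimated stellar field).

    The scalar c is fitted by least squares in the region `opt_mask`,
    absorbing atmospheric-transmission changes between the unprobed
    total-intensity exposure and the probe exposures.
    """
    c = np.sum(I_total[opt_mask] * ref[opt_mask]) / np.sum(ref[opt_mask] ** 2)
    return I_total - c * ref, c
```

A fitted scaling factor close to 1, as reported above, indicates that the probe amplitude calibration is accurate.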
\subsection{Results} \label{subsec:Results_CDI} The high-pass-filtered total intensity, the optimized reference image, and the CDI result at each iteration are shown in Fig.~\ref{fig:CDI_steps}. Qualitatively, the static speckle intensity in the CDI results is minimized with respect to the total intensity in the region sensed by PWP. The DH geometry is defined either by the regions of the science detector reachable by PWP with the chosen probes, or by the spatial frequencies properly corrected by the AO system. For instance, the CDI performance is limited in the central region along the horizontal axis for the former reason (see Fig.~3 in \citealp{Potier2020b}) and in the regions above the AO cutoff frequency or where aliasing caused by the SH WFS dominates for the latter. Also, the result of CDI is limited by bright residual speckles at small separations. We conclude that PWP is not as sensitive to these residuals, either because they are caused by large wavefront errors that violate the linear approximation made by PWP \citep{GiveOn2007SPIE}, or because they are too close to the fast-evolving low-order aberrations that violate the steady-state assumption required by PWP. For the former case, in principle the iterative PWP+EFC algorithm should be able to correct such aberrations during the observation. Therefore, better performance of the CDI algorithm is expected after a few iterations of DH correction in the targeted region. This approach is well demonstrated by the CDI result at iterations three and four, where the speckles in the top DH are almost perfectly removed. Figure~\ref{fig:Contrast_Separation_CDI} shows the radial profile of both the total intensity and the result of CDI in the optimized region (encircled in blue in Fig.~\ref{fig:CDI_steps}). The CDI technique enhances the contrast level in the post-processed images between 150 and 640~mas for each estimation of the coherent signal with PWP. 
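The radial profiles in Figs.~\ref{fig:Contrast_Separation_onsky} and \ref{fig:Contrast_Separation_CDI}, computed as the standard deviation in azimuthal rings of width $\lambda/2D$ versus angular separation, can be obtained with a short routine of this form (a generic sketch, not the actual SPHERE reduction code):

```python
import numpy as np

def radial_std_profile(img, center, ring_width_px):
    """
    Azimuthal standard deviation in concentric annuli around `center`.
    Returns ring-center radii (px) and the std of `img` in each ring.
    In the paper, ring_width_px corresponds to lambda/2D.
    """
    y, x = np.indices(img.shape)
    r = np.hypot(x - center[0], y - center[1])
    n_rings = int(r.max() / ring_width_px)
    radii, stds = [], []
    for i in range(n_rings):
        ring = (r >= i * ring_width_px) & (r < (i + 1) * ring_width_px)
        if ring.any():
            radii.append((i + 0.5) * ring_width_px)
            stds.append(float(img[ring].std()))
    return np.array(radii), np.array(stds)
```

Applied to the high-pass-filtered images, this metric quantifies the residual speckle fluctuations as a function of separation from the star.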
With this promising result, we envision observing strategies where PWP is applied at a regular cadence and the resulting CDI results are used as inputs for post-processing methods, or for a direct analysis. However, the contrast after applying CDI degrades with PWP+EFC iterations. Indeed, the minimization of the speckle intensity due to amplitude aberrations in the top DH with one HODM is achieved by degrading the contrast in the bottom part of the field of view. CDI then starts from a higher speckle intensity, with more noise, in the bottom part of the image. Hence, applying CDI before any half DH correction would be more efficient on the ``degraded'' part of the image. The implementation of CDI should, therefore, be defined depending on the science objective and telescope time. Ideally, either CDI would be applied on top of a DH active-wavefront correction for science analysis in that region, requiring the implementation of PWP+EFC successively in different DHs to cover a broader region; or CDI would be directly applied in a larger area by applying PWP alone, without EFC correction, at a regular cadence. The former case would provide the best contrast performance, while the latter would reduce the required telescope time. \section{Conclusions} We present two promising implementations of a dark hole~(DH) technique on VLT/SPHERE. First, relying on an efficient adaptive optics~(AO) system combined with good observing conditions, we demonstrated a pair-wise probing~(PWP) + electric field conjugation~(EFC) algorithm directly on-sky that minimized the static speckle field intensity in a defined region using VLT/SPHERE. The standard deviation of the normalized intensity was improved by a factor of up to five in the DH. Therefore, 1-min exposure images, after the PWP+EFC correction is applied (30~min here), achieve the same level of performance as ADI with 1-h sequences of observations. 
This promising technique is expected to be even more robust with the improvement of the AO systems planned for the next generation of high-contrast imaging instruments \citep{Chilcote2018, Boccaletti2020}. Second, we showed that estimation of the coherent light in the field of view with PWP could be used to further calibrate the static speckles in post-processing through coherent differential imaging~(CDI). PWP could also be used at a regular cadence during a high-contrast observation to estimate the spatial distribution of the coherent speckles and then subtract them from the raw images. The residuals, containing the incoherent signal of interest, could then be further analyzed with conventional methods such as angular~(ADI), spectral~(SDI), or reference~(RDI) differential imaging. Here, the performance was quantified via a simple high-pass-filtered image. Therefore, the~CDI method also solves the self-subtraction issue that emerges when performing ADI at small angular separations with a coronagraph instrument, such as those that will equip the upcoming extremely large telescopes \citep{Baudoz2014, Fitzgerald2019, Kasper2021}. We will assess this capability in future experiments, where the technique will target the inner edges of known circumstellar disks or face-on disks whose morphologies would be altered by ADI and SDI post-processing techniques. We will also combine this method with star-hopping algorithms \citep{Wahhaj2021} in the near future to understand the speckle lifetime and how the DH correction is impacted by the telescope alignment from one target to another. All of these experiments aim to determine the most optimized strategy of observation for the search for and the characterization of exoplanets in direct imaging. 
\begin{acknowledgements} The authors would like to thank Thierry Fusco, Jean-Fran\c{c}ois Sauvage and Anne Costille for fruitful discussions about SAXO as well as Arthur Vigan and Julien Milli for their insights regarding internal turbulence in SPHERE. AP and GR's research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration (80NM0018D0004). This work was supported by an ECOS-CONICYT grant (\#C20U02). \end{acknowledgements} \bibliographystyle{aa} % \bibliography{bib_GS} %
Title: The core population and kinematics of a massive clump at early stages: an ALMA view
Abstract: High-mass star formation theories make distinct predictions on the properties of the prestellar seeds of high-mass stars. Observations of the early stages of high-mass star formation can provide crucial constraints, but they are challenging and scarce. We investigate the properties of the prestellar core population embedded in the high-mass clump AGAL014.492-00.139, and we study the kinematics at the clump and the clump-to-core scales. We have analysed an extensive dataset acquired with the ALMA interferometer. Applying a dendrogram analysis to the Band o-$\rm H_2D^+$ data, we identified 22 cores. We have fitted their average spectra in local-thermodynamic-equilibrium conditions, and we analysed their continuum emission at $0.8 \, \rm mm$. The cores have transonic to mildly supersonic turbulence levels and appear mostly low-mass, with $M_\mathrm{core}< 30 \, \rm M_\odot$. Furthermore, we have analysed Band 3 observations of the $\rm N_2H^+$ (1-0) transition, which traces the large scale gas kinematics. Using a friend-of-friend algorithm, we identify four main velocity coherent structures, all of which are associated with prestellar and protostellar cores. One of them presents a filament-like structure, and our observations could be consistent with mass accretion towards one of the protostars. In this case, we estimate a mass accretion rate of $ \dot{M}_\mathrm{acc}\approx 2 \times 10^{-4} \rm \, M_\odot \, yr^{-1}$. Our results support a clump-fed accretion scenario in the targeted source. The cores in prestellar stage are essentially low-mass, and they appear subvirial and gravitationally bound, unless further support is available for instance due to magnetic fields.
https://export.arxiv.org/pdf/2208.01675
\newcommand{\vdag}{(v)^\dagger} \newcommand\aastex{AAS\TeX} \newcommand\latex{La\TeX} \usepackage{txfonts} \usepackage{xspace} \usepackage{mathrsfs} \usepackage{amssymb} \usepackage{gensymb} \newcommand{\nnhp}{$\rm N_2H^+$\xspace} \newcommand{\hhdp}{$\rm H_2D^+$\xspace} \newcommand{\nndp}{$\rm N_2D^+$\xspace} \newcommand{\hcop}{$\rm HCO^+$\xspace} \newcommand{\dcop}{$\rm DCO^+$\xspace} \newcommand{\hcdop}{$\rm HC^{18}O^+$\xspace} \newcommand{\kms}{$\rm km \, s^{-1}$\xspace} \newcommand{\ncol}{$N_\mathrm{col}$\xspace} \newcommand{\xmol}{$X_\mathrm{mol} (\text{\ohhdp})$\xspace} \newcommand{\tex}{$T_\mathrm{ex}$\xspace} \newcommand{\vlsr}{$V_\mathrm{lsr}$\xspace} \newcommand{\mcore}{$M_\mathrm{core}$\xspace} \newcommand{\sigmav}{$\sigma_\mathrm{V}$\xspace} \newcommand{\xab}{$f_\mathrm{corr}$\xspace} \newcommand{\modh}{$\mathscr{H}$\xspace} \newcommand{\modf}{$\mathscr{F}$\xspace} \newcommand{\modl}{$\mathscr{L}$\xspace} \newcommand{\crir}{$\zeta_2$\xspace} \newcommand{\ohhdp}{$\rm \text{o-} H_2D^+$\xspace} \newcommand{\lineh}{\hhdp $(1_{1,0} - 1_{1,1})$\xspace} \newcommand{\olineh}{$\rm \text{o-} H_2D^+(1_{1,0} - 1_{1,1})$\xspace} \shorttitle{An ALMA view of the high-mass clump AG14} \shortauthors{Redaelli et al.} \graphicspath{{./}{figures/}} \begin{document} \title{The core population and kinematics of a massive clump at early stages: an ALMA view} \correspondingauthor{Elena Redaelli} \email{eredaelli@mpe.mpg.de} \author[0000-0002-0528-8125]{Elena Redaelli} \affiliation{Centre for Astrochemical Studies, Max-Planck-Institut f\"ur extraterrestrische Physik, Gie{\ss}enbachstra{\ss}e 1, 85749 Garching bei M\"unchen, Germany} \author{Stefano Bovino} \affiliation{Departamento de Astronom\'ia, Facultad Ciencias F\'isicas y Matem\'aticas, Universidad de Concepci\'on, Av. 
Esteban Iturra s/n Barrio Universitario, Casilla 160, Concepci\'on, Chile} \affiliation{INAF --- Istituto di Radioastronomia --- Italian node of the ALMA Regional Centre (It-ARC), Via Gobetti 101, 40129 Bologna, Italy} \author{Patricio Sanhueza} \affiliation{National Astronomical Observatory of Japan, National Institutes of Natural Sciences, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan} \affiliation{Department of Astronomical Science, The Graduate University for Advanced Studies, SOKENDAI, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan} \author{Kaho Morii} \affiliation{National Astronomical Observatory of Japan, National Institutes of Natural Sciences, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan} \affiliation{Department of Astronomy, Graduate School of Science, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan} \author{Giovanni Sabatini} \affiliation{INAF --- Istituto di Radioastronomia --- Italian node of the ALMA Regional Centre (It-ARC), Via Gobetti 101, 40129 Bologna, Italy} \author{Paola Caselli} \affiliation{Centre for Astrochemical Studies, Max-Planck-Institut f\"ur extraterrestrische Physik, Gie{\ss}enbachstra{\ss}e 1, 85749 Garching bei M\"unchen, Germany} \author{Andrea Giannetti} \affiliation{ INAF - Istituto di Radioastronomia, Via P. Gobetti 101, I-40129 Bologna, Italy} \author{Shanghuo Li} \affiliation{Korea Astronomy and Space Science Institute, 776 Daedeokdae-ro, Yuseong-gu, Daejeon 34055, Republic Of Korea} \keywords{Interstellar line emission(844) --- Star forming regions(1565) --- Astrochemistry(75) --- Interferometry(808) ---Massive stars(732) --- Star formation(1569) } \section{Introduction\label{Intro}} High-mass stars dominate the energetics of the interstellar medium (ISM), mainly due to feedback during their whole life cycle. Despite their importance, however, their formation process is significantly less well understood than that of their low-mass counterparts. 
From the theoretical point of view, two main families of models have been developed. The core-accretion (or core-fed) model is a scaled-up version of the low-mass process \citep{McKee03}. It predicts the existence of high-mass prestellar cores (HMPCs, $M_\mathrm{core}=$ several tens of solar masses), which are virialised by turbulence or by the contribution of magnetic pressure, and which collapse as a whole \citep{Tan13, Tan14}. In the clump-fed or competitive accretion scenarios, instead, early fragmentation in high-mass clumps leads to the formation of essentially low-mass cores, which keep accreting mass from the dense surrounding environment also during the initial protostellar stages \citep{Bonnel01, Bonnell06, Smith09}. In order to distinguish among the existing theories, observational constraints on the properties of the initial stages of high-mass star formation are needed, in particular in terms of core masses and properties of accretion. \par These observations are however challenging, since high-mass stars are intrinsically rarer and on average more distant than low-mass ones. The birthplace of high-mass stars is to be found in the heavily obscured environments of infrared dark clouds (IRDCs, \citealt{Rathborne06}). In particular, IRDCs that are dark at $24$ and $70 \, \rm \mu m$ are thought to host the earliest evolutionary stages of high-mass star formation \citep{Tan13, Sanhueza13,Guzman15}. Several studies have hence targeted IRDCs with interferometric facilities, such as the Atacama Large Millimeter/submillimeter Array (ALMA, as done by \citealt{Zhang15, Ohashi16, Contreras18, Svoboda19, Sanhueza19, Morii21}), or the Submillimeter Array \citep[SMA, see for instance][]{Sanhueza17, Li19, Pillai19}. {Multiple works} unveiled that the lack of emission at mid-infrared wavelengths as seen with single-dish facilities (e.g.
the {\em Spitzer Space Telescope}) does not guarantee a complete lack of star formation activity, due to the high extinction that characterises high-mass star-forming regions \citep[see e.g.][]{Tan16, Pillai19, Li20, Morii21,Tafoya21}. \par In this context, the ALMA Survey of $70 \, \mu$m dark High-mass clumps in Early Stages \citep[ASHES;][]{Sanhueza19} targeted twelve IRDCs with ALMA Band 6 observations. In the first paper of the series, the authors studied the fragmentation of the clumps using the continuum emission at $1.3 \,$mm, identifying $\approx 300$ cores, none of which appears more massive than $30 \, \rm M_\odot$. Continuum emission together with spectral line observations has the potential to provide a more complete picture of star-forming regions, in particular in terms of evolutionary stage assessment. For instance, outflow tracers (e.g. CO, SiO), or so-called warm transitions, which have high upper level energies ($E_\mathrm{u}> 20-30 \, \rm K$), can be used to identify signs of protostellar activity, such as outflow emission or gas heating \citep[see e.g.][]{Sanhueza12, Li20}. \par In the hunt for HMPCs, it is crucial to find a good and unambiguous tracer of the prestellar phases. Deuterated species appear promising to this aim. At the low temperatures ($T < 20\, \rm K$) and high densities ($n \gtrsim 10^5 \, \rm cm^{-3})$ found in prestellar gas, most C- and O-bearing species are frozen out onto dust grains \citep{Caselli99,Bacmann02}. This contributes to increasing the abundance of \hhdp, the precursor of deuterated species in the gas phase, since this molecule is predominantly destroyed by reaction with CO \citep[e.g.][and references therein]{Ceccarelli14}. This results in a boost of deuteration, and deuterated molecules can therefore be good probes of cold and dense gas. \par \cite{Redaelli21} reported the first \ohhdp observations with ALMA in high-mass star-forming regions{ and showed that} this molecule is a good probe of prestellar conditions.
{The \olineh line was detected towards two intermediate-mass clumps (AG351 and AG354)}, at a spatial resolution of $\approx 1500\, $AU. {The authors identified 16 cores in total, and estimated their masses from the continuum emission at $0.8\rm \, mm$.} {At} $T_\mathrm{dust} = 10 \, $K, all cores are less massive than $10\, \rm M_\odot$, and the majority are subvirial, assuming a negligible contribution to the stability from magnetic fields. \par Molecular lines also yield information on the gas kinematics, which is of great importance when trying to investigate the accretion processes in high-mass clumps. Among the different tracers used, two important ones are ammonia (see e.g. \citealt{ Lu18, Williams18, Sokolov19}) and \nnhp \citep{Henshaw14, Chen19}. {The kinematics of high-mass star-forming regions can be studied by means of algorithms dedicated to identifying the hierarchy in their filamentary structures, as done for instance by \cite{Peretto14, Chen19, Henshaw19, Wang20}. Many of these works report the detection of velocity gradients, usually interpreted as gas motions linked to accretion flows towards cores or hubs \citep[see for instance][and references therein]{Hacar22}.} \par The $70\, \mu \rm m$-dark clump AGAL014.492-00.139 (hereafter AG14) has an estimated mass of $5200 \, \rm M_\odot$ and is located at a distance of $3.9\, \rm kpc$ \citep{Sanhueza19}. It belongs to the ATLASGAL TOP100 sample \citep{Giannetti14,Konig17}, a statistically significant sample of high-mass clumps at different evolutionary stages in the inner Galaxy. AG14 was also included among the targets of the ASHES project: \cite{Sanhueza19} identified 37 cores in continuum, 25 of which are associated with warm line or outflow emission. This point was investigated further by \cite{Li20}, who used $\rm CO$ and $\rm SiO$ observations with ALMA, and found that six cores are associated with outflows. In particular, four present bipolar emission.
Throughout this work, we will refer to these cores as protostellar (or protostars). More recently, \cite{Sakai22} studied the emission of several deuterated molecules (\nndp, \dcop, and DCN) found in ALMA Band 6. \par In this work, we present an extensive ALMA dataset on AG14, from $90$ up to $370 \rm \, GHz$ in Sect. \ref{Observations}{, consisting of Band 3 data covering the \nnhp (1-0) line, Band 7 data of the \olineh line, and Band 6 data of the \nndp (3-2) transition (already published in \citealt{Sakai22}). These different lines are used to trace distinct parts of the clump. \hhdp is mainly destroyed by reactions with CO, and it is hence sensitive to temperatures rising above the CO desorption temperature ($\approx 20 \, \rm K$). Furthermore, its \olineh transition has a critical density of $n_\mathrm{c} \approx 10^5 \, \rm cm^{-3}$ \citep{Hugo09}, hence this line is an ideal tracer of cold and dense gas at the core scales. \nnhp is also a well known high-density tracer. Its first rotational transition has a critical density of $6 \times 10^4 \rm \, cm^{-3}$, and it presents an isolated hyperfine component that remains well separated from the others even in cases of large linewidths ($\sigma_\mathrm{V} \lesssim 2-4 \,$\kms). This component is usually optically thin or only moderately optically thick \citep{Sanhueza12, Barnes18, Fontani21}. In the intracloud gas in high-mass clumps, the \nnhp transition is excited over large scales. For all these reasons, \nnhp represents an ideal probe of the clump and clump-to-core kinematics. Finally, \nndp is also a high-density tracer. \cite{Giannetti19} studied the correlation between the \olineh and the \nndp (3-2) transitions in three clumps embedded in the G351.77-0.51 complex, using single-dish data from APEX. The main result of those authors was an anticorrelation between the abundances of the two molecular species.
This was explained as an evolutionary effect: in the prestellar phase, as time evolves, the abundance of \ohhdp is expected to decrease, mainly due to the conversion to its doubly and triply deuterated forms (see also \citealt{Sabatini20}). \nndp instead forms later, and its abundance then keeps increasing, since it can also be formed from $\rm D_2H^+$ and $\rm D_3^+$ \citep[see for instance the chemical model of][]{Sipila13,Sipila15}. These findings hinted at the possibility of using the abundance ratio between \nndp and \ohhdp as an evolutionary indicator, and we aim to investigate this point in AG14 with the available data.} \par {The paper is organised as follows. The observations are presented in Sect. \ref{Observations}.} In the analysis, we first investigate the core population embedded in the clump, using the \olineh data (Sect. \ref{CorePop}). We then present the clump-to-core kinematic properties in Sect. \ref{kinematics}, based on the analysis of the \nnhp (1-0) data. In Sect. \ref{h2dp_n2dp} we analyse the correlation between the \ohhdp and the \nndp emission in the identified cores, and Sect. \ref{Conclusion} contains a discussion and the concluding remarks of this work. \section{Observations\label{Observations}} The observations {used in this work} are described in the following {subsections}, and the main technical details (e.g. angular resolution, sensitivity, ...) are summarised in Table \ref{ObsData}. If the data have already been published, we refer to the corresponding publication.
\begin{deluxetable*}{ccccc} \tablecaption{Observational parameters\label{ObsData}} \tablehead{ \colhead{Observation} & \colhead{Beam size\tablenotemark{a}} & \colhead{Spatial res.} & \colhead{$rms$} & \colhead{Spectral res.} } \startdata \multicolumn{5}{c}{Band 7} \\ Continuum $0.8\,\rm mm$ & $0''.66\times 0''.50$, $PA=-73.4$\degree & $2600 \times 2000 \, \rm AU$ & $0.8 \, \rm mJy/beam$ & - \\ \olineh & $0''.67\times 0''.50$, $PA=-73.4$\degree & $2600 \times 2000 \, \rm AU$ & $100 \, \rm mK$ & $0.20\,$\kms \\ \hline \multicolumn{5}{c}{Band 6\tablenotemark{b}} \\ Continuum $1.34\,\rm mm$ & $1''.29\times 0''.85$, $PA=72.5$\degree & $5000 \times 3300 \, \rm AU$ & $0.17 \, \rm mJy/beam$ & - \\ \nndp (3-2) & $1''.44\times 1''.00$, $PA=74.8$\degree & $5600 \times 3900 \, \rm AU$ & $180 \, \rm mK$ & $0.17\,$\kms \\ \hline \multicolumn{5}{c}{Band 3} \\ \nnhp (1-0) & $2''.86\times 1''.61$, $PA=74.7$\degree & $11200 \times 6200 \, \rm AU$ & $110 \, \rm mK$ & $0.20\,$\kms \\ \enddata \tablenotetext{a}{The beam size is shown as: major axis $\times$ minor axis, position angle ($PA$).} \tablenotetext{b}{Data presented in \cite{Sanhueza19} and \cite{Sakai22}.} \end{deluxetable*} \subsection{Band 7 observations} The Band 7 data were observed during Cycle 6 as part of the ALMA project 2018.1.00331.S (PI: Bovino) in three runs (November 2018 and March--April 2019). The observations, performed as {a} single pointing, made use of both the Main Array (12m-array, 45 antennas) and the 7m-array (12 antennas), with baselines ranging from $7$ to $645 \, \rm m$. The quasars J1924-2914, J1911-2006, J1733-1304, and J1751+0939 were used as calibrators. The spectral setup comprises four spectral windows (SPWs).
The first one, dedicated to the observation of the \olineh transition, is centred at the frequency $\nu_\mathrm{rest} = 372421.3558\, \rm MHz$ \citep{Jusko17}, and has a resolution of $244 \, \rm kHz$ (corresponding to $0.20\,$\kms at $372 \, \rm GHz$) and a total bandwidth of $500 \, \rm MHz$. The second SPW is dedicated to continuum, with a total bandwidth of $1.85 \, \rm GHz$ around the frequency of $371\, \rm GHz$. \par At these frequencies, and with the configuration used, the maximum recoverable scale is $\theta_\mathrm{MRS} \approx 20''$, {the primary beams of the main array and of ACA are $17''$ and $30''$, respectively}, and the angular resolution is $\approx 0''.6$ (corresponding to $\approx 2300 \, \rm AU$ at the distance of $3.9\, \rm kpc$). The total observing time was $6.0\, \rm h$ ({7m-array}) and $2.5 \, \rm h$ (12m-array). During the observations, the precipitable water vapour was typically $0.4 \, \mathrm{mm}< PWV < 0.6\, \mathrm{mm}$. The average system temperature values are found in the range $300-400\, \rm K$ for the SPW containing the \olineh line. The data were calibrated by the standard pipeline (\textsc{casa}, version 5.4; \citealt{McMullin07}). From a first inspection of the dirty maps, the emission both in continuum and in line appears very extended across the whole Field-of-View (FoV). We therefore applied a modified weight of $2.4$ to the ACA observations, similarly to what was done in \cite{Redaelli21}{. After a few tests, this choice appeared to be the ideal compromise to maximise the recovery of the large-scale flux without overly degrading the final angular resolution.} \par We imaged the data using the \texttt{tclean} task of the software \textsc{casa} (version 5.6), in interactive mode. We used natural weighting and the multiscale deconvolver algorithm \citep{Cornwell08} (scales: $0, 5, 15$).
In order to avoid oversampling, both the continuum and the line images have been re-gridded to ensure 3 pixels per beam minor axis, in agreement with the Nyquist theorem. Table \ref{ObsData} summarises the achieved sensitivities and resolutions. The molecular line data have been converted into the brightness temperature $T_\mathrm{b}$ scale, using the gain {$G$, computed as:} \begin{equation} G = 1.222 \times 10^6 \frac{1}{\nu^2 \theta_\mathrm{min} \theta_\mathrm{maj}} = 26 \, \rm mK/ (mJy\, beam^{-1}) \;, \label{gain_eq} \end{equation} {where $\nu$ is the frequency in GHz and $\theta_\mathrm{min/maj}$ are the beam sizes along the minor and major axes, respectively, expressed in arcsec.} \par {Figure \ref{Band7_Data} shows the integrated intensity map of the \ohhdp line, computed in the velocity range $36-43\,$\kms, masking channels with a signal lower than $1\sigma$. The contours show the distribution of the continuum emission at $0.8\,\rm mm$.} Similarly to what was noticed in \cite{Redaelli21} {in two different sources}, the morphologies of the continuum and of the line emission are in general different. Several bright peaks identified in dust thermal emission lack a counterpart in \ohhdp emission above the $3\sigma$ level. \subsection{Band 6 observations} The Band 6 data of the continuum emission and of the \nndp (3-2) line at $\nu_\mathrm{rest}= 231321.8283\, \rm MHz$\footnote{According to the Cologne Database for Molecular Spectroscopy, CDMS, available at \url{https://cdms.astro.uni-koeln.de/}.} have been published by \cite{Sanhueza19} and \cite{Sakai22}, respectively, and we refer to those papers for a complete description of the observations and of the data reduction. Briefly, the data were observed in Cycle 3 (Project ID: 2015.1.01539.S; PI: Sanhueza), with the 12m array, the 7m array {(baselines ranging from 8 to 330$\,$m)}, and the Total Power array (the latter for spectral lines only).
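As a quick numerical cross-check, the flux-to-brightness-temperature conversion of Eq. \ref{gain_eq} can be evaluated for the beams and frequencies listed in Table \ref{ObsData}. The sketch below (plain Python, not part of the reduction pipeline) reproduces the quoted Band 7 and Band 3 gains of $\approx 26$ and $\approx 30 \, \rm mK/(mJy \, beam^{-1})$:

```python
# Cross-check of the mJy/beam -> mK gain, G = 1.222e6 / (nu^2 * th_min * th_maj),
# with nu in GHz and the beam axes in arcsec (Eq. gain_eq in the text).
# Beam sizes and line frequencies are those quoted in Table "ObsData".

def gain_mK_per_mJybeam(freq_GHz, theta_min_as, theta_maj_as):
    """Rayleigh-Jeans gain in mK per (mJy/beam)."""
    return 1.222e6 / (freq_GHz**2 * theta_min_as * theta_maj_as)

g_band7 = gain_mK_per_mJybeam(372.42, 0.50, 0.67)   # o-H2D+ cube: ~26 mK/(mJy/beam)
g_band3 = gain_mK_per_mJybeam(93.174, 1.61, 2.86)   # N2H+ cube:   ~30 mK/(mJy/beam)
```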
The data were acquired as mosaics, consisting of 10 pointings for the 12m array and 3 for the 7m array. \par {The spectral window containing the \nndp line was imaged using the automatic clean script \textsc{yclean} \citep{Contreras18}, which uses natural weighting and a multiscale deconvolver (scales: 0, 3, 10, 30). The \textsc{casa} version 5.4 was used for the imaging. To allow a better comparison with the \olineh data (see Sect. \ref{h2dp_n2dp}), we have excluded the Total Power data for this analysis, and the maximum recoverable scale is $\theta_\mathrm{MRS} = 35''$. The final angular resolution of the 12m+7m combined datacube is $\approx 1''.0\times 1''.4$, and the spectral resolution is $0.17\, $\kms. The data have been converted from the flux scale to the temperature scale through the gain $G =15 \, \rm mK/ (mJy\, beam^{-1}) $, computed with Eq. \ref{gain_eq}. } \subsection{Band 3 observations} The Band 3 data were collected as part of project 2018.1.00299.S (PI: Contreras), during Cycle 6. The data consist of 12m array observations (performed in December 2018 and April 2019), 7m array observations (performed in January 2019), and Total Power observations (April 2019), with baselines ranging from $9.0\, \rm m$ to $500\rm \, m$. The average precipitable water vapour was in the range $4.6\, \mathrm{mm} <PWV < 6\rm \, mm$. The quasars J2000-1748, J1517-2422, and J1832-2039 were used as calibrators for the {12m-array} data, whilst J1751+0939, J2056-4714, and J1911-2006 were used during the {7m-array} observations. \par In this paper, we focus on the \nnhp (1-0) transition at $93.174\, \rm GHz$, which was targeted by a dedicated SPW {with a spectral resolution of $61\, \rm kHz$, corresponding to a velocity resolution of $0.20\,$\kms at the \nnhp frequency.} The primary beam size at the \nnhp frequency is $\approx 60''$ for the 12m array, and $\approx 110''$ for the 7m array.
The line was imaged with Briggs weighting (robust = 0.5) and a multiscale deconvolver, using the \texttt{tclean} task of the software \textsc{casa} {(version 5.7).} The scales used were $0,5,15,25$ times the pixel size ($0''.4${, corresponding to 1/4 of the beam minor axis, in agreement with the Nyquist sampling}). The final beam size {of the composite datacube (12m+7m+TP arrays), after primary-beam correction,} is $2''.9\times 1''.6$. {The fluxes were converted to the temperature scale using Eq. \ref{gain_eq}, obtaining the gain $G =30 \, \rm mK/ (mJy\, beam^{-1}) $. The maximum recoverable scale considering the 12m and 7m array configurations is $\theta_\mathrm{MRS} = 85''$, but the Total Power observations further increase it.} \section{Analysis} \subsection{The prestellar core population\label{CorePop}} The \olineh emission traces cold and dense gas. In this Section, we describe the analysis of the Band 7 data aimed at identifying the population of prestellar cores in the clump, and at studying their properties. \subsubsection{Prestellar cores identification\label{scimes}} Our aim is to use the \olineh data to identify structures (cores) which are in an early, prestellar stage. Similarly to what {has been done} in \cite{Redaelli21}, we use \textsc{scimes} \citep{Colombo15}, which is based on the dendrogram algorithm \citep{Rosolowski08} and analyses data in three-dimensional, position-position-velocity (ppv) space. \par The first key step of \textsc{scimes} is the \textit{dilated masking} technique, which maximises the information recoverable in low signal-to-noise ratio (S/N) data (see \citealt{Rosolowsky06}). The code identifies regions where the $\rm S/N$ is higher than a given threshold ($\rm S/N_{lim}$), provided that they contain emission peaks brighter than a second threshold ($\rm S/N_{peak}$). After a few tests, we set $\rm S/N_{peak} = 2$ and $\rm S/N_{lim} = 1.5$, consistent with our choice in \cite{Redaelli21}, which maximises the signal recovery.
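The two-threshold selection can be illustrated with a toy example. The thresholds below are the ones quoted in the text ($\rm S/N_{lim} = 1.5$, $\rm S/N_{peak} = 2$), while the small S/N map and the flood-fill labelling are purely illustrative and not the actual \textsc{scimes} implementation (which works on ppv cubes):

```python
# Toy illustration of the two-threshold ("dilated") masking idea: keep any
# connected region with S/N > SN_LIM that also hosts a peak with S/N > SN_PEAK.
# Pure-Python flood fill on a small, made-up 2-D S/N map.
from collections import deque

SN_LIM, SN_PEAK = 1.5, 2.0

snr = [
    [0.2, 1.8, 2.4, 1.6, 0.1],
    [0.1, 1.7, 1.9, 0.4, 0.0],
    [0.0, 0.3, 0.2, 1.6, 1.7],   # the bottom-right blob never exceeds SN_PEAK
    [0.1, 0.0, 0.4, 1.8, 1.9],
]

def dilated_mask(snr, lo=SN_LIM, hi=SN_PEAK):
    ny, nx = len(snr), len(snr[0])
    seen, keep = set(), set()
    for y0 in range(ny):
        for x0 in range(nx):
            if snr[y0][x0] <= lo or (y0, x0) in seen:
                continue
            # flood-fill one connected region above the lower threshold
            region, queue = [], deque([(y0, x0)])
            seen.add((y0, x0))
            while queue:
                y, x = queue.popleft()
                region.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    p = (y + dy, x + dx)
                    if (0 <= p[0] < ny and 0 <= p[1] < nx
                            and p not in seen and snr[p[0]][p[1]] > lo):
                        seen.add(p)
                        queue.append(p)
            # keep the region only if it contains a bright-enough peak
            if max(snr[y][x] for y, x in region) > hi:
                keep.update(region)
    return keep

mask = dilated_mask(snr)   # keeps the upper blob, rejects the lower one
```

In this example the upper-left region (peak S/N $= 2.4$) survives, whereas the lower-right one (peak S/N $= 1.9$) is discarded even though all its pixels exceed the lower threshold.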
Another key parameter to build the dendrogram is the minimum height (in flux/brightness) that a structure must have to be catalogued as an independent leaf ($\Delta_{min}$). We set the minimum height of an identified structure to $\Delta_{min} = 2.8 \times rms$\footnote{{Values tested in the range $\Delta_{min}=(2.5-3.5)\times rms$ lead to variations in only 18\% of the identified structures. Using $\Delta_{min}/rms=2.8$, instead of 3.0, allows cores 21 and 22 to be separated, instead of merging into a single structure significantly larger than any other identified.}}, where $rms = 100 \, \rm mK$ (this is the value obtained on the datacube before primary-beam correction, since \textsc{scimes} requires data with constant noise). We set the minimum number of channels that a leaf must span to $N \rm^{min} _{chan} = 2$, and we mask structures smaller than three times the beam area. With these input parameters, we find 22 prestellar cores, shown in Fig. \ref{cores}. Some of them appear to overlap in projection on the plane of the sky. This is because \textsc{scimes} works in ppv space, and it is therefore able to identify distinct velocity components as belonging to different structures. We report in Table \ref{AveSpectra_par} the positions and sizes, expressed in terms of effective radius, of the whole sample of cores. \par Figure \ref{cores} confirms that the continuum and \ohhdp emission do not perfectly correlate, as seen also in Fig. \ref{Band7_Data}. The positions of the protostellar candidates found by \cite{Li20}, also shown in Fig. \ref{cores}, are usually associated with peaks of continuum (with the exception of the one in the north-west corner), and are either not associated with, or found at the edges of, the \hhdp-identified cores. In Appendix \ref{ContCores} we present a more detailed study of the continuum cores.
\begin{deluxetable*}{cccccccccc} \tablecaption{Core properties and best-fit results obtained fitting their average spectra with \textsc{mcweeds}. The $rms$ values are standard deviation over line-free channels. Uncertainties on \vlsr, \sigmav, \ncol, one-dimension turbulent Mach number, and virial mass are expressed as 95\% high probability density intervals (HPD).\label{AveSpectra_par}} \tablehead{ \colhead{Core id} & \multicolumn2c{Position} &\colhead{$R_\mathrm{eff}$\tablenotemark{a}} & \colhead{$rms$} & \colhead{\vlsr} & \colhead{\sigmav} & \colhead{\ncol} & \colhead{$\mathcal{M}$\tablenotemark{b}} & \colhead{$M_\mathrm{vir}$\tablenotemark{b}} \\ &\colhead{RA(h:m:s.ss)}& \colhead{Dec (d:m:s.ss)} & \colhead{$10^{3}$AU} & \colhead{K} & \colhead{\kms} &\colhead{\kms} &\colhead{ $\log_{10} ( \rm cm^{-2})$} & & \colhead{M$_\odot$}} \startdata 1 & $18:17:22.59$ & $-16:25:1.35$ & 5.5 & 0.09 & $38.26^{+0.06}_{-0.06}$ & $0.38^{+0.07}_{-0.06}$ & $13.20^{+0.06}_{-0.06}$ & $1.84^{+0.42}_{-0.38}$ & $4.8^{+1.7}_{-1.4}$ \\ 2 & $18:17:22.41$ & $-16:25:01.36$ & 2.4 & 0.07 & $38.96^{+0.10}_{-0.11}$ & $0.34^{+0.02}_{-0.02}$ & $12.91^{+0.07}_{-0.07}$ & $1.61^{+0.02}_{-0.02}$ & $1.74^{+0.04}_{-0.04}$ \\ 3 & $18:17:22.38$ & $-16:24:56.44$ & 6.1 & 0.07 & $40.33^{+0.04}_{-0.05}$ & $0.40^{+0.02}_{-0.04}$ & $13.27^{+0.04}_{-0.04}$ & $2.01^{+0.11}_{-0.24}$ & $6.1^{+0.6}_{-1.1}$ \\ 4 & $18:17:22.29$ & $-16:24:58.68$ & 2.5 & 0.07 & $38.38^{+0.05}_{-0.05}$ & $0.29^{+0.05}_{-0.05}$ & $13.02^{+0.06}_{-0.06}$ & $1.32^{+0.34}_{-0.32}$ & $1.4^{+0.5}_{-0.4}$ \\ 5 & $18:17:22.24$ & $-16:25:0.27$ & 5.5 & 0.03 & $38.43^{+0.04}_{-0.04}$ & $0.40^{+0.04}_{-0.03}$ & $13.04^{+0.03}_{-0.03}$ & $2.00^{+0.22}_{-0.19}$ & $5.5^{+1.0}_{-0.9}$ \\ 6 & $18:17:22.22$ & $-16:24:55.66$ & 5.1 & 0.04 & $39.71^{+0.05}_{-0.05}$ & $0.30^{+0.01}_{-0.01}$ & $12.96^{+0.04}_{-0.04}$ & $1.37^{+0.01}_{-0.01}$ & $2.96^{+0.03}_{-0.03}$ \\ 7 & $18:17:22.21$ & $-16:24:53.65$ & 4.5 & 0.06 & $40.42^{+0.04}_{-0.04}$ & 
$0.36^{+0.04}_{-0.03}$ & $13.16^{+0.04}_{-0.04}$ & $1.74^{+0.22}_{-0.19}$ & $3.6^{+0.7}_{-0.5}$ \\ 8 & $18:17:22.21$ & $-16:25:6.23$ & 4.5 & 0.06 & $41.01^{+0.12}_{-0.13}$ & $0.42^{+0.09}_{-0.09}$ & $12.72^{+0.11}_{-0.10}$ & $2.08^{+0.50}_{-0.52}$ & $4.8^{+2.1}_{-1.7}$ \\ 9 & $18:17:22.16$ & $-16:25:2.31$ & 3.9 & 0.04 & $39.23^{+0.03}_{-0.03}$ & $0.30^{+0.03}_{-0.03}$ & $12.96^{+0.04}_{-0.04}$ & $1.38^{+0.20}_{-0.20}$ & $2.3^{+0.4}_{-0.4}$ \\ 10 & $18:17:22.10$ & $-16:24:50.74$ & 3.7 & 0.21 & $40.43^{+0.11}_{-0.12}$ & $0.33^{+0.10}_{-0.09}$ & $13.26^{+0.11}_{-0.14}$ & $1.58^{+0.59}_{-0.55}$ & $2.6^{+1.5}_{-1.1}$ \\ 11 & $18:17:22.02$ & $-16:25:4.36$ & 3.9 & 0.04 & $41.21^{+0.06}_{-0.06}$ & $0.33^{+0.01}_{-0.02}$ & $12.76^{+0.05}_{-0.06}$ & $1.60^{+0.04}_{-0.04}$ & $2.7^{+0.1}_{-0.1}$ \\ 12 & $18:17:21.97$ & $-16:24:49.84$ & 2.9 & 0.33 & $40.57^{+0.15}_{-0.14}$ & $0.28^{+0.02}_{-0.05}$ & $13.34^{+0.16}_{-0.16}$ & $1.26^{+0.12}_{-0.12}$ & $1.5^{+0.2}_{-0.2}$ \\ 13 & $18:17:21.94$ & $-16:25:1.57$ & 3.1 & 0.04 & $38.86^{+0.05}_{-0.05}$ & $0.21^{+0.04}_{-0.04}$ & $12.50^{+0.08}_{-0.08}$ & $0.83^{+0.29}_{-0.30}$ & $1.0^{+0.3}_{-0.3}$ \\ 14 & $18:17:21.87$ & $-16:25:6.43$ & 3.2 & 0.07 & $41.46^{+0.06}_{-0.05}$ & $0.34^{+0.05}_{-0.05}$ & $13.11^{+0.06}_{-0.06}$ & $1.64^{+0.29}_{-0.28}$ & $2.4^{+0.7}_{-0.6}$ \\ 15 & $18:17:21.86$ & $-16:24:56.45$ & 5.3 & 0.04 & $39.55^{+0.04}_{-0.04}$ & $0.22^{+0.03}_{-0.03}$ & $12.76^{+0.06}_{-0.06}$ & $0.91^{+0.21}_{-0.23}$ & $1.9^{+0.4}_{-0.4}$ \\ 16 & $18:17:21.84$ & $-16:25:4.75$ & 2.4 & 0.08 & $41.44^{+0.04}_{-0.04}$ & $0.29^{+0.04}_{-0.04}$ & $13.15^{+0.05}_{-0.05}$ & $1.28^{+0.09}_{-0.18}$ & $1.2^{+0.1}_{-0.2}$ \\ 17 & $18:17:21.75$ & $-16:25:2.60$ & 6.2 & 0.03 & $39.28^{+0.03}_{-0.03}$ & $0.24^{+0.03}_{-0.03}$ & $12.77^{+0.04}_{-0.05}$ & $1.02^{+0.19}_{-0.21}$ & $2.5^{+0.5}_{-0.5}$ \\ 18 & $18:17:21.68$ & $-16:25:1.99$ & 2.7 & 0.06 & $37.93^{+0.06}_{-0.06}$ & $0.19^{+0.08}_{-0.06}$ & $12.61^{+0.11}_{-0.12}$ & $0.66^{+0.56}_{-0.49}$ 
& $0.8^{+0.6}_{-0.3}$ \\ 19 & $18:17:21.67$ & $-16:25:3.87$ & 4.2 & 0.05 & $41.34^{+0.02}_{-0.02}$ & $0.27^{+0.02}_{-0.02}$ & $13.20^{+0.03}_{-0.03}$ & $1.25^{+0.13}_{-0.13}$ & $2.2^{+0.3}_{-0.3}$ \\ 20 & $18:17:21.63$ & $-16:25:0.29$ & 3.7 & 0.06 & $39.28^{+0.07}_{-0.07}$ & $0.31^{+0.06}_{-0.06}$ & $12.82^{+0.07}_{-0.08}$ & $1.49^{+0.36}_{-0.34}$ & $2.3^{+0.8}_{-0.7}$ \\ 21 & $18:17:21.40$ & $-16:24:56.93$ & 6.2 & 0.11 & $40.25^{+0.04}_{-0.04}$ & $0.29^{+0.03}_{-0.03}$ & $13.35^{+0.05}_{-0.05}$ & $1.35^{+0.20}_{-0.20}$ & $3.5^{+0.7}_{-0.7}$ \\ 22 & $18:17:21.35$ & $-16:24:58.56$ & 3.6 & 0.14 & $40.36^{+0.06}_{-0.06}$ & $0.30^{+0.04}_{-0.05}$ & $13.29^{+0.07}_{-0.08}$ & $1.42^{+0.26}_{-0.28}$ & $2.2^{+0.6}_{-0.5}$\\ \enddata \tablenotetext{a}{The effective radius is the radius of a circular region with the same area of the core.} \tablenotetext{b}{The one-dimensional turbulent Mach number and the virial masses are computed assuming $T_\mathrm{gas} = 10 \, \rm K$.} \end{deluxetable*} \subsubsection{Core properties from \ohhdp fitting\label{mcweeds}} We perform a spectral fit of the \olineh in each core, in order to derive maps of the centroid velocity (\vlsr), linewidth ($FWHM$), and column density $N_\mathrm{col} (\text{\ohhdp})$. We use the parallelised version of the \textsc{mcweeds} code \citep{Giannetti17}, which is based on the \textsc{Weeds} package of \textsc{GILDAS} \citep{Maret11}. {\textsc{Weeds} is able to produce synthetic spectra in LTE approximation based on a set of input parameters ($FWHM$, \vlsr, molecular column density, excitation temperature, and source size), assuming that the line profile is Gaussian. \textsc{mcweeds}, instead, provides the framework to optimise the search for the best-fit solution of the parameters\footnote{The source size is selected to ensure that the beam filling factor is unity.}.} The code analyses the spectrum in each pixel with Bayesian statistical models implemented using \textsc{PyMC} \citep{Patil10}. 
In particular, we use a Markov chain Monte Carlo (MCMC) algorithm to sample the parameter space, with uninformative flat priors over the models' free parameters. As in \cite{Redaelli21}, for each position the code performs 100000 iterations, with a burn-in of 1000 steps. For the excitation temperature, we assume $\text{\tex} = 10 \, \rm K$ (see for instance \citealt{Caselli08, Friesen14, Redaelli21}). The initial guesses for the free parameters are selected individually for each core. \textsc{mcweeds} uses the line $FWHM$ as a free parameter, but here we show the velocity dispersion instead ($\sigma_\mathrm{V} = FWHM/(2 \sqrt{2\, \ln 2})$). Figure \ref{MCweeds_Results} shows the best-fit maps of the free parameters, obtained by composing together those of the single cores. We show the best-fit parameter maps for each core individually in Appendix \ref{AllCoresMaps}. \par The centroid velocity shows little gradient within each core. Excluding core 6 (one of the largest in terms of physical size) and core 12, the dispersion of \vlsr around the average is less than $0.20\,$\kms. However, {a clear change in \vlsr of the order of $3-4 \,$\kms is} visible at the clump level, in particular with changing declination. We can identify three separate groups: i) the southernmost cores have typical velocities of $>41 \, $\kms; ii) the cores in the central part of the clump present lower velocities (\vlsr$< 40 \,$\kms), and groups i and ii overlap in the west part of the clump (see e.g. cores 19, 17, and 18); iii) in the northern part of AG14 the cores have typical velocities of $40-40.5\,$\kms. The presence of these three sub-populations of cores, with distinct velocities, suggests that AG14 presents complex kinematics, with several velocity components that spatially overlap (see also the average spectra in Fig.~\ref{AveSpectra_h2dp}). This is further investigated in Sect. \ref{kinematics}.
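The per-pixel fitting strategy described above can be sketched with a toy Metropolis sampler on a synthetic Gaussian line (flat priors, known noise level). This is only an illustration of the approach on made-up data, not the actual \textsc{mcweeds}/\textsc{PyMC} code, which also handles the LTE radiative transfer and the molecular column density:

```python
# Minimal Metropolis sketch of Gaussian-line MCMC fitting. Free parameters
# are reduced here to the peak temperature T0 [K], centroid v_lsr [km/s]
# and dispersion sigma_v [km/s]; the data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
v = np.arange(36.0, 43.0, 0.2)                      # velocity axis [km/s]
truth = dict(T0=1.5, vlsr=39.5, sig=0.30)           # injected line parameters
noise = 0.05                                        # channel rms [K]
Tb = truth["T0"] * np.exp(-0.5 * ((v - truth["vlsr"]) / truth["sig"])**2)
Tb += rng.normal(0.0, noise, v.size)

def log_like(p):
    T0, vl, sg = p
    if T0 <= 0 or sg <= 0:                          # flat priors, positivity only
        return -np.inf
    model = T0 * np.exp(-0.5 * ((v - vl) / sg)**2)
    return -0.5 * np.sum(((Tb - model) / noise)**2)

p = np.array([1.0, 39.0, 0.5])                      # initial guess
lp = log_like(p)
chain = []
for _ in range(20000):
    q = p + rng.normal(0.0, [0.03, 0.01, 0.01])     # random-walk proposal
    lq = log_like(q)
    if np.log(rng.uniform()) < lq - lp:             # Metropolis acceptance rule
        p, lp = q, lq
    chain.append(p)
post = np.array(chain[2000:]).mean(axis=0)          # posterior mean after burn-in
```

With a well-detected line, the posterior means recover the injected centroid and dispersion to well within the channel width.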
\par To investigate the core properties in terms of velocity dispersion and column density, we present the density distribution of these two parameters in Fig. \ref{DensityPlot} (green colorscale), and compare it with the results for AG351 and AG354 obtained in \cite{Redaelli21}\footnote{Due to a typo in the code, the column density values of \cite{Redaelli21} were overestimated by a factor of $\sqrt{\pi}$. This does not affect the trends found in the comparison between the sources. However, in order to compare the results in AG14 with the ones in the other two sources, we corrected the latter before producing the plot in Fig. \ref{DensityPlot}.}. {We highlight that AG351 and AG354 are at about half the distance of AG14. However, the Band 7 data (both continuum and lines) were acquired with a higher angular resolution for AG14 ($\approx 0''.55$, to be compared with $\approx 0''.9$ for AG351 and AG354). Hence, the linear resolution of the data is only $\approx 25$\% worse, allowing a fair comparison.} The average {\ohhdp} column density in AG14 is $\langle N_\mathrm{col}\rangle = 10^{13} \, \rm cm^{-2}$, which is consistent with the value obtained by \cite{Sabatini20} with observations from the Atacama Pathfinder EXperiment (APEX), at a resolution ($17''$) comparable to the FoV of the ALMA data. The average velocity dispersion is $\langle \sigma_\mathrm{V} \rangle = 0.30\,$\kms. AG14 presents on average higher column density values than AG351. Furthermore, {both} clumps reported in \cite{Redaelli21} showed very narrow lines, with a significant fraction of positions below both the isothermal sound speed at $10\, \rm K$ ($c_\mathrm{s} = 0.19\, $\kms, assuming a gas mean molecular weight $\mu = 2.33$) and the thermal broadening of the \olineh line at $ 10\, \rm K$ ($\sigma_\mathrm{V, th} =0.14\, $\kms).
On the contrary, in AG14 only 8\% of the positions detected in \ohhdp present $\sigma_V < c_\mathrm{s} $ (to be compared with 36\% and 23\% in AG351 and AG354, respectively), and less than 1\% are characterised by $ \sigma_\mathrm{V} <\sigma_\mathrm{V, th}$ (17\% and 7\% in AG351 and AG354). {The gas motions in AG14 hence appear less quiescent than in AG351 and AG354.} We highlight that the derived velocity dispersion values might be overestimated, due to the limited spectral resolution of our observations. Lines narrower than $FWHM = 0.6 \, $\kms (corresponding to $\sigma_\mathrm{V} = 0.25\,$\kms), in fact, are resolved by less than three channels. However, the spectral resolution is the same for all three clumps, and therefore {this problem does not affect the comparison between the sources.} {In Appendix \ref{missingFlux} we also discuss the linewidth overestimation due to opacity effects, which is found to be at most 15\%, and only in the densest parts of AG14.} \par From the total velocity dispersion \sigmav, the non-thermal contribution can be computed, under the assumption that the thermal and non-thermal contributions are independent and thus sum in quadrature \citep[see e.g.][]{Myers91}: \begin{equation} \sigma_\mathrm{V,NT} = \sqrt{ \sigma_\mathrm{V}^2 - \sigma_\mathrm{V, th} ^2 } = \sqrt{ \sigma_\mathrm{V}^2 - \frac{k_\mathrm{B}T_\mathrm{gas}}{m_\mathrm{H_2D^+}} } \; ,\label{Mach} \end{equation} where $m_\mathrm{H_2D^+}$ is the \hhdp molecular mass (4 a.m.u.), $T_\mathrm{gas}$ is the gas temperature (assumed to be $10 \, \rm K$), and $k_\mathrm{B}$ is the Boltzmann constant. The one-dimensional turbulent Mach number is then $ \mathcal{M} = \sigma_\mathrm{V,NT} /c_\mathrm{s}$. The bottom-right panel of Fig.~\ref{MCweeds_Results} shows the map of this parameter. In most of the cores, the turbulent motions are transonic or mildly supersonic ($\mathcal{M} = 1-2$). A few cores, however, present subsonic non-thermal linewidths (e.g. cores n.
15, 17, 18, and 19). \subsubsection{Average core properties} An assessment of the dynamical state of each core can be obtained from the one-dimensional turbulent Mach number and the virial mass. These quantities are computed by fitting the averaged spectra within each core via \textsc{mcweeds}. The average spectra in each core, together with the obtained best-fit models, are shown in Fig.~\ref{AveSpectra_h2dp}, and the best-fit values are presented in Table \ref{AveSpectra_par}. {In Fig. \ref{AveSpectra_h2dp} we have highlighted with a blue asterisk those cores with significant overlap (at least 5\% of their extension) with at least one other core. These cases present either multiple velocity components well separated in velocity (e.g. cores 17 and 18), or broad wings and shoulders (core 6). However, we select the initial guesses for the fit of the average spectra from the results of the pixel-by-pixel fit of each core, and \textsc{mcweeds} is hence able to identify and fit the correct velocity component.} \par From the \sigmav values derived by fitting the average spectra we computed the one-dimensional turbulent Mach number in each core, following the procedure described in Sect. \ref{mcweeds}. Furthermore, we have derived the total velocity dispersion of the gas ($\sigma_\mathrm{dyn}$) as: \begin{equation} \sigma_\mathrm{dyn} = \sqrt{\sigma_\mathrm{NT}^2 + c_\mathrm{s}^2 } \; , \end{equation} from which we can derive the virial mass of the cores using the equation of \cite{Bertoldi92}, under the assumption of uniform density \citep{MacLaren88}: \begin{equation} M_\mathrm{vir} = \frac{5 R_\mathrm{core} \sigma_\mathrm{dyn}^2 }{G} = 1200 \times \left ( \frac{R_\mathrm{core}}{\mathrm{pc}} \right ) \left ( \frac{\sigma_\mathrm{dyn}}{\mathrm{km \, s^{-1}}} \right )^2 \rm{M_\odot} \; , \label{Mvir} \end{equation} where for the $R_\mathrm{core}$ values we have used the effective radii listed in Table \ref{AveSpectra_par}.
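The numerical coefficient in Eq. \ref{Mvir} can be verified directly (a minimal Python sketch; the radius and dispersion in the example call are placeholder values, not entries of Table \ref{AveSpectra_par}):

```python
G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
pc = 3.0857e16       # parsec [m]
M_sun = 1.989e30     # solar mass [kg]

def m_vir(R_pc, sigma_kms):
    """Virial mass of a uniform-density sphere, M_vir = 5 R sigma^2 / G, in M_sun."""
    return 5.0 * (R_pc * pc) * (sigma_kms * 1e3) ** 2 / G / M_sun

# coefficient of the scaling form M_vir ~ 1200 (R/pc)(sigma/km s^-1)^2 M_sun
coeff = m_vir(1.0, 1.0)

# e.g. a hypothetical core with R = 0.02 pc and sigma_dyn = 0.3 km/s
print(f"coefficient = {coeff:.0f}, M_vir = {m_vir(0.02, 0.3):.1f} M_sun")
```

The coefficient evaluates to $\approx 1160$, consistent with the rounded value of 1200 in Eq. \ref{Mvir}.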
This definition of the virial mass ignores contributions from magnetic fields and external pressure. \par The values of $\mathcal{M}$ span the range $0.7- 2.0$, with an average of $\langle \sigma_\mathrm{V,NT} /c \rm _s\rangle = 1.4$. The turbulent motions in AG14 are transonic, or mildly supersonic. The $\sigma_\mathrm{V,NT} /c \rm _s$ values are significantly lower than the value reported by \cite{Sabatini20} using APEX observations ($\sigma_\mathrm{V,NT} /c \rm _s = 5.4$), most likely because the unresolved single-dish spectrum overestimates the linewidth, due to the velocity gradient that the ALMA data unveil ($\approx 4 \,$\kms) and the presence of several velocity components. The virial masses derived at $10 \, \rm K$ are found within the range $0.8-6.1 \, \rm M_\odot$, with 50\% of the cores presenting $M\rm _{vir} < 2.4 \, M_\odot $. If the prestellar cores identified in \ohhdp are virialised, they are essentially low-mass. \par In the analysis of the \olineh transition performed so far, we have assumed that the line opacity is low, and that the missing flux due to the filtering of the large scale emission by the interferometer is negligible. We discuss these points in further detail in Appendix~\ref{missingFlux}. \subsubsection{Continuum emission \label{contBand7}} Further information on the core properties comes from the analysis of the continuum emission.
In particular, we can estimate the core total mass (\mcore) using the equation: \begin{equation} M_\mathrm{core} = f \frac{D^2 S_\mathrm{tot}}{B_\nu(T_\mathrm{dust}) \kappa_\nu } \; , \label{Mdust} \end{equation} where $f$ is the gas-to-dust ratio (assumed to be 100, \citealt{Hildebrand83}); $D$ is the source's distance; $B_\nu(T_\mathrm{dust})$ is the Planck function at the frequency $\nu = 371 \, \rm GHz$ and temperature $T_\mathrm{dust}$; $S_\mathrm{tot}$ is the $0.8\, \rm mm$ total flux integrated within the contours of the \ohhdp-identified cores, and $ \kappa_\nu$ is the dust opacity at the frequency of the observations. For the latter, we use the power-law expression: \begin{equation} \kappa_\nu = \kappa_0 \left( \frac{\nu}{\nu_0}\right)^\beta = 1.71 \, \rm cm^2 \, g^{-1} \; , \label{kappa} \end{equation} in which we use $\beta = 1.5$ for the dust emissivity index \citep{Mezger90,Walker90} and $\kappa_0 =10\, \rm cm^2 \, g^{-1}$ for the dust opacity at the reference wavelength $\lambda_0 = 250 \, \mu \rm m$ \citep{Hildebrand83,Beckwith90}. Under the assumption of spherical symmetry and uniform gas distribution, we can evaluate the gas density as: \begin{equation} {n\mathrm{(H_2)} = \frac{ 3 M_\mathrm{core}}{ 4 \pi R_\mathrm{eff}^3 \mu_\mathrm{H_2} m_\mathrm{H} } \; ,} \label{nh2} \end{equation} where $m_\mathrm{H}$ and $\mu_\mathrm{H_2} = 2.8$ are the hydrogen mass and the gas mean molecular weight per hydrogen molecule, respectively \citep{Kauffmann08}. \par \begin{deluxetable*}{c|ccc|ccc|ccc} \tablecaption{Core properties derived from the continuum emission at $0.8\, \rm mm$. The core masses, volume densities, and virial parameters are computed at three distinct temperatures: 10, 15, and 20$\,$K. The uncertainty on core masses and densities is 38\%, whilst it is 43\% on the virial parameter values.
\label{CoreProp2}} \tablehead{\colhead{Core id} & \colhead{$M_\mathrm{core}$ }& \colhead{$n \rm (H_2)$} & \colhead{$\alpha_\mathrm{vir}$} & \colhead{$M_\mathrm{core}$} & \colhead{$n \rm (H_2)$} & \colhead{$\alpha_\mathrm{vir}$ }& \colhead{$M_\mathrm{core}$} & \colhead{$n \rm (H_2)$} & \colhead{$\alpha_\mathrm{vir}$} \\ \colhead{} &\colhead{$\mathrm{M_\odot}$} & \colhead{$10^6 \rm cm^{-3}$}& \colhead{} &\colhead{$\mathrm{M_\odot}$} & \colhead{$10^6 \rm cm^{-3}$} &\colhead{} &\colhead{$\mathrm{M_\odot}$} & \colhead{$10^6 \rm cm^{-3}$} & \colhead{}} \startdata & \multicolumn{3}{c|}{$10 \, \rm K$} & \multicolumn{3}{c|}{$15 \, \rm K$} & \multicolumn{3}{c}{$20 \, \rm K$} \\ \hline 1 & $27 \pm 10$ & $4.9 \pm 1.9$ &$0.18\pm0.08$ & $13 \pm 5$ & $2.3 \pm 0.9$ &$0.40\pm0.17$ & $8 \pm 3$ & $1.4 \pm 0.5$ &$0.7\pm0.3$ \\ 2 & $1.9 \pm 0.7$ & $4.0 \pm 1.5$ &$0.9\pm0.4$ & $0.9 \pm 0.3$ & $1.8 \pm 0.7$ &$2.1\pm0.9$ & $0.5 \pm 0.2$ & $1.2 \pm 0.4$ &$3.6\pm1.5$ \\ 3 & $19 \pm 7$ & $2.5 \pm 0.9$ &$0.33\pm0.14$ & $9 \pm 3$ & $1.1 \pm 0.4$ &$0.8\pm0.3$ & $5 \pm 2$ & $0.7 \pm 0.3$ &$1.2\pm0.5$ \\ 4 & $2.2 \pm 0.8$ & $4.3 \pm 1.6$ &$0.6\pm0.3$ & $1.0 \pm 0.4$ & $2.0 \pm 0.8$ &$1.5\pm0.6$ & $0.6 \pm 0.2$ & $1.2 \pm 0.5$ &$2.5\pm1.1$ \\ 5 & $9 \pm 3$ & $1.5 \pm 0.6$ &$0.7\pm0.3$ & $3.9 \pm 1.5$ & $0.7 \pm 0.3$ &$1.5\pm0.6$ & $2.5 \pm 0.9$ & $0.4 \pm 0.2$ &$2.4\pm1.0$ \\ 6 & $10 \pm 4$ & $2.1 \pm 0.8$ &$0.31\pm0.13$ & $4.4 \pm 1.7$ & $1.0 \pm 0.4$ &$0.7\pm0.3$ & $2.8 \pm 1.0$ & $0.6 \pm 0.2$ &$1.2\pm0.5$ \\ 7 & $9 \pm 4$ & $3.1 \pm 1.2$ &$0.39\pm0.17$ & $4.3 \pm 1.6$ & $1.5 \pm 0.6$ &$0.9\pm0.4$ & $2.7 \pm 1.0$ & $0.9 \pm 0.3$ &$1.5\pm0.6$ \\ 8 & $6 \pm 2$ & $1.9 \pm 0.7$ &$0.8\pm0.4$ & $2.7 \pm 1.0$ & $0.9 \pm 0.3$ &$1.9\pm0.8$ & $1.7 \pm 0.6$ & $0.6 \pm 0.2$ &$3.1\pm1.3$ \\ 10 & $2.8 \pm 1.1$ & $1.6 \pm 0.6$ &$0.9\pm0.4$ & $1.3 \pm 0.5$ & $0.8 \pm 0.3$ &$2.2\pm0.9$ & $0.8 \pm 0.3$ & $0.5 \pm 0.2$ &$3.6\pm1.6$ \\ 17 & $8 \pm 3$ & $1.0 \pm 0.4$ &$0.32\pm0.14$ & $3.7 \pm 1.4$ & $0.5 \pm 0.2$ 
&$0.8\pm0.3$ & $2.3 \pm 0.9$ & $0.3 \pm 0.1$ &$1.3\pm0.6$ \\ 18 & $1.0 \pm 0.4$ & $1.6 \pm 0.6$ &$0.8\pm0.3$ & $0.5 \pm 0.2$ & $0.8 \pm 0.3$ &$1.9\pm0.8$ & $0.30 \pm 0.11$ & $0.5 \pm 0.2$ &$3.3\pm1.4$ \\ 19 & $13 \pm 5$ & $5 \pm 2$ &$0.16\pm0.07$ & $6 \pm 2$ & $2.5 \pm 0.9$ &$0.38\pm0.16$ & $3.9 \pm 1.5$ & $1.6 \pm 0.6$ &$0.6\pm0.3$ \\ 21 & $20 \pm 7$ & $2.4 \pm 0.9$ &$0.18\pm0.08$ & $9 \pm 3$ & $1.1 \pm 0.4$ &$0.42\pm0.18$ & $6 \pm 2$ & $0.7 \pm 0.3$ &$0.7\pm0.3$ \\ 22 & $8 \pm 3$ & $5.0 \pm 1.9$ &$0.28\pm0.12$ & $3.6 \pm 1.4$ & $2.3 \pm 0.9$ &$0.6\pm0.3$ & $2.3 \pm 0.9$ & $1.5 \pm 0.6$ &$1.1\pm0.5$\\ \enddata \end{deluxetable*} Equation \ref{Mdust} and, as a consequence, Eq. \ref{nh2} depend on the dust temperature. Under the hypothesis that the line is excited in LTE conditions (which holds for $n(\mathrm{H_2}) \gtrsim n_\mathrm{cr} \approx 10^5 \, \rm cm^{-3}$, \citealt{Hugo09}) and that the gas and dust are thermally coupled (which requires $n(\mathrm{H_2}) \gtrsim 10^{4-5}\rm \, cm^{-3}$, \citealt{Goldsmith01}), we can assume that $T_\mathrm{dust} = T_\mathrm{gas} = T_\mathrm{ex}(\text{\ohhdp}) = 10\,$K. However, in order to relax these assumptions and to take into consideration that locally the dust and gas temperatures could differ, we have computed the core masses and average densities at three temperatures, equal to 10, 15, and 20$\,$K. The obtained values are summarised in Table \ref{CoreProp2}. From this analysis, we exclude cores that are undetected in continuum, {meaning that they} lack a peak flux above the $3\sigma$ level. At $10\, \rm K$, the point-like mass sensitivity of our observations is $0.6\, \rm M_\odot$ ($3\sigma$ level). Due to the different morphology that the continuum and molecular line data present, as discussed in Sect. \ref{scimes}, eight cores are excluded.\par {Regarding uncertainties, we follow \cite{Sanhueza17} {(see in particular their Sect.
5.6)}, and we assume} a 23\% uncertainty on the gas-to-dust ratio, and a 28\% uncertainty on the dust opacity. Furthermore, we assume a 10\% uncertainty on the source's distance. Hence, the uncertainty on the mass and on the density values is $38$\%. {In Equation \ref{Mdust}, the total flux is computed by integrating the continuum data within each core mask; in case of core overlap, this naturally causes an overestimation of the masses, a problem that becomes more severe with increasing overlap area. We have estimated the significance of this bias using the method presented by \cite{Li20b} to decompose the dust-estimated masses of cores when spectroscopic data are available, under the hypothesis that the molecular transition is a high-density tracer (i.e. it traces densities higher than the threshold for dust-gas coupling) and is optically thin. Under these assumptions, which are both reasonably valid for our \ohhdp data, one can decompose the continuum flux into different cores according to the ratio of the line integrated intensity of each velocity component with respect to the total integrated intensity (computed over all the velocity components). We have performed this analysis for the five cores that overlap by more than 5\% of their area (see also Fig. \ref{AveSpectra_h2dp}). We find that on average their masses are overestimated by 23\%, i.e. less than the uncertainties considered here. We conclude that this possible bias does not significantly affect our results.} \par Focusing on the gas density values, we note that even assuming a higher dust temperature of 20$\,$K, all the cores have average densities higher than $3 \times 10^5 \, \rm cm^{-3}$. This level is comparable to the critical density of the \olineh line, corroborating the assumptions of both LTE conditions for this transition and dust-gas coupling. Regarding the masses, all cores are less massive than $30 \, \rm M_\odot$ at any temperature value considered here.
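Equations \ref{Mdust}, \ref{kappa}, and \ref{nh2} can be chained in a short script (a sketch using standard constants rather than the actual pipeline; the flux and distance in the example call are illustrative placeholders, not measured values):

```python
import math

# constants (SI)
h, k_B, c = 6.62607e-34, 1.380649e-23, 2.99792458e8
m_H, M_sun, pc = 1.6726219e-27, 1.989e30, 3.0857e16

def planck(nu, T):
    """Planck function B_nu(T) [W m^-2 Hz^-1 sr^-1]."""
    return 2 * h * nu**3 / c**2 / (math.exp(h * nu / (k_B * T)) - 1.0)

def kappa(nu, beta=1.5, kappa0=1.0, lam0=250e-6):
    """Dust opacity power law; kappa0 = 1 m^2/kg (= 10 cm^2/g) at 250 um."""
    return kappa0 * (nu / (c / lam0)) ** beta

nu = 371e9                          # observing frequency [Hz]
print(f"kappa = {kappa(nu) * 10:.2f} cm^2 g^-1")  # 1 m^2/kg = 10 cm^2/g

def core_mass(S_Jy, D_pc, T_dust, f=100.0):
    """Gas mass from the dust continuum flux (Eq. Mdust), in M_sun."""
    S = S_Jy * 1e-26                # Jy -> W m^-2 Hz^-1
    D = D_pc * pc
    return f * D**2 * S / (planck(nu, T_dust) * kappa(nu)) / M_sun

def density(M_msun, R_pc, mu_H2=2.8):
    """Mean H2 number density of a uniform sphere (Eq. nh2), in cm^-3."""
    V = 4.0 / 3.0 * math.pi * (R_pc * pc) ** 3
    return M_msun * M_sun / (V * mu_H2 * m_H) / 1e6
```

With these inputs the opacity at $371\, \rm GHz$ evaluates to $\approx 1.7 \, \rm cm^2 \, g^{-1}$, matching Eq. \ref{kappa}; a hypothetical call such as `core_mass(0.05, 3900, 10)` then returns the mass in solar units.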
This is consistent with what is found by \cite{Sanhueza19}, who identified cores (at any evolutionary stage) in continuum Band 6 data. We however highlight that the lack of Total Power observations in continuum can lead to the partial filtering out of the large scale emission, hence underestimating the mass values\footnote{The integrated flux over the ALMA Band 7 FoV computed from the APEX $870\, \rm \mu m$ data (from the ATLASGAL survey) is $\approx 2.5\, \rm Jy$, whilst the total integrated flux in the ALMA data is only $0.8 \, \rm Jy$, suggesting a significant loss of flux in the large scale emission.}.\par In Table \ref{CoreProp2} we also report the virial parameter values ($\alpha_\mathrm{vir} = M_\mathrm{vir}/ M_\mathrm{core}$) at the three temperatures considered here. The uncertainty on $\alpha_\mathrm{vir}$ takes into account the 38\% uncertainty on the core masses and a further 20\% error, which corresponds to the average uncertainty on the virial mass values. At low dust temperature ($T_\mathrm{dust}=10\,\rm K$), all cores present $\alpha_\mathrm{vir}< 1.0$, suggesting that they are both subvirial and gravitationally bound. At $T_\mathrm{dust}=15\,\rm K$, we derived $0.3 \leq \alpha_\mathrm{vir}\leq 2.2$, and all cores are gravitationally bound ($\alpha_\mathrm{vir}< 2$) within uncertainties. The virial parameter increases with temperature, but even at $T_\mathrm{dust}=20\,\rm K$, 50\% of the cores in the sample are subvirial within uncertainties. In particular, the most massive cores present the lowest $\alpha_\mathrm{vir}$ values ($\alpha_\mathrm{vir}\lesssim 0.3 $ for $\text{\mcore}\gtrsim 8\rm \, M_\odot$), in agreement with several observational results (see e.g. \citealt{Kauffmann13}). This suggests that the largest cores in the sample are not in equilibrium, unless other sources of pressure (e.g.
magnetic fields) contribute to the virialisation.\par {\cite{Singh21} performed an extensive study of the biases in the computation of the virial parameter that tend to lead to its underestimation. Those authors in particular discussed the role of \textit{i)} neglecting the gas bulk motions in the calculation of $\sigma_\mathrm{dyn}$ and of \textit{ii)} the subtraction of the background emission. They found that when these aspects are taken into account, many cores that appeared subvirial become instead virialised or supervirial. However, our analysis intrinsically limits this problem. Since $\sigma_\mathrm{dyn}$ is computed from averaged spectra in each core, bulk motions, if present, are already taken into account, as they increase the velocity dispersion of the averaged signal. Furthermore, as noted also by \cite{Singh21}, interferometric observations naturally filter out the large scale emission, hence performing an approximate background subtraction. We conclude that these effects are likely negligible in our results. } \par {We now discuss the properties of the most massive cores identified.} Core 1, with $M_\mathrm{core} \approx 30 \, M_\odot$ at $10 \, \rm K$, is the most massive core, and it is subvirial at any temperature value considered in this work. However, we have reasons to believe that this core is not in a prestellar stage. In fact, it overlaps with a continuum core associated with outflow emission and protostellar activity (see Appendix \ref{ContCores}). The continuum flux peak is found close to the edge of the core, suggesting that \ohhdp is tracing the part of the envelope which is still cold and dense enough to emit the \olineh transition. {A similar discussion can be made for core 3, which has $M_\mathrm{core}= 20\, \rm M_\odot$ (at 10$\,$K), is subvirial, and lies in close proximity to the protostar p5. Core 21 also has a similar mass but, unlike the other two, no protostellar core appears to be found in its surroundings.
However, a significant continuum peak is found just outside its south-east edge. This peak is associated with the continuum-identified structure c8 (see Appendix \ref{ContCores}, where we speculate about the evolutionary stage of this core). } \par In Fig. \ref{MassRadius} we compare the masses and sizes of the cores identified in AG351, AG354, and AG14 ({at} $T_\mathrm{dust} = 10 \, \rm{K}$). Cores in AG14 appear on average larger and more massive than in the other two clumps, as expected given the availability of a larger mass reservoir, since AG14 is a factor of $\approx 30$ more massive than the other two sources. In the figure, we also report several estimations of the threshold for high-mass star formation in the mass-size space. \cite{Krumholz08} derived analytically the surface density limit of $\Sigma \approx 1 \, \rm g \, cm^{-2}$, which roughly translates into $M /\rm M_\odot = 15 \times 10^3 (R/pc) ^{2}$. From observational data of several IRDCs, \cite{Kauffmann10} derived the relation $M /\rm M_\odot = 870 \, (R/pc) ^{1.33}$, whilst more recently \cite{Baldeschi17} reported $M /\rm M_\odot = 1282 \, (R/pc) ^{1.42}$, based on the analysis of clouds in the Herschel Gould Belt survey. The most massive cores in AG14 sit well above all the relations considered here, and therefore have the potential to form high-mass stars in the future. However, since their masses are $M_\mathrm{core} \approx 10-30 \, \rm{M_\odot}$, they still need to accrete significant mass from the surrounding environment, unless the star formation efficiency is locally high. \par {As previously noted, the angular resolutions of the observations of the three clumps are well matched to their distinct distances. However, the maximum recoverable angular scale is approximately the same for all the sources, which means that more large-scale flux is recovered in AG351 and AG354 with respect to AG14.
This might affect the comparison between the core masses, which could be overestimated in AG351 and AG354 with respect to AG14. However, this would not affect our conclusion that the cores in the last clump are more massive than in the first two sources.} \subsection{The clump-to-core scale kinematics\label{kinematics}} The centroid velocity map obtained by fitting the \olineh data, shown in Fig. \ref{MCweeds_Results}, suggests a complex kinematics of the source, as indicated by the presence of several velocity components at many positions. In order to investigate the kinematics of AG14 at the clump scale, we have used ALMA Band 3 observations of the \nnhp (1-0) transition. {As illustrated in Sect. \ref{Intro}, this transition is better suited than the \olineh data to trace the gas at larger scales. Furthermore, the Band 3 data} have a spatial resolution of $11200 \, \rm AU \times 6180 \, \rm AU$ ($0.05 \, \rm pc \times 0.03 \, pc$), and they were acquired including Total Power observations {(which are not available in the Band 7 dataset), which increases the sensitivity to the large scale emission.} These observations are {therefore} ideal to probe the large-scale kinematics of the gas in which the cores identified in \ohhdp are embedded. \par We focused on the isolated hyperfine component $F_1 = 0 - 1$ of the \nnhp (1-0) transition, which is expected to be optically thin or only moderately optically thick even at the high column densities found in high-mass star-forming regions (see for instance \citealt{Sanhueza12, Barnes18, Fontani21}). The integrated intensity of this component is shown in the left panel of Fig. \ref{n2hp_mom0}, where we also overlay the contours of the \ohhdp cores and the positions of the protostellar objects (star symbols). The field of view has been cut to the central $40''\times 40''$, focusing on the map area also covered by the Band 7 data. The \nnhp emission is extended over almost the whole map coverage.
The morphology appears filamentary, with several clumpy peaks of emission. Several of these peaks coincide with the positions of protostellar candidates. \par From a visual inspection of the datacube, it appears that three velocity components are present over a large extent of the source. We have hence proceeded with a three-component Gaussian fit using the \textsc{pyspeckit} package \citep{Ginsburg11}. The technical details of the fitting routine are described in Appendix~\ref{app:fit}. Figure \ref{n2hp_oh2dp_spectra} shows the comparison of the spectra of \ohhdp and \nnhp at the peak of the \ohhdp integrated intensity, for each core identified in Sect. \ref{scimes}. The correspondence between the two species is remarkable. Every velocity component seen in \ohhdp is also associated with a \nnhp component, whilst the opposite is not true. Furthermore, for corresponding components, \olineh tends to present narrower linewidths than the \nnhp line. These findings suggest the following scenario: over the whole clump, at least three gas components (separated in velocity usually by $0.5-1.0\,$\kms) are visible, as traced by \nnhp, an abundant molecule that probes gas densities of $n \gtrsim 10^4 \, \rm cm^{-3}$. Within these large scale structures, cores are formed, with significantly higher densities ($n > 10^5 \, \rm cm^{-3}$, see also Table \ref{CoreProp2}). The gas within the cores is hence cold and dense, and it excites the \olineh emission. Arising from a more quiescent medium, the \olineh spectra are narrower than the \nnhp ones, which instead are associated with larger scale, more turbulent gas, as suggested by the broader linewidths of this species. \par The \nnhp data allow us to study not only the kinematics within the cores (better traced by the \ohhdp data), but also that of the intraclump gas in which the cores are embedded, since we are able to link kinematically each core with one \nnhp component.
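The three-component decomposition described above can be illustrated with a generic least-squares fit on synthetic data (a stand-in for the actual \textsc{pyspeckit} routine; all amplitudes, centroids, and widths below are hypothetical):

```python
import numpy as np
from scipy.optimize import curve_fit

def three_gauss(v, *p):
    """Sum of three Gaussians; p = (A1, v1, s1, A2, v2, s2, A3, v3, s3)."""
    return sum(p[i] * np.exp(-0.5 * ((v - p[i + 1]) / p[i + 2]) ** 2)
               for i in (0, 3, 6))

# synthetic N2H+-like spectrum: three components separated by 0.5-1 km/s
v = np.linspace(38.0, 44.0, 200)                       # velocity axis [km/s]
truth = (2.0, 40.2, 0.35, 1.5, 41.0, 0.40, 1.0, 41.8, 0.30)
rng = np.random.default_rng(0)
spec = three_gauss(v, *truth) + rng.normal(0.0, 0.05, v.size)

# fit, starting from rough initial guesses (as done pixel by pixel)
guess = (1.8, 40.0, 0.3, 1.4, 41.1, 0.3, 0.9, 42.0, 0.3)
popt, _ = curve_fit(three_gauss, v, spec, p0=guess)
centroids = sorted(popt[1::3])
print(centroids)  # close to the input centroids 40.2, 41.0, 41.8 km/s
```

As in the real data, the quality of the decomposition depends on sensible initial guesses; the technical details of the actual fit are those of Appendix~\ref{app:fit}.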
In order to investigate the gas structure in ppv space, we used the Agglomerative Clustering for ORganising Nested Structures (\textsc{acorns}; \citealt{Henshaw19}). Similarly to \textsc{scimes}, \textsc{acorns} is a hierarchical clustering algorithm, which identifies structures and their hierarchical links in position-position-velocity space. However, unlike \textsc{scimes}, \textsc{acorns} is designed to work on already decomposed data. In other words, instead of working on the observed datacubes, it operates on the results of the multi-component Gaussian fit previously described. The technical details of the \textsc{acorns} clustering are given in Appendix~\ref{app:fit}. Using the terminology of \cite{Henshaw19}, \textsc{acorns} finds a forest of 18 trees in total, four of which contain $\approx 70$\% of all data-points and $\approx 80\%$ of the total flux. These trees also present the most complex hierarchical structures, each containing between two and seven leaves. \par In Fig. \ref{3d_screenshot} we show a screenshot of the 3D ppv plot of the four main trees found by \textsc{acorns}, together with the positions in ppv space of the prestellar and protostellar cores. For the prestellar ones, we use the positions of the peak of the \ohhdp integrated intensity within each core, and the centroid velocity at the same position obtained with \textsc{mcweeds} (see Sect. \ref{mcweeds}). The properties of the protostellar cores are derived from \cite{Li20}, who used \dcop, \nndp, or $\rm C^{18}O$ data to infer the systemic velocity values\footnote{A 3D interactive copy of this figure is permanently maintained at: \url{http://theory-starformation-group.cl/sbovino/AG14_n2hp_light.html}.}. A two-dimensional RA-velocity plot of the same data is shown in the right panel of Fig. \ref{n2hp_mom0}. \par All four main trees identified by \textsc{acorns} appear associated with prestellar cores, and at least three of them host protostellar sources.
These findings suggest that all of these structures have been or still are active in star formation. The structure at lower \vlsr values (shown in blue in Fig. \ref{3d_screenshot}, labelled B in Fig. \ref{acorns_tot}) is the most coherent in velocity, as it spans less than $1.0\,$\kms, despite extending over $\approx 0.75\, \rm pc$ on the plane of the sky. It is also the most quiescent in star formation activity, as it is associated with only one prestellar core and no protostar. On the contrary, the tree coloured in green in Fig. \ref{3d_screenshot} (cluster C in Fig. \ref{acorns_tot}), despite having a physical size of only $\approx 0.15 \, \rm pc$, contains four cores identified in \ohhdp and one protostellar core. The presence of tracers of both protostellar activity and of cold and dense gas suggests that star formation is still on-going, and that the protostellar object is very young. The position of the protostellar core p1 is in fact found very close to the \ohhdp core 1, which hints that the protostellar envelope is still cold and dense enough to have a detectable abundance of \ohhdp. \par The remaining two trees (labels A and G in Fig. \ref{acorns_tot}; shown respectively in red and purple in the right panel of Fig. \ref{n2hp_mom0} and in Fig. \ref{3d_screenshot}) show a more complex and overlapping structure in ppv space, and they represent the most dynamically active part of the IRDC clump, containing the large majority of the cores identified in \ohhdp and three protostellar cores. \par The two protostellar cores p4 and p6 are not associated in ppv space with any of the {four} main {trees} identified in \nnhp. After checking the whole cluster hierarchy found by \textsc{acorns}, however, p4 appears embedded in one of the minor {trees} identified (labelled as 'r' in Fig. \ref{acorns_tot}).
The protostellar core p6, instead, has no correspondence in the forest identified by the algorithm. It still emits in the \nnhp (1-0) transition (as can be seen in the integrated intensity map shown in Fig. \ref{n2hp_mom0}), but with a flux too low to fulfil the S/N threshold that we require in the fitting algorithm. {A possible interpretation of this observational evidence is that p6} still has an envelope, but this {has} been significantly cleared out by the protostellar activity, suggesting that this could be a more evolved protostar than the others. \par The tree labelled G (shown in purple in the right panel of Fig. \ref{n2hp_mom0} and in Fig. \ref{3d_screenshot}) is one of the largest {identified trees}, as it alone contains more than $20$\% of the total data-points and $\approx 33$\% of the total flux. It also presents {a significant shift in velocity}, extending from $\text{\vlsr} = 39.5 \,$\kms to $42\,$\kms. We now focus on its part connecting the two protostars p2 and p3 (see Fig.~\ref{n2hp_mom0}), which presents the brightest peak intensities of the \nnhp line. This section looks like a filament, elongated between the two protostellar cores, and containing four cores identified in the \ohhdp data (n. 11, 14, 16, and 19). The velocity increases from p3 towards p2. \par The left panel of Fig.~\ref{tree7} shows the map of the peak intensity of the points belonging to tree G. The material surrounding and linking the two protostars emits the brightest lines (with $T_\mathrm{peak}>2.5 \, \rm K$) detected in the source. In the right panel of Fig. \ref{tree7}, we show the \vlsr map of this tree, with the contours of the outflows detected in the region by \cite{Li20} overlaid. The filamentary structure stretching between the two protostellar cores is found in correspondence with the red lobe of the outflow powered by protostar p2.
However, the two features cannot coincide spatially, since their velocities are opposite: the outflow is red-shifted, with velocities higher than the local standard of rest velocity of protostar p2 ($V_\mathrm{lsr} = 41.8 \,$\kms, according to \citealt{Li20}), whilst the gas traced by the \nnhp line is found at lower (blue-shifted) velocities than that of the protostar. Furthermore, the four \hhdp cores embedded in the gas present low velocity dispersions ($\sigma_\mathrm{V} =0.27-0.34 \,$\kms) and transonic turbulent Mach numbers ($\mathcal{M}= 1.2 -1.6$, see Table \ref{AveSpectra_par}), suggesting that the dense gas embedded in the filamentary structure is still cold and quiescent, and unperturbed by outflows. \par {We can thus speculate on the possible scenarios that would give rise to the observed configuration. A first possibility is simply that we are seeing a bulk motion of the gas. The filament-like structure is moving, and the cores embedded in it participate in this motion. A second possibility is that the gas is flowing towards the protostellar core p2 (i.e. towards increasing velocities), accreting material onto the protostar, which in turn powers the bipolar outflow. In any case, the red outflow lobe of p2 and the filament-like structure are found on two distinct planes, which intersect at the position of the protostar, and they appear to overlap in RA-Dec space only due to projection effects. } \par {Regardless of the real configuration, the observations unveil a gas flow along the filamentary structure, and it is possible to evaluate the mass-flow rate associated with it. If the second scenario we proposed is correct, we can interpret this quantity as a mass accretion rate onto the protostar p2. To perform the calculation, we consider the filament} as limited to those positions where $T_\mathrm{peak}>2.5 \, \rm K$, since we want to focus on the denser portion of the gas traced by the \nnhp (1-0) line.
This region, shown with the white contour in the left panel of Fig. \ref{tree7}, has a width of $0.08-0.12 \rm \, pc$, which are typical values for filaments (see e.g. \citealt{Arzoumanian11, Arzoumanian19, Palmeirim13, Sabatini19}), and a length of $\approx 0.26 \rm \, pc$. It spans $1\,$\kms in velocity, from $40.9\,$\kms close to the protostar p3 to $41.9\,$\kms around p2. The velocity gradient is hence $\nabla V = 3.85 \, \rm km \, s^{-1} \, pc^{-1}$. To estimate the total mass of the filament ($M_\mathrm{fil}$), we employ again Eq. \ref{Mdust}. We use the continuum emission detected in Band 6 at $1.34\, \rm mm$, since its FoV and resolution are closer to those of the Band 3 data than to those of the Band 7 ones. The flux density contained within the mask shown in Fig. \ref{tree7} is $F_\mathrm{fil} = 85 \rm \, mJy$. A significant contribution to this flux level comes from the bright emission of the cores p2 and p3, which contribute $16$ and $15\rm \, mJy$, respectively \citep{Sanhueza19}. In these cores, the emission likely arises from the warmer envelope surrounding the protostellar object, and we hence subtract it from $F_\mathrm{fil}$, since we are interested in the flow of gas not associated with the envelopes of the protostars. The dust opacity at $1.34\, $mm, computed following Eq. \ref{kappa}, is $0.81 \rm \, cm^{2} \, g^{-1}$. Assuming $T_\mathrm{dust} = 10\, \rm K$, we obtain $M_\mathrm{fil} = 57 \, \rm M_\odot$. The mass accretion rate is then $\dot{M}_\mathrm{acc} = M_\mathrm{fil} \times \nabla V = 2.2 \times 10^{-4} \, \rm M_\odot \, yr^{-1}$. {We stress again that this is the rate at which mass flows along the filamentary structure. The scenario in which it actually corresponds to an accretion motion is only one of the possibilities that would explain the observations. In order to definitely assess whether this is the case, more information, in particular on the protostars (i.e. their masses, luminosities, evolutionary stages,...)
would be helpful.}\par In the following, we discuss the sources of uncertainty that affect {the physical quantities just determined.} First of all, there is the uncertainty on the mass, which accounts for $\approx40$\% (see Sect. \ref{contBand7}) and comes from the uncertainties in the dust-to-gas ratio, in the source's distance, and in the dust opacity. Furthermore, the inclination $i$ of the filament with respect to the plane of the sky is unknown, and it affects the value of $\dot{M}_\mathrm{acc}$ by a factor $\tan{i}$ (see e.g. \citealt{Chen19}). If the inclination varies in the range $30-60$\degree, the derived value of the accretion rate changes by up to $70$\%. Given these considerations, we conservatively assume that the derived $\dot{M}_\mathrm{acc}$ value is correct within a factor of two. Within the uncertainties, the value we found is in agreement with measurements in similar sources: for instance, \cite{Lu18} found $\dot{M}_\mathrm{acc} = (1-2)\times 10^{-4} \, \rm M_\odot \, yr^{-1}$ in filaments belonging to four high-mass star-forming regions, whilst \cite{Chen19} derived $\dot{M}_\mathrm{acc} = (0.2 - 1.3) \times 10^{-4} \, \rm M_\odot \, yr^{-1}$ in several filaments identified in the infrared dark cloud G14.225-0.506. \cite{Sanhueza21} derived $\dot{M}_\mathrm{acc} = (0.9- 2.5) \times 10^{-4} \, \rm M_\odot \, yr^{-1}$ in a hot core embedded in the high-mass star-forming region IRAS 18089-1732, even though at smaller spatial scales ($\approx 10000\,$AU). In \cite{Li22}, the authors studied accretion in a filament in the high-mass star-forming region NGC6334S, deriving $\dot{M}_\mathrm{acc} = 0.3\times 10^{-4} \, \rm M_\odot \, yr^{-1}$.
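The mass-flow estimate reduces to a few lines (a sketch using the quantities quoted above; the inclination correction discussed in the error budget is omitted, as in the fiducial estimate):

```python
# mass-flow rate along the filament, Mdot = M_fil * grad(V)
kms_per_pc_to_inv_yr = 1.0227e-6   # 1 km/s = 1.0227e-6 pc/yr

L_fil = 0.26         # filament length [pc]
dV = 41.9 - 40.9     # velocity span along the filament [km/s]
M_fil = 57.0         # filament mass at T_dust = 10 K [M_sun]

grad_V = dV / L_fil                               # [km/s/pc], ~3.85
M_dot = M_fil * grad_V * kms_per_pc_to_inv_yr     # [M_sun/yr]
print(f"grad V = {grad_V:.2f} km/s/pc, Mdot = {M_dot:.1e} M_sun/yr")
```

The result, $\approx 2.2 \times 10^{-4} \, \rm M_\odot \, yr^{-1}$, matches the fiducial value derived above.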
{Furthermore, this value is also in agreement with the results of numerical simulations \citep[see e.g.][]{Wang10, Kuiper16}.}\par The critical line mass of a filament, in the approximation of isothermal cylindrical shape, is \citep{Ostriker64}: \begin{equation} m_\mathrm{c} = \frac{2 c_\mathrm{s}^2}{G} = 17 \left ( \frac{T_\mathrm{K}}{10 \rm \, K} \right ) \rm \, M_\odot \, pc^{-1} \; . \end{equation} The line mass of the filamentary-like structure in AG14 is $m= M_\mathrm{fil}/L_\mathrm{fil} = 220\rm \, M_\odot \, pc^{-1}$, i.e. significantly higher than its critical value, which suggests that this structure is out of hydrostatic equilibrium. {One could naturally wonder whether this is consistent with the possible scenario of accretion flow that has been discussed. In particular, it is worth comparing the timescales for accretion ($t_\mathrm{acc}$) and free-fall collapse ($t_\mathrm{ff}$), at least in terms of orders of magnitude. The former can be approximated by the ratio between the mass reservoir and the accretion rate: $t_\mathrm{acc} = M_\mathrm{fil}/\dot{M}_\mathrm{acc}= 2.6 \times 10^5 \, \rm yr$. To estimate the time necessary for a filament to fully collapse onto its axis, we use Eq. 18 of \cite{Hacar22}, which in turn was derived from \cite{Pon12,Toala12}:} \begin{equation} t_\mathrm{ff} = 1.9 \left (\frac{ L_\mathrm{fil}}{FWHM_\mathrm{fil}}\right )^{0.5} \left ( \frac{n_0}{10^3 \, \mathrm{cm^{-3}}}\right ) ^{-0.5}\, \rm Myr \; , \end{equation} {where $L_\mathrm{fil}/FWHM_\mathrm{fil} = 2.6 $ is the filament aspect ratio (i.e. the ratio between its length $L_\mathrm{fil} = 0.26\,$pc and its width $FWHM_\mathrm{fil}=0.1\, $pc), and $n_0$ is the filament central density at the spine. 
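These two stability estimates can be evaluated numerically. A quick sketch using the fiducial values quoted in the text (aspect ratio $2.6$, $M_\mathrm{fil} = 57 \, \rm M_\odot$ over $0.26\,$pc, and the core densities $n_0 = 10^5-10^6 \, \rm cm^{-3}$ as a proxy for the spine density):

```python
import math

def critical_line_mass(t_k):
    """Critical line mass (Msun/pc) of an isothermal cylinder (Ostriker 1964)."""
    return 17.0 * (t_k / 10.0)

def t_ff_filament(aspect_ratio, n0):
    """Filament free-fall time (Myr), Eq. 18 of Hacar et al. (2022)."""
    return 1.9 * aspect_ratio**0.5 * (n0 / 1e3) ** -0.5

m_line = 57.0 / 0.26                      # observed line mass, Msun/pc
print(m_line, critical_line_mass(10.0))   # ~220 vs 17: strongly supercritical
for n0 in (1e5, 1e6):
    print(f"n0 = {n0:.0e} cm^-3 -> t_ff = {t_ff_filament(2.6, n0):.2f} Myr")
```

At $T_\mathrm{K} = 10\, \rm K$ the observed line mass exceeds the critical value by more than an order of magnitude, and $t_\mathrm{ff}$ falls in the $\sim 1-3 \times 10^5 \, \rm yr$ range quoted below.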
Since we are interested only in a rough estimation of this quantity, we test the range of densities found in the cores\footnote{{These values are also consistent with the average density of the filament, computed assuming that it is a perfect cylinder of length $L_\mathrm{fil}$ and radius $FWHM_\mathrm{fil}$.}}, i.e. $n_0 = 10^5-10^6 \, \rm cm^{-3}$, obtaining $t_\mathrm{ff} = (1-3) \times 10^5 \, \rm yr$. We conclude that the two timescales are comparable, and hence the filament would have time to accrete a significant fraction of its mass onto p2 before collapsing. } \subsection{Comparison between \ohhdp and \nndp \label{h2dp_n2dp}} The cores identified in Sect. \ref{scimes} using \ohhdp data are formed by cold and dense gas that should be in a prestellar stage, even though we have evidence that a minority of cores are found in close proximity to protostellar cores (in ppv space), such as core 1 (close to protostar p1), or cores 3, 6, and 7 (close to p5). We can speculate that in these cases the \ohhdp emission is tracing the part of the protostellar envelopes that is still cold enough that the desorption of CO from the dust grains has not happened yet. This is confirmed by depletion maps derived from $\rm C^{18}O$ (2-1) observations {of AG14} at a resolution of $1''.3$, which show that the depletion factor is high ($f_\mathrm{D}>50$) even around protostellar cores \citep{Sabatini22}. This suggests that ALMA observations at a resolution of $\approx 1''$ are tracing the dense and still cold envelope around protostellar objects, where the feedback of the protostar has not affected the gas yet. However, even the remaining cores could still belong to distinct evolutionary stages. In this regard, {we have mentioned that} \cite{Giannetti19} studied the correlation between the \olineh and the \nndp (3-2) transitions in three clumps embedded in the G351.77-0.51 complex, using single-dish data from APEX.
The main result of those authors was an anticorrelation between the abundances of the two molecular species, {possibly due to evolutionary effects. Their} findings hinted at the possibility of using the abundance ratio between \nndp and \ohhdp as an evolutionary indicator. \par {In Fig. \ref{n2dp_mom0} we show the comparison between the integrated intensities of the \nndp (3-2) and the \olineh transitions. The two tracers appear spatially well correlated, but some differences are visible. For instance, in the north-west part of the source, the \nndp line has a bright peak, which is not seen in \ohhdp. Furthermore, the \nndp transition seems more extended, even though we must highlight a possible observational bias: although we excluded the Total Power observations from the Band 6 dataset, its maximum recoverable scale is still almost twice that of the \ohhdp data, hence making the former more sensitive to large-scale emission.}\par {In order to further explore the comparison at the core level,} we compared the average spectra of \olineh and \nndp (3-2) in each core. Since the Band 6 data have a lower resolution of $1''.4 \times 1''.0$, we first smoothed the Band 7 data to this beam size, to allow for a proper comparison. {Both spectral cubes have been regridded to the same coordinate grid.} The spectral resolution of the two datasets is comparable ($0.20\,$\kms for \ohhdp and $0.17\,$\kms for \nndp). The comparison of the average spectra is shown in Fig. \ref{Spectra_h2dp_n2dp}. The similarities between the line profiles of the two tracers are remarkable, both in terms of intensity and of line shapes. In four cores (12, 13, 18, and 20) the \nndp transition is not detected above the $3\sigma$ level, but we highlight that the $rms$ of the \nndp spectra is on average $\approx 2.5$ times worse than in the corresponding \ohhdp spectra. \par We have fitted the average spectra shown in Fig.\ref{Spectra_h2dp_n2dp} with \textsc{mcweeds}.
We assumed $T_\mathrm{ex} = 10 \, \rm K$ also for \nndp, for which we take into consideration the hyperfine splitting due to the $^{14}\rm N$ nuclei, according to the CDMS database. Figure \ref{h2dp_n2dp_fig} shows the comparison of the best-fit values for the centroid velocity and $FWHM$ for the two tracers in the cores where both are detected above $3\sigma$. The \vlsr values align very well, considering the uncertainties, with the 1:1 relation (shown with the black-dashed curve), highlighting that the two molecular emissions arise from similar spatial regions within the source. Concerning the linewidths, the right panel of Fig. \ref{h2dp_n2dp_fig} shows that for 80\% of the cores the \ohhdp transition presents broader lines than \nndp. This can be {partially due to opacity effects, since the \olineh can be moderately optically thick, leading to a 15\% overestimation of the linewidth (see Appendix \ref{missingFlux})}. The presence of the hyperfine splitting in the \nndp (3-2) transition, instead, reduces this problem. Furthermore, we highlight the difference in the critical density of these two tracers (one order of magnitude higher for the \nndp line than for the \ohhdp transition). \par In Fig. \ref{Ncol_h2dp_n2dp} we report the correlation between the column density values of \ohhdp and \nndp. For the cores undetected in \nndp, we report $3\sigma$ upper limits computed based on the $rms$ in each core and the average linewidths of the detected cores ($<FWHM> = 0.62\,$\kms). We found a discrepancy with the anti-correlation trend found by \cite{Giannetti19}, since the column densities of \nndp and \ohhdp appear well correlated for the cores detected in both tracers. However, we highlight a fundamental difference between our analysis and that of \cite{Giannetti19}. Those authors selected cores in continuum emission, and then analysed the molecular emission.
Our analysis, instead, is intrinsically biased towards cores with bright \ohhdp emission, since we used this species to identify core-like substructures. Furthermore, \cite{Giannetti19} investigated scales at the clump level, and therefore the anti-correlation reflected averaged clump properties. In this work, on the contrary, we resolve the core scales in a highly dynamically active environment, which further complicates a direct comparison between the two works. \par It is worth commenting on the four cores undetected in \nndp emission. They present narrow \ohhdp lines ($\sigma_\mathrm{V}= 0.18-0.27\, $\kms, smaller than the average velocity dispersion of all cores in the clump), hinting at cold and quiescent gas. According to the analysis of the continuum emission (see Sect. \ref{contBand7} and Table \ref{CoreProp2}), three of them are undetected in continuum emission, and the last one (18) is the least massive of the sample ($M_\mathrm{core} \lesssim 1.0 \rm \, M_\odot$). The non-detection of \nndp can then be explained by two scenarios: \textit{i)} these cores are not dense enough to excite the \nndp transition, which has a higher critical density than the \olineh line; \textit{ii)} alternatively, the lack of continuum emission can be explained if the gas and dust temperatures are so low ($<10\, \rm K$) that the dust thermal emission at $0.8 \rm \, mm$ is not bright; in this case, the cores would be in an early evolutionary stage, and perhaps the \nndp, a late-type species, has not yet formed in detectable quantities. If so, the core masses estimated in Sect. \ref{contBand7} could be underestimated. \section{Discussion and Conclusions\label{Conclusion}} In this work, we have investigated the dynamical and kinematic properties of AG14, from the core to the clump scales, analysing ALMA data at spatial resolutions from $\sim$ 2000 to $\sim$ 12000 AU ($0.01-0.06\, \rm pc$).
Using Band 7 \ohhdp data, we have identified 22 cores with dendrogram analysis. A comparison of their distribution with the dust thermal emission in the same band shows that most bright continuum peaks are found outside or right at the edge of the \hhdp cores. Several of these peaks are known to be associated with outflow activity, and therefore are likely protostars. The fact that they lack \ohhdp emission can be explained if they are already quite evolved, and the protostellar feedback has heated the surrounding gas above the CO desorption temperature. If CO is back in the gas phase, its fast reaction with \hhdp would lower the abundance of the latter below the detection level. Alternatively, if they are in an earlier evolutionary stage and they are still dense and cold, \hhdp could be efficiently transformed into its doubly and triply deuterated forms, or it could deplete due to the depletion of HD itself \citep{Sipila13}. \par {The identified cores have typical masses of $M_\mathrm{core} \lesssim 30 \, \rm M_\odot$, and they appear subvirial at $T =10\, \rm K$, even though the virial parameters might be underestimated (see Sect. \ref{contBand7} for more details). Our data seem to exclude the existence of HMPCs in AG14, even though our mass values could be underestimated due either to the filtering-out of large-scale emission by the interferometer, or to an overestimation of the dust temperature. However, the \ohhdp line adds no support for temperatures lower than $10\, \rm K$, unlike in AG351 and AG354, where a significant fraction of pixels presented lines narrower than the thermal broadening at $10\, \rm K$ (see \citealt{Redaelli21}). } \par {The \olineh transition at the clump level spans a range of $\approx 4\,$\kms in \vlsr, and its morphology suggests that multiple velocity components are present in the source.
In order to study the large-scale clump kinematics of the gas in which the identified cores are embedded, we used ALMA Band 3 observations of the \nnhp (1-0), which is} an ideal probe for the large-scale kinematics. From the spectral comparison of the two tracers (the first in the literature, to our knowledge), we can kinematically link each \hhdp core with one velocity component of the \nnhp spectra. The high-density cores are hence formed in the large-scale gas traced by \nnhp, and they inherit its kinematics. The \nnhp lines are on average broader than the corresponding \olineh components, suggesting that the denser gas is more quiescent, as expected from turbulence dissipation. \par To disentangle the complex kinematics shown by the \nnhp data and to identify its hierarchical structure in ppv space, we have first fitted the isolated hyperfine component $F_1 = 1-0$ using a three-component Gaussian fit, and then we have used the results as input for the \textsc{acorns} package \citep{Henshaw19}. {The four main trees found by \textsc{acorns} are associated with cores identified in \ohhdp emission, and at least three host also protostellar cores, suggesting that all of them are active in star formation.} One of the trees (labelled B) presents a small velocity gradient ($\approx 1\,$\kms over $0.75\, \rm pc$) and appears more quiescent than the others, since it contains only one prestellar core. Interestingly, this core (18) is one of those not detected in \nndp, which can be explained if it is at an early evolutionary stage, when \nndp ---a late-type molecule--- has not yet had time to form. This tree can then represent a less evolved component with respect to the others in the clump.\par The trees labelled A and G are associated with more than 70\% of the \ohhdp cores and three protostellar cores, and they are overlapping and intertwined in ppv space.
Such a morphology could be indicative of a competitive accretion scenario, where in the crowded environment of this high-mass clump, multiple low-mass cores ($M_\mathrm{core} < 30 \, \rm M_\odot$) have formed. The intraclump gas in which the cores are embedded could provide the cores with the mass reservoir needed to later form high-mass stars. This is also consistent with the fact that at $10 \, \rm K$ all the cores are subvirial. \par {The brightest part of tree G is structured as a} filamentary structure connecting the two protostellar cores p3 and p2. The \nnhp centroid velocity increases from $\sim 41\,$\kms close to p3 to $\sim 42\,$\kms close to p2. On the plane of the sky, this structure overlaps with the red lobe of the CO outflow identified by \cite{Li20}. However, the outflow velocities are opposite to those of the \nnhp filament, since they are redshifted with respect to the systemic velocity of p2 ($42.8\, $\kms). {We have speculated on the possibilities that would explain the observed configuration, and we show one of them} in Fig. \ref{Tommaso}. The filamentary structure seen in \nnhp emission {might be accreting} mass onto the protostellar core p2, which then powers a bipolar outflow in a direction likely perpendicular to that of the accretion flow; the red lobe of the bipolar outflow, when seen projected on the plane of the sky, appears overlapped with the \nnhp feature, but the two are actually separated in 3D space. {Assuming this scenario, we have computed the mass accretion rate along the filamentary structure, obtaining $\dot{M}_\mathrm{acc} = 2.2 \times 10^{-4} \, \rm M_\odot \, yr^{-1}$, expected to be accurate within a factor of two, in good agreement with other observations in similar sources. }From the outflow parameters, \cite{Li20} estimated a mass accretion rate on the protostar p2 of $3-4\times 10^{-6} \rm \, M_\odot \, yr^{-1}$, i.e.
approximately two orders of magnitude lower than our estimate, but this value depends on several assumptions (for instance on the wind velocity and on the ratio between the mass accretion rate and the mass ejection rate). Furthermore, the value of \cite{Li20} represents the accretion rate onto the protostar, whilst we compute the rate onto the core. \par In this work, we have shown how ALMA observations of several molecular tracers are a powerful diagnostic tool to investigate the fragmentation and kinematic properties of the high-mass clump AG14. In particular, \ohhdp appears to be an ideal tracer of the cold and dense gas, and as such it can be used to identify cores likely in an early evolutionary stage. On the other hand, Band 3 data of \nnhp can be used to trace the gas kinematics at clump scales and at the clump-to-core transition, providing important information on the dynamics and accretion properties of the gas from which the cores formed. \restartappendixnumbering \begin{acknowledgements} {The authors thank the anonymous referee for the comments which helped to improve the quality of the manuscript.} The authors acknowledge Tommaso Grassi for the help in producing Fig. \ref{Tommaso}. ER acknowledges the support from the Minerva Fast Track Program of the Max Planck Society. ER and PC acknowledge the support of the Max Planck Society. SB is financially supported by ANID Fondecyt Regular (project \#1220033), and the ANID BASAL projects ACE210002 and FB210003. PS was partially supported by a Grant-in-Aid for Scientific Research (KAKENHI Number 18H01259 and 22H01271).
This research made use of \textsc{astrodendro}, a Python package to compute dendrograms of Astronomical data (\url{http://www.dendrograms.org/}). \end{acknowledgements} \software{\textsc{scimes} \citep{Colombo15}, \textsc{mcweeds} \citep{Giannetti17}, PyMC \citep{Patil10}, \textsc{pyspeckit} \citep{Ginsburg11}, \textsc{acorns} \citep{Henshaw19}, \textsc{astrodendro} (\url{http://www.dendrograms.org/})} \appendix \section{Cores identification in continuum emission \label{ContCores}} In Sect. \ref{scimes} it has been discussed how the morphologies of the continuum emission and of the \ohhdp integrated intensity do not correlate. To strengthen this point, we have performed a core identification also in the dust thermal emission, similarly to what was done in Appendix B of \cite{Redaelli21}. We highlight that \cite{Sanhueza19} already performed a core-finding analysis in the clump, using the continuum data in Band 6, which have a sensitivity $\approx 4$ times higher than the Band 7 data. We however prefer to use the continuum at 0.8$\,$mm to perform the comparison with the \ohhdp analysis, since these two datasets were observed with the same ALMA configuration. \par Since \textsc{scimes} works in ppv space, we used the \textsc{python} package \textsc{astrodendro}, on which \textsc{scimes} is based, to analyse the 2D continuum map. Concerning the input parameters necessary to perform the clustering, we set $\Delta_\mathrm{min} = 1 \times rms$ ($rms= 0.5 \, \rm mJy\, pix^{-1}$ for the non primary-beam corrected map); the minimum value to identify structures is $min_\mathrm{val} = 2.5 \times rms$; the identified cores must be larger than three times the beam size, in order to be consistent with the identification of the \ohhdp cores. \par With these inputs, \textsc{astrodendro} identifies 11 cores, shown in Fig. \ref{fig:contcores}. Four of them (c1, c3, c6, and c11) are found in correspondence with the protostellar candidates.
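The thresholding logic behind this identification can be illustrated with a simplified stand-in. This is not \textsc{astrodendro} itself: the synthetic map, random seed, and the 9-pixel area cut (a toy stand-in for "three beam areas") are illustrative assumptions; only the $rms$ and the $2.5 \times rms$ threshold come from the text:

```python
import numpy as np
from scipy import ndimage

rms = 0.5                      # mJy/pix, as for the non-PB-corrected map
min_val = 2.5 * rms            # minimum value to identify structures
min_npix = 9                   # toy stand-in for the minimum-size criterion

rng = np.random.default_rng(0)
mapdata = rng.normal(0.0, rms, (64, 64))
# inject two fake "cores": one extended, one too small to survive the cut
mapdata[20:26, 20:26] += 5 * rms
mapdata[50, 50] += 10 * rms

# label connected regions above threshold, then apply the size cut
labels, nsrc = ndimage.label(mapdata > min_val)
sizes = np.bincount(labels.ravel())[1:]
kept = np.where(sizes >= min_npix)[0] + 1   # labels of surviving structures
print(f"{kept.size} structure(s) survive the size cut")
```

The real dendrogram computation additionally builds the full hierarchy using the $\Delta_\mathrm{min}$ contrast criterion, which this sketch omits.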
Five cores (22\%) seen in \ohhdp do not correspond to continuum-identified structures, which suggests that they are in a very early stage, and their low temperatures translate into low continuum fluxes at $0.8 \, \rm mm$. In turn, continuum core c8 does not overlap with any structure seen in \olineh. This structure corresponds to core 5 in the analysis of \cite{Sanhueza19} and \cite{Li20}. The latter paper does not consider it as associated with clear outflow emission, even though it shows evidence of CO emission at high velocities. Furthermore, \cite{Sanhueza19} lists it among the cores with emission from high-energy transitions. It is hence possible that core c8 hosts a young protostar, that either does not power an outflow, or whose outflow cannot be detected, for instance due to projection effects. \par Core c3 is the largest one, but it contains three separated flux peaks. It is likely that the algorithm is not able to separate them due to the limited sensitivity of our data. In fact, in \cite{Sanhueza19} two separated cores were identified in this area in the 1.34$\,$mm continuum emission. In this scenario, core c3 is hence divided into two parts: one is in a protostellar stage and does not show significant \ohhdp emission; the other instead is in an earlier evolutionary stage, and it overlaps with several cores seen in \ohhdp. Core c6 is peculiar, in the sense that it overlaps significantly ($>50$\%) with \ohhdp core 1, and it also contains a protostar. As already suggested in Sect. \ref{kinematics}, these features suggest that this protostar is young, still embedded in a thick envelope that is still cold enough to have a detectable abundance of \ohhdp. \par In conclusion, more than half of the \ohhdp identified cores overlap with continuum cores by less than 30\% of their physical extension. This is likely due to different evolutionary stages traced by the two datasets.
Whilst the \ohhdp emission traces cold gas still relatively undisturbed by protostellar activity, the continuum data cannot distinguish between cores in the prestellar and protostellar phases. {We note that \cite{Sanhueza19} already identified cores in continuum. We prefer to re-do this analysis, since the Band 6 data used in that paper have a worse resolution (by a factor of $\approx 2.0$) and a larger maximum-recoverable-scale (by $\approx 50$\%) than our Band 7 data, and so as to analyse a dataset acquired with the same interferometer configuration as the \olineh data. We have however checked that the two methods identifying cores in continuum produce results in reasonable agreement. By comparing the cores found in this appendix and in \cite{Sanhueza19} (figure not shown here), 10 out of the 11 cores we identify correspond to structures seen in Band 6. In the field-of-view where the two datasets overlap, \cite{Sanhueza19} found more cores, also due to the better sensitivity of their dataset. However, several of the \hhdp-identified cores (5 out of 22) still have no clear correspondence with continuum-identified structures, and our conclusion that continuum and line morphologies are different still holds. } \section{Results of the spectral fit of the \olineh transition in each core\label{AllCoresMaps}} Figs. \ref{AllCores_1} to \ref{AllCores_4} present the maps of the best-fit parameters obtained with \textsc{mcweeds} in each core. Concerning the linewidths, here we show the $FWHM$ maps, since the $FWHM$ is the actual free parameter used in the fitting procedure.
\clearpage \section{Opacity and missing flux of the \olineh line \label{missingFlux}} In order to estimate the opacity of the \olineh line, we make use of the equation: \begin{equation} \tau_\nu = - \ln\left[1 - \frac{T_\mathrm{b}}{J_\nu(T_\mathrm{ex})- J_\nu(T_\mathrm{bg})}\right] \; , \label{tau} \end{equation} where $J_\nu(T)$ is the equivalent Rayleigh-Jeans temperature at the frequency $\nu$ and temperature $T$, and $T_\mathrm{bg}=2.73 \rm \, K$ is the background temperature. In the ALMA data, the brightness temperature peaks at $2\, \rm K$. Using $T_\mathrm{ex} = 10 \, \rm K$, Eq. \ref{tau} yields $\tau_\nu \approx 0.8$. Even in the brightest part of the emission, hence, the line is only moderately optically thick. {With this information, we can also estimate by how much the linewidths would be overestimated towards the positions of the source with the highest optical depth. To do so, we make use of Eq. 52 of \cite{Burton92}: \begin{equation} \sigma_{\mathrm{obs}} = \frac{\sigma_0} {\sqrt{\ln(2)} } \left \{ \ln \left [ \frac{\tau_\nu} { \ln \left ( \frac{2}{1+ e^{-\tau_\nu}} \right) } \right ] \right \}^{\frac{1}{2}} \; , \end{equation} which allows one to infer the observed velocity dispersion $\sigma_\mathrm{obs}$ from the intrinsic one $\sigma_0$ given the line opacity $\tau_\nu$. Using the maximum value for the opacity just found, we estimate that in the most optically thick parts of the source the \olineh linewidth is overestimated by 15\%.} \par The ALMA Band 7 data lack Total Power observations, which are crucial to recover the large-scale emission from the source. In order to quantify if and how the observations are affected by spatial filtering, we compare the \olineh spectra observed with the APEX single-dish telescope towards AG14 \citep{Sabatini20} with the ALMA data in Fig. \ref{ALMA_APEX}.
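The opacity and line-broadening numbers quoted above can be reproduced directly from the two equations. A worked sketch: the o-H$_2$D$^+$ $1_{10}-1_{11}$ rest frequency of $\approx 372.42$ GHz is taken from the CDMS and is an assumption of this snippet, not stated in the text:

```python
import math

H_OVER_K = 4.799e-11        # h/k in K s, so that h*nu/k = H_OVER_K * nu

def j_nu(t, nu):
    """Equivalent Rayleigh-Jeans temperature J_nu(T)."""
    x = H_OVER_K * nu / t
    return H_OVER_K * nu / (math.exp(x) - 1.0)

nu = 372.42e9               # Hz, assumed o-H2D+ 1_10-1_11 rest frequency
t_ex, t_bg, t_b = 10.0, 2.73, 2.0
tau = -math.log(1.0 - t_b / (j_nu(t_ex, nu) - j_nu(t_bg, nu)))

# Burton et al. (1992), Eq. 52: sigma_obs/sigma_0 as a function of tau
ratio = math.sqrt(math.log(tau / math.log(2.0 / (1.0 + math.exp(-tau))))
                  / math.log(2.0))
print(f"tau = {tau:.2f}, sigma_obs/sigma_0 = {ratio:.2f}")  # ~0.82, ~1.15
```

This recovers $\tau_\nu \approx 0.8$ at the brightness peak and the $\approx 15\%$ linewidth overestimation.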
The APEX data have been converted into flux units using the gain\footnote{Listed at \url{http://www.apex-telescope.org/telescope/efficiency/}} $G_\mathrm{APEX} = 40 \, \rm Jy \, K^{-1}$. The ALMA data instead have been integrated over an area equal to the beam size of the single-dish ($\theta_\mathrm{APEX} = 16''.8$), and smoothed to the same spectral resolution. \par Figure~\ref{ALMA_APEX} shows that the interferometer is recovering only $\sim$ one fifth of the emission. The missing flux arises from the large scales, since the emission is more extended than the maximum recoverable scale of the telescope in this configuration ($\theta_\mathrm{MRS}\approx 20''$), as was already noted for AG351 and AG354 in \cite{Redaelli21}. However, the ALMA data do not usually present anomalous line shapes. Furthermore, the cores identified by \textsc{scimes} are significantly smaller than $\theta_\mathrm{MRS}\approx 20''$. We hence conclude that the missing flux problem does not significantly affect the analysis of the present work. \section{\nnhp fitting and full results of \textsc{acorns} clustering \label{app:fit}} The multi-component Gaussian fit of the \nnhp isolated hyperfine transition, performed with the \textsc{pyspeckit} package, has nine free parameters in total: \vlsr, \sigmav, and $T_\mathrm{peak}$ for each of the three Gaussian components. To improve the code convergence, we first masked pixels with $\rm S/N < 10$ in peak intensity. This choice leaves 5387 positions (55\% of the total) unmasked, which however still cover the whole \ohhdp FoV. We limited the space of the parameters as follows: $V_\mathrm{lsr} \in [36; 43]$\kms; $T_\mathrm{peak} > 0 \,$K; $\sigma_\mathrm{V} \in [0; 2.5]$\kms. Due to the large gradients of the free parameters over the map, the fitting routine does not converge everywhere.
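The decomposition step can be sketched with a simplified stand-in for the \textsc{pyspeckit} routine (this is not the actual pipeline: the synthetic spectrum, noise level, and initial guesses are illustrative; only the parameter bounds follow the text):

```python
import numpy as np
from scipy.optimize import curve_fit

def three_gauss(v, *p):
    """Sum of three Gaussians; p = (T1, V1, s1, T2, V2, s2, T3, V3, s3)."""
    model = np.zeros_like(v)
    for t, v0, s in zip(p[0::3], p[1::3], p[2::3]):
        model += t * np.exp(-0.5 * ((v - v0) / s) ** 2)
    return model

v = np.arange(36.0, 43.0, 0.1)                     # km/s, velocity axis
truth = (1.5, 38.0, 0.5, 2.0, 40.5, 0.8, 1.0, 41.8, 0.4)
rng = np.random.default_rng(1)
spec = three_gauss(v, *truth) + rng.normal(0, 0.05, v.size)

# bounds per component: Tpeak > 0 K, Vlsr in [36; 43] km/s, sigma <= 2.5 km/s
guess = (1.0, 38.0, 0.5, 1.0, 40.5, 0.5, 1.0, 42.0, 0.5)
lo = (0.0, 36.0, 0.05) * 3
hi = (np.inf, 43.0, 2.5) * 3
popt, _ = curve_fit(three_gauss, v, spec, p0=guess, bounds=(lo, hi))
```

As in the real procedure, convergence depends on the initial guesses, which is why poorly-fitted spectra are refitted with adjusted guesses.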
After a first pass, we hence selected the spectra with residuals $> 2 \sigma $ ($1 \sigma = 120 \, \rm mK$; the $rms$ of the residuals is computed in the velocity range $[34.6; 43.5]$\kms), and we performed a second fit, adjusting the initial guesses of the free parameters. We checked the residuals after this second fitting routine, and their $rms$ is found to be $< 3 \sigma$. We further masked, on a pixel-per-pixel basis, velocity components for which the fit did not converge, or with large uncertainties (e.g. $\sigma_{T_\mathrm{peak}}>1 \, $K). \par The best-fit results of the Gaussian fitting routine are fed to the clustering algorithm \textsc{acorns}. Unlike other similar codes, \textsc{acorns} uses the spectral linewidth as a further parameter to build the cluster hierarchy, and it is overall able to distinguish structures overlapping in ppv space better than other algorithms, which proves helpful for the crowded kinematics of AG14 (see also Appendix B in \citealt{Henshaw19} for further details on the comparison between different algorithms). We select the following clustering criteria in \textsc{acorns}: \begin{enumerate} \item Clusters must have a minimum size of 1.5 ALMA beams (to ensure that all the structures found are marginally resolved); \item They must be separated in velocity by less than the spectral resolution of the data-cube; \item The maximum separation in velocity dispersion ($FWHM$) is less than the gas thermal velocity at 10$\,\rm K$ ($0.19\,$\kms); \item The minimum height of an independent cluster is $3\sigma$, and the stopping criterion is set to $5\sigma$ ($1\sigma=0.12\, \rm K$). \end{enumerate} After a first run, the code performs a second cycle of clustering, in which we relax the criteria by 30\%, which further helps build the hierarchical structure according to the prescriptions of \textsc{acorns}. At the end, the algorithm is able to cluster 87\% of the data-points, and it finds 18 trees, shown in Fig. \ref{acorns_tot} as a dendrogram, {and in Fig.
\ref{acorns_full} in ppv space}. The large majority (more than 70\%) of clustered data-points belongs to only four structures, which also contain $\approx 80$\% of the total flux (A, B, C, and G as labelled in Fig. \ref{acorns_tot}). The remaining clusters contain less than 3\% of the data-points each. The analysis of Sect. \ref{kinematics} hence focuses on these four trees. \par {We now discuss why we prefer to use distinct software packages to analyse the \nnhp and \ohhdp data. \textsc{scimes} is optimised to work with low-to-medium S/N data, such as the \ohhdp ones. Furthermore, using it ensures a proper comparison with the results of \cite{Redaelli21}, which in turn allows us to obtain a larger sample, for instance for the core masses. \textsc{acorns}, on the other hand, represents a better choice to analyse the \nnhp data, first of all because it has fewer problems disentangling crowded spectra such as the ones in AG14. \textsc{scimes} in fact works on the observed ppv datacubes, and it performs better when the multiple velocity components are well separated in velocity space, as in the \ohhdp data, where these components are separated by $\approx~1\,$\kms and they are narrow ($\sigma_\mathrm{V}=0.3\,$\kms). On the contrary, the \nnhp spectra are more crowded, with more velocity components, and some of these components have significantly broader lines ($\sigma_\mathrm{V}=0.6-1.0\,$\kms). Due to these features, \textsc{scimes} is not able to disentangle them, as demonstrated by a test run of the software that we performed on the \nnhp datacube. \textsc{acorns} instead is able to perform this task because it works on decomposed data. Another important difference is that \textsc{acorns} also performs the clustering in velocity dispersion space.
The \olineh linewidths span a much smaller range ($\approx 0.2-0.4\,$\kms) with respect to the \nnhp ones ($\approx 0.3-1.5\,$\kms), and therefore this extra constraint helps even more in disentangling the \nnhp complex kinematics.}
Title: Modelling TES non-linearity induced by a rotating HWP in a CMB polarimeter
Abstract: Most upcoming CMB experiments are planning to deploy between a few thousand and a few hundred thousand TES bolometers in order to drastically increase sensitivity and unveil the B-mode signal. Differential systematic effects and $1/f$ noise are two of the challenges that need to be overcome in order to achieve this result. In recent years, rotating Half-Wave Plates have become increasingly more popular as a solution to mitigate these effects, especially for those experiments that are targeting the largest angular scales. However, other effects may appear when a rotating HWP is being employed. In this paper we focus on HWP synchronous signals, which are due to intensity to polarization leakage induced by a rotating cryogenic multi-layer sapphire HWP employed as the first optical element of the telescope system. We use LiteBIRD LFT as a case study and we analyze the interaction between these spurious signals and TES bolometers, to determine whether this signal can contaminate the bolometer response. We present the results of simulations for a few different TES model assumptions and different spurious signal amplitudes. Modelling these effects is fundamental to find what leakage level can be tolerated and minimize non-linearity effects of the bolometer response.
https://export.arxiv.org/pdf/2208.02952
\newcommand{\hdblarrow}{H\makebox[0.9ex][l]{$\downdownarrows$}-} \title{Modelling TES non-linearity induced by a rotating HWP in a CMB polarimeter} \author{T.~Ghigna\textsuperscript{{\normalfont \textit{a}, $ \dagger$}} \and T.~Matsumura\textsuperscript{{\normalfont \textit{a}}} \and Y.~Sakurai\textsuperscript{{\normalfont \textit{a}}} \and R.~Takaku\textsuperscript{{\normalfont \textit{b}}} \and K.~Komatsu\textsuperscript{{\normalfont \textit{c}}} \and S.~Sugiyama\textsuperscript{{\normalfont \textit{d}}} \and Y.~Hoshino\textsuperscript{{\normalfont \textit{d}}} \and N.~Katayama\textsuperscript{{\normalfont \textit{a}}}} \institute{ \textsuperscript{\textit a}Kavli IPMU (WPI), UTIAS, The University of Tokyo, Kashiwa, Chiba 277-8583, Japan, \textsuperscript{\textit b}Department of Physics, University of Tokyo, 7-3-1, Hongo, Bunkyo-ku, Tokyo 113-0033, Japan, \textsuperscript{\textit c}Okayama University, 3-1-1, Tsushimanaka, Kita-ku, Okayama City, Okayama, 700-8530, Japan, \textsuperscript{\textit d}Saitama University, 255 Shimookubo, Sakura-ku, Saitama, 338-8570, Japan. \textsuperscript{$ \dagger$}Corresponding author: tommaso.ghigna@impu.jp. } \vspace{-6mm} \section{Introduction} \vspace{-3mm} Probing the primordial CMB B-mode signal is one of the main objectives of modern cosmology. While inflation is still an open question, current theories predict a very rapid inflationary expansion of the early Universe. The best available tool to probe inflation is a measurement of the polarized CMB signal \cite{zaldarriaga_seljak97, kamionkowski97}. On large angular scales ($\ell \lesssim 200$) the polarized B-mode signal is expected to be dominated by a primordial inflationary component (if the tensor-to-scalar ratio $r\gtrsim0.01$, or even $r\gtrsim0.001$ for $\ell < 10$)\footnote{Galactic foregrounds and gravitational lensing are two major challenges for a B-mode measurement.
While a detailed discussion is beyond the scope of this paper, we briefly mention them here because their presence can make non-linear effects even more relevant.}. Two of the main challenges for CMB experiments are atmospheric fluctuations (for ground-based experiments) and receiver stability on long time-scales ($1/f$ noise and gain fluctuations). Therefore, several present and future CMB experiments (CLASS \cite{Dahal2021}, Simons Array \cite{Suzuki2016}, Simons Observatory \cite{simonsObs2019}, LiteBIRD \cite{Hazumi2020}) are employing (or planning to employ) polarization modulators, such as continuously-rotating Half-Wave Plates (HWP). A polarization modulator up-converts the polarized sky signal to a frequency range where the noise spectrum is expected to be purely white \cite{Kusaka2014}. In a nutshell, if the HWP rotates at a frequency $f_{hwp}$, the polarized signal is modulated at $f_{mod}=4f_{hwp}$; by properly choosing $f_{hwp}$ it is therefore possible to shift the signal of interest above the $1/f$ knee-frequency. \vspace{0mm} LiteBIRD is one of the experiments that will adopt this strategy. The LiteBIRD space mission consists of 3 telescopes, each equipped with a HWP as the first optical element and a cryogenic focal plane ($\sim 100$ mK) populated with polarization-sensitive Transition-Edge Sensor (TES) bolometers. At Kavli IPMU we are in charge of developing the HWP and the rotation mechanism for the Low-Frequency Telescope (LFT). The current baseline consists of a multi-layer sapphire HWP with anti-reflection layers laser-machined directly on the HWP surfaces \cite{Komatsu2019, Takaku2021}. Even though HWPs are effective in rejecting $1/f$ noise, as well as removing some systematic effects arising from intrinsic differences among detectors (e.g. bandpass and beam mismatch, gain mismatch, etc.), HWP non-idealities can introduce other systematic effects.
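As a toy illustration of this modulation scheme (a hedged sketch, not the LiteBIRD pipeline: the signal convention $d = [I + Q\cos(4\omega t) + U\sin(4\omega t)]/2$ and all numerical values are assumptions for illustration only), an ideal HWP-modulated timestream can be demodulated with a simple lock-in:

```python
import numpy as np

# Toy lock-in demodulation for an ideal continuously-rotating HWP.
# All values are illustrative assumptions, not LiteBIRD parameters.
fs, f_hwp = 200.0, 1.0                 # sampling rate and HWP frequency [Hz]
t = np.arange(0, 20, 1/fs)             # 20 s of data
I, Q, U = 10.0, 0.3, -0.2              # constant toy Stokes parameters
w = 2*np.pi*f_hwp

# Ideal modulated timestream: polarization appears at f_mod = 4 f_hwp
d = 0.5*(I + Q*np.cos(4*w*t) + U*np.sin(4*w*t))

# Lock-in at 4 f_hwp: the factor 4 undoes the 1/2 optical gain and the
# mean of cos^2 (= 1/2) over an integer number of modulation periods.
Q_hat = 4*np.mean(d*np.cos(4*w*t))
U_hat = 4*np.mean(d*np.sin(4*w*t))
```

In this ideal case the demodulated $Q$ and $U$ are recovered exactly, while any slow drift in $I$ stays at low frequency, well away from $4f_{hwp}$. Real HWPs and real detectors deviate from this ideal picture.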
For example, in \cite{Takakura2017, Didier2017} (POLARBEAR and EBEX collaborations) the authors identified large\footnote{Amplitudes of the order of hundreds of mK. For comparison the CMB solar dipole is $\sim\pm 3.362$ mK.} signals due to instrumental intensity-to-polarization leakage. In both cases the signal is attributed primarily to optical elements before the HWP, and it is modulated by the HWP at $4f_{hwp}$.\footnote{Any optical element located before the HWP generates instrumental polarization that is going to be modulated by the HWP at the same frequency as the sky signal. Having the HWP as the first optical element allows for a clear separation of instrumental and celestial polarization.} In both analyses the authors argue that this signal affects the TES response by driving it into a non-linear regime. \vspace{0mm} In the case of LiteBIRD and Simons Observatory the HWP is going to be the first optical element (if we neglect windows and thermal filters); therefore we should not expect the same $4f_{hwp}$ leakage level. However, HWP non-idealities or non-orthogonal incidence can create similar effects. In this paper we address the problem of TES non-linearity and highlight how it can couple to HWP synchronous signals (HWPSS). We use LiteBIRD LFT as a case study. First, we define TES non-linearity and HWP intensity-to-polarization leakage. We then present a code developed in \textit{python} to realistically simulate the detector response to the sky signal. Finally, we present preliminary results showing the impact of spurious signals (such as HWPSS) on the detector response. We compare these results to non-linearity estimates from POLARBEAR data \cite{Takakura2017}. We believe that the tool presented here can be useful to forecast future experiment capabilities, define instrument requirements, inform the instrument design and develop methodologies to analyze the data and mitigate systematic effects.
This is vital for the level of accuracy required by future experiments, in particular for a space mission like LiteBIRD, which is aiming to achieve an accuracy in terms of tensor-to-scalar ratio of $\sigma_{r}\lesssim0.001$. A detailed study of LiteBIRD requirements can be found in \cite{litebird_ptep}. \vspace{-6mm} \section{TES and HWP model}\label{sec:model} \vspace{-3mm} \paragraph{Transition-Edge Sensor:} Following \cite{Irwin2005}, we can describe the behaviour of a DC voltage-biased bolometer by solving the system of coupled differential equations: \begin{eqnarray} L\frac{dI}{dt} & = & V_{b} - IR_{tes} - IR_{sh} \label{eq:diff_equations_I} \\ C\frac{dT}{dt} & = & - P_{b} + I^2R_{tes} + P_{opt}. \label{eq:diff_equations_T} \end{eqnarray} It is commonly understood that, in the small-signal ($\delta P_{opt} \ll \bar{P}_{opt}$) and high loop-gain ($\mathcal{L}\gg 1$) limits, the detector current responsivity reduces to $S_{I}\sim -1/V_{b}$ (for a "slow" signal: $\omega \rightarrow 0$). In \cite{Takakura2017, TakakuraPHD} the authors present a non-linearity model where the detector responsivity depends weakly on the input power variations $\delta P_{opt}$. This takes the following form for the detector current: \begin{equation} \delta I(t) = [S_I + S^{\prime}_I \delta P_{opt}(t)]\delta P_{opt}(t - \tau^{\prime}\delta P_{opt}(t)), \label{eq:current_responsivity_nonlinear} \end{equation} where $S^{\prime}_I$ and $\tau^{\prime}$ are non-linear terms. In the following, after briefly describing the HWP, we will discuss how the non-linear term couples to the HWP signal, creating spurious components. \vspace{-3mm} \paragraph{Half-Wave Plate:} The effect of a rotating HWP on the incoming sky signal is defined in terms of its Mueller matrix $\Gamma_{hwp}$ \cite{Komatsu2019}. For an ideal HWP, $\Gamma_{hwp}$ is a diagonal matrix of the form $diag(1,1,-1,-1)$.
However, for a real HWP the off-diagonal elements of the matrix $\Gamma_{hwp}$ can be non-zero, and cause mixing between the $I$, $Q$ and $U$ components of the input signal. In particular, in the context of this paper, we are interested in the $\gamma_{QI}$ and $\gamma_{UI}$ components that can cause intensity-to-polarization leakage (IP). From \cite{Komatsu2019} we find that non-zero values can result in a $2 f_{hwp}$ signal of the form: $\frac{1}{2}\sqrt{\gamma^2_{QI}+\gamma^2_{UI}}I_{in}\cos(2\omega_{hwp} t-2\arctan{\frac{\gamma_{UI}}{\gamma_{QI}}})$ \cite{Takaku2021}. Other effects can also appear if we take into consideration the position of the HWP in the optical system, but for the scope of this paper we limit the discussion to the HWP intensity-to-polarization leakage. However, this analysis can easily be generalized to other cases. \vspace{-7mm} \section{Simulation}\label{sec:sims} \vspace{-3mm} As discussed in Section \ref{sec:model}, if the TES responsivity depends weakly on the incoming signal $\delta P_{opt}$, the bolometer response becomes non-linear. From Equation \ref{eq:current_responsivity_nonlinear} we can easily see that a signal at a generic frequency $\omega$ can partially leak to $2\omega$.\footnote{From simple trigonometry, if $\delta P_{opt}\sim\sin{\omega t}$, then the term $(\delta P_{opt})^2\sim\frac{1-\cos{2\omega t}}{2}$.} Hence, the $2 f_{hwp}$ IP signal discussed in Section \ref{sec:model} can partially leak to $4 f_{hwp}$, where the polarized signal is expected. In order to perform these simulations and address the impact of non-linearity on cosmological data we have developed a \textit{python} module\footnote{https://github.com/tomma90/tessimdc} that can be plugged into a more standard time-ordered data (TOD) simulator. First, the input sky maps are converted to units of power by integrating the sky emission ($Jy/sr$) over the beam function $B(\Omega, \nu)$ and the band-pass function $G(\nu)$.
An example is shown in Figure \ref{fig:simulation_map} for LiteBIRD LFT 100 GHz. We use the $pysm$\footnote{https://pysm3.readthedocs.io/en/latest/} package to generate the sky model. We then add the CMB dipole and the expected optical loading (for a LiteBIRD LFT 100 GHz detector) due to the CMB monopole and telescope loading: $\bar{P}_{opt}=0.3061$ pW \cite{Westbrook2021, Hasebe2021}. The second step consists of generating the telescope pointing according to the LiteBIRD scan strategy \cite{Hazumi2020}, which is used to generate the TOD in units of power. At this point in the simulation, we can add a HWP synchronous signal. At present we can only fix the amplitude of the signal. We are planning to implement the possibility to input a measured or simulated HWP Mueller matrix, to add more realism to the simulation. The TOD is then fed to the TES simulator as the input optical power $P_{opt}$ in Equation \ref{eq:diff_equations_T}. An example for a 1-hour scan is shown in the left plot of Figure \ref{fig:simulation_tod}. \vspace{0mm} The TES response simulator consists of a module based on the 4th-order Runge-Kutta method to solve Equations \ref{eq:diff_equations_I} and \ref{eq:diff_equations_T}. We have made some assumptions to set the parameters according to LiteBIRD specifications. The thermal conductance $G$ and heat capacity $C$ are defined assuming a normal time-constant $\tau_{0}=C/G\sim30$ ms and a saturation power $P_{sat}=2.5\times \bar{P}_{opt}$ (average optical power defined by the CMB monopole and telescope loading). \vspace{0mm} Another key assumption is the TES resistance model. In the simulations carried out for this paper, we have assumed a simple $\arctan$ model where the resistance depends only on the TES temperature. We defined the normal resistance $R_n = 1\,\Omega$ and the log-responsivity ($\alpha=\frac{d\log R}{d\log T}$) $\alpha_{0.5\Omega}=100$ at $0.5\times R_n$ (a conservative value).
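A minimal version of this solver can be sketched as follows. This is an illustrative reduction, not the released \textit{python} module: the inductor equation (Eq. \ref{eq:diff_equations_I}) is replaced by a quasi-static current $I=V_b/(R_{tes}+R_{sh})$, the bath power is linearized as $P_b=G(T-T_{bath})$, and the parameter values ($T_c$, $T_{bath}$, $R_{sh}$) are assumptions chosen only to reproduce the quoted $\tau_0$, $P_{sat}$ and $\alpha_{0.5\Omega}$:

```python
import numpy as np

# Illustrative parameters (assumptions for this sketch, not LiteBIRD values)
R_n, R_sh = 1.0, 0.03          # TES normal and shunt resistance [Ohm]
T_c, T_bath = 0.171, 0.100     # transition and bath temperatures [K]
alpha0 = 100.0                 # target log-responsivity at R = 0.5*R_n
dT_tr = 2*T_c/(np.pi*alpha0)   # arctan transition width giving that alpha
P_opt = 0.3e-12                # optical loading [W]
G = 2.5*P_opt/(T_c - T_bath)   # conductance from P_sat = 2.5*P_opt
C = 30e-3*G                    # heat capacity from tau_0 = C/G = 30 ms

def R_tes(T):
    """Simple arctan R(T) model: R = R_n/2 at T = T_c."""
    return 0.5*R_n*(1 + (2/np.pi)*np.arctan((T - T_c)/dT_tr))

# Bias voltage chosen so that R = 0.5*R_n, T = T_c is an equilibrium point
I_eq = np.sqrt((G*(T_c - T_bath) - P_opt)/(0.5*R_n))
V_b = I_eq*(0.5*R_n + R_sh)

def dTdt(T):
    """Thermal ODE; the inductor term L dI/dt is neglected (quasi-static I)."""
    I = V_b/(R_tes(T) + R_sh)
    return (-G*(T - T_bath) + I**2*R_tes(T) + P_opt)/C

# 4th-order Runge-Kutta integration, starting slightly below the bias point;
# negative electrothermal feedback pulls the TES back to R ~ 0.5*R_n.
T, dt = T_c - 0.5*dT_tr, 1e-4
for _ in range(2000):          # 0.2 s of simulated time
    k1 = dTdt(T)
    k2 = dTdt(T + 0.5*dt*k1)
    k3 = dTdt(T + 0.5*dt*k2)
    k4 = dTdt(T + dt*k3)
    T += dt*(k1 + 2*k2 + 2*k3 + k4)/6
```

The time step obeys the sampling-rate constraint discussed next, since the local thermal relaxation rate here is of order $1/\tau_{eff}\sim10^{3}$ s$^{-1}$.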
In the future we will explore other models that do not neglect the dependence of the resistance on the TES current (two-fluid, RSJ, etc.). One shortcoming of this procedure is the need for a high sampling rate compared to more traditional TOD simulators. This is due to the need to solve the differential equations iteratively at a rate faster than the detector effective time-constant ($\tau_{eff}\sim\tau_{0}/(\mathcal{L}+1)$) in order to avoid numerical errors. For high loop-gain conditions ($\mathcal{L}\sim 10$), this translates to a sampling rate $\gtrsim$ 1 kHz. In the right plot of Figure \ref{fig:simulation_tod} we can see the result of the TES simulator for an ideal case without HWPSS. In Figure \ref{fig:simulation_fft} we show the impact of a $2 f_{hwp}$ ($\omega_{hwp} = 46$ rpm) HWPSS component of $1\%$ and $10\%$ amplitude (relative to the expected optical loading $\bar{P}_{opt}$)\footnote{The data used in this work are noiseless to focus on the detector response characterization.}. A larger $2 f_{hwp}$ signal clearly results in an enhancement of the $4f_{hwp}$ component in the NSD (noise spectral density), which is due to the non-linear term of Equation \ref{eq:current_responsivity_nonlinear} that grows with the amplitude of the signal ($S^{\prime}_{I}\delta P^2$). Finally, we test the non-linearity model in Equation \ref{eq:current_responsivity_nonlinear} using representative signals (for this simulation we simplified the input compared to Figure \ref{fig:simulation_fft}, by including only the monopole - $P_{opt}$ - and a signal of frequency $f_{sig}$ and relative amplitude $\delta P/P_{opt}$) of frequency $f_{sig}$ between 0.1 Hz and 4 Hz and amplitudes between 0.1\% and 10\% of the expected optical loading, for two transition cases: a log-responsivity $\alpha_{0.5\Omega}$=100 and $\alpha_{0.5\Omega}$=1000. For all cases we study the response when the detector is biased at $R=0.5\times R_{n}$ and $R=0.75\times R_{n}$.
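The frequency-doubling mechanism invoked above (a quadratic term maps a tone at $2f_{hwp}$ into $4f_{hwp}$) can be verified with a short numerical check that is independent of any detector model; all values here are illustrative:

```python
import numpy as np

fs = 1000.0                        # sampling rate [Hz], illustrative
t = np.arange(0, 10, 1/fs)
f_sig = 2.0                        # stand-in for a 2 f_hwp leakage tone [Hz]
x = np.sin(2*np.pi*f_sig*t)

# A quadratic non-linearity: x^2 = (1 - cos(2*2*pi*f_sig*t))/2,
# i.e. all the AC power moves to twice the input frequency.
y = x**2

spec = np.abs(np.fft.rfft(y))
freqs = np.fft.rfftfreq(len(y), 1/fs)
peak = freqs[np.argmax(spec[1:]) + 1]  # strongest non-DC bin
```

The strongest non-DC bin lands at $2f_{sig}$, which is how a $2f_{hwp}$ HWPSS acquires a $4f_{hwp}$ component through the $S^{\prime}_{I}\delta P^{2}$ term.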
After processing the input signal with the TES simulator we convert the TES current $\delta I$ to input power $\delta P^{\prime}=\delta I/S_{I}$, in the high loop-gain limit. Afterwards we compare the input signal $\delta P$ and the calibrated output $\delta P^{\prime}$ to determine the non-linear term $S^{\prime}_{I}\delta P$ for each case: $S^{\prime}_{I}\delta P/S_{I}=(\delta P^{\prime}-\delta P)/\delta P$. In Figure \ref{fig:simulation_nonlin} we show a summary of the results. By isolating the term $S^{\prime}_{I}\delta P$ we find that it depends on the TES model and bias conditions. In fact, the cases with a steeper and narrower transition ($\alpha_{0.5\Omega}$=1000) and a lower operating resistance (higher loop-gain) show reduced non-linearity levels. \vspace{-6mm} \section{Conclusions} \vspace{-3mm} We believe that the tool developed and presented in this paper will be very valuable as the definition of the mission proceeds, both in terms of informing the instrument development and the data analysis. This is particularly true for the requirements of a space mission. We have given a preliminary assessment of the TES non-linearity model developed and tested on POLARBEAR data in \cite{TakakuraPHD, Takakura2017}. We found indications of an input-power-dependent non-linear component. While the responsivity term seems to reproduce the model well, a more in-depth analysis of the non-linearity model is needed to address the time-constant term. A more thorough analysis with propagation to map-making and cosmological analysis is needed to understand the full impact of this effect on the scientific output (e.g. tensor-to-scalar ratio). The results\footnote{The data that support the findings of this study are available from the corresponding author upon reasonable request.} presented in this paper are a starting point for a more in-depth analysis. More data and laboratory tests are needed to define realistic detector and instrument models for LiteBIRD.
However, we can notice that in \cite{Takakura2017} the level of non-linearity of POLARBEAR data has been estimated to be in the range 0.3-0.8\% (varying among detector wafers). This is in agreement with the values found in our analysis for the cases with $\alpha_{0.5\Omega}=1000$ in Figure \ref{fig:simulation_nonlin}, where we have determined $S^{\prime}_I\delta P/S_I$ to be in the range 0.2-0.9\% (depending on the bias conditions). In our analysis we have also studied the case $\alpha_{0.5\Omega}=100$, which is regarded as a pessimistic scenario and in fact leads to larger non-linearity levels (as high as $\sim 9$\% according to Figure \ref{fig:simulation_nonlin}). This result clearly shows that a narrower superconducting transition improves the linearity of the device. In conclusion, the absence of a turbulent atmosphere that affects ground-based experiments will allow LiteBIRD (or any other space mission) to achieve an unprecedented sensitivity. We can foresee that more stable observation conditions will reduce the impact of $1/f$ noise and gain fluctuations due to long time-scale variations of the optical loading. Hence, more subtle non-linear effects could become dominant, which will require more realistic models of the detectors and read-out systems to forecast and mitigate them. \vspace{-3mm} \begin{acknowledgements} We thank all LiteBIRD collaborators for support and help. In particular Juan Mac\'{\i}as-P\'erez and Satoru Takakura for useful comments and feedback on the manuscript. This work was supported by JSPS KAKENHI Grant Numbers 22K14054, 18KK0083 and 19K14732. Kavli IPMU is supported by World Premier International Research Center Initiative (WPI), MEXT, Japan. \end{acknowledgements} \vspace{0mm}
Title: Dark Energy and Neutrino Superfluids
Abstract: We show that the neutrino mass, the dark matter and the dark energy can be explained in a unified framework, postulating a new invisible Born-Infeld field, which we name "non-linear dark photon", undergoing a meV-scale dynamical transmutation and coupled to neutrinos. Dark energy genesis is dynamically explained as a byproduct of the dark photon condensation, inducing the bare massless neutrinos to acquire an effective mass around the meV scale. It is fascinating to contemplate the channel induced by the non-linear dark photon leading to the pairing of the non-relativistic neutrinos, hence generating a cosmological superfluid state. As a consequence, the appearance of a light neutrino composite boson is predicted, providing a good cold dark matter candidate. In particular, if our model is enriched by an extra global Lepton number $U_L(1)$ symmetry, then the neutrino pair can be identified with a composite Majoron field with intriguing phenomenological implications for the neutrinoless-double-beta-decay ($0\nu\beta\beta$). Our model carries interesting phenomenological implications since dark energy, dark matter and the neutrino mass are time-varying dynamical variables, as a consequence of the non-linear Born-Infeld interaction terms. Limits arising from PLANCK+SNe+BAO collaborations data are also discussed. Finally, our model allows for an inverse hierarchy of neutrino masses, with interesting implications for the JUNO experiment.
https://export.arxiv.org/pdf/2208.03591
\title{Dark Energy and Neutrino Superfluids} \author{Andrea Addazi} \email{Addazi@scu.edu.cn} \affiliation{Center for Theoretical Physics, College of Physics, Sichuan University, Chengdu, 610064, PR China} \affiliation{Laboratori Nazionali di Frascati INFN, Frascati (Rome), Italy} \author{Salvatore Capozziello} \email{capozzie@na.infn.it} \affiliation{Dipartimento di Fisica ``E. Pancini'', Universit\`a di Napoli ``Federico II'' and Istituto Nazionale di Fisica Nucleare, Sezione di Napoli, Complesso Universitario di Monte S. Angelo, Via Cinthia, Ed. N I-80126 Napoli, Italy,} \affiliation{Scuola Superiore Meridionale, Largo S. Marcellino 10, I-80138, Napoli, Italy,} \author{Qingyu Gan} \email{gqy@stu.scu.edu.cn} \affiliation{Center for Theoretical Physics, College of Physics, Sichuan University, Chengdu, 610064, PR China} \affiliation{Scuola Superiore Meridionale, Largo S. Marcellino 10, I-80138, Napoli, Italy,} \author{Antonino Marcian\`o} \email{marciano@fudan.edu.cn} \affiliation{Department of Physics \& Center for Field Theory and Particle Physics, Fudan University, 200433 Shanghai, China} \affiliation{Laboratori Nazionali di Frascati INFN Via Enrico Fermi 54, Frascati (Roma), Italy} \tableofcontents{} \bigskip{} \section{Introduction} Neutrino mass, Dark Matter (DM) and Dark Energy (DE) are three of the most elusive and mysterious issues beyond the Standard Model (SM), still at the center of physicists' concerns and debates. We put forward the radical conjecture that these three different aspects of Nature are intimately interconnected within a unified cosmological theory. There have been many attempts to unify particle Dark Matter and the neutrino mass (DM-Ne models) through symmetry extensions of the Standard Model.
For example, in the Majoron model, the neutrino mass is generated by the spontaneous breaking of the global Lepton number symmetry, while a new Nambu-Goldstone sterile particle provides a good candidate for Dark Matter \cite{Majoron1,Majoron2,Majoron3,Majoron4,berezinsky1993kev,barger1982majoron,berezhiani1992observable}. Moving from a complementary perspective, in the gravitation and cosmology community there have been several searches for a common origin of Dark Matter and Dark Energy (DE-DM models) arising from extensions of General Relativity (see the Reviews \citep{nojiri2007introduction,Nojiri:2010wj,Clifton:2011jh,Nojiri:2017ncd} and references therein). In fact, there is a very attractive coincidence: the dark energy density is about $\rho_{DE}\sim\Lambda M_{Pl}^{2}\sim(1\,{\rm \mathrm{meV}})^{4}$, which is around the same mass scale that we expect for the lightest neutrino, i.e.\ $m_{\nu}\sim1\,{\rm \mathrm{meV}}$. Is there any dynamical explanation for this intriguing accident in Nature? On the other hand, a mystery of Nature puzzling theoretical physicists remains: why are the neutrino mass and the dark vacuum energy so tiny compared to the natural scales of the Standard Model, such as the electroweak and the Quantum Chromodynamics energy scales? A unified picture for neutrino mass and dark energy genesis may indeed shed fresh light on this pressing problem, which has been widely investigated in many works \citep{fardon2004dark,gu2003dark,peccei2005neutrino,barger2005solar,brookfield2006cosmology,barbieri2005dark}. \vspace{0.2cm} In this paper, we show the existence of a generic minimal class of models for a DE-DM-Ne unification. We stress that in our model no new heavy hypothetical fermions, like Right-Handed (RH) neutrinos, or extra Higgs bosons are added to the Standard Model spectrum.
Our assumptions are only two, corresponding to just a few free parameters: $\,$ I) a novel fifth-force spin-1 invisible interaction is introduced, with the only additional requirement that it dynamically generates a condensate with an energy gap of $1\,{\rm \mathrm{meV}}$; $\,$ II) neutrinos must be coupled to the new pseudo-vector Born-Infeld (BI) boson. $\,$ Within this minimal framework, we illustrate several important and unexpected consequences. First, it is quite immediate to realize that a new force, transmuting into a confining condensate with an energy scale of $1\,{\rm \mathrm{meV}}$, can provide a source for the Universe acceleration (see \cite{addazi2016born} for a previous proposal along these lines). On the other hand, under the interplay of these two hypotheses, neutrinos can never propagate freely in the Universe: they rather interact with dark energy as with an invisible electromagnetic-like background field. Therefore, neutrinos get a mass gap proportional to dark energy, as a byproduct of a frictional effect. In a certain sense, massless neutrinos acquire an effective mass, much as electrons have a different effective mass in condensed matter structures. Automatically, the neutrino mass happens to be as tiny as the dark energy scale, without any Yukawa coupling fine-tuning or RH neutrinos involved. It is worth remarking that, instead of fine-tuning both the dark energy and the neutrino mass, here we only need to assume one fine-tuning, that of the BI condensate. The neutrino mass is naturally tiny once the energy scale of the BI condensate is around the meV scale. Thus the fine-tuning in our model concerns one rather than two parameters. One may argue that the couplings among neutrinos and the BI boson also have to be fixed, but an $\mathcal{O}(1)$ coupling is considered natural and closer to ``familiar'' Standard Model coupling constants.
Therefore, this is an alternative paradigm to the see-saw mechanism \citep{GellMann:1980vs,mohapatra1980neutrino,Sawada:1979dis}, in the ``healthy spirit'' of Occam's razor, i.e.\ it provides an economical explanation for DE and the neutrino mass generation. Then, we move on to a final non-trivial step: the explanation of cold DM. We claim that we do not need any new sterile particle beyond the Standard Model in order to address the missing matter problem. In most astrophysical and laboratory situations, neutrinos travel relativistically, close to the speed of light, and are therefore completely unbound by the new invisible interaction. However, suppose that neutrinos are cooled down to the meV scale: then they become strongly coupled through the new vector boson and hence form Cooper pairs in a superfluid state. The neutrino superfluid is a long-standing idea of ${\it Ginzburg}$ and ${\it Zakharov}$ \citep{ginzburg1967superfluidity,ginzburg1969superfluidity}, which was later developed in many works \citep{caldi1999cosmological,kapusta2004neutrino,Dvali:2016uhn}. Nonetheless, we think that the power of this intuition was not fully appreciated by the community. If a copious amount of neutrinos was produced in the early Universe, then they could provide a sterile superfluid accounting for dark matter. We will provide qualitative arguments to support the hypothesis that dark matter is partially (or possibly fully) composed of a neutrino superfluid, and show that this can be easily accomplished in the DE-DM-Ne framework. A quantitative estimate is possible but far beyond the main purposes of this paper. Such an estimate also depends on the details of inflationary reheating. For instance, just as in some axion models, one can also produce a neutrino superfluid from the decay of domain walls after inflation.
It is worth mentioning that the current paper provides a much more accurate analysis and extension of the model proposed in Ref.\cite{addazi2016born}. In particular, we perform a quantitative analysis in comparison with data to show that a BI condensate can provide dynamical dark energy from the higher derivative terms of the BI action. Here, we reinterpret the neutrino condensate state as a composite Majoron, with intriguing implications for phenomenology in neutrinoless double beta decay searches. This establishes an interesting connection between cosmological data on the Universe acceleration and underground laboratory physics. Another important consequence of our model is that neutrinos necessarily have masses which are dynamically varying in time, and this phenomenon could inspire future new experimental projects. The plan of our paper is as follows: dark energy is discussed in section \ref{sec:Dark-Energy}; neutrino mass is dealt with in section \ref{sec:photon-Neutrino-Mass}; dark matter is investigated in section \ref{sec:Dark-Matter}; in Section V the relation between our model and the Majoron theory is studied; finally, in Section VI conclusions and remarks are offered\footnote{Throughout the paper, we use Planckian natural units.}. \section{Dark Energy} \label{sec:Dark-Energy} We formalise assumption (I) previously introduced by imposing the condensation of the invisible gauge field, a dark photon $\mathcal{A}_{\mu}$, at the cosmological scale $M\simeq1\,\mathrm{meV}$, namely \begin{equation} ({\rm I})\rightarrow\langle\mathcal{F}_{\mu\nu}\mathcal{F}^{\mu\nu}\rangle\sim M^{4},\label{first} \end{equation} where $\mathcal{F}_{\mu\nu}=\partial_\mu \mathcal{A}_\nu -\partial_\nu \mathcal{A}_\mu$ is the field strength of the new fifth-force gauge boson field $\mathcal{A}_{\mu}$. We postulate that the dark photon condensate provides the required contribution to dark energy.
As in the case of QCD condensation, the non-vanishing value of $\langle \mathcal{F}^{2}\rangle$ emerges in the non-linear regime, where perturbation theory breaks down. This provides a repulsive vacuum energy contribution, i.e.\ a candidate for DE \citep{labun2010dark}. When the new vector field is simply abelian, the only possibility for the formation of the condensate is to resort to a non-linear, higher-derivative extension of the standard QED-like structure, e.g.\ as for the Born-Infeld theory. The effects of a non-linear electromagnetic theory in a cosmological setting have been studied by several authors (see e.g. Refs.~\citep{dona2015non,de2002nonlinear,labun2010dark,elizalde2003born,camara2004nonsingular,novello2007cosmological,novello2004nonlinear,kruglov2015universe,addazi2018dynamical}). In particular, it has been shown that a Born-Infeld field can provide a source for the Universe acceleration \citep{elizalde2003born}. In general, a non-linear dark photon Lagrangian $\mathcal{L}_{\mathrm{eff}}$ coupled to gravity reads as follows \begin{equation} \mathcal{S}=\int\sqrt{-g}\left[-\frac{\mathcal{R}}{16\pi G}+\mathcal{L}_{\mathrm{eff}}[s,p]\right]d^{4}x, \label{eq-Action} \end{equation} where $\mathcal{L}_{\mathrm{eff}}[s,p]$ is a generic non-linear effective Lagrangian of the dark gauge field strength invariants $s=-\frac{1}{4}\mathcal{F}_{\mu\nu}\mathcal{F}^{\mu\nu}$ and $p=-\frac{1}{4}\mathcal{F}_{\mu\nu}{}^{*}\mathcal{F}^{\mu\nu}=-\frac{1}{8}\mathcal{F}_{\mu\nu}\epsilon^{\mu\nu\rho\sigma}\mathcal{F}_{\rho\sigma}=-\frac{1}{8}\frac{1}{\sqrt{-g}}\mathcal{F}_{\mu\nu}\widetilde{\epsilon}^{\mu\nu\rho\sigma}\mathcal{F}_{\rho\sigma}$, and where we also include the CP-violating term in order to account for the more general unperturbed case. Since our main interest is the application to cosmology, we will work in the FLRW background.
By definition, one can obtain the stress-energy tensor $T_{\mu\nu }$ by varying the action (\ref{eq-Action}) with respect to the metric $g^{\mu\nu }$, \begin{equation} T_{\mu \nu } \equiv \frac{-2}{\sqrt{-g}} \frac{\delta\left(\sqrt{-g} \mathcal{L}_{\mathrm{eff}}\right)}{\delta g^{\mu \nu}} = g_{\mu \nu} \mathcal{L}_{\text {eff }}-2 \frac{\partial \mathcal{L}_{\text {eff }}}{\partial s} \frac{\delta s}{\delta g^{\mu \nu}}-2 \frac{\partial \mathcal{L}_{\mathrm{eff}}}{\partial p} \frac{\delta p}{\delta g^{\mu \nu}} \nonumber. \end{equation} From the stress-energy tensor $T_{\nu}^{\mu}$, and working within a frame that is co-moving with the fluid, $\rho=-T_{0}^{0},P=T_{i}^{i}$, we have \begin{eqnarray} \rho & = & -\mathcal{L}_{\mathrm{eff}}-\mathcal{L}_{\mathrm{eff}}^{(1)}[s,p]\left(\mathcal{F}^{0\sigma}\mathcal{F}_{0\sigma}\right)+p\mathcal{L}_{\mathrm{eff}}^{(2)}[s,p],\\ P & = & \mathcal{L}_{\mathrm{eff}}+\mathcal{L}_{\mathrm{eff}}^{(1)}[s,p]\left(\mathcal{F}^{i\sigma}\mathcal{F}_{i\sigma}\right)-p\mathcal{L}_{\mathrm{eff}}^{(2)}[s,p], \end{eqnarray} where $\mathcal{L}_{\mathrm{eff}}^{(1)}[s,p]=\partial\mathcal{L}_{\mathrm{eff}}/\partial s$, $\mathcal{L}_{\mathrm{eff}}^{(2)}[s,p]=\partial\mathcal{L}_{\mathrm{eff}}/\partial p$, and the index $i$ is not summed over. The effective Lagrangian for the non-linear photon can include classical and quantum terms, which may provide the classical and quantum contributions to the condensation \citep{elizalde2003born}. Due to the non-linear quantum effects, a condensation phenomenon is found, namely \begin{equation} \langle\mathcal{F}_{\mu\nu}\mathcal{F}^{\mu\nu}\rangle_{Q}=\langle\mathcal{F}_{\mu\nu}{}^{*}\mathcal{F}^{\mu\nu}\rangle_{Q}=\alpha(t)\sim M(t)^{4}, \label{eq:quantum correlation} \end{equation} whose origin is purely quantum-mechanical.
Within the FLRW spacetime, the condensate has the two components $\langle\mathcal{F}_{0\nu}\mathcal{F}^{0\nu}\rangle_{Q}=\alpha(t)/4$ and $\langle\mathcal{F}_{i\nu}\mathcal{F}^{j\nu}\rangle_{Q}=\delta_{i}^{j}\alpha(t)/4$. As an effect of the extra higher derivative terms, the cosmological constant will slowly run in time. Such a possibility is not ruled out by cosmic observations, such as Supernovae Ia (SNe Ia) \citep{perlmutter1999measurements,riess1998observational}, the cosmic microwave background (CMB) radiation \citep{spergel2003first,hinshaw2013nine}, large scale structure (LSS) \citep{tegmark2004cosmological,seljak2005cosmological}, baryon acoustic oscillations (BAO) \citep{eisenstein2005detection} and weak lensing \citep{jain2003cross}; hence various dynamical DE scenarios have been proposed in the literature \citep{zhao2017dynamical,alam2004case,clarkson2007dynamical,dent2011f,upadhye2005dynamical,xia2008constraints,guberina2006dynamical,farooq2017hubble,mainini2003modeling,sola2006dynamical,di2017constraining,bamba2012dark,addazi2018dynamical}. On the other hand, the non-linear photon Lagrangian is in general not scale invariant at the classical level. Thus, for homogeneous and isotropic FLRW spacetimes, due to the equipartition principle the electric and magnetic condensates acquire a stochastic background $\langle E_{i}E_{j}\rangle_{C}=\langle B_{i}B_{j}\rangle_{C}=\frac{1}{3}\epsilon(t)g_{ij}$, where $\epsilon$ is the classical radiation energy density \citep{tolman1930temperature,de2010nonsingular}. Indeed, the BI condensate receives two contributions: 1) the quantum correlator, which is purely quantum-mechanical in nature and arises from the vacuum fluctuations; 2) the classical correlator, which accounts for the classical thermodynamics of the radiation.
For the classical contribution, due to the isotropy of the spatial sections of the FLRW geometry, one can perform the average procedure as suggested by the equipartition principle, where $C$ denotes an average over a volume that is relatively large compared to the wavelength while relatively small with respect to the curvature radius. Generically, we have that $E_{k}B^{k}=0$, implying that $\langle E_{i}B_{j}\rangle_{C}$ vanishes. In terms of $\mathcal{F}_{\mu\nu}$, we have $\langle\mathcal{F}^{0\rho}\mathcal{F}_{0\rho}\rangle_{C}=-\epsilon(t),\,\langle\mathcal{F}^{i\rho}\mathcal{F}_{j\rho}\rangle_{C}=\frac{1}{3}\epsilon(t)\delta_{j}^{i}$ and $\langle \mathcal{F}_{\mu\nu}{}^{*}\mathcal{F}^{\mu\nu}\rangle_{C}=0$. Combining the quantum and classical effects, we obtain \begin{eqnarray} \langle \mathcal{F}^{0\rho}\mathcal{F}_{0\rho}\rangle & = & \frac{\alpha(t)}{4}-\epsilon(t),\\ \langle \mathcal{F}^{i\rho}\mathcal{F}_{j\rho}\rangle & = & \left(\frac{\alpha(t)}{4}+\frac{\epsilon(t)}{3}\right)\delta_{j}^{i},\\ \langle \mathcal{F}_{\mu\nu}{}^{*}\mathcal{F}^{\mu\nu}\rangle & = & \alpha(t). \end{eqnarray} Thus, the fluid state parameter $w\equiv\langle P\rangle /\langle \rho \rangle$ for the dark photon is found to be \begin{equation} w_{DE}(t)=\left.-1+\frac{16\epsilon(t)\mathcal{L}_{\mathrm{eff}}^{(1)}[s,p]}{-12\mathcal{L}_{\mathrm{eff}}[s,p]-3\left(\alpha(t)-4\epsilon(t)\right)\mathcal{L}_{\mathrm{eff}}^{(1)}[s,p]+12p\mathcal{L}_{\mathrm{eff}}^{(2)}[s,p]}\right|_{(s,p)=-\alpha(t)/4}. \label{eq:state-parameter} \end{equation} Let us note that $\mathcal{L}_{eff}/ \mathcal{L}_{eff}^{(1)}$ and $p\mathcal{L}_{eff}^{(2)}/ \mathcal{L}_{eff}^{(1)}$ are of the same order as $s$ and $p$, respectively. In general, for classical radiation dominating over the condensation, i.e.\ $\epsilon\gg\alpha$, we have $w\sim1/3$ as expected, while in the quantum-dominated regime, where $\epsilon\rightarrow0$, we obtain $w\sim-1$.
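For the reader's convenience, we spell out the intermediate step leading to Eq.~(\ref{eq:state-parameter}). Inserting the averaged correlators into the expressions for $\rho$ and $P$ above gives \begin{eqnarray} \langle\rho\rangle & = & -\mathcal{L}_{\mathrm{eff}}-\left(\frac{\alpha}{4}-\epsilon\right)\mathcal{L}_{\mathrm{eff}}^{(1)}+p\,\mathcal{L}_{\mathrm{eff}}^{(2)},\\ \langle P\rangle & = & \mathcal{L}_{\mathrm{eff}}+\left(\frac{\alpha}{4}+\frac{\epsilon}{3}\right)\mathcal{L}_{\mathrm{eff}}^{(1)}-p\,\mathcal{L}_{\mathrm{eff}}^{(2)}, \end{eqnarray} so that $\langle P\rangle+\langle\rho\rangle=\frac{4}{3}\epsilon\,\mathcal{L}_{\mathrm{eff}}^{(1)}$, and $w=\langle P\rangle/\langle\rho\rangle=-1+\left(\langle P\rangle+\langle\rho\rangle\right)/\langle\rho\rangle$ reproduces Eq.~(\ref{eq:state-parameter}) after evaluating at $(s,p)=-\alpha(t)/4$.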
Beyond the quantum and classical limits, a careful analysis must be devoted to the regimes corresponding to a quintessence-like condensate, characterised by an equation of state parameter $w>-1$, and to a phantom-like condensate, with $w<-1$, which appear in concrete realizations of models described by a specific form \footnote{It is worth noting that the entropy condition arising from Holographic Naturalness strongly restricts various DE models, showing that phantom cosmology is disfavoured \citep{Addazi:2020vhq}.} of $\mathcal{L}_{\mathrm{eff}}$. Cosmological data do not rule out the possibility of $w<-1$, which will also be included for completeness in this work. Let us now consider the specific case of the Born-Infeld theory, a ghost-free nonlinear electrodynamics, given by the effective action \begin{equation} \mathcal{L}_{\mathrm{BI}}[s,p]=\lambda\left[1-\sqrt{1-\frac{2s}{\lambda}-\frac{p^{2}}{\lambda^{2}}}\right]. \end{equation} In practice, we can naturally set the dimensional coupling $\lambda=1\, \mathrm{meV}^{4}$, which is the scale of the dark energy density. From Eq.~(\ref{eq:state-parameter}), we obtain the equation of state \begin{equation} w(z)=-1+\frac{16\epsilon(z)}{3\left(4+\alpha(z)-\sqrt{16+8\alpha(z)-\alpha(z)^{2}}+4\epsilon(z)\right)},\label{eq:BI-w} \end{equation} where $z$ is the redshift parameter. The evolution of the dark radiation $\epsilon(z)$ can be derived from the linear response of the fluctuations. One of the equations of motion of the BI action on the FLRW background is given by \begin{equation} \partial_{\mu}\left(a^{3}\frac{\mathcal{F}^{\mu\nu}+p{}^{*}\mathcal{F}^{\mu\nu}}{\sqrt{1-2s-p^{2}}}\right) = 0,\label{eq:BI-eom} \end{equation} which has the trivial solution $\mathcal{A}_{\mu}^{(0)}=0$. We consider the fluctuation around $\mathcal{A}_{\mu}^{(0)}$ as $\mathcal{A}_{\mu}=\delta \mathcal{A}_{\mu}+\left(\delta \mathcal{A}_{\mu}\right)^{2}+\cdots$, and keep it to the leading order in $\delta \mathcal{A}_{\mu}$.
Then Eq.~(\ref{eq:BI-eom}) becomes $ \partial_{\mu}\left(a^{3}\mathcal{F}^{\mu\nu}/\lambda^{2}\right)=0 $. As a simplifying ansatz, we can consider only a time-dependent $\delta \mathcal{A}_{\mu}(t)$, and find the solution $\mathcal{F}_{0i} = \partial_{t}\delta \mathcal{A}_{i}=c/a$, where $c$ is a constant. With respect to the conformal time $\eta$, the result $\delta \mathcal{A}_{i}\sim c\eta$ indicates a linear perturbation. The radiation energy density is then found to be \begin{equation} \epsilon=\langle E_{i}E^{i}\rangle _{C} = \Big\langle \frac{1}{a^{2}}\mathcal{F}_{i0}\mathcal{F}_{i0}\Big\rangle_{C}=c_{0}+\frac{c^{2}}{a^{4}}\, , \label{eq:darkradiation} \end{equation} where the constant $c_{0}$ is introduced to account for the quantum radiative correction. Let us first discuss the general physical picture of how $w$ evolves. The dark photon condensate appears when the temperature of the dark side decreases down to $1\, \mathrm{meV}$. We illustrate this toy model in Fig.~\ref{fig:wz}. One can see that, with the expansion of the Universe, the condensation effect takes over the radiation, yielding the dark energy before recombination. Many works have been devoted to various specific parametrised forms of $w(z)$, e.g. the well-known Chevallier-Polarski-Linder ansatz $w(z)=w_0+w_1 z/(1+z)$ \citep{chevallier2001accelerating,huterer1999prospects,jassal2005wmap,efstathiou1999constraining,seljak2005cosmological,upadhye2005dynamical}. In our case, we focus on the epoch $z\sim(-0.5,10000)$, during which the dark photon condensate is assumed to have formed and to vary slowly with time according to the ansatz $\alpha(z)=\alpha_{0}+\alpha_{1}\log(1+z)$, with $\alpha_{0} \sim\mathcal{O}(1)$ and $\alpha_{1} \sim\mathcal{O}(0.1)$, which originates from one-loop corrections to the dark photon propagator through a pair of neutrinos \cite{peskin}. Besides, from Eq.
(\ref{eq:darkradiation}) the dark radiation can be parametrised as $\epsilon(z)=\epsilon_{0}+\epsilon_{1}\left(1+z\right)^{4}$, with $\epsilon_{0},\epsilon_{1}>0$, where $\epsilon_{1}$ is constrained to be $\lesssim10^{-16}$, since the dark energy is assumed to already be present at the CMB epoch, $z\simeq1100$. In the right panel of Fig.~\ref{fig:wz}, we explore some combinations of the parameter space $\{\alpha_{0},\alpha_{1},\epsilon_{0},\epsilon_{1}\}$ in comparison with the observational constraints arising from Planck+SNe+BAO \cite{aghanim2020planck}. From a Bayesian comparison with Planck data, we obtain the best-fit results reported in Table~\ref{tableI}. We can see that our model has a $\Delta \chi^{2}$ comparable to that of $\Lambda$CDM. This shows that our model is a robust alternative to the $\Lambda$CDM model, while offering, from the theoretical point of view, a reduced fine-tuning in the neutrino/BI scales, as commented above.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
Likelihood & Frequency & Multipole range & $\chi^{2}$ & $\chi^{2}/N_{\textrm{dof}}$ & $N_{\textrm{dof}}$ & $\Delta\chi^{2}/\sqrt{2N_{\textrm{dof}}}$ & PTE{[}\%{]}\tabularnewline
\hline
\hline
\multirow{5}{*}{TT} & $100\times100$ & $30-1197$ & $1232.37$ & $1.06$ & $1168$ & $1.37$ & $8.66$\tabularnewline
\cline{2-8} \cline{3-8} \cline{4-8} \cline{5-8} \cline{6-8} \cline{7-8} \cline{8-8}
 & $143\times143$ & $30-1996$ & $2032.45$ & $1.03$ & $1967$ & $1.08$ & $14.14$\tabularnewline
\cline{2-8} \cline{3-8} \cline{4-8} \cline{5-8} \cline{6-8} \cline{7-8} \cline{8-8}
 & $143\times217$ & $30-2508$ & $2563.74$ & $1.04$ & $2479$ & $1.25$ & $10.73$\tabularnewline
\cline{2-8} \cline{3-8} \cline{4-8} \cline{5-8} \cline{6-8} \cline{7-8} \cline{8-8}
 & $217\times217$ & $30-2508$ & $2549.66$ & $1.03$ & $2479$ & $1.00$ & $15.78$\tabularnewline
\cline{2-8} \cline{3-8} \cline{4-8} \cline{5-8} \cline{6-8} \cline{7-8} \cline{8-8}
 & Combined & $30-2508$ & $2545.67$ & $1.03$ & $2479$ & $0.96$ & $16.81$\tabularnewline
\hline
\multirow{7}{*}{TE} & $100\times100$ & $30-999$ & $1087.78$ & $1.12$ & $970$ & $2.70$ & $0.45$\tabularnewline \cline{2-8} \cline{3-8} \cline{4-8} \cline{5-8} \cline{6-8} \cline{7-8} \cline{8-8} & $100\times143$ & $30-999$ & $1031.84$ & $1.06$ & $970$ & $1.43$ & $7.90$\tabularnewline \cline{2-8} \cline{3-8} \cline{4-8} \cline{5-8} \cline{6-8} \cline{7-8} \cline{8-8} & $100\times217$ & $505-999$ & $526.56$ & $1.06$ & $495$ & $1.00$ & $15.78$\tabularnewline \cline{2-8} \cline{3-8} \cline{4-8} \cline{5-8} \cline{6-8} \cline{7-8} \cline{8-8} & $143\times143$ & $30-1996$ & $2027.43$ & $1.03$ & $1967$ & $0.98$ & $16.35$\tabularnewline \cline{2-8} \cline{3-8} \cline{4-8} \cline{5-8} \cline{6-8} \cline{7-8} \cline{8-8} & $143\times217$ & $505-1996$ & $1604.85$ & $1.08$ & $1492$ & $2.09$ & $2.01$\tabularnewline \cline{2-8} \cline{3-8} \cline{4-8} \cline{5-8} \cline{6-8} \cline{7-8} \cline{8-8} & $217\times217$ & $505-1996$ & $1430.52$ & $0.96$ & $1492$ & $-1.11$ & $86.66$\tabularnewline \cline{2-8} \cline{3-8} \cline{4-8} \cline{5-8} \cline{6-8} \cline{7-8} \cline{8-8} & Combined & $30-1996$ & $2045.11$ & $1.04$ & $1967$ & $1.26$ & $10.47$\tabularnewline \hline \multirow{7}{*}{EE} & $100\times100$ & $30-999$ & $1026.79$ & $1.06$ & $970$ & $1.31$ & $9.61$\tabularnewline \cline{2-8} \cline{3-8} \cline{4-8} \cline{5-8} \cline{6-8} \cline{7-8} \cline{8-8} & $100\times143$ & $30-999$ & $1047.22$ & $1.08$ & $970$ & $1.78$ & $4.05$\tabularnewline \cline{2-8} \cline{3-8} \cline{4-8} \cline{5-8} \cline{6-8} \cline{7-8} \cline{8-8} & $100\times217$ & $505-999$ & $479.32$ & $0.97$ & $495$ & $-0.49$ & $68.06$\tabularnewline \cline{2-8} \cline{3-8} \cline{4-8} \cline{5-8} \cline{6-8} \cline{7-8} \cline{8-8} & $143\times143$ & $30-1996$ & $2001.70$ & $1.02$ & $1967$ & $0.54$ & $29.18$\tabularnewline \cline{2-8} \cline{3-8} \cline{4-8} \cline{5-8} \cline{6-8} \cline{7-8} \cline{8-8} & $143\times217$ & $505-1996$ & $1430.14$ & $0.96$ & $1492$ & $-1.11$ & $86.80$\tabularnewline \cline{2-8} 
\cline{3-8} \cline{4-8} \cline{5-8} \cline{6-8} \cline{7-8} \cline{8-8}
 & $217\times217$ & $505-1996$ & $1408.52$ & $0.94$ & $1492$ & $-1.51$ & $93.64$\tabularnewline
\cline{2-8} \cline{3-8} \cline{4-8} \cline{5-8} \cline{6-8} \cline{7-8} \cline{8-8}
 & Combined & $30-1996$ & $1986.05$ & $1.01$ & $1967$ & $0.32$ & $37.16$\tabularnewline
\hline
\end{tabular}
\caption{\it Goodness-of-fit tests using Planck temperature and polarization spectra, following the same methodology as the $\Lambda$CDM analysis performed by the Planck collaboration. Here $\Delta \chi^{2}=\chi^{2}-N_{\textrm{dof}}$ is computed from the fit to Planck TT+lowP, where $N_{\textrm{dof}}$, the number of degrees of freedom, equals the number of multipoles. The probability to exceed (PTE) the value of $\chi^{2}$ is given in the last column. A comparison with the Planck $\Lambda$CDM analysis (see Refs.~\cite{Planck:2015fie,Planck:2018vyg}) shows that our model has a $\Delta \chi^{2}$ comparable with the data.}
\label{tableI}
\end{table}
Let us also mention that it is possible to introduce a kinetic mixing term between the non-linear dark photon and the ordinary photon by means of the term $ \kappa F^{\mu \nu} \mathcal{F}_{\mu \nu}$, where $F_{\mu \nu}$ is the field strength tensor of the ordinary photon. Since our dark photon has an effective mass around the meV scale, the limits on the kinetic mixing parameter arising from the current photon mass bound are around $\kappa \lesssim 10^{-15}$ \cite{PDG}. \section{Dark Photon Mass and Neutrino Mass} \label{sec:photon-Neutrino-Mass} The non-vanishing vacuum expectation value $\langle \mathcal{F}^{2}\rangle \sim M^{4}$ is related to the dark photon condensate with a mass gap $M\sim \mathrm{meV}$. We may then naturally identify the only energy scale entering the vacuum expectation value, namely $M$, with the dark photon effective mass $m_{\mathrm{eff}}^{\gamma} \simeq M$.
The vector vertex displayed in the left panel of Fig.~\ref{fig:effective-mass}, originating from the leading nonlinear term $\sim \mathcal{F}^{4}$ and with two legs representing the condensate background, can be interpreted as a dark photon propagating through the condensate ether. Consequently, one can roughly read the Feynman diagram as $\sim M^{2} \mathcal{A}^{2}$, which shows that the dark photon can acquire an effective mass around the meV scale. This effect may be understood in analogy to the propagation of a gluon in its condensate, with the consequent acquisition of an effective mass gap related to the condensation scale. In the spirit of point (II), as stated in the Introduction, let us suppose that the Majorana neutrino $\nu$ couples to $\mathcal{A}_{\mu}$ as follows \begin{equation} ({\rm II})\rightarrow\mathcal{L}_{int}=g\, \mathcal{A}_{\mu}\nu^{T}\mathcal{C}^{-1}\gamma^{5}\gamma^{\mu}\nu, \label{eq:hypotheical 2} \end{equation} where $g$ is the coupling constant and $\mathcal{C}$ denotes the charge conjugation operation. One can see that the dark gauge potential $\mathcal{A}_{\mu}$ is a pseudo-vector. The interaction between the propagating neutrinos and the background dark photon condensate medium gives the bare massless neutrino an effective mass $m_{\mathrm{eff}}\simeq gM \simeq M$, where $g$ is assumed to be of $\mathcal{O}(1)$ (see Fig.~\ref{fig:effective-mass}). Thus, in our model, neutrino mass and dark energy are interconnected issues. We emphasize that $m_{\mathrm{eff}}$ is of the same order of magnitude as the time-dependent dark energy scale $M$. A large class of mass-varying neutrino models was studied in Refs.~\citep{fardon2004dark,gu2003dark,peccei2005neutrino,barger2005solar,brookfield2006cosmology,barbieri2005dark}. Moreover, we mention that a scenario with Dirac neutrinos rather than Majorana ones was discussed in Ref.~\cite{addazi2016born}.
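As a quick order-of-magnitude cross-check of the meV identification (our own arithmetic, assuming the Planck 2018 values $h\simeq0.674$, $\Omega_{\Lambda}\simeq0.685$ and the standard critical density $\rho_{c}\simeq8.1\,h^{2}\times10^{-11}\,\mathrm{eV}^{4}$, none of which are inputs of the model itself):

```python
# Order-of-magnitude check that the dark energy scale M = rho_DE^(1/4)
# sits at the meV.  h, Omega_L, rho_crit are assumed external inputs
# (Planck 2018 / PDG values), not results of this paper.
h = 0.674                          # reduced Hubble constant
Omega_L = 0.685                    # dark energy density fraction
rho_crit = 8.1e-11 * h**2          # critical density in eV^4
M = (Omega_L * rho_crit) ** 0.25   # dark energy scale in eV
```

This gives $M\approx2\times10^{-3}$ eV, consistent with identifying both $m_{\mathrm{eff}}^{\gamma}$ and $m_{\mathrm{eff}}$ with the meV scale.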
Since neutrinos have different masses and oscillate among one another, it is quite natural to think that dark energy also generates neutrino mixings through neutrino flavour violating interactions. The latter are provided, for instance, by \begin{equation} \mathcal{L}_{int}=g_{ff'}\mathcal{A}_{\mu}\nu_{f}^{T}\mathcal{C}^{-1}\gamma^{5}\gamma^{\mu}\nu_{f'}, \end{equation} where $f$ and $f'$ are flavour indices and $g_{ff'}$ is a flavour mixing matrix. The unitary transformation relating the flavour and mass eigenstates is an analog of the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix \footnote{The idea that neutrino oscillations can be used to probe the dark energy was also explored in \citep{kaplan2004neutrino, Blasone:2004yh, Capolupo:2006et, Capolupo:2007hy, Capozziello:2013dja}.}. In light of future neutrino experiments such as {\bf JUNO} \cite{An:2015jdp,Li:2014qca,Djurcic:2015vqa}, let us comment on the implications of our model. {\bf JUNO} is a medium-baseline reactor antineutrino experiment, based on the detection of antineutrinos generated by nuclear power plants. Such a measurement allows one to determine the neutrino mass hierarchy with a promised significance of $4\sigma$ after six years of data taking. In particular, the high-resolution measurement of the antineutrino spectrum allows a precise determination of the neutrino oscillation parameters $\Delta m_{21}^{2}$, $\Delta m_{ee}^{2}$ and $\sin^{2} \theta_{12}$, at the $1\%$ level. Such information is crucial in determining the sign of $\Delta m_{31}^{2}$: whether $m_{3}>m_{1}$ (normal hierarchy) or $m_{3}<m_{1}$ (inverted hierarchy). In our model, the neutrino hierarchy follows the coupling matrix of the neutrinos with the new interaction gauge bosons. Therefore, the model allows for both inverted and normal hierarchies of neutrino masses. In our perspective, {\bf JUNO} would measure the hierarchy of the interaction couplings of neutrinos with dark energy.
In particular, $m_{3}/m_{1}=g_{3}/g_{1}$, where $g_{1,3}$ are the couplings of the first and third neutrinos with the BI field, respectively. Thus an inverted hierarchy corresponds to the case $g_{1}>g_{3}$, while a normal one to $g_{3}>g_{1}$. \section{Neutrino Condensation} \label{sec:Dark-Matter} In this section we specify the details of the condensation of non-relativistic neutrinos, as triggered by the attractive force induced by the dark photon. In the UV regime, we consider the neutrino field to be a massless left-handed Weyl spinor $\xi_{\alpha}(x)$ embedded in the Majorana basis $\nu^{T}=\left(\begin{array}{cc} \xi_{\alpha}, & \xi^{\dagger\dot{\alpha}}\end{array}\right)$, coupling to $\mathcal{A}_{\mu}$ as in Eq.~(\ref{eq:hypotheical 2}), with the Lagrangian given by \begin{equation} \mathcal{L}_{UV}=\frac{i}{2}\bar{\nu}\gamma^{\mu}\partial_{\mu}\nu+\frac{1}{2}g\mathcal{A}_{\mu}\bar{\nu}\gamma^{5}\gamma^{\mu}\nu. \label{eq:UV Lag} \end{equation} The Lagrangian $\mathcal{L}_{UV}$ has a global $U(1)$ axial symmetry under the transformation $\nu\rightarrow e^{i\theta \gamma^{5}} \nu$. A new effective 4-fermion interaction emerges once one integrates out the vector field, leading to a non-vanishing expectation value of the Cooper pair $\langle\nu^{T}\mathcal{C}^{-1}\nu\rangle$, which dynamically breaks the global $U(1)$ symmetry \footnote{Majorana neutrino condensation triggered by scalar fields has been studied in \citep{bhatt2009majorana,antusch2003dynamical,barenboim2009inflation,fardon2004dark}.}. We now formalise the general picture discussed above. In terms of the two-component spinors, the Lagrangian Eq.~\eqref{eq:UV Lag} can be rewritten as \begin{equation} \mathcal{L}_{UV}=i\xi^{\dagger}\bar{\sigma}^{\mu}\partial_{\mu}\xi+g\mathcal{A}_{\mu}\xi^{\dagger}\bar{\sigma}^{\mu}\xi \,. \label{eq:interaction1} \end{equation} Here we adopt the van der Waerden notation \citep{dreiner2010two}.
Within a local effective operator approach, integrating out the dark photon $\mathcal{A}_{\mu}$ we obtain the low-energy effective 4-fermion interaction \begin{equation} \mathcal{L}_{int}\simeq\frac{g^{2}}{q^{2}+\left(m^{\gamma}_{\mathrm{eff}}\right)^{2}}\left(\xi^{\dagger}\bar{\sigma}^{\mu}\xi\right)\left(\xi^{\dagger}\bar{\sigma}_{\mu}\xi\right), \label{eq:interaction2} \end{equation} with $m^{\gamma}_{\mathrm{eff}}\simeq M$ as argued in Sec.~\ref{sec:photon-Neutrino-Mass}. For small transferred momentum $q\ll m^{\gamma}_{\mathrm{eff}}$, we obtain \begin{equation} \mathcal{L}_{int} = \frac{G'_{F}}{4}\left(\xi^{\dagger}\xi^{\dagger}\right)\left(\xi\xi\right), \label{eq:interaction3} \end{equation} where the Fierz identity $\bar{\sigma}^{\mu\dot{\alpha}\beta}\bar{\sigma}_{\mu}^{\dot{\gamma}\delta}=-2\epsilon^{\dot{\alpha}\dot{\gamma}}\epsilon^{\beta\delta}$ is utilized and we take the convention $\epsilon^{12}=-\epsilon^{21}=\epsilon_{21}=-\epsilon_{12}=1$. Here we introduce the effective 4-fermion coupling $G'_{F}=8g^{2}/M^{2}$. The above process is illustrated in Fig.~\ref{fig:4fermion}. Again, we stress that both non-relativistic and relativistic neutrinos acquire the effective mass $m_{\mathrm{eff}}\simeq M\simeq1\,{\rm \mathrm{meV}}$, but only non-relativistic neutrinos with kinetic energies $E<M$ undergo the condensation, while relativistic neutrinos with $E\gg M$ are practically unbound from the condensate medium, and are the ones mainly observed in astrophysical experiments. In the non-relativistic regime, we consider fixed-axis spin states. Following \citep{dreiner2010two,anber2018new}, we introduce the non-relativistic spin-up and spin-down wave-functions $\psi_{\uparrow\downarrow}$, whose time derivatives are small compared to $m_{\mathrm{eff}}$.
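The quoted Fierz identity can be verified numerically. The sketch below is an illustration; it assumes the mostly-plus metric signature $(-,+,+,+)$ with $\bar{\sigma}^{\mu}=(\mathbb{1},-\sigma^{i})$, the convention in which the quoted sign works out.

```python
# Verify sbar^{mu,ad b} sbar_mu^{gd d} = -2 eps^{ad gd} eps^{b d}
# componentwise, with eps^{12} = +1 and mostly-plus signature (assumed).
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
sbar = [np.eye(2, dtype=complex), -s1, -s2, -s3]   # sbar^mu = (1, -sigma^i)

eta = np.diag([-1.0, 1.0, 1.0, 1.0])               # metric, mostly-plus
eps = np.array([[0.0, 1.0], [-1.0, 0.0]])          # eps^{12} = +1

# lhs[a,b,c,d] = eta_{mn} sbar^m[a,b] sbar^n[c,d]
lhs = sum(eta[m, n] * np.einsum('ab,cd->abcd', sbar[m], sbar[n])
          for m in range(4) for n in range(4))
rhs = -2.0 * np.einsum('ac,bd->abcd', eps, eps)
```

All $2^{4}$ components agree, confirming the sign and the $\epsilon$ convention used in the reduction to Eq.~(\ref{eq:interaction3}).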
The non-relativistic limit of the UV kinetic term can be cast in the Schr\"odinger form $\underset{s=\uparrow\downarrow}{\sum}\psi_{s}^{*}\left(i\partial_{t}+\frac{\nabla^{2}}{2m_{\mathrm{eff}}}\right)\psi_{s}(x)$, and the UV interaction term Eq.~(\ref{eq:interaction3}) reduces to $G'_{F}\psi_{\uparrow}^{*}\psi_{\downarrow}^{*}\psi_{\downarrow}\psi_{\uparrow}$. Taking into account the chemical potential $\mu$, neutrinos sufficiently cooled by the dark photon background are described by the non-relativistic action \begin{equation} \mathcal{S}[\psi^{\dagger},\psi] = \int d^{4}x\left[\underset{s=\uparrow\downarrow}{\sum}\psi_{s}^{*}\left(i\partial_{t}+\frac{\nabla^{2}}{2m_{\mathrm{eff}}}+\mu\right)\psi_{s}(x)+G'_{F}\psi_{\uparrow}^{*}\psi_{\downarrow}^{*}\psi_{\downarrow}\psi_{\uparrow}(x)\right]\, . \label{eq:action-cooper} \end{equation} This is reminiscent of the BCS theory, although here the attractive force, regulated by the bare coupling constant $G'_{F}>0$ and triggering the generation of the condensate, is induced by the dark photon \footnote{A similar scenario for the vacuum energy condensation was studied in \citep{Dona:2016fip, Addazi:2017qus}, assuming torsional gravity in order to derive an attractive super-conducting behaviour.}. Since the attractive channel is mediated by the dark photon, let us proceed with the standard approach to modelling superfluid condensation \citep{altland2010condensed,casalbuoni2003lecture,fai2019quantum,popov1991functional,schmitt2015introduction}. We focus on Cooper pairs described by the spin-zero fermion-fermion bilinear $\psi_{\uparrow}\psi_{\downarrow}$, which serves as the order parameter of the neutrino s-wave superfluid \citep{Schakel:1999pa}.
Applying the Hubbard-Stratonovich identity to an auxiliary complex scalar field $\Phi(x)$ in the functional integral $Z=\int\mathcal{D}\psi\mathcal{D}\psi^{*}e^{i\mathcal{S}}$, we obtain \begin{eqnarray} Z & = & \frac{1}{Z_{0}}\int\mathcal{D}\psi\mathcal{D}\psi^{*}\mathcal{D}\Phi\mathcal{D}\Phi^{*}\exp\left(i\mathcal{S}'\left[\psi,\psi^{*},\Phi,\Phi^{*}\right]\right), \label{eq:functional-HS} \\ \mathcal{S}'\left[\psi,\psi^{*},\Phi,\Phi^{*}\right] & = & \mathcal{S}_{0}\left[\psi,\psi^{*}\right]+\int d^{4}x\left[-\frac{|\Phi|^{2}}{G'_{F}}-\Phi\left(\psi_{\uparrow}^{*}\psi_{\downarrow}^{*}\right)-\Phi^{*}\left(\psi_{\downarrow}\psi_{\uparrow}\right)\right], \label{eq:partition-HS} \end{eqnarray} where $Z$ is normalized to the free theory $Z_{0}=\int\mathcal{D}\psi\mathcal{D}\psi^{*}e^{i\mathcal{S}_{0}}$. By varying $\mathcal{S}'$ with respect to $\Phi$, we obtain the equation of motion $\Phi=G'_{F}\psi_{\uparrow}\psi_{\downarrow}$, which indicates that the composite scalar $\Phi$ can be interpreted as the neutrino pairing field. In terms of the Nambu-Gorkov basis $\chi^{T}=\left(\begin{array}{cc} \psi_{\uparrow}, & \psi_{\downarrow}^{*}\end{array}\right)$, the action in Eq. (\ref{eq:partition-HS}) can be arranged into \begin{equation} \mathcal{S}'\left[\chi,\chi^{\dagger},\Phi,\Phi^{*}\right]=\int d^{4}x\left(\chi^{\dagger}\mathcal{M}^{-1}\chi-\frac{|\Phi|^{2}}{G'_{F}}\right), \label{eq:action-HS} \end{equation} with \begin{equation} \mathcal{M}^{-1}=\left(\begin{array}{cc} i\partial_{t}+\frac{\nabla^{2}}{2m_{\mathrm{eff}}}+\mu & -\Phi\\ -\Phi^{*} & i\partial_{t}-\frac{\nabla^{2}}{2m_{\mathrm{eff}}}-\mu \end{array}\right)\label{eq:M-matrix}\, . 
\end{equation} Plugging Eq.~(\ref{eq:action-HS}) into the partition function Eq.~(\ref{eq:functional-HS}), and integrating over $\chi$ by means of the Gau\ss~integral, we obtain the one-loop effective action \begin{eqnarray} \frac{Z}{Z_{0}} & = & \intop\mathcal{D}\Phi\mathcal{D}\Phi^{*}e^{i\mathcal{S}_{\mathrm{eff}}\left[\Phi,\Phi^{*}\right]}, \label{eq:eff-partition}\\ \mathcal{S}_{\mathrm{eff}}\left[\Phi,\Phi^{*}\right] & = & -\frac{i}{2}\textrm{Tr}\left[\log\left(\mathcal{M}_{0}\mathcal{M}^{-1}\right)\right]-\int d^{4}x\frac{|\Phi|^{2}}{G'_{F}}. \label{eq:eff-action} \end{eqnarray} Here $\mathcal{M}_{0}^{-1}$ corresponds to the free theory $Z_{0}=\left[\textrm{det}\:\mathcal{M}_{0}^{-1}\right]^{1/2}$, while the trace Tr$[...]$ is taken with respect to the Nambu-Gorkov space. For the superfluid ground state, the pairing field $\Phi(x)$ acquires a static and uniform gap $\Delta$, which is assumed to be real. Within the saddle-point approximation, $Z\propto e^{i\mathcal{S}_{\mathrm{sp}}}$ --- with $\mathcal{S}_{\mathrm{sp}}[\Delta]=\mathcal{S}_{\mathrm{eff}}\left[\Delta,\Delta^{*}\right]$ --- can be treated as the thermodynamic grand potential. Therefore the dependence of $\Delta$ on the thermodynamic variables can be extracted from the extremum condition $\delta \mathcal{S}_{\mathrm{sp}}/\delta\Delta=0$, which yields the gap equation \begin{equation} \Delta=\frac{G'_{F}}{4}\int\frac{d^{3}\mathbf{p}}{\left(2\pi\right)^{3}}\frac{\Delta}{\sqrt{\left(\frac{\mathbf{p}^{2}}{2m_{\mathrm{eff}}}-\mu\right)^{2}+\Delta^{2}}} \,. \label{eq:gap-equation} \end{equation} A full non-perturbative, non-trivial solution of Eq.~(\ref{eq:gap-equation}) can only be found with numerical methods. For positive coupling $G'_{F}$, one can see that Eq.~(\ref{eq:gap-equation}) always possesses a nonzero gap solution $\Delta >0$, which implies the formation of neutrino condensates (see for example Refs.~\citep{altland2010condensed,casalbuoni2003lecture}).
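As an illustration of such a numerical treatment, the sketch below solves the gap equation by bisection in units $m_{\mathrm{eff}}=\mu=1$ (energies in units of $M$) with $G'_{F}=8g^{2}/M^{2}$ for $g=M=1$; the hard UV cutoff on the momentum integral is our own regularisation choice, not prescribed in the text.

```python
# Bisection solution of the gap equation in units m_eff = mu = 1, G'_F = 8.
# The linearly UV-divergent momentum integral is regularised with a hard
# cutoff Lam (an assumption of this sketch, of order a few Fermi momenta).
import numpy as np

def gap_rhs(Delta, GF=8.0, Lam=3.0, n=200001):
    # (G'_F/4) * (4 pi / (2 pi)^3) * int_0^Lam dp p^2 / sqrt((p^2/2 - 1)^2 + D^2)
    p = np.linspace(0.0, Lam, n)
    integrand = p**2 / np.sqrt((p**2 / 2.0 - 1.0)**2 + Delta**2)
    dp = p[1] - p[0]
    trap = dp * (0.5 * integrand[0] + integrand[1:-1].sum() + 0.5 * integrand[-1])
    return GF / (8.0 * np.pi**2) * trap

# gap_rhs decreases monotonically in Delta, so gap_rhs(Delta) = 1 has a
# unique root, which can be bracketed and bisected.
lo, hi = 1e-4, 5.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if gap_rhs(mid) > 1.0 else (lo, mid)
Delta = 0.5 * (lo + hi)
```

The resulting gap comes out of order $M$, consistent with the BCS-like estimate discussed next.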
A simple BCS-like estimate provides the size of the condensate, namely \citep{caldi1999cosmological} \begin{equation} \Delta\sim p_{F}\exp\left(-\frac{1}{p_{F}^{2}G'_{F}}\right), \end{equation} where $p_{F}=\sqrt{2\mu m_{\mathrm{eff}}}$ is the Fermi momentum of the non-relativistic neutrinos. Let us note that the chemical potential $\mu$ can be interpreted as the finite-density parameter of the condensate, or the Fermi energy level, which is around the meV scale. Taking into account $\mu\simeq M$, $m_{\mathrm{eff}}\simeq g M$ and $G'_{F}\simeq g^{2}/M^{2}$, we have $\Delta\sim\sqrt{g}M\exp\left(-1/g^{3}\right)\sim\ M$, provided that $g\sim\mathcal{O}(1)$. Furthermore, the critical temperature is of the same order as the gap scale, i.e. the meV. Let us comment on the dynamical generation of the relativistic neutrino's effective mass. In Sec.~\ref{sec:photon-Neutrino-Mass}, we found that a relativistic neutrino propagating in the dark photon condensate acquires a mass term. The ground state of the pairing field, $\langle\Phi\rangle=G'_{F}\langle\psi_{\uparrow}\psi_{\downarrow}\rangle=\Delta$, implies the non-vanishing expectation value $\langle\xi\xi\rangle=\langle\xi^{\dagger}\xi^{\dagger}\rangle\simeq\Delta/G'_{F}$. Consequently, the 4-fermion interaction of Fig.~\ref{fig:4fermion}, with two legs paired into the condensate, reads $G'_{F}\xi^{\dagger}\xi^{\dagger}\langle\xi\xi\rangle+G'_{F}\langle\xi^{\dagger}\xi^{\dagger}\rangle\xi\xi\simeq\Delta\left(\xi^{\dagger}\xi^{\dagger}+\xi\xi\right)\simeq\Delta\nu^{T}\mathcal{C}\nu$, which indicates the emergence of an effective Majorana neutrino mass $m_{\mathrm{eff}}\simeq\Delta\sim M$. We may finally discuss the superfluid properties of the neutrino condensate. As is well known, a superfluid originates from the spontaneous breaking of a global abelian $U(1)$ symmetry.
We observe that the global axial $U_{A}(1)$ symmetry of the action Eq.~(\ref{eq:action-cooper}) is broken by the formation of a ground state with fixed global phase, i.e. $\Delta\in\mathbb{R}$. The fluctuation around the ground state, namely $\Phi(x)=\left(\Delta+\delta\rho(x)\right)e^{i\delta\theta(x)}$, produces two collective modes: the massive mode $\delta\rho$ and a massless mode $\delta\theta$. In the vicinity of the critical temperature, the dynamics of $\Phi(x)$ is described by the time-dependent Ginzburg-Landau Lagrangian \begin{equation} \mathcal{L}_{GL}[\Phi,\Phi^{*}]=i\Phi^{*}\partial_{t}\Phi-\frac{1}{2m_{\Phi}}\mathbf{\nabla}\Phi^{*}\mathbf{\nabla}\Phi-\alpha\Phi^{*}\Phi-\frac{1}{2}\beta\left(\Phi^{*}\Phi\right)^{2}\,. \label{eq:GL} \end{equation} Here $m_{\Phi}$ is the effective mass of the composite neutrino pairs, which can be roughly parametrised as $m_{\Phi}\simeq2m_{\mathrm{eff}}$. A detailed calculation relates the coefficients $\alpha$, $\beta$ to the thermal parameters of the system \citep{casalbuoni2003lecture}. Below the critical temperature, the phase fluctuation $\delta\theta(x)$ is un-gapped and satisfies, in the low-energy regime, the linear dispersion relation \begin{equation} \omega_{\mathbf{k}}^{2}=-\frac{\alpha}{\beta}\frac{\mathbf{k}^{2}}{2m_{\Phi}}\sim\frac{\Delta}{m_{\mathrm{eff}}}\mathbf{k}^{2}\,. \end{equation} Therefore, the un-gapped massless mode propagates as a phonon-like excitation, which may relate dark matter to a superfluid state with an extra long-range interaction that effectively modifies the Newtonian potential \cite{khoury2016dark,Addazi:2018ivg,Sharma:2018ydn,Ferreira:2018wup,Berezhiani:2018oxf,Famaey:2019baq}. A neutrino superfluid has many interesting applications to cosmological and astrophysical phenomena, including the formation of neutrino vortices, exotic neutrino superfluid boson stars, gravitational waves, etc., which have been extensively investigated in a wide literature --- see e.g.
Refs.~\citep{berezhiani2015theory,volovik2001superfluid,khoury2016dark,Addazi:2018ivg,Sharma:2018ydn,Ferreira:2018wup,Berezhiani:2018oxf,Famaey:2019baq}. \section{Composite Majoron} Let us now postulate an extension of the SM symmetry with an extra global $U_{L}(1)$. The BI field condensation induces the pairing of Majorana neutrinos into Cooper pairs. This phenomenon dynamically breaks the global lepton number, since the neutrino condensate carries a double charge unit with respect to the lepton symmetry. The dynamical breaking of the $U_{L}(1)$ symmetry generates the neutrino mass term. A composite pseudo-Nambu-Goldstone boson is then obtained in the model, related to the spontaneous breaking of the $U_{L}(1)$ symmetry. This particle is identified with the Majoron field \citep{Majoron1,Majoron2,Majoron3,Majoron4}. The condensation process induces a Majorana mass term, which is in turn related to the emergence of a Nambu-Jona-Lasinio (NJL) four-fermion interaction during the BI condensation, namely \begin{equation} \label{BIAI} \mathcal{L}_{NJL,\nu}=G'_{F} \langle \nu\nu\rangle \nu\nu+h.c. \rightarrow \Delta L=2 \,. \end{equation} In the model we have been introducing, the natural mass scale of the Majoron is the meV scale. In this case, the Majoron field does not remain massless, and can itself compose a superfluid state. In other words, the composite Majoron can provide a viable candidate for superfluid Dark Matter. Such a model seems compatible with the recent proposal of cold dark matter composed of light Majorons, as put forward in Ref.~\cite{Reig:2019sok}. Within this perspective, the right amount of Majoron Dark Matter can be generated from the decay of topological defects, produced in the dark sector and decaying into composite states \cite{Reig:2019sok}. Our model poses further questions for the phenomenology of the neutrino-less double beta ($0\nu\beta\beta$) decay.
As is well known, in the usual Majorana mass models, the neutrino-antineutrino identification leads to the reconnection of the neutrino-antineutrino internal line between two simultaneous beta decays. In the case of a traditional fundamental Majoron, the emission of an invisible particle can alter the statistical distribution of the emitted electrons, as in a multi-body decay. In our case, however, the Majoron would be emitted at energies around the nuclear keV scale, where neutrinos are practically unbound \citep{barger1982majoron}. Thus, in our scenario, the Majoron emission would be substituted by the emission of a pair of electronic (anti)neutrinos. If this were the case, we would arrive at the striking conclusion that the composite Majoron is completely invisible in $0\nu\beta\beta$ decays. This prediction will eventually be relevant for the next generation of $0\nu\beta\beta$-decay experiments, including {\bf LEGEND}, {\bf CUORE}, {\bf nEXO}, {\bf GERDA-\uppercase\expandafter{\romannumeral2}}, etc. \cite{Abgrall:2017syy,Alduino:2017pni,Albert:2017hjq,Agostini:2017hit}. On the other hand, a $0\nu\beta\beta$ decay is possible through the diagram in Fig.~\ref{fig:Majoron}, when the composite neutrino pair acquires a vacuum expectation value, generating a Majorana mass and thus triggering the process. \section{Discussion and Conclusions} \label{sec:Conclusions-and-Remarks} We have elaborated on a possible common, unifying explanation of Dark Energy, Dark Matter and the origin of the neutrino mass. In particular, we have postulated the existence of a new dark fifth-force interaction, which we dub the non-linear dark photon. The non-linear dark photon has a higher derivative electrodynamic Lagrangian, with particular attention devoted to the Born-Infeld case. We have shown that the non-linear higher derivative terms drive the new vector boson to a condensation that can source the acceleration of the Universe.
Then, we assume that only neutrinos are coupled to the new dark interaction. We have shown that the neutrino mass can be generated as an effect of the neutrino interaction with the Dark Energy Born-Infeld condensate. This scenario opens the possibility of a new state of neutrino matter: if neutrinos are produced non-relativistic and cold in the early Universe, around the meV energy scale, they can undergo a phase transition forming a neutrino superfluid. The neutrino superfluid is composed of Cooper pairs of spin-misaligned neutrinos, providing a Majorana mass for the neutrinos. Indeed, at meV energies, neutrinos are very weakly coupled to the SM particles, while strongly coupled to the Dark Energy Born-Infeld condensate. On the other hand, mixing flavour neutrino pairs can provide non-diagonal mass terms sourcing neutrino oscillations \footnote{Neutrino oscillations have been intensively studied over the past two decades, within a series of investigations particularly focusing on the concomitant role of gravity and extended theories of gravity --- see e.g. Refs.~\citep{Capozziello:1999ww, Capozziello:1999qm, Capozziello:2000ga, Capozziello:2010yz, CaLa20202}.}. In principle, the couplings of neutrinos to the Born-Infeld condensate are related to the neutrino mass hierarchy. Differently from several other see-saw models, an inverted neutrino hierarchy is naturally possible here. This topic may be of great interest for the next neutrino experiments, including {\bf JUNO} \cite{An:2015jdp,Li:2014qca,Djurcic:2015vqa}. Our work opens the pathway to several novel phenomenological possibilities to search for Dark Matter and to test dynamical Dark Energy scenarios. First of all, if the neutrino superfluid is responsible for Dark Matter, then it is also possible to consider the formation of bosonic superfluid neutrino stars. Consequently, the possible merging of neutrino boson stars may be observed through gravitational-wave experiments.
A similar reasoning applies to the possible correlated signal of relativistic neutrino emission generated by the high-energy merger. We did not focus on identifying the precise detection range of these phenomena, but we think this deserves a future analysis beyond the purposes of this paper. Furthermore, several exotic scenarios may emerge from considering multi-operators induced by neutrinos and Dark Energy, as for instance $G_{F}'^{n}(\bar{\nu}\gamma_{\mu}\gamma_{5}\nu)^{2n}$ (spinor-contracted combinations). In this latter case, one may have multi-neutrino-pair condensates, reminiscent of the already discussed possibility of Standard Model tetra-quarks, penta-quarks and so on. An alternative scenario is that multi-neutrinos form nuggets of large particle number $N\gg 1$, with a binding energy scaling almost as $E_{Binding}\sim N M$ and a radius scaling as $R_{Binding}\sim1/(M\sqrt{N})$ \cite{Madsen:1998,Afshordi:2005ym,Gogoi:2020qif}. For a large number of particles, neutrino nuggets can be compact and dark objects. This latter possibility opens the pathway to new exciting cosmological scenarios, which nonetheless are still beyond our full comprehension. \vspace{5mm} \noindent \textbf{Acknowledgements.} A.A.'s work is supported by the Talent Scientific Research Program of College of Physics, Sichuan University, Grant No.1082204112427 $\&$ the Fostering Program in Disciplines Possessing Novel Features for Natural Science of Sichuan University, Grant No.2020SCUNL209 $\&$ 1000 Talent program of Sichuan province 2021. S.C. acknowledges the support of Istituto Nazionale di Fisica Nucleare, sez. di Napoli, iniziative specifiche QGSKY and MOONLIGHT2. A.M. wishes to acknowledge support by the NSFC, through the grant No. 11875113, the Shanghai Municipality, through the grant No. KBH1512299, and by Fudan University, through the grant No. JJH1512105. Q.Y. Gan's work is supported by a scholarship from the China Scholarship Council (CSC) under the Grant CSC No.
202106240085.
Title: Symmetry restoration in the vicinity of neutron stars with a nonminimal coupling
Abstract: We propose a new model of scalarized neutron stars (NSs) realized by a self-interacting scalar field $\phi$ nonminimally coupled to the Ricci scalar $R$ of the form $F(\phi)R$. The scalar field has a self-interacting potential and sits at its vacuum expectation value $\phi_v$ far away from the source. Inside the NS, the dominance of a positive nonminimal coupling over a negative mass squared of the potential leads to a symmetry restoration with the central field value $\phi_c$ close to $0$. This allows the existence of scalarized NS solutions connecting $\phi_v$ with $\phi_c$ whose difference is significant, whereas the field is located in the vicinity of $\phi=\phi_v$ for weak gravitational stars. The Arnowitt-Deser-Misner mass and radius of NSs as well as the gravitational force around the NS surface can receive sizable corrections from the scalar hair, while satisfying local gravity constraints in the Solar system. Unlike the original scenario of spontaneous scalarization induced by a negative nonminimal coupling, the catastrophic instability of cosmological solutions can be avoided. We also study the cosmological dynamics from the inflationary epoch to today and show that the scalar field $\phi$ finally approaches the asymptotic value $\phi_v$ without spoiling a successful cosmological evolution. After $\phi$ starts to oscillate about the potential minimum, the same field can also be the source for cold dark matter.
https://export.arxiv.org/pdf/2208.08107
\preprint{WUCG-22-08} \newcommand{\newc}{\newcommand} \newc{\be}{\begin{equation}} \newc{\ee}{\end{equation}} \newc{\ba}{\begin{eqnarray}} \newc{\ea}{\end{eqnarray}} \newc{\bea}{\begin{eqnarray*}} \newc{\eea}{\end{eqnarray*}} \newc{\D}{\partial} \newc{\ie}{{\it i.e.} } \newc{\eg}{{\it e.g.} } \newc{\etc}{{\it etc.} } \newc{\etal}{{\it et al.}} \newc{\Mpl}{M_{\rm Pl}} \newcommand{\nn}{\nonumber} \newc{\ra}{\rightarrow} \newc{\lra}{\leftrightarrow} \newc{\lsim}{\buildrel{<}\over{\sim}} \newc{\gsim}{\buildrel{>}\over{\sim}} \def\rd{\mathrm{d}} \newcommand{\re}[1]{\textcolor{red}{#1}} \newcommand{\ma}[1]{\textcolor{magenta}{#1}} \newcommand{\mm}[1]{\textcolor{cyan}{[MM:~#1]}} \title{ Symmetry restoration in the vicinity of neutron stars with a nonminimal coupling } \author{ Masato Minamitsuji$^{1}$ and Shinji Tsujikawa$^{2}$} \affiliation{ $^1$Centro de Astrof\'{\i}sica e Gravita\c c\~ao - CENTRA, Departamento de F\'{\i}sica, Instituto Superior T\'ecnico - IST, Universidade de Lisboa - UL, Avenida Rovisco Pais 1, 1049-001 Lisboa, Portugal\\ $^2$Department of Physics, Waseda University, 3-4-1 Okubo, Shinjuku, Tokyo 169-8555, Japan} \date{\today} \section{Introduction} \label{sec1} With the detection of gravitational waves from binary systems composed of black holes (BHs) and/or neutron stars (NSs) \cite{LIGOScientific:2016aoc,LIGOScientific:2017vwq}, we are now in a position to test physics in the strong-field regime \cite{Berti:2015itd,Berti:2018cxi,Barack:2018yly}. General Relativity (GR) is currently recognized as a fundamental theory describing the gravitational interaction, but it is not yet clear to what extent GR remains trustworthy in the vicinity of extreme compact objects.
There are some alternative theories of gravity like scalar-tensor theories \cite{Horndeski:1974wa,Fujii:2003pa,Deffayet:2011gz, Kobayashi:2011nu,Charmousis:2011bf,Kase:2018aps,Kobayashi:2019hrl} in which a new degree of freedom like a scalar field could modify the gravitational interaction through couplings to curvature invariants. Since the accuracy of GR has been well confirmed in the weak-field regimes, modified gravitational theories have to be constructed to be consistent with local gravity constraints in the Solar system \cite{DeFelice:2010aj,Clifton:2011jh,Joyce:2014kja,Will:2014kxa,Koyama:2015vza,Heisenberg:2018vsk}. In the presence of a scalar field $\phi$ nonminimally coupled with the Ricci scalar $R$ of the form $F(\phi)R$, it is known that a phenomenon called spontaneous scalarization can occur for static and spherically symmetric NSs \cite{Damour:1993hw}, while recovering the GR behavior in the weak-field backgrounds. Spontaneous scalarization is an interesting phenomenon in that the large deviation from GR manifests itself on strong gravitational backgrounds \cite{Sotani:2004rq,Sotani:2014tua,Minamitsuji:2016hkk}. In the presence of a scalar Gauss-Bonnet coupling, scalarization can occur for non-rotating and rotating BHs \cite{Doneva:2017bvd,Silva:2017uqg,Antoniou:2017acq,Minamitsuji:2018xde,Cunha:2019dwb,Dima:2020yac,Herdeiro:2020wei,Berti:2020kgk} as well as NSs \cite{Doneva:2017duq}. Spontaneous scalarization can take place with a scalar-gauge coupling $\alpha(\phi)F_{\mu \nu}F^{\mu \nu}/4$ for charged BHs \cite{Herdeiro:2018wub,Fernandes:2019rez} and charged stars \cite{Minamitsuji:2021vdb}.
While the extension of spontaneous scalarization of NSs to the vector-field sector has been considered in the literature \cite{Annulli:2019fzq,Ramazanoglu:2017xbl,Ramazanoglu:2019gbz,Kase:2020yhw,Minamitsuji:2020pak}, it has been argued that these models generically suffer from ghost or gradient instabilities \cite{Garcia-Saenz:2021uyv,Silva:2021jya,Demirboga:2021nrc}. In the original model of Damour and Esposito-Farese based on the nonminimal coupling $F(\phi)R$ \cite{Damour:1993hw}, the necessary conditions for the occurrence of NS scalarization are given by $F_{,\phi}(0)=0$ and $F_{,\phi \phi}(0)>0$, where $F_{,\phi}={\rm d}F/{\rm d}\phi$ and $F_{,\phi \phi}={\rm d}^2F/{\rm d}\phi^2$. In general, there is a nonvanishing scalar-field branch $\phi(r) \neq 0$ that depends on the radial distance $r$ besides a GR branch $\phi(r)=0$. The effective field mass squared around $\phi=0$ is given by $m_{\rm eff}^2(0)=-M_{\rm pl}^2 F_{,\phi \phi}(0)R_0/2$, where $M_{\rm pl}$ is the reduced Planck mass and $R_0$ is the Ricci scalar at $\phi=0$. In the weak-field backgrounds, the field can stay in the GR branch due to the smallness of $R_0$. Inside extreme compact objects like NSs, the negative mass squared $m_{\rm eff}^2(0)<0$ induced by large values of $R_0$ can trigger a tachyonic instability toward the nontrivial branch $\phi (r)\neq 0$. The typical choice of nonminimal couplings consistent with the first condition $F_{,\phi}(0)=0$ is $F(\phi)={\rm e}^{-\beta \phi^2/(2\Mpl^2)}$, where $\beta$ is a constant. To realize the second condition $F_{,\phi \phi}(0)>0$, i.e., $m_{\rm eff}^2(0)<0$, we require that $\beta<0$. The studies in Refs.~\cite{Harada:1998ge,Novak:1998rk,Silva:2014fca} have shown that spontaneous scalarization can occur for the nonminimal coupling in the range $\beta \le -4.35$, irrespective of the NS equation of state (EOS). 
On the other hand, the binary pulsar measurements of an energy loss through the dipolar radiation have put the bound $\beta \ge -4.5$ \cite{Freire:2012mg,Shao:2017gwu}. Then, the coupling constant $\beta$ is constrained to be in a limited range. If we apply the above nonminimally coupled theory to cosmology, it is known that the scalar field is subject to a tachyonic instability for the negative values of $\beta$ required for the occurrence of spontaneous scalarization \cite{Damour:1992kf,Damour:1993id}. Around $\phi=0$, the effective field mass squared is estimated as $m_{\rm eff}^2 (0) \simeq \beta R_0/2$, so that $m_{\rm eff}^2 (0)<0$ for $\beta<0$ except in the radiation-dominated era (where $R_0=0$). During inflation, in which the Hubble expansion rate $H$ is nearly constant, we have $m_{\rm eff}^2 (0) \simeq 6 \beta H^2$ and hence a negative coupling of order $\beta \simeq -5$ leads to the exponential growth of $\phi$. This spoils the success of the standard inflationary paradigm. We note that the initial field value at the onset of inflation cannot be tuned to 0 due to the presence of scalar-field perturbations $\delta \phi$. Indeed, the perturbations $\delta \phi$ relevant to the scales of observed CMB temperature anisotropies are exponentially amplified after the Hubble radius crossing during inflation. The scalar field also increases during the matter and dark energy dominated epochs. Hence the GR solution $\phi=0$ is not a cosmological attractor and the Solar-system constraints would be easily violated. A similar instability of cosmological solutions is present for spontaneously scalarized BHs realized by a scalar Gauss-Bonnet coupling \cite{Anson:2019uto,Franchini:2019npi,Antoniou:2020nax}. There have been several attempts to reconcile NS spontaneous scalarization with cosmology. One scenario is to take into account higher-order polynomial corrections (like ${\cal O}(\phi^4)$) to the nonminimal coupling function $F(\phi)$ \cite{Anderson:2016aoi}.
The other possibility is to introduce a coupling between $\phi$ and the inflaton $\chi$ of the form $g^2 \phi^2 \chi^2/2$, where $g$ is a coupling constant~\cite{Anson:2019ebp}. Then the effective field mass squared $m_{\rm eff}^2 (\phi)$ can be largely positive during inflation, in which case $\phi$ decreases exponentially toward 0. After the end of the radiation-dominated era, the field $\phi$ starts to increase due to the tachyonic mass. Provided that the suppression of $\phi$ during inflation is sufficiently strong, however, today's value of $\phi$ can be below the limit constrained by Solar-system experiments. In these two scenarios, the coupling $\beta$ still needs to be in a limited negative range. There is also another mechanism based on a disformal coupling between the scalar field $\phi$ and matter \cite{Silva:2019rle}; in this case, however, it was shown that the large disformal coupling required for the cosmological evolution toward $\phi=0$ works to suppress the occurrence of spontaneous scalarization. In this paper, we propose a new mechanism for NS scalarization realized by the presence of a self-interacting potential of the form $V(\phi)=m^2 f_B^2 [1+\cos(\phi/f_B)]$ besides the nonminimal coupling ${\rm e}^{-\beta \phi^2/(2\Mpl^2)}R$, where $m$ and $f_B$ are constants with mass dimension. In this setup, the field $\phi$ is in a ground state at the vacuum expectation value (VEV) $\phi_v=\pi f_B$ in the asymptotic region far away from a NS. At $\phi=0$ the bare potential $V(\phi)$ has a negative mass squared $-m^2$, but a positive nonminimal coupling constant ($\beta>0$) gives rise to a positive contribution $\beta R_0/2$ to the effective mass squared, as $m_{\rm eff}^2(0)=-m^2+\beta R_0/2$. In the high-curvature region with $m_{\rm eff}^2(0)>0$, the field $\phi$ can stay in the vicinity of $\phi=0$.
The transition to the region close to $\phi=0$ should occur inside the NS for the coupling $\beta>{\cal O}(0.1)$ with $m={\cal O}(10^{-11}\,{\rm eV})$, for which the Compton radius $m^{-1}={\cal O} (10\,{\rm km})$ corresponds to the typical size of NSs. We will show the existence of field profiles connecting the internal solution ($\phi \simeq 0$) to the external solution far outside the star ($\phi \simeq \phi_v$). Note that a conceptually similar model was proposed in Ref.~\cite{Babichev:2022djd}, where scalarized BHs were induced by the scalar Gauss-Bonnet coupling with a symmetry-breaking potential. In our model, the nonminimal coupling constant $\beta$ is positive and the effective mass squared $m_{\rm eff}^2(0)=-m^2+\beta R_0/2$ at $\phi=0$ is positive in the early cosmological epoch satisfying $\beta R_0/2>m^2$. Then, during inflation, the scalar field $\phi$ can decrease exponentially toward 0. After $\beta R_0/2$ drops below $m^2$ in the radiation-dominated era, the field $\phi$ exhibits tachyonic growth toward the ground state at $\phi=\phi_v$. Indeed, we will show that the field settles down at the potential minimum by today without violating a successful cosmic expansion history. After $\phi$ starts to oscillate around $\phi_v$, the same field can also work as the source for (a portion of) cold dark matter (CDM). In weak gravitational objects like the Sun, the Ricci scalar $R$ inside the star is small in comparison to that in NSs and hence $m_{\rm eff}^2$ is negative in the vicinity of $\phi=0$. In such cases, the scalar field is in the region close to $\phi=\phi_v$ both inside and outside the star. We will obtain the field profile and the post-Newtonian parameter and put bounds on the scale $f_B$ from Solar-system tests of local gravity.
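As a quick unit-conversion cross-check (ours, not part of the original text), the stated correspondence between $m={\cal O}(10^{-11}\,{\rm eV})$ and a Compton radius of ${\cal O}(10\,{\rm km})$ can be verified numerically:

```python
# Compton radius m^{-1} for a scalar of mass m = 1e-11 eV, converting
# inverse energy to length with hbar*c = 197.327 MeV fm.
hbar_c_eV_m = 197.326980e6 * 1e-15   # hbar*c in eV*m
m_eV = 1e-11                         # scalar mass in eV
compton_m = hbar_c_eV_m / m_eV       # Compton radius in metres

print(f"Compton radius: {compton_m/1e3:.1f} km")   # ~ 20 km
```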
Using these constrained values of $f_B$, we numerically construct scalarized NS solutions with nontrivial profiles of the scalar field and compute the effect on the Arnowitt-Deser-Misner (ADM) mass and radius of NSs as well as modifications of gravity around the surface of the star. We will show that the difference of the ADM mass of scalarized NSs from that in GR can exceed 10\,\%. The modified gravitational interaction induced in our scenario may be detectable in future observations of gravitational waves and other measurements in the strong-field regime. This paper is organized as follows. In Sec.~\ref{modelsec}, we present our new model of NS scalarization and discuss its basic structure. In Sec.~\ref{cossec}, we study the cosmological dynamics of the nonminimally coupled scalar field from the inflationary epoch to today and show that the field is eventually stabilized at $\phi=\phi_v$ without spoiling the cosmic expansion history. In Sec.~\ref{weaksec}, we derive the field profile for a constant density star on weak gravitational backgrounds and place bounds on model parameters from Solar-system constraints. In Sec.~\ref{scasec}, we obtain the scalar-field solution for NSs and study its effect on the modification of gravitational interactions. Sec.~\ref{consec} is devoted to conclusions.
\section{Models with NS scalarizations} \label{modelsec} We consider theories given by the action \ba {\cal S}= \int {\rm d}^4 x \sqrt{-g_J} \left[ \frac{\Mpl^2}{2}F(\phi) R+\omega (\phi)X -V(\phi) \right] +{\cal S}_m (g_{\mu \nu}, \Psi_m)\,, \label{action} \ea where $g_J$ is the determinant of the metric tensor $g_{\mu \nu}$, $M_{\rm pl}$ is a constant having the dimension of mass, $F$ is a function of $\phi$, $R$ is the Ricci scalar, $X=-(1/2)g^{\mu \nu} \nabla_{\mu} \phi \nabla_{\nu} \phi$ is a scalar kinetic term with the covariant derivative operator $\nabla_{\mu}$, $V(\phi)$ is a scalar potential, and \be \omega(\phi)=\left( 1-\frac{3\Mpl^2 F_{,\phi}^2}{2F^2} \right)F\,, \label{omega} \ee with $F_{,\phi}\equiv {\rm d} F/{\rm d}\phi$ and so on. The action ${\cal S}_m$ incorporates the contributions of matter fields $\Psi_m$ inside the NS. Note that in the case $F(\phi)=1$ the constant $\Mpl$ represents the reduced Planck mass ($\Mpl=2.435\times 10^{18}$ GeV). The equations of motion for the metric and scalar field are given, respectively, by \ba && \label{metric_eom} \Mpl^2 \left[ F(\phi)G_{\mu\nu} +\Box F (\phi)g_{\mu\nu} -\nabla_\mu \nabla_\nu F(\phi) \right] -\omega(\phi) \left( \nabla_\mu\phi\nabla_\nu \phi +X g_{\mu\nu} \right) + g_{\mu\nu}V(\phi) = T_{\mu\nu}, \\ && \label{scalar_eom} \omega(\phi)\Box\phi -\omega_{,\phi} (\phi)X +\frac{\Mpl^2}{2}F_{,\phi}(\phi)R -V_{,\phi}(\phi) =0, \ea where $T_{\mu\nu}$ represents the energy-momentum tensor of matter in the Jordan frame defined by \be T_{\mu\nu} \equiv -\frac{2}{\sqrt{-g_J}} \frac{\delta {\cal S}_m}{\delta g^{\mu\nu}}. \ee Acting with the operator $\nabla^\mu$ on Eq.~\eqref{metric_eom} and using Eq.~\eqref{scalar_eom}, we obtain the conservation law of matter as \be \label{eq_continuity} \nabla^\mu T_{\mu\nu}=0.
\ee We consider the nonminimal coupling chosen by Damour and Esposito-Farese \cite{Damour:1993hw} \be F(\phi)={\rm e}^{-\beta \phi^2/(2M_{\rm pl}^2)}\,, \label{Fnon} \ee where $\beta$ is a dimensionless constant. {}From Eq.~\eqref{omega}, we have \be \omega(\phi)=\left( 1- \frac{3 \beta^2 \phi^2}{2 \Mpl^2} \right)F\,. \ee Under a conformal transformation of the action (\ref{action}) to the Einstein frame, the theory with $V(\phi)=0$ reduces to the one originally advocated in Ref.~\cite{Damour:1993hw} (see the Appendix of Ref.~\cite{Kase:2020yhw}). In the following, we will perform all the analysis with the Jordan-frame action (\ref{action}). Let us first revisit the case of standard NS spontaneous scalarization in the absence of the scalar potential, i.e., \be V(\phi)=0\,. \ee Then, there is the branch $\phi=0$ as one of the solutions to Eq.~(\ref{scalar_eom}). For this solution, Eq.~(\ref{metric_eom}) reduces to the Einstein equation $\Mpl^2 G_{\mu \nu ({\rm GR})}=T_{\mu \nu ({\rm GR})}$ in GR. In regions of large curvature $R$, it is possible to have a nontrivial branch with $\phi \neq 0$ besides the GR branch $\phi=0$. If we consider a small perturbation $\delta \phi$ about the GR solution, the perturbation obeys $\square \delta \phi -m_{\rm eff}^2 (0) \delta \phi=0$, where $m_{\rm eff}^2(0)=-M_{\rm pl}^2 F_{,\phi \phi}(0)R_0/2$ and $R_0$ is the Ricci scalar in the GR background at $\phi=0$. Provided that $F_{,\phi \phi}(0)>0$ with $R_0>0$, the GR branch is subject to a tachyonic instability due to the negative mass squared $m_{\rm eff}^2(0)$. For $\beta<0$, there is a possibility for NSs to acquire a scalar hair after the spontaneous growth of $\phi$ toward the other nontrivial branch, a phenomenon dubbed spontaneous scalarization. As mentioned in Sec.~\ref{sec1}, the nonminimal coupling constant $\beta$ needs to be in the limited range $-4.5 \le \beta \le -4.35$ in the original model of Ref.~\cite{Damour:1993hw}.
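As a short symbolic consistency check (ours), the specialized kinetic function quoted above follows from inserting the coupling (Fnon) into the definition (omega):

```python
import sympy as sp

phi, beta, Mpl = sp.symbols('phi beta M_pl', positive=True)
F = sp.exp(-beta*phi**2/(2*Mpl**2))                    # Eq. (Fnon)
omega = (1 - 3*Mpl**2*sp.diff(F, phi)**2/(2*F**2))*F   # Eq. (omega)
expected = (1 - 3*beta**2*phi**2/(2*Mpl**2))*F
residual = sp.simplify(omega - expected)
print("omega - expected =", residual)   # 0
```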
Here, the upper limit of $\beta$ arises from the requirement for the occurrence of spontaneous scalarization \cite{Harada:1998ge,Novak:1998rk}, whereas the lower bound comes from binary pulsar measurements \cite{Freire:2012mg,Shao:2017gwu}. For such negative values of $\beta$, there is a tachyonic instability of the field $\phi$ on the cosmological background and hence the GR solution $\phi=0$ is not an attractor. This instability is particularly prominent during the inflationary epoch, destroying the background evolution. Then, the successful cosmic expansion history is spoiled by the negative nonminimal coupling with $V(\phi)=0$. The story is different for a positive nonminimal coupling with a self-interacting scalar potential $V(\phi)$. For concreteness, we consider the potential of a pseudo Nambu-Goldstone boson (pNGB), which is given by \be V(\phi)=m^2 f_B^2 \left[ 1+\cos \left( \frac{\phi}{f_B} \right) \right]\,, \label{Vphi} \ee where $m$ and $f_B$ are constants having the dimension of mass. This potential has a reflection symmetry with respect to $\phi=0$. Choosing either the ground state at $\phi=\pi f_B$ or that at $\phi=-\pi f_B$ breaks the reflection symmetry. We will choose the positive VEV $\phi_v=\pi f_B$ as the symmetry-breaking ground state. Around $\phi=0$, the potential (\ref{Vphi}) has a negative mass squared $-m^2$. Since the nonminimal coupling $\Mpl^2 F(\phi)R/2$ is present, the squared effective mass of the field at $\phi=0$ is \be \label{m0} m_{\rm eff}^2(0)=-m^2+\frac{\beta}{2}R_0\,. \ee Due to the largeness of $R_0$ in high-density regions, a positive nonminimal coupling constant $\beta$ can lead to the symmetry restoration at $\phi=0$. This occurs if $\beta R_0/2$ exceeds $m^2$. In low-density regions, the effect of $\beta R_0/2$ on $m_{\rm eff}^2(0)$ should be unimportant relative to the contribution $-m^2$. Hence the scalar field would acquire the VEV $\phi_v=\pi f_B$ on weak gravitational backgrounds.
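The stated properties of the potential (Vphi), namely a tachyonic bare mass squared $-m^2$ at $\phi=0$ and a vanishing minimum at $\phi_v=\pi f_B$, can be verified symbolically (our sketch):

```python
import sympy as sp

phi, m, fB = sp.symbols('phi m f_B', positive=True)
V = m**2*fB**2*(1 + sp.cos(phi/fB))   # Eq. (Vphi)

mass2_at_0 = sp.diff(V, phi, 2).subs(phi, 0)        # bare mass squared at phi = 0
slope_at_vev = sp.diff(V, phi).subs(phi, sp.pi*fB)  # extremum at phi_v = pi f_B
V_at_vev = V.subs(phi, sp.pi*fB)                    # potential vanishes at the VEV
print(mass2_at_0, slope_at_vev, V_at_vev)   # -m**2 0 0
```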
This scalar-field configuration is different from that arising from standard NS spontaneous scalarization with $V(\phi)=0$, in that the scalar field is in the symmetry-restored state $\phi=0$ around the center of the star, while $\phi$ approaches the asymptotic value $\phi_v=\pi f_B$ far away from the star. For a star with the mean density $\rho$ and pressure $P$, the Ricci scalar $R$ at $\phi=0$ is of order $R\simeq (\rho-3P)/\Mpl^2$. Then, the critical value of $\beta$ corresponding to $m_{\rm eff}^2(0)=0$ can be estimated as \be \beta_c=\frac{2 m^2 \Mpl^2}{\rho-3P} =0.28 \left( \frac{10^{15}~{\rm g/cm}^3}{\rho-3P} \right) \left( \frac{m}{10^{-11}~{\rm eV}} \right)^2\,. \label{betac} \ee Note that, for $m={\cal O}(10^{-11}~{\rm eV})$, the Compton radius of $\phi$ is of ${\cal O}(10\,{\rm km})$, i.e., the typical size of NSs. For $\beta>\beta_c$ we have $m_{\rm eff}^2(0)>0$, and the scalar field can be in the symmetry-restored state at $\phi=0$. For $\beta<\beta_c$, the state at $\phi=0$ becomes unstable and hence the solution should approach the ground state at $\phi=\phi_v$. The typical central density of NSs is around $\rho=10^{15}~{\rm g/cm}^3$, so a mass of order $m=10^{-11}~{\rm eV}$ gives rise to a critical coupling in the range $\beta_c=0.1\sim 1$. On the Friedmann-Lema\^itre-Robertson-Walker (FLRW) cosmological background, the scalar field can be in the state $\phi=0$ in the early Universe satisfying the condition $\beta R_0/2>m^2$. After the term $\beta R_0/2$ drops below $m^2$ along the cosmic expansion, however, the field should evolve to the ground state at $\phi=\phi_v$ since $m_{\rm eff}^2(0)$ becomes negative. In Sec.~\ref{cossec}, we study cosmology in the above model in detail and show that $\phi$ sufficiently approaches the potential minimum by today.
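The numerical prefactor in the estimate (betac) can be reproduced with textbook unit conversions (our cross-check; the conversion constants are standard values, not from the paper):

```python
# beta_c = 2 m^2 Mpl^2 / (rho - 3P), Eq. (betac), with m = 1e-11 eV and
# rho - 3P = 1e15 g/cm^3, everything converted to natural units (GeV).
hbar_c_GeV_cm = 197.326980e-3 * 1e-13   # hbar*c in GeV*cm
GeV_per_gram = 1.0/1.78266192e-24       # 1 gram in GeV
Mpl = 2.435e18                          # reduced Planck mass, GeV
m = 1e-20                               # 1e-11 eV in GeV

rho_GeV4 = 1e15 * GeV_per_gram * hbar_c_GeV_cm**3   # 1e15 g/cm^3 in GeV^4
beta_c = 2.0 * m**2 * Mpl**2 / rho_GeV4
print(f"beta_c = {beta_c:.2f}")   # ~ 0.28
```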
\section{Cosmology with positive nonminimal coupling} \label{cossec} We study the cosmological dynamics of the scalar field $\phi$ from the inflationary epoch to today for the theory given by the action (\ref{action}). A spatially-flat FLRW background is given by the line element \be {\rm d}s^2 =g_{\mu\nu}{\rm d}x^\mu {\rm d}x^\nu =-{\rm d}t^2+a^2(t) \delta_{ij} {\rm d}x^i {\rm d}x^j\,, \ee where the scale factor $a(t)$ depends on the cosmic time $t$. Then, the gravitational and scalar-field equations of motion are \ba & & 3F H^2 \Mpl^2= -3\Mpl^2 H F_{,\phi}\dot{\phi} +\frac{1}{2} \omega \dot{\phi}^2+V+\rho\,, \label{cosmo1}\\ & & F \left( 2\dot{H}+3H^2 \right) \Mpl^2= -\Mpl^2 \left[ F_{,\phi} (\ddot{\phi}+2H\dot{\phi}) +F_{,\phi\phi}\dot{\phi}^2 \right] -\frac{1}{2}\omega \dot{\phi}^2+V-P\,, \label{cosmo2}\\ & & \ddot{\phi}+3H \dot{\phi} -\frac{3\Mpl^2F_{,\phi}}{\omega} \left(\dot{H}+2H^2\right) +\frac{\omega_{,\phi}\dot{\phi}^2}{2\omega} +\frac{V_{,\phi}}{\omega}=0\,, \label{cosmo3} \ea where $\rho$ and $P$ are the density and pressure of the inflaton field and/or perfect fluids, $H=\dot{a}/a$ is the Hubble parameter, a `dot' represents the derivative with respect to $t$, and $\omega_{,\phi}\equiv {\rm d}\omega/{\rm d}\phi$, $V_{,\phi}\equiv {\rm d} V/{\rm d}\phi$, and so on. Note that $F_{,\phi}= -\beta \phi F/\Mpl^2$ and $F_{,\phi\phi}=\beta (\beta \phi^2-\Mpl^2)F/\Mpl^4$, and in the regime $|\phi|\ll \Mpl$, $\omega_{,\phi}/\omega \simeq -\beta(1+3\beta)\phi/\Mpl^2$. \subsection{Evolution during inflation and reheating} To study the cosmological dynamics during inflation, we incorporate a canonical inflaton field $\chi$ with the potential $U(\chi)$. Then, we have $\rho=\dot{\chi}^2/2+U(\chi)$ and $P=\dot{\chi}^2/2-U(\chi)$ in Eqs.~(\ref{cosmo1}) and (\ref{cosmo2}). The inflaton field obeys the continuity equation $\dot{\rho}+3H (\rho+P)=0$, i.e., \be \ddot{\chi}+3H\dot{\chi}+U_{,\chi}=0\,, \label{chieq} \ee where $U_{,\chi}\equiv {\rm d} U/{\rm d}\chi$.
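The derivative identities quoted above for $F_{,\phi}$, $F_{,\phi\phi}$, and the small-field limit of $\omega_{,\phi}/\omega$ can be checked symbolically (our sketch, with $M$ standing in for $\Mpl$):

```python
import sympy as sp

phi, beta, M = sp.symbols('phi beta M', positive=True)
F = sp.exp(-beta*phi**2/(2*M**2))
omega = (1 - 3*beta**2*phi**2/(2*M**2))*F

# F_{,phi} = -beta phi F / M^2 and F_{,phiphi} = beta (beta phi^2 - M^2) F / M^4
assert sp.simplify(sp.diff(F, phi) + beta*phi*F/M**2) == 0
assert sp.simplify(sp.diff(F, phi, 2) - beta*(beta*phi**2 - M**2)*F/M**4) == 0

# leading small-field behaviour: omega_{,phi}/omega ~ -beta (1 + 3 beta) phi / M^2
ratio = sp.series(sp.diff(omega, phi)/omega, phi, 0, 2).removeO()
print(ratio)
```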
The kinetic and potential energy of the field $\phi$ should be suppressed relative to $U(\chi)$ during inflation. Let us consider the typical Hubble scale of inflation of order $H \sim 10^{14}$~GeV. Since $V(\phi)$ is at most of order $m^2 f_{B}^2$, we have $V(\phi) \lesssim m^2 f_B^2 \ll H^2 \Mpl^2 \sim U(\chi)$ for the mass scale $m \sim 10^{-11}$~eV with $f_B \lesssim \Mpl$. Provided that the condition $|\omega_{,\phi}|\dot{\phi}^2 \ll H^2\Mpl^2|F_{,\phi}|$ holds together with the slow-roll condition $|\dot{H}| \ll H^2$, Eq.~(\ref{cosmo3}) is approximately given by \be \ddot{\phi}+3H \dot{\phi}+\frac{1}{\omega} \left[ 6 \beta F H^2 \phi -m^2 f_B \sin \left( \frac{\phi}{f_B} \right)\right] \simeq 0\,. \label{phiinf} \ee We are interested in the coupling range $\beta \gtrsim 0.1$ with $m$ of order $10^{-11}$~eV. For $\phi \gtrsim f_B$, since $H \gg m$, the term $6 \beta F H^2 \phi$ dominates over $m^2 f_B \sin (\phi/f_B)$ during inflation. This is also the case for $0<\phi \ll f_B$ as $m^2 f_B \sin (\phi/f_B) \simeq m^2 \phi$ in this regime. Then, during inflation, Eq.~(\ref{phiinf}) approximately reduces to \be \ddot{\phi}+3H \dot{\phi}+\frac{6 \beta F H^2}{\omega} \phi \simeq 0\,, \label{phiin} \ee and the contribution of the pNGB scalar potential to the background Eqs.~(\ref{cosmo1})-(\ref{cosmo2}) can be completely neglected. Provided that the scalar field is in the range $\beta \phi^2/\Mpl^2 \ll 1$, we have $F \simeq 1$ and $\omega \simeq 1$. On using the approximation that $H$ is constant during inflation, the dominant solution to Eq.~(\ref{phiin}) is given by \begin{numcases}{\phi \propto } \exp \left( -\frac{3}{2} H t \right) \cos(\Omega_0 t+\theta_0) & ({\rm if}~$\beta>3/8)$\,, \label{dampos} \\ \exp \left[ -\frac{3}{2} \left( 1-\sqrt{1-\frac{8}{3}\beta} \right)H t \right] & ({\rm if}~$\beta < 3/8)$\,,\label{phidec} \end{numcases} where $\Omega_0=\sqrt{6\beta-9/4}\,H$ and $\theta_0$ is an arbitrary constant. 
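The two branches above follow from the characteristic roots of Eq. (phiin) with $F\simeq\omega\simeq 1$ and constant $H$; a quick numerical check in units $H=1$ (ours):

```python
import numpy as np

def char_roots(beta, H=1.0):
    """Roots lambda of lambda^2 + 3 H lambda + 6 beta H^2 = 0, from Eq. (phiin)."""
    return np.roots([1.0, 3.0*H, 6.0*beta*H**2])

# beta = 1 > 3/8: complex roots, decay rate 3H/2, frequency Omega_0 = sqrt(6b - 9/4) H
r = char_roots(1.0)
print("decay rate:", -r[0].real)        # ~ 1.5
print("Omega_0/H :", abs(r[0].imag))    # ~ 1.936

# beta = 0.1 < 3/8: slowest decay rate (3/2)(1 - sqrt(1 - 8*beta/3)) H
print("decay rate:", -max(char_roots(0.1).real))   # ~ 0.215

# beta = -1: growing mode (3/2)(sqrt(1 - 8*beta/3) - 1) H
print("growth rate:", max(char_roots(-1.0).real))  # ~ 1.372
```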
For $\beta>3/8$, the field $\phi$ exhibits a damped oscillation with the amplitude rapidly decreasing as $|\phi| \propto \exp (-3H t/2)$. If the total number of e-foldings during inflation is $N=\int_0^t H{\rm d}t \simeq Ht=60$, the amplitude of $\phi$ at the end of inflation is $8 \times 10^{-40}$ times as small as that at the onset of inflation. For $0<\beta<3/8$, $\phi$ decreases without oscillations according to Eq.~(\ref{phidec}). If $\beta<0$, the scalar field increases as $\phi \propto \exp[(3/2) (\sqrt{1-8\beta/3}-1)Ht]$. As the inflaton potential, we consider the $\alpha$-attractor type given by \be U(\chi)=\frac{3}{4}\alpha M^2 \Mpl^2 \left[ 1-\exp \left( -\sqrt{\frac{2}{3\alpha}} \frac{\chi}{\Mpl} \right) \right]^2\,, \label{Vchi} \ee where $\alpha$ is a positive constant \cite{Kallosh:2013yoa}. For $\alpha=1$, the potential (\ref{Vchi}) is equivalent to that of Starobinsky inflation \cite{Starobinsky:1980te} in the Einstein frame \cite{DeFelice:2010aj}. The field $\chi$ at the end of inflation can be determined by the condition $\epsilon_V=(\Mpl^2/2)(U_{,\chi}/U)^2=1$, i.e., $\chi_f=0.940 \Mpl$. Cosmic acceleration occurs in the region where $\chi \gtrsim \Mpl$, which is followed by the reheating stage driven by the oscillation of $\chi$ around $\chi=0$. {}From the Planck normalization of curvature perturbations generated during inflation, the mass $M$ is constrained to be around $M \simeq 10^{-5} \Mpl$. In our numerical simulations we will choose the potential (\ref{Vchi}) with $\alpha=1$, but the evolution of $\phi$ during inflation and reheating is similar for any other slow-roll inflaton potential which can be approximated by $U(\chi) \simeq M^2 \chi^2/2$ in the vicinity of $\chi= 0$. In Fig.~\ref{fig1}, we plot the evolution of $|\phi|/\Mpl$ during inflation and reheating for $\beta=1, 0.1, -1$ with $m=10^{-11}$~eV and $M=10^{-5} \Mpl$.
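The quoted value $\chi_f=0.940\,\Mpl$ follows analytically from $\epsilon_V=1$ for the $\alpha=1$ potential (Vchi); a short check (ours), in units $\Mpl=1$:

```python
import math

# eps_V = (1/2)(U_chi/U)^2 for U ~ [1 - exp(-a chi)]^2 with a = sqrt(2/3).
# Writing u = exp(-a chi), the condition eps_V = 1 reads 2 a u/(1 - u) = sqrt(2),
# hence u = 1/(1 + sqrt(2) a) and chi_f = -ln(u)/a.
a = math.sqrt(2.0/3.0)
u = 1.0/(1.0 + math.sqrt(2.0)*a)
chi_f = -math.log(u)/a
print(f"chi_f = {chi_f:.3f} Mpl")   # 0.940
```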
The initial conditions are chosen to be $\chi_i=5.365 \Mpl$, $\dot{\chi}_i=0$, $\phi_i=0.5 \Mpl$, and $\dot{\phi}_i=0$. For $\beta=1$, we can confirm that the amplitude of $\phi$ during inflation decreases as $|\phi| \propto \exp(-3Ht/2)$ with oscillations. In this simulation the number of e-foldings acquired during inflation is $N \simeq 60$, so the amplitude of $\phi$ at the end of inflation is of order $|\phi_f| \simeq |\phi_i| \exp(-90) \simeq 10^{-40}\Mpl$. This rapid decrease of $\phi$ toward 0 is the outcome of a positive mass squared larger than $H^2$ induced by the nonminimal coupling with $\beta>3/8$. Due to the strong suppression of $\phi$, the dynamics of inflation driven by the $\chi$-field potential energy $U(\chi)$ is not affected by the presence of $\phi$. For $\beta=0.1$, the analytic estimation (\ref{phidec}) shows that the field $\phi$ decreases as $|\phi| \propto \exp(-0.215Ht)$, so that $|\phi_f| \simeq 10^{-6}\Mpl$ at the end of inflation. Even in this case, the dynamics of inflation is hardly modified by the field $\phi$. When $\beta=-1$, the field $\phi$ grows as $\phi \propto \exp(1.372Ht)$ from the onset of inflation, and slow-roll inflation is prevented by the rapid increase of $\dot{\phi}^2$ (see Fig.~\ref{fig1}). In particular, the epoch of cosmic acceleration soon comes to an end for a negative coupling of order $\beta \simeq -5$, as used for the occurrence of spontaneous scalarization with $V(\phi)=0$. In our setup, the presence of the self-interacting potential $V(\phi)$ with a positive nonminimal coupling $\beta$ allows the possibility of realizing a positive effective field mass squared around $\phi=0$. As discussed above, for $\beta>{\cal O}(0.1)$, the field $\phi$ decreases toward the local minimum of its effective potential ($\phi=0$) during inflation. After inflation, the inflaton field $\chi$ should decay into radiation.
To study the dynamics of $\phi$ during reheating, we incorporate the Born decay term $\Gamma \dot{\chi}$ in Eq.~(\ref{chieq}) as \be \ddot{\chi}+ \left( 3H +\Gamma \right) \dot{\chi} +U_{,\chi}=0\,, \label{dchieq} \ee where $\Gamma$ is a constant. The radiation density $\rho_r$ obeys the differential equation \be \dot{\rho}_r +4H \rho_r=\Gamma \dot{\chi}^2\,. \label{rhor} \ee The energy density $\rho$ and pressure $P$ in Eqs.~(\ref{cosmo1}) and (\ref{cosmo2}) should also be modified to $\rho=\dot{\chi}^2/2+U(\chi)+\rho_r$ and $P=\dot{\chi}^2/2-U(\chi)+\rho_r/3$, respectively. We numerically solve Eqs.~(\ref{cosmo1})-(\ref{cosmo2}) and (\ref{dchieq})-(\ref{rhor}) by using the field values $\chi_f$, $\phi_f$, and their time derivatives at the end of inflation as the initial conditions of the reheating period. We take the radiation into account from the end of inflation and integrate the background equations of motion until the time at which the inflaton energy density drops below $\rho_r$. For a mass $m$ of order $10^{-11}$~eV the condition $m^2 f_B^2 \ll H^2 \Mpl^2$ is satisfied in the standard reheating scenario, and it is a good approximation to neglect the contributions of the potential energy $V(\phi)$ to the background equations of motion. The inflaton potential is approximated as $U(\chi) \simeq M^2 \chi^2/2$ around $\chi=0$. The reheating stage driven by the oscillating $\chi$ field corresponds to a temporal matter era with $a \propto t^{2/3}$ and $H=2/(3t)$. As long as the field $\phi$ sufficiently approaches $0$ during inflation, Eq.~(\ref{cosmo3}) approximately reduces to \be \ddot{\phi}+\frac{2}{t} \dot{\phi}+\frac{2\beta}{3t^2}\phi \simeq 0\,.
\ee The dominant solution to this equation is given by \begin{numcases}{\phi \propto } t^{-1/2} \cos \left( \sqrt{\frac{8\beta-3}{12}}\ln (Mt)+\theta_0 \right) & ({\rm if}~$\beta>3/8)$\,, \label{damposre} \\ t^{-(1-\sqrt{1-8\beta/3})/2} & ({\rm if}~$\beta < 3/8)$\,.\label{phidecre} \end{numcases} The time $t_f$ at the beginning of reheating is related to the Hubble parameter at the end of inflation $H_f$, as $t_f \simeq 1/H_f$. The reheating period ends around the time $t_R \simeq 1/\Gamma$, after which the energy density of radiation dominates over that of the inflaton field $\chi$. Since the evolution of $\phi$ during inflation is given by Eqs.~(\ref{dampos})-(\ref{phidec}), the amplitude of $\phi$ at which the radiation-dominated epoch commences can be estimated as \begin{numcases}{|\phi_R|=} |\phi_i| \exp \left(-\frac{3}{2}N \right) \left( \frac{\Gamma}{H_f} \right)^{1/2}& ({\rm if}~$\beta>3/8)$\,, \label{phiR1} \\ |\phi_i| \exp \left[ -\frac{3}{2} \left( 1-\sqrt{1-\frac{8}{3}\beta} \right)N \right] \left( \frac{\Gamma}{H_f} \right)^{( 1-\sqrt{1-8\beta/3})/2} & ({\rm if}~$0<\beta < 3/8)$\,,\label{phiR2} \end{numcases} where $\phi_i$ is the initial value of $\phi$ at the onset of inflation and $N$ is the total number of e-foldings during inflation. Since $\Gamma/H_f<1$, the amplitude of $\phi$ further decreases during the reheating epoch, but the suppression of $|\phi|$ is much less significant compared to the inflationary period. For $\beta>3/8$, $|\phi_R|$ does not depend on the coupling constant $\beta$. The numerical simulation of Fig.~\ref{fig1} corresponds to the decay constant $\Gamma=10^8$~GeV. The Hubble parameter around the end of inflation is of order $H_f=0.1M \simeq 10^{-6}\Mpl$. Applying the estimations (\ref{phiR1}) and (\ref{phiR2}) to $\beta=1$ and $\beta=0.1$, we obtain $|\phi_{R}| \simeq 3 \times 10^{-42}\Mpl$ and $|\phi_{R}| \simeq 6 \times 10^{-7}\Mpl$, respectively, whose orders agree with the numerical results.
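The quoted values of $|\phi_R|$ can be reproduced directly from Eqs.~(\ref{phiR1}) and (\ref{phiR2}) with the stated parameters; a minimal sketch, assuming the numerical value $\Mpl \simeq 2.4\times10^{18}$~GeV for the reduced Planck mass:

```python
import math

Mpl_GeV = 2.4e18            # reduced Planck mass (assumed numerical value)
Gamma   = 1.0e8             # decay constant in GeV, as in the text
H_f     = 1.0e-6 * Mpl_GeV  # Hubble scale at the end of inflation
N, phi_i = 60, 0.5          # e-foldings and initial amplitude (in Mpl units)

def phi_R(beta):
    """Amplitude of phi (in Mpl units) at the onset of the radiation era."""
    x = Gamma / H_f
    if beta > 3/8:                             # Eq. (phiR1): oscillatory regime
        return phi_i * math.exp(-1.5*N) * math.sqrt(x)
    p = (1 - math.sqrt(1 - 8*beta/3)) / 2      # Eq. (phiR2): overdamped regime
    return phi_i * math.exp(-3*p*N) * x**p

print(phi_R(1.0), phi_R(0.1))   # of order 1e-42 and 6e-7, respectively
```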
For smaller $\Gamma$, the suppression of $|\phi|$ during reheating is even more significant. Thus, we have shown that the positive nonminimal coupling with $\beta>{\cal O}(0.1)$ leads to values of $\phi_R$ close to 0. This property is mostly attributed to the exponential decrease of $|\phi|$ during inflation. \subsection{Evolution after the onset of radiation era} Let us now discuss the evolution of $\phi$ after the end of reheating, considering the mass scale of order $m={\cal O}(10^{-11}\,{\rm eV})$. During the radiation-dominated epoch, we have $H=1/(2t)$ and hence the term $3(2H^2+\dot{H})$ in Eq.~(\ref{cosmo3}) vanishes. Provided that the field $\phi$ is much smaller than $\Mpl$ and $f_B$, Eq.~(\ref{cosmo3}) is approximately given by \be \ddot{\phi}+3H \dot{\phi}-\left[ m^2 + \frac{\beta (1+3\beta)\dot{\phi}^2}{2\Mpl^2} \right]\phi \simeq 0\,. \label{sphieq} \ee For $\beta>3/8$ the initial field value (\ref{phiR1}) at the onset of the radiation era is as small as $10^{-42}\Mpl$, and we can ignore the second term in the square bracket of Eq.~(\ref{sphieq}) relative to $m^2$. For $0<\beta<3/8$, the field $\phi$ is not necessarily subject to strong suppression during inflation, so it may be possible to satisfy the condition $\beta (1+3\beta)\dot{\phi}^2/(2\Mpl^2) \gg m^2$ at the end of reheating. Even in this case, however, $\dot{\phi}=0$ is a solution to Eq.~(\ref{sphieq}), and hence the field derivative rapidly decreases to reach the region $\beta (1+3\beta)\dot{\phi}^2/(2\Mpl^2) \ll m^2$. Thus, in both cases, the scalar field $\phi$ eventually obeys \be \ddot{\phi}+3H \dot{\phi}-m^2 \phi \simeq 0\,, \label{sphiapeq} \ee which has a tachyonic mass squared $-m^2$ around $\phi=0$ due to the existence of the self-interacting potential $V(\phi)$. Since the condition $H \gg m={\cal O}(10^{-11}\,{\rm eV})$ is satisfied in the early radiation era, $\phi$ is nearly frozen by the Hubble friction.
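The freezing for $H \gg m$ and the subsequent tachyonic growth can be illustrated by integrating Eq.~(\ref{sphiapeq}) in the radiation era ($H=1/(2t)$) in units $m=1$. This is a minimal pure-Python RK4 sketch with illustrative initial time and step size; the late-time growing mode of the modified Bessel solution scales as ${\rm e}^{mt}/t^{3/4}$.

```python
import math

M = 1.0  # tachyonic mass scale (units m = 1)

def deriv(t, phi, dphi):
    # phi'' + (3/(2t)) phi' - M^2 phi = 0   (radiation era: 3H = 3/(2t))
    return dphi, M*M*phi - 1.5*dphi/t

def phi_at(t_end, t0=0.01, dt=1e-3):
    """Integrate from t0 (where H >> M) to t_end with classic RK4."""
    t, phi, dphi = t0, 1.0, 0.0
    while t < t_end - 1e-12:
        h = min(dt, t_end - t)
        k1 = deriv(t, phi, dphi)
        k2 = deriv(t + h/2, phi + h/2*k1[0], dphi + h/2*k1[1])
        k3 = deriv(t + h/2, phi + h/2*k2[0], dphi + h/2*k2[1])
        k4 = deriv(t + h, phi + h*k3[0], dphi + h*k3[1])
        phi  += h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        dphi += h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        t += h
    return phi

frozen = phi_at(0.5)                   # H > M: the field barely moves
ratio  = phi_at(14.0) / phi_at(12.0)   # late times: growing mode e^{Mt}/t^{3/4}
expected = math.exp(2.0) * (12.0/14.0)**0.75
```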
\subsubsection{Growth of the scalar field from the symmetry-restored state} After $H$ drops below the order $m$, $\phi$ starts to increase. During the radiation dominance, the solution to Eq.~(\ref{sphiapeq}) is given by \be \phi=t^{-1/4} \left[ c_1I_{1/4} (mt)+c_2 K_{1/4} (mt) \right]\,, \label{phirad} \ee where $I_{1/4}$ and $K_{1/4}$ are modified Bessel functions of the first and second kinds, respectively. Taking the limit $m t \gg 1$ in Eq.~(\ref{phirad}), there is indeed a growing-mode solution $\phi \propto {\rm e}^{mt}/t^{3/4}$. Since the potential (\ref{Vphi}) has a local minimum at $\phi=\pi f_B$, the field $\phi$ eventually reaches this region and starts to oscillate around $\phi=\pi f_B$. In Fig.~\ref{fig2}, we plot the evolution of $|\phi|/\Mpl$ as a function of $m/H$ for $\beta=1, 0.1$. We choose $m=10^{-11}$~eV and $f_B=1.0 \times 10^{-5}\Mpl$ with the initial conditions of $\phi$ consistent with their values at the end of reheating. For $\beta =1$, the field $\phi$ is nearly frozen with the value of order $10^{-42}\Mpl$ and then starts to grow for $H\lesssim m/3$. Around $H\lesssim m/200$, the field sufficiently approaches the potential minimum and exhibits a damped oscillation around $\phi=\pi f_B$. When $\beta=0.1$, the field starts to evolve for $H\lesssim m/3$ as well, while the approach to $\phi=\pi f_B$ occurs around $H\lesssim m/12$ because of the large initial value of $\phi$ of order $10^{-6}\Mpl$. In the era dominated by the radiation density $\rho_r=\pi^2 g_* T^4/30$, where $g_*$ is the number of relativistic degrees of freedom and $T$ is the temperature, we estimate the temperature $T_m$ at which the field $\phi$ starts to evolve along the potential $V(\phi)$. Using the Friedmann equation $3H^2 \Mpl^2=\rho_r$ with $m=3H$, it follows that \be T_m=\left( \frac{10}{\pi^2 g_*} \right)^{1/4} \sqrt{m \Mpl}\,. \ee For the mass scale $m=10^{-11}$~eV with $g_* \simeq 10$ \cite{Dodelson:2003ft}, we have $T_m \simeq 10^{12}$~K. 
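The estimate $T_m \simeq 10^{12}$~K follows from straightforward unit conversion; a quick cross-check, assuming $\Mpl \simeq 2.435\times10^{27}$~eV and the standard Boltzmann conversion $1~{\rm K} \simeq 8.617\times10^{-5}$~eV:

```python
import math

Mpl = 2.435e27    # reduced Planck mass in eV (assumed value)
m   = 1.0e-11     # scalar mass in eV
g_s = 10.0        # relativistic degrees of freedom
k_B = 8.617e-5    # eV per kelvin

T_m_eV = (10.0/(math.pi**2*g_s))**0.25 * math.sqrt(m*Mpl)
T_m_K  = T_m_eV/k_B   # ~1e12 K, so the field is released well before BBN
```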
Then, the field $\phi$ reaches the potential minimum $\pi f_B$ before the epoch of big-bang nucleosynthesis (BBN). After the Universe enters the matter-dominated epoch, the term $\dot{H}+2H^2$ in Eq.~(\ref{cosmo3}) is nonvanishing, i.e., $\dot{H}+2H^2 \simeq H^2/2$. Since $m^2 \gg H^2$ in this epoch, the effect of the nonminimal coupling on Eq.~(\ref{cosmo3}) is negligible and the field $\phi$ coherently oscillates around $\phi=\pi f_B$ with a decreasing amplitude. This is also the case for the late-time dark energy dominated era, so the field $\phi$ reaches the potential minimum by today. \subsubsection{Oscillation around the potential minimum as CDM} Around $\phi=\pi f_B$, the potential (\ref{Vphi}) is approximated as \be \label{osc_potential} V(\phi) \simeq \frac{1}{2} m^2 (\phi-\pi f_B)^2\,. \ee After the scalar field starts to oscillate about the potential minimum, it behaves as (a portion of) CDM with the energy density decreasing as $\rho_\phi \propto a^{-3}$. Today's field density can be estimated as $\rho_{\phi0} \simeq m^2 f_B^2 a_{\rm CDM}^3$, where $a_{\rm CDM}$ is the scale factor at which the field $\phi$ starts to behave as CDM during the radiation era and the scale factor is normalized as $a_0=1$. Defining the ratio \be r_{\rm CDM}=\frac{m}{H_{\rm CDM}} \ee with $H_{\rm CDM}=H_0 \sqrt{\Omega_{r0}a_{\rm CDM}^{-4}}$, where $H_0$ and $\Omega_{r0}$ are today's Hubble parameter and radiation density parameter respectively, today's field density parameter $\Omega_{\phi 0}=\rho_{\phi 0}/(3\Mpl^2 H_0^2)$ can be estimated as \be \Omega_{\phi 0}=\frac{r_{\rm CDM}^{3/2}}{3} \left( \frac{f_B}{\Mpl} \right)^2 \left( \frac{m}{H_0} \right)^{1/2} \Omega_{r0}^{3/4}\,. \ee If the field $\phi$ is responsible for a part of CDM, we require that $\Omega_{\phi 0} \simeq 0.27 \alpha_{\rm CDM}$, where the constant $\alpha_{\rm CDM}$ represents the energy fraction of $\phi$ to CDM and $\alpha_{\rm CDM}=1$ corresponds to the case that $\phi$ is responsible for all CDM. 
Then, we obtain \be \frac{f_B}{\Mpl} \simeq 30 r_{\rm CDM}^{-3/4} \sqrt{\alpha_{\rm CDM}} \left( \frac{m}{10^{-33}~{\rm eV}} \right)^{-1/4} \,, \label{fBcon} \ee where we used $\Omega_{r0} \simeq 9 \times 10^{-5}$ and $H_0 \simeq 10^{-33}$~eV. If $m=10^{-11}$~eV, then we have $f_B/\Mpl \simeq 9.4 \times 10^{-5} \sqrt{\alpha_{\rm CDM}}\,r_{\rm CDM}^{-3/4}$. Using the value $r_{\rm CDM}=200$ for $\beta=1$, we obtain $f_B/\Mpl \simeq 2 \times 10^{-6} \sqrt{\alpha_{\rm CDM}}$. For $\beta=0.1$ we take the value $r_{\rm CDM}=12$, in which case $f_B/\Mpl \simeq 1 \times 10^{-5} \sqrt{\alpha_{\rm CDM}}$. Under the constraint (\ref{fBcon}), the density parameter of $\phi$ at $a=a_{\rm CDM}$ (which is slightly before the BBN epoch) is as small as \be \Omega_\phi (a=a_{\rm CDM}) \simeq \frac{m^2f_B^2}{3\Mpl^2 H_{\rm CDM}^2} \simeq \frac{f_B^2}{3\Mpl^2} r_{\rm CDM}^2 \simeq 300 r_{\rm CDM}^{1/2} \alpha_{\rm CDM} \left( \frac{m} {10^{-33}{\rm eV}} \right)^{-1/2} = 3\times 10^{-9} r_{\rm CDM}^{1/2} \alpha_{\rm CDM} \,, \ee and hence the BBN is not affected by the presence of the field $\phi$. The relation (\ref{fBcon}) has been derived by assuming that the scalar field $\phi$ behaves as a coherently oscillating CDM by today. If the energy density of $\phi$ decays to that of radiation or some other particle whose density decreases faster than radiation, then it is possible to have larger values of $f_B$ than those constrained by Eq.~(\ref{fBcon}). For example, adding a decay term $\Gamma_{\phi} \dot{\phi}$ to the left hand-side of Eq.~(\ref{cosmo3}) leads to the dissipation of the energy density of $\phi$ before the field reaches the VEV $\phi_v=\pi f_B$. When we study scalarized NS solutions in Sec.~\ref{scasec}, we will allow for the possibility that $f_B/\Mpl$ is larger than the value constrained by Eq.~(\ref{fBcon}). 
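Inverting the expression for $\Omega_{\phi 0}$ reproduces the prefactor $\simeq 30$ in Eq.~(\ref{fBcon}) and the quoted values of $f_B/\Mpl$; a sketch using the stated inputs $H_0 \simeq 10^{-33}$~eV and $\Omega_{r0} \simeq 9\times10^{-5}$:

```python
import math

H0, Om_r, m = 1.0e-33, 9.0e-5, 1.0e-11   # H_0 and m in eV; radiation density parameter

def fB_over_Mpl(r_cdm, alpha=1.0):
    """Invert Omega_phi0 = (r^{3/2}/3)(f_B/Mpl)^2 (m/H0)^{1/2} Om_r^{3/4} = 0.27*alpha."""
    return math.sqrt(3*0.27*alpha) * Om_r**(-3.0/8.0) * r_cdm**(-0.75) * (m/H0)**(-0.25)

prefactor = math.sqrt(3*0.27) * Om_r**(-3.0/8.0)   # ~30, as in Eq. (fBcon)
fB_beta1  = fB_over_Mpl(200.0)   # beta = 1 case (r_CDM = 200): ~2e-6
fB_beta01 = fB_over_Mpl(12.0)    # beta = 0.1 case (r_CDM = 12): ~1e-5
Om_early  = fB_beta1**2/3.0 * 200.0**2   # Omega_phi(a_CDM) ~ 4e-8, negligible at BBN
```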
\section{Weak gravitational objects} \label{weaksec} In this section, we study solutions of the scalar field $\phi$ for compact objects on weak gravitational backgrounds~(like the Sun). For this purpose, we consider a static and spherically symmetric background given by the line element \be \rd s^2 = g_{\mu\nu} {\rm d}x^\mu {\rm d}x^\nu =-f(r) \rd t^{2} +h^{-1}(r) \rd r^{2} + r^{2} \left(\rd \theta^{2} +\sin^{2}\theta\,\rd\varphi^{2} \right)\,, \label{BGmetric} \ee where $f(r)$ and $h(r)$ are functions of the radial coordinate $r$. The scalar field is assumed to be a function of $r$ alone, i.e., $\phi=\phi(r)$. For the matter species inside a star, we consider a perfect fluid described by the mixed energy-momentum tensor $T^{\mu}{}_{\nu}={\rm diag} \left[ -\rho(r), P(r), P(r), P(r) \right]$, where the energy density $\rho$ and pressure $P$ are functions of $r$. Assuming that the perfect fluid is minimally coupled to gravity, it obeys the continuity Eq.~\eqref{eq_continuity}. On the background (\ref{BGmetric}), this equation translates to \be P'+\frac{f'}{2f} \left( \rho+P \right)=0\,, \label{mattereq} \ee where a `prime' represents the derivative with respect to $r$. Varying the action (\ref{action}) with respect to $f$ and $h$, we obtain the following gravitational field equations \ba & & \frac{f'}{f}=\frac{4(1-h)\Mpl^4+2\Mpl^2 hr \phi' (r \phi'+4\beta \phi)-3\beta^2 \phi^2 \phi'^2 h r^2 -4(V-P)r^2 \Mpl^2 {\rm e}^{\beta \phi^2/(2\Mpl^2)}} {2\Mpl^2 hr (2\Mpl^2-\beta \phi \phi' r)}\,, \label{feq}\\ & & \frac{h'}{h}-\frac{f'}{f}=\frac{r[2\Mpl^2 \beta h\phi \phi'' +\{ 2(\beta-1)\Mpl^2+\beta^2 \phi^2 \}h \phi'^2 -2\Mpl^2 (\rho+P){\rm e}^{\beta \phi^2/(2\Mpl^2)}]} {\Mpl^2 h (2\Mpl^2-\beta \phi \phi' r)}\,. 
\label{heq} \ea The scalar field obeys the differential equation \ba & & \phi''+\frac{2(1+h)\Mpl^2+\beta \phi'^2 h r^2 -r^2(2V+\rho-P){\rm e}^{\beta \phi^2/(2\Mpl^2)}}{2\Mpl^2 hr}\phi' \nonumber \\ & & -\frac{(2\Mpl^2-\beta \phi \phi' r){\rm e}^{\beta \phi^2/(2\Mpl^2)}}{4 \Mpl^4 h} \left[ 2V_{,\phi}\Mpl^2+4\beta \phi V+\beta (\rho-3P) \phi \right]=0\,. \label{phieqs} \ea As we discussed in Sec.~\ref{cossec}, for $\beta>{\cal O}(0.1)$, the scalar field $\phi$ approaches the VEV $\phi_v=\pi f_B$ by today during the cosmic expansion history. We would like to construct a scalar-field profile $\phi(r)$ having the asymptotic value $\pi f_B$ at spatial infinity, i.e., \be \phi (\infty)=\pi f_B\,, \ee with $\phi'(\infty)=0$. At $r=0$, we impose the boundary conditions $\phi(0)=\phi_0$ and $\phi'(0)=0$, where $\phi_0$ is a constant in the range $0<\phi_0<\pi f_B$. Since we are now considering a nonrelativistic object, we ignore $P$ relative to $\rho$ and employ the approximation $\rho r^2 \ll \Mpl^2$ inside the star. The gravitational potentials are much smaller than 1, so we can exploit the approximation $h \simeq 1$ in Eq.~(\ref{phieqs}). As we will see below, the field variation $\phi'(r)$ is small on weak gravitational backgrounds, under which $\beta \phi'^2 r^2 \ll \Mpl^2$ and $| \beta \phi \phi' r| \ll \Mpl^2$. In the vicinity of $\phi=\pi f_B$, the potential (\ref{Vphi}) can also be expanded as Eq.~\eqref{osc_potential}. Around $\phi=\pi f_B$, Eq.~(\ref{phieqs}) is approximately given by \be \phi''+\frac{2}{r}\phi'-\left[ m^2 \left( \phi-\pi f_B \right)+\frac{\beta \rho}{2\Mpl^2}\phi \right] \simeq 0\,. \label{phiweak} \ee In the remainder of this section, we consider a star with constant density $\rho$ and radius $r_s$. We assume that the exterior region of the star has a vanishing density ($\rho=0$).
Then, for $r>r_s$, the solution to Eq.~(\ref{phiweak}) consistent with the boundary condition $\phi'(\infty)=0$ is given by \be \phi(r)=\pi f_B+\frac{A{\rm e}^{-m r}}{r}\,, \label{phiso1} \ee where $A$ is a constant. Inside the star, the field Eq.~(\ref{phiweak}) can be expressed as \be \phi''+\frac{2}{r}\phi'-\mu^2 \left( \phi-\phi_0 \right)=0\,, \label{phiweak2} \ee where \be \mu^2 \equiv m^2+\frac{\beta \rho}{2\Mpl^2}\,,\qquad \phi_0 \equiv \frac{m^2}{\mu^2}\pi f_B\,. \label{muphi0} \ee For $\beta>0$, we have $\mu^2>m^2$ and hence $\phi_0<\pi f_B$. If we consider the Sun with the mean density $\rho={\cal O}(1~{\rm g/cm}^3)$ and mass $m={\cal O}(10^{-11}\,{\rm eV})$, we have $m^2 \gg \beta \rho/(2\Mpl^2)$, under which $\phi_0$ is very close to $\pi f_B$. Since we are assuming that $\rho={\rm constant}$, the solution to Eq.~(\ref{phiweak2}) consistent with the boundary condition $\phi'(0)=0$ is given by \be \phi(r)=\phi_0+\frac{B ({\rm e}^{\mu r} -{\rm e}^{-\mu r})}{r}\,, \label{phiso2} \ee where $B$ is a constant. Matching Eq.~(\ref{phiso1}) with (\ref{phiso2}) and also their $r$ derivatives at $r=r_s$, we obtain \be A= \frac{(\phi_0-\pi f_B)[(\mu r_s-1)\,{\rm e}^{2\mu r_s}+\mu r_s+1] \,{\rm e}^{m r_s}}{(\mu+m)\,{\rm e}^{2\mu r_s}+\mu-m}\,, \qquad B= -\frac{(\phi_0-\pi f_B)(m r_s+1)\,{\rm e}^{\mu r_s}}{(\mu+m) \,{\rm e}^{2\mu r_s}+\mu-m}\,. \ee Then, the resulting solution of $\phi$ outside the star ($r>r_s$) is given by \be \phi(r)=\pi f_B -\beta_{\rm eff} \Mpl \frac{GM_g}{r} {\rm e}^{-m (r-r_s)} \,, \label{phica} \ee where $G=1/(8 \pi \Mpl^2)$ is the gravitational constant, $M_g=4\pi r_s^3 \rho/3$ is the mass of the body, and \be \beta_{\rm eff}=3 \beta \frac{\pi f_B}{\Mpl} \frac{(\mu r_s-1)\,{\rm e}^{2\mu r_s}+\mu r_s+1} {\mu^2 r_s^3 [(\mu+m)\,{\rm e}^{2\mu r_s}+\mu-m]}\,. \label{beff} \ee The strength of the fifth force mediated by the scalar field between baryons is characterized by the effective coupling $\beta_{\rm eff}$.
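The matching can be verified numerically: solve the two continuity conditions at $r=r_s$ as a linear system in $(A,B)$ and check that the exterior coefficient of ${\rm e}^{-m(r-r_s)}/r$ reproduces $-\beta_{\rm eff}\Mpl G M_g$ with $\beta_{\rm eff}$ from Eq.~(\ref{beff}). The parameters below are illustrative (units $\Mpl=1$), not astrophysical values.

```python
import math

# illustrative parameters in units Mpl = 1
Mpl, beta, m, mu, rs, pifB = 1.0, 1.0, 1.0, 1.3, 2.0, 1.0
rho  = 2.0*(mu*mu - m*m)*Mpl*Mpl/beta     # so that mu^2 = m^2 + beta*rho/(2 Mpl^2)
phi0 = (m*m/(mu*mu))*pifB                 # Eq. (muphi0); pifB denotes pi*f_B

# continuity of phi and phi' at r = rs for
#   phi_out = pifB + A e^{-m r}/r,   phi_in = phi0 + B (e^{mu r} - e^{-mu r})/r
a11 = math.exp(-m*rs)/rs
a12 = -(math.exp(mu*rs) - math.exp(-mu*rs))/rs
a21 = -math.exp(-m*rs)*(m*rs + 1.0)/rs**2
a22 = -((mu*rs - 1.0)*math.exp(mu*rs) + (mu*rs + 1.0)*math.exp(-mu*rs))/rs**2
b1  = phi0 - pifB
det = a11*a22 - a12*a21
A   = b1*a22/det
B   = -b1*a21/det

# closed form of A quoted in the text
D = (mu + m)*math.exp(2*mu*rs) + mu - m
A_closed = b1*((mu*rs - 1.0)*math.exp(2*mu*rs) + mu*rs + 1.0)*math.exp(m*rs)/D

# exterior coefficient of e^{-m(r-rs)}/r should equal -beta_eff * Mpl * G*M_g
beta_eff = 3*beta*(pifB/Mpl)*((mu*rs - 1.0)*math.exp(2*mu*rs) + mu*rs + 1.0)/(mu**2*rs**3*D)
GMg   = rho*rs**3/(6.0*Mpl**2)   # G = 1/(8 pi Mpl^2), M_g = 4 pi rs^3 rho / 3
coeff = A*math.exp(-m*rs)        # phi_out - pifB = coeff * e^{-m(r-rs)}/r at r = rs... general r
```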
Note that the solution (\ref{phica}) looks similar to that derived in the chameleon mechanism \cite{Khoury:2003aq,Khoury:2003rn}, but the difference is that the effective masses of $\phi$ inside and outside the star are similar to each other ($\mu \simeq m$) in our scenario. This leads to a different form of $\beta_{\rm eff}$ in comparison to the chameleon case. If we consider the Sun ($r_s=7.0 \times 10^8$~m) with the mass $m=10^{-11}$~eV, we have $\mu^2 \simeq m^2 \gg \beta \rho/(2\Mpl^2)$ and $\mu r_s \simeq m r_s \simeq 3.5 \times 10^4$. In this case, Eq.~(\ref{beff}) reduces to \be \beta_{\rm eff} \simeq \frac{3\beta}{2} \frac{\pi f_B}{\Mpl}\frac{1}{(mr_s)^2} \qquad {\rm for} \quad mr_s \gg 1\,. \label{beeff} \ee Because of the large suppression factor $(mr_s)^{-2}$, it is easier to satisfy Solar-system constraints in comparison to the massless case (see below). For the symmetry-breaking scale $f_B/\Mpl=2 \times 10^{-6}$ with $\beta=1$, the effective coupling is as small as $\beta_{\rm eff} \simeq 7.7 \times 10^{-15}$. In the case of the Earth ($r_s=6.4 \times 10^6$~m) with $m=10^{-11}$~eV, $f_B/\Mpl=2 \times 10^{-6}$, and $\beta=1$, we have $\beta_{\rm eff} \simeq 9.2 \times 10^{-11}$. In addition to these small values of $\beta_{\rm eff}$, the factor ${\rm e}^{-m (r-r_s)}$ in Eq.~(\ref{phica}) leads to the exponential suppression of fifth forces at the distance $r \gtrsim r_s+1/m$. In the massless limit $m r_s \to 0$, we have $\mu^2 \simeq \beta \rho/(2\Mpl^2)$. Since $(\mu r_s)^2$ is of order the gravitational potential at the surface of the star, we exploit the approximation $\mu r_s \ll 1$ in Eq.~(\ref{beff}). Then, the effective coupling reduces to \be \beta_{\rm eff} \simeq \beta \frac{\pi f_B}{\Mpl} \qquad {\rm for} \quad mr_s \to 0\,.
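The quoted magnitudes of $\beta_{\rm eff}$ follow from Eq.~(\ref{beeff}) with $m r_s = m r_s/(\hbar c)$ in natural units; a sketch assuming $\hbar c \simeq 1.9733\times10^{-7}$~eV$\,$m (small residual differences reflect rounding of the constants):

```python
import math

hbar_c = 1.9733e-7                    # hbar*c in eV*m (assumed value)
m, fB, beta = 1.0e-11, 2.0e-6, 1.0    # scalar mass (eV), f_B/Mpl, coupling

def beta_eff(r_s):
    """Eq. (beeff), valid for m*r_s >> 1; r_s in metres."""
    mrs = m*r_s/hbar_c                # dimensionless combination m*r_s
    return 1.5*beta*math.pi*fB/mrs**2

sun, earth = beta_eff(7.0e8), beta_eff(6.4e6)
# sun ~ 8e-15 and earth ~ 9e-11, consistent with the quoted 7.7e-15 and 9.2e-11
```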
\ee For $\beta$ of order 1, we have $\beta_{\rm eff} \ll 1$ under the condition $\pi f_B/\Mpl \ll 1$, in which case it is possible to satisfy Solar-system constraints even for a nearly massless scalar field (as we will see at the end of this section). Outside the star, we estimate fifth-force corrections to the metric components $f$ and $h$. They are related to the gravitational potentials $\Psi$ and $\Phi$, as $f={\rm e}^{2\Psi}$ and $h={\rm e}^{2\Phi}$. Since $|\Psi|$ and $|\Phi|$ are much smaller than 1 on weak gravitational backgrounds (of order $10^{-6}$ for the Sun), we only pick up terms linear in $\Psi$ and $\Phi$. Let us consider the massive scalar field satisfying the condition \be m r_s \gg 1\,. \ee Substituting the solution (\ref{phica}) and its $r$ derivatives into Eqs.~(\ref{feq}) and (\ref{heq}), we find that the gravitational potentials $\Phi$ and $\Psi$ approximately obey \ba & & \Phi'+\frac{\Phi}{r} \simeq -\frac{1}{2} \beta \beta_{\rm eff} G M_g \frac{\pi f_B}{\Mpl} m^2 {\rm e}^{-m (r-r_s)}\,,\\ & & \Psi'+\frac{\Phi}{r} \simeq \beta \beta_{\rm eff} \frac{G M_g}{r} \frac{\pi f_B}{\Mpl} m\,{\rm e}^{-m (r-r_s)}\,. \ea The integrated solutions to these equations, which are consistent with the asymptotic flatness, are given by \ba \Phi &=& -\frac{G M_g}{r} \left[ 1-\frac{\beta \beta_{\rm eff}}{2} \frac{\pi f_B}{\Mpl}mr\,{\rm e}^{-m (r-r_s)} \right]\,,\label{Phi}\\ \Psi &=& -\frac{G M_g}{r} \left[ 1+\frac{\beta \beta_{\rm eff}}{2} \frac{\pi f_B}{\Mpl} {\rm e}^{-m (r-r_s)} \right]\,.\label{Psi} \ea We introduce the post-Newtonian parameter as \be \gamma_{\rm PPN} \equiv \frac{\Phi}{\Psi} \simeq 1-\frac{\beta \beta_{\rm eff}}{2} \frac{\pi f_B}{\Mpl}mr\,{\rm e}^{-m (r-r_s)}\,. \ee The time-delay effect of the Cassini tracking of the Sun has given the bound $\gamma_{\rm PPN}-1= (2.1 \pm 2.3) \times 10^{-5}$ \cite{Will:2014kxa}. Since $\gamma_{\rm PPN}-1$ is negative in the current theory, we adopt the limit $1-\gamma_{\rm PPN} \leq 2.0 \times 10^{-6}$. 
Taking the value of $\gamma_{\rm PPN}$ at $r=r_s$, this Solar-system constraint translates to \be \beta \beta_{\rm eff}\frac{\pi f_B}{\Mpl}mr_s \le 4.0 \times 10^{-6}\,. \ee On using the effective coupling (\ref{beeff}) derived for $m r_s \gg 1$, we obtain the bound \be \beta \frac{\pi f_B}{\Mpl} \frac{1}{\sqrt{mr_s}} \le 1.6 \times 10^{-3}\qquad {\rm for} \quad mr_s \gg 1\,. \label{bebound} \ee With the mass scale $m=10^{-11}$~eV, this translates to $\pi f_B/\Mpl \leq 0.3/\beta$ for the Sun. The symmetry-breaking scale $f_B/\Mpl \simeq 2 \times 10^{-6}$ with $\beta=1$, which was mentioned in Sec.~\ref{cossec} in the context of an oscillating $\phi$-field CDM, is well consistent with this upper limit. At the end of this section, we comment on Solar-system constraints in the massless limit ($m r_s \to 0$). In this case, the scalar-field solution is given by Eq.~(\ref{phica}) with $\beta_{\rm eff}=\beta \pi f_B/\Mpl$. Provided that $\pi f_B/\Mpl$ is smaller than the order 1, the gravitational potential $\Phi$ is estimated as $\Phi=-G M_g/r$ up to the linear order in $GM_g/r$, while the other gravitational potential receives a correction from the nonminimal coupling as $\Psi=-(GM_g/r)[1+\beta^2 (\pi f_B/\Mpl)^2]$. Then, the post-Newtonian parameter is estimated as \be \gamma_{\rm PPN} \simeq 1-\beta^2 \left( \frac{\pi f_B}{\Mpl} \right)^2\,, \label{gammassless} \ee where we used the approximation $\beta^2 (\pi f_B/\Mpl)^2 \ll 1$. Note that the result (\ref{gammassless}) is consistent with that derived in Ref.~\cite{Damour:1992we}. Using the Solar-system bound $1-\gamma_{\rm PPN} \leq 2.0 \times 10^{-6}$, it follows that \be \beta \frac{\pi f_B}{\Mpl} \le 1.4 \times 10^{-3} \qquad {\rm for}\quad mr_s \to 0\,. \label{bebound2} \ee For the symmetry-breaking scale $f_B/\Mpl \simeq 2 \times 10^{-6}$ with $\beta=1$, the bound (\ref{bebound2}) is satisfied. 
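The numerical bounds above follow from simple algebra once the adopted limit on $1-\gamma_{\rm PPN}$ is fixed; a quick cross-check:

```python
import math

limit   = 2.0e-6    # adopted upper bound on 1 - gamma_PPN
mrs_sun = 3.5e4     # m*r_s for the Sun with m = 1e-11 eV

# massive case: (beta*beta_eff/2)(pi f_B/Mpl) m r_s <= limit with
# beta_eff = (3 beta/2)(pi f_B/Mpl)/(m r_s)^2 gives
# (3/4)[beta (pi f_B/Mpl)]^2/(m r_s) <= limit
massive_bound  = math.sqrt(4.0/3.0*limit)          # on beta*(pi f_B/Mpl)/sqrt(m r_s): ~1.6e-3
fB_bound_sun   = massive_bound*math.sqrt(mrs_sun)  # pi f_B/Mpl <= ~0.3/beta for the Sun
# massless case: 1 - gamma_PPN = [beta (pi f_B/Mpl)]^2 <= limit
massless_bound = math.sqrt(limit)                  # ~1.4e-3
```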
In the massive case (\ref{bebound}) there is an extra suppression factor $1/\sqrt{m r_s} \ll 1$, and the propagation of fifth forces is suppressed in comparison to the massless case. \section{Neutron star solutions} \label{scasec} In this section, we will construct NS solutions on the static and spherically symmetric background given by the line element (\ref{BGmetric}). We note that Eqs.~(\ref{mattereq})-(\ref{phieqs}) are the strict Euler-Lagrange equations obtained by varying the action \eqref{action} and hence valid also on strong gravitational backgrounds. The difference from the case of weak gravitational stars discussed in Sec.~\ref{weaksec} is that the gravitational potentials $|\Psi|$ and $|\Phi|$ in the vicinity of NSs are of ${\cal O}(0.1)$ and nonlinearities in the gravitational field equations become important. Moreover, the pressure $P$ is not negligible relative to the energy density $\rho$. The other important difference is that the central density of NSs $\rho_c$ is typically of ${\cal O}(10^{15}~{\rm g/cm}^3)$, so in our model the term $\beta \rho/(2\Mpl^2)$ can exceed $m^2={\cal O}\left( (10^{-11}~{\rm eV})^2\right)$ for $\beta \gtrsim {\cal O}(0.1)$. This means that the field value $\phi_0$ defined in Eq.~(\ref{muphi0}) can approach 0 inside the NS, unlike the low-density star where $\phi_0$ is very close to $\pi f_B$. Then, it should be possible to realize a scalar-field configuration in which $\phi$ is close to $\phi=0$ inside the star and approaches $\pi f_B$ outside the star. In the following, we will show that such scalarized solutions do exist. \subsection{Boundary conditions} We first derive the approximate solutions around the center of the star by using the expansions of $f$, $h$, $\phi$, and $P$. Due to the regularity condition $\phi'(0)=0$, we can expand the scalar field around $r=0$ in the form $\phi (r)=\phi_c+\phi_2 r^2+{\cal O}(r^3)$, where $\phi_c=\phi(0)$ and $\phi_2$ is a constant.
We also impose the boundary conditions $f(0)=f_c$, $h(0)=1$, $\rho(0)=\rho_c$, $P(0)=P_c$, and $f'(0)=h'(0)=\rho'(0)=P'(0)=0$. Around $r=0$, the scalar-field potential is expanded as \be V(\phi)=V_c+V_{,\phi c} (\phi-\phi_c) +{\cal O} ((\phi-\phi_c)^2)\,, \ee where we used the notations $V_c \equiv V(\phi_c)$ and $V_{,\phi c} \equiv (\rd V/\rd \phi)(\phi_c)$. The solutions consistent with Eqs.~(\ref{mattereq})-(\ref{phieqs}) around the center of NSs are given by \ba f &=& f_c+f_c \frac{{\rm e}^{\beta \phi_c^2/(2\Mpl^2)} [2(\rho_c+3P_c-2V_c)\Mpl^2+2\beta \phi_c \Mpl^2 V_{,\phi c} +\beta^2 \phi_c^2 (\rho_c-3P_c+4V_c)]}{12\Mpl^4}r^2 +{\cal O}(r^4)\,,\label{fr=0} \\ h &=& 1-\frac{{\rm e}^{\beta \phi_c^2/(2\Mpl^2)} [2(\rho_c+V_c)\Mpl^2-2\beta \phi_c \Mpl^2 V_{,\phi c} -\beta^2 \phi_c^2 (\rho_c-3P_c+4V_c)]}{6\Mpl^4}r^2 +{\cal O}(r^4)\,,\\ \phi &=&\phi_c+\frac{{\rm e}^{\beta \phi_c^2/(2\Mpl^2)}}{6} \left[ V_{,\phi c}+\frac{\beta \phi_c (\rho_c-3P_c+4V_c)}{2\Mpl^2} \right]r^2+{\cal O}(r^4)\,,\label{phir=0}\\ P &=& P_c-\frac{{\rm e}^{\beta \phi_c^2/(2\Mpl^2)} (\rho_c+P_c) [2(\rho_c+3P_c-2V_c) \Mpl^2+2\beta \phi_c \Mpl^2 V_{,\phi c} +\beta^2 \phi_c^2 (\rho_c-3P_c+4V_c)]}{24\Mpl^4}r^2 +{\cal O}(r^4)\,. \label{Pr=0} \ea Let us consider the case in which $\phi_c$ is in the range $0<\phi_c \ll \pi f_B$. The potential energy around $\phi=0$ is of order $V \simeq 2m^2 f_B^2$, with $V_{,\phi} \simeq -m^2 \phi$. Provided that $f_B \ll \Mpl$, it follows that $V$ is much smaller than the central density $\rho_c={\cal O}(10^{15}\,{\rm g/cm}^3)$ for $m ={\cal O}( 10^{-11}$~eV). Then, the solution (\ref{phir=0}) is approximately given by \be \phi \simeq \phi_c+\frac{{\rm e}^{\beta \phi_c^2/(2\Mpl^2)}} {6}\phi_c m_{\rm eff}^2 r^2+{\cal O}(r^3)\,, \label{phiap} \ee where \be m_{\rm eff}^2 \equiv -m^2+\frac{\beta \rho_c(1-3w_c)}{2\Mpl^2}\,, \label{meffs} \ee with the equation of state (EOS) parameter $w_c=P_c/\rho_c$ at $r=0$. 
Here, $m_{\rm eff}^2$ corresponds to an effective mass squared of the scalar field around the potential maximum at $\phi=0$. Like Eq.~(\ref{m0}), for $\beta=0$, we have $m_{\rm eff}^2=-m^2<0$, so the scalar field decreases as a function of $r$, i.e., $\phi'(r)<0$, around $r=0$. In the presence of the positive nonminimal coupling $\beta$ with $w_c<1/3$, it is possible to realize $m_{\rm eff}^2>0$ for \be \beta>\frac{2m^2 \Mpl^2}{\rho_c (1-3w_c)} =\frac{0.28}{1-3w_c} \left( \frac{10^{15}~{\rm g/cm}^3}{\rho_c} \right) \left( \frac{m}{10^{-11}~{\rm eV}} \right)^2\,, \label{betabo} \ee where the right-hand side is equivalent to the critical value $\beta_c$ given in Eq.~(\ref{betac}). For large values of $\rho_c$, $w_c$ can be close to the relativistic value $1/3$ or even larger, so we need to include the pressure to derive the scalar-field profile correctly. For $w_c<1/3$ the scalar field increases as a function of $r$ around $r=0$, so it is possible to reach the asymptotic value $\phi_{v}=\pi f_B$ at spatial infinity. Even if $\phi(r)$ decreases around $r=0$ for $w_c>1/3$, the decrease of the EOS parameter $w=P/\rho$ around the NS surface to the region $w<1/3$ allows a possibility for increasing $\phi(r)$ to reach $\phi_{v}=\pi f_B$ outside the star. We note that we have ignored the term $4V_c$ in Eq.~(\ref{phir=0}) relative to $\rho_c-3P_c$, but if $f_B$ is as large as the order of $\Mpl$, the potential contributes to $m_{\rm eff}^2$, especially around $w_c \simeq 1/3$. In the asymptotic region outside the NSs, the field $\phi$ should relax toward the value $\pi f_B$. In this regime, we can set $\rho=P=0$, $V \to 0$, and $h \to 1$ in Eq.~(\ref{phieqs}) and ignore the terms $\beta \phi'^2 r^2$ and $\beta \phi \phi' r$ relative to $\Mpl^2$.
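The prefactor $0.28$ in Eq.~(\ref{betabo}) comes from converting $\rho_c$ from g/cm$^3$ to natural units; a sketch assuming standard conversion factors (1 g $\simeq 5.6096\times10^{32}$~eV, 1 cm $\simeq 5.0677\times10^{4}$~eV$^{-1}$) and $\Mpl \simeq 2.435\times10^{27}$~eV:

```python
g_to_eV     = 5.6096e32   # 1 g in eV (E = m c^2)
cm_to_eVinv = 5.0677e4    # 1 cm in eV^-1 (from hbar*c ~ 197.33 eV nm)
Mpl, m      = 2.435e27, 1.0e-11   # reduced Planck mass and scalar mass in eV

def rho_in_eV4(rho_g_cm3):
    """Convert a mass density in g/cm^3 to natural units (eV^4)."""
    return rho_g_cm3 * g_to_eV / cm_to_eVinv**3

def beta_c(rho_g_cm3, w_c):
    """Critical coupling in Eq. (betabo): beta > 2 m^2 Mpl^2 / [rho_c (1 - 3 w_c)]."""
    return 2*m*m*Mpl*Mpl / (rho_in_eV4(rho_g_cm3)*(1 - 3*w_c))

prefactor = beta_c(1.0e15, 0.0)          # ~0.28 for rho_c = 1e15 g/cm^3
beta_c6   = beta_c(6*1.6749e14, 0.158)   # ~0.53 for rho_c = 6 rho_0, w_c = 0.158
```

Small differences from the quoted values reflect rounding of the assumed constants.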
Keeping the term $V_{,\phi} \simeq m^2 (\phi-\pi f_B)$ around $\phi=\pi f_B$, the solution to Eq.~(\ref{phieqs}) is approximately given by Eq.~\eqref{phiso1}, but the coefficient $A$ is different from that on weak gravitational backgrounds. \subsection{Numerically constructed scalar-field profile} To study the existence of the field profile connecting the solution (\ref{phiap}) to the other solution (\ref{phiso1}), we numerically integrate Eqs.~(\ref{mattereq})-(\ref{phieqs}) from the center of NSs to a sufficiently large distance. We exploit Eqs.~(\ref{fr=0})-(\ref{Pr=0}) as the boundary conditions around $r=0$. In Eq.~(\ref{fr=0}), we can set $f_c=1$ without loss of generality. The field value $\phi_c$ at the center of the star is iteratively found from the demand of realizing the asymptotic value $\phi(r) \to \pi f_B$ with $\phi'(r) \to 0$ far outside the star. For the perfect fluid inside the NS, we use the analytic representation of the SLy EOS parametrized by \be \xi=\log_{10} (\rho/{\rm g \cdot cm}^{-3})\,,\qquad \zeta=\log_{10} (P/{\rm dyn \cdot cm}^{-2})\,, \label{xizeta} \ee where the explicit relation between $\xi$ and $\zeta$ is given in Ref.~\cite{Haensel:2004nu}. The exterior of the NS is assumed to be a vacuum with vanishing density and pressure. For numerical purposes, we introduce the following constants \be \rho_0=m_n n_0=1.6749 \times 10^{14}~{\rm g}/{\rm cm}^{3}\,,\qquad r_0=\sqrt{\frac{8\pi \Mpl^2}{\rho_0}}=89.664~{\rm km}\,, \ee where $m_n=1.6749 \times 10^{-24}$~g is the neutron mass and $n_0=0.1~(\rm fm)^{-3}$ is the typical number density of NSs. The density $\rho$ and radius $r$ are normalized by $\rho_0$ and $r_0$, respectively. In Fig.~\ref{fig3}, we plot $\phi/(\pi f_B)$ versus $r/r_0$ for $\beta=1, 5, 10, 30$ with the model parameters $m=1.0 \times 10^{-11}$~eV and $f_B=0.3 \Mpl/(\pi \beta)$. The central density of the NS is chosen to be $\rho_c=6 \rho_0 \simeq 10^{15}~{\rm g/cm}^3$.
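The normalization scale $r_0$ can be cross-checked by the same unit conversions (1 g $\simeq 5.6096\times10^{32}$~eV, 1 m $\simeq 5.0677\times10^{6}$~eV$^{-1}$, $\Mpl \simeq 2.435\times10^{27}$~eV, all assumed values):

```python
import math

g_to_eV    = 5.6096e32    # 1 g in eV
m_to_eVinv = 5.0677e6     # 1 m in eV^-1 (hbar*c ~ 197.33 eV nm)
Mpl        = 2.435e27     # reduced Planck mass in eV

rho0 = 1.6749e14 * g_to_eV / (1.0e-2*m_to_eVinv)**3   # g/cm^3 -> eV^4
r0   = math.sqrt(8*math.pi*Mpl**2/rho0) / m_to_eVinv  # eV^-1 -> metres
# r0 ~ 8.97e4 m, i.e. ~89.7 km, consistent with r_0 = 89.664 km
```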
With this mass scale $m$ the local gravity constraint (\ref{bebound}) gives the bound $\beta \pi f_B/\Mpl \le 0.3$ for the Sun, so the choice $f_B=0.3 \Mpl/(\pi \beta)$ corresponds to the maximum allowed value of $f_B$. When $\rho_c=6 \rho_0$, the EOS parameter at $r=0$ is $w_c \simeq 0.158$, and the condition (\ref{betabo}) translates to $\beta>0.53$. In this case the effective mass squared $m_{\rm eff}^2$ is positive at $r=0$, and the scalar field grows according to Eq.~(\ref{phiap}) in the vicinity of $r=0$. For $\beta=1$, the field value at $r=0$ is $\phi_c \simeq 0.83 \pi f_B$, and the difference from the asymptotic value is $\pi f_B-\phi_c \simeq 0.17 \pi f_B$. On weak gravitational backgrounds discussed in Sec.~\ref{weaksec}, the field values inside and outside a star are very close to each other, see Eq.~(\ref{muphi0}) together with the condition $m^2 \gg \beta \rho/(2\Mpl^2)$. Since the nonminimal coupling term $\beta \rho/(2\Mpl^2)$ can be larger than $m^2$ for NSs around $r=0$, the difference between $\pi f_B$ and $\phi_c$ exceeds the order of $0.1 \pi f_B$. With the increase of $\beta$, this difference tends to be more significant, e.g., $\phi_c \simeq 0.04 \pi f_B$ for $\beta=30$. We note that the symmetry-breaking scale $f_B$ does not appear in the effective mass squared (\ref{meffs}) at $r=0$. Hence the normalized field configuration $\phi/(\pi f_B)$ is hardly sensitive to the change of $f_B$. The large variation of $\phi(r)$, with $\pi f_B-\phi_c \gtrsim 0.1 \pi f_B$, is an outcome of the positive mass squared $m_{\rm eff}^2$ induced by large values of $\rho_c$. Then, the scalar field acquires a sufficient kinetic energy around $r=0$ to reach the asymptotic value $\phi_v=\pi f_B$ far outside the NSs. This is not the case for weak gravitational objects where $\phi$ needs to stay around $\pi f_B$ both inside and outside the star.
Thus, our model allows the existence of an interesting scalar-field profile whose variation is significant for strongly gravitating objects, while the variation of $\phi(r)$ is suppressed on weak gravitational backgrounds as consistent with Solar-system constraints. \subsection{Modification of gravitational interactions} The scalar-field profile derived above affects the nonlinearly extended gravitational potentials $\Psi$ and $\Phi$ through Eqs.~(\ref{feq}) and (\ref{heq}). Since $f={\rm e}^{2\Psi}$ and $h={\rm e}^{2\Phi}$, the left hand-side of Eq.~(\ref{heq}) is equivalent to $2(\Phi'-\Psi')$. In GR the right hand-side of Eq.~(\ref{heq}) vanishes for $r \geq r_s$, where $r_s$ is the radial position of the NS radius. In the current model, however, there are contributions of $\phi(r)$ and its derivatives to the right hand-side of Eq.~(\ref{heq}). To quantify the difference between $\Phi'$ and $\Psi'$, we define \be \eta (r) \equiv \frac{\Phi'(r)}{\Psi'(r)}-1\,, \ee and compute it at the surface of star. In Fig.~\ref{fig4}, we plot $\eta_s=\eta(r_s)$ versus $\beta$ for $\rho_c=3\rho_0, 6 \rho_0, 10 \rho_0$ with $m=1.0 \times 10^{-11}$~eV and $f_B=0.3 \Mpl/(\pi \beta)$. When $\rho_c=3 \rho_0$, the quantity $\eta_s$ can be as large as $0.08$ for $\beta$ in the range 0.1$\sim$1. Since we are choosing the maximally allowed value of $f_B$ consistent with Solar-system constraints, increasing $\beta$ results in smaller values of $f_B$. Given that $\phi(r)$ is normalized by $\pi f_B$ in the numerical simulation of Fig.~\ref{fig3}, decreasing $f_B$ implies smaller values of $\phi(r)$ inside the star. Then, as $\beta$ increases, the corrections to gravitational potentials induced by the nonvanishing field profile should be more suppressed. Indeed, for given $\rho_c$, the property of decreasing $\eta_s$ as a function of $\beta$ can be confirmed in Fig.~\ref{fig4}. 
As $\rho_c$ increases in the range $\rho_c \geq 2\rho_0$, $\eta_s$ decreases from the maximum around $(\eta_s)_{\rm max}=0.08$ realized for the density $\rho_c= 2\rho_0 \sim 3\rho_0$. One of the reasons for this decrease of $\eta_s$ is that, for larger $\rho_c$, $w_c$ tends to increase. For $\rho_c=3\rho_0, 6 \rho_0, 10 \rho_0$, we have $w_c=0.047, 0.158, 0.315$, respectively. This means that, for $\rho_c \gtrsim 6 \rho_0$, the product $\rho_c(1-3w_c)$, which appears in the effective mass squared (\ref{meffs}), gets smaller as $\rho_c$ increases. The other reason is that, as $\rho_c$ increases in the range $2\rho_0 \leq \rho_c \lesssim 6 \rho_0$, the field value $\phi_c$ at $r=0$ tends to be smaller, approaching the symmetry-restored state. This leads to an overall decrease of the corrections of the scalar field $\phi$ to the right-hand sides of Eqs.~(\ref{feq}) and (\ref{heq}). Due to these two combined effects, $\eta_s$ tends to decrease for increasing $\rho_c$ in the range $2\rho_0 \leq \rho_c \lesssim 10 \rho_0$. Nevertheless, observations of gravitational waves will provide us with interesting possibilities for probing deviations from GR of order $\eta_s>{\cal O}(0.01)$ in the coupling range $\beta=0.1 \sim 10$. As $\rho_c$ exceeds $10 \rho_0$, the product $\rho_c(1-3w_c)$ can be negative. In such cases the EOS parameter $w=P/\rho$ decreases toward the NS surface, and there is a point at which $\rho(1-3w)$ becomes positive. It is then possible to have nontrivial scalar-field profiles even for $\rho_c(1-3w_c)<0$, but the difference between $\pi f_B$ and $\phi_c$ tends to be smaller. For $\rho_c \gtrsim 10\rho_0$, this results in suppressed values of $\eta_s$ in comparison to the case $\rho_c \lesssim 10\rho_0$. The ADM mass of NSs is defined by \be M_s \equiv 4\pi \Mpl^2 r \left[ 1-h(r) \right] |_{r=\infty}\,, \ee while the NS radius is determined by the condition $P(r_s)=0$. 
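As a quick numerical cross-check of the trend in $\rho_c(1-3w_c)$ described above, the product can be evaluated directly from the quoted $w_c$ values. This is a minimal sketch, not part of the numerical code used in the paper; $\rho_c$ is expressed in units of $\rho_0$ and the variable names are illustrative:

```python
# Sanity check of the combination rho_c * (1 - 3 w_c) that enters the
# effective mass squared m_eff^2, using the w_c values quoted in the text.
# Keys are rho_c in units of rho_0; values are the central EOS parameter w_c.
cases = {3: 0.047, 6: 0.158, 10: 0.315}

products = {rc: rc * (1.0 - 3.0 * wc) for rc, wc in cases.items()}
# rho_c =  3 rho_0 ->  3 * (1 - 0.141) ~ 2.58 rho_0
# rho_c =  6 rho_0 ->  6 * (1 - 0.474) ~ 3.16 rho_0
# rho_c = 10 rho_0 -> 10 * (1 - 0.945) ~ 0.55 rho_0
# The product peaks near rho_c ~ 6 rho_0 and then falls, consistent with
# the suppression of eta_s quoted for rho_c >~ 6 rho_0.
```

The check confirms that the product indeed decreases between $6\rho_0$ and $10\rho_0$, as stated in the text.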
In Fig.~\ref{fig5}, we plot the relation between $M_s/M_{\odot}$ and $r_s$ for $\beta=0.3$ and 1 with $m=1.0 \times 10^{-11}$~eV and $f_B=0.3 \Mpl/(\pi \beta)$, where $M_{\odot}$ is the Solar mass. For comparison, we also show the case of GR derived for the SLy EOS without the scalar field $\phi$. The matter density $\rho_c$ at $r=0$ is chosen to be in the range $\rho_c \geq 2\rho_0$, under which the NS in GR has the ADM mass $M_s \geq 0.243M_{\odot}$ and radius $r_s \leq 14.33$~km. In Fig.~\ref{fig5}, we observe that, for $\beta>0$, both $M_s$ and $r_s$ are smaller than those in GR. For $\beta={\cal O}(0.1)$ the field value $\phi_c$ is generally quite close to $\pi f_B$, so the exponential factor ${\rm e}^{\beta \phi_c^2/(2\Mpl^2)}$ in Eq.~(\ref{Pr=0}) can be estimated as ${\rm e}^{\beta \phi_c^2/(2\Mpl^2)} \simeq 1+0.045/\beta>1$. This leads to a larger decreasing rate of $P(r)$ in comparison to GR, and hence $r_s$ and $M_s$ are reduced. Such reductions of $r_s$ and $M_s$ differ from the properties of standard spontaneous scalarization induced by the negative coupling $\beta$ \cite{Damour:1993hw,Minamitsuji:2016hkk,Doneva:2017duq}. For $\beta=0.3$ and $\rho_c=2\rho_0$ we obtain $r_s=13.08$~km and $M_s=0.207M_{\odot}$, so the relative difference from the ADM mass in GR ($M_{\rm GR}=0.243M_{\odot}$) with the same value of $\rho_c$ is as large as $\mu_M =0.15$, where we have defined \be \mu_M \equiv 1-\frac{M_s}{M_{\rm GR}}. \ee As $\rho_c$ increases, the deviation parameter $\mu_M$ tends to decrease, e.g., $\mu_M=0.10$ for $\rho_c=5 \rho_0$ and $\mu_M=0.07$ for $\rho_c=15 \rho_0$. However, it is interesting to note that, for $\beta=0.3$, a difference of order 7\,\% from GR arises in the ADM mass even at high densities such as $\rho_c \gtrsim 10\rho_0$. With the increase of $\beta$, $\phi_c$ tends to decrease toward the symmetry-restored state $\phi=0$ and hence the exponential factor ${\rm e}^{\beta \phi_c^2/(2\Mpl^2)}$ approaches 1. 
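The exponent entering the factor ${\rm e}^{\beta \phi_c^2/(2\Mpl^2)}$ quoted above can be verified numerically for the choice $f_B=0.3 \Mpl/(\pi \beta)$ with $\phi_c \simeq \pi f_B$. This is a minimal sketch in units where $\Mpl=1$; the function name is illustrative:

```python
import math

# For phi_c ~ pi f_B with f_B = 0.3 Mpl / (pi beta), the exponent
# beta * phi_c^2 / (2 Mpl^2) reduces to 0.045 / beta (Mpl = 1 here).
def exponent(beta):
    f_B = 0.3 / (math.pi * beta)
    phi_c = math.pi * f_B          # phi_c close to pi f_B for beta = O(0.1)
    return beta * phi_c**2 / 2.0   # algebraically equals 0.045 / beta

# The linearized estimate e^{0.045/beta} ~ 1 + 0.045/beta quoted in the text,
# checked at beta = 0.3:
factor = math.exp(exponent(0.3))
approx = 1.0 + 0.045 / 0.3
```

For $\beta=0.3$ the exponent is $0.15$, and the linearized estimate agrees with the full exponential to about one percent, supporting the approximation used in the text.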
Moreover, as we already discussed in Fig.~\ref{fig4}, the quantity $\eta_s$ is a decreasing function of $\beta$ for the choice $f_B=0.3 \Mpl/(\pi \beta)$. Indeed, as seen in the case $\beta=1$ in Fig.~\ref{fig5}, the deviations of $M_s$ and $r_s$ from those in GR are less significant relative to the coupling $\beta=0.3$. Still, for $\beta=1$, we have $\mu_M=0.08$ for $\rho_c=2 \rho_0$ and $\mu_M=0.04$ for $\rho_c=5 \rho_0$, so an appreciable deviation from GR is present. For $\beta$ exceeding the order of 1, the theoretical curve in the $(M_s, r_s)$ plane approaches that of GR. In standard spontaneous scalarization induced by negative values of $\beta$, the theoretical curve in the $(M_s, r_s)$ plane bifurcates from the GR one for $\rho_c$ larger than some critical density, and the mass of NSs in the scalarized branch is larger than that in GR with the same central density. This characterizes a continuous phase transition from the GR branch $\phi=0$ to the other nontrivial branch $\phi \neq 0$ triggered by a tachyonic instability. In our case, the scalarized NS solutions arise from the symmetry-restored state at $\phi=0$ induced by positive $\beta$. Theoretical plots for $\beta>0$ differ from the GR curve in the whole range of $\rho_c$ shown in Fig.~\ref{fig5}, i.e., $2\rho_0 \leq \rho_c \leq 18\rho_0$. This is because in our model there is no GR solution and hence no bifurcation from it. Instead, we have only one branch of NS solutions, where the scalar field $\phi$ asymptotically approaches the ground state located around $\phi=\pi f_B$. In theories given by the action (\ref{action}), the conditions for the absence of ghost/Laplacian instabilities against odd- and even-parity perturbations are given by $\rho+P>0$ and $F(\phi)>0$ \cite{Kase:2021mix} (see also Ref.~\cite{Kase:2020qvz}). 
Since we are considering the nonminimal coupling $F(\phi)={\rm e}^{-\beta \phi^2/(2M_{\rm pl}^2)}$ with $\beta>0$ in the presence of a perfect-fluid matter satisfying the weak energy condition, there are neither ghost nor Laplacian instabilities for our scalarized NS solutions, as in the case of standard spontaneous scalarization. It should be noted that NSs in GR with other EOSs may yield ADM masses and radii similar to those derived for nonzero $\beta$ in Fig.~\ref{fig5}. In our case, since the mass of NSs is suppressed relative to that in GR, it would be difficult to distinguish NSs in our theory from those in GR with a different choice of EOSs using observations of the NS mass and radius alone. In order to break the degeneracy between the modified gravity effects and the ambiguity associated with the choice of EOSs, we should explore the existence of universal relations which are almost insensitive to the choice of EOSs \cite{Yagi:2013awa,Yagi:2014bxa,Breu:2016ufb}, as well as signatures associated with gravitational perturbations of NSs such as tidal deformability and quasinormal frequencies. We leave these subjects for future work. Finally, we should comment on the case in which the energy density of an oscillating scalar field $\phi$ around $\phi=\pi f_B$ is responsible for a fraction of CDM without decaying to other particles by today. In this case, the symmetry-breaking scale $f_B$ is constrained as in Eq.~(\ref{fBcon}). When $\beta=1$, this gives the constraint $f_B/\Mpl \simeq 2 \times 10^{-6}$. For such small values of $f_B$, the field $\phi$ inside and outside the NS is also suppressed, and hence $\eta_s$ is at most of order $10^{-9}$ for the nonminimal coupling in the range $\beta \le {\cal O}(10)$. In such cases, the NS mass and radius are also very similar to those in GR. 
However, there is a possibility that the oscillating field $\phi$ decays to other particles whose energy densities decrease as fast as radiation or faster, in which case larger values of $f_B$ are allowed. In Figs.~\ref{fig4} and \ref{fig5}, we have used the maximum allowed values of $f_B$ consistent with Solar-system constraints. \section{Conclusions} \label{consec} We proposed a new scenario of NS scalarization in the presence of a pNGB potential $V(\phi)=m^2 f_B^2 [1+\cos(\phi/f_B)]$ and a nonminimal coupling to the Ricci scalar $R$ of the form $F(\phi)={\rm e}^{-\beta \phi^2/(2\Mpl^2)}$. In regions of high density, the scalar field $\phi$ acquires a large positive mass squared through the nonminimal coupling with $\beta>0$. This can overwhelm the negative mass squared $-m^2$ of the bare potential at $\phi=0$. Then, symmetry restoration toward $\phi=0$ occurs in strong gravitational backgrounds like the interior of NSs, while the scalar approaches the VEV $\phi_v=\pi f_B$ toward spatial infinity. This allows the existence of nontrivial field profiles affecting gravitational interactions in the vicinity of NSs. Unlike the original scenario of spontaneous scalarization induced by negative $\beta$ with $V(\phi)=0$ \cite{Damour:1993hw}, our model does not suffer from the tachyonic instability of cosmological solutions. In Sec.~\ref{cossec} we studied the cosmological evolution of $\phi$ for $m ={\cal O} ( 10^{-11}$~eV) and $f_B \lesssim \Mpl$, the mass scales relevant to NS scalarization. For $\beta>{\cal O}(0.1)$, the amplitude of $\phi$ exponentially decreases during inflation due to the dominance of the positive nonminimal coupling over the tachyonic mass squared $-m^2$ in the effective mass squared $m_{\rm eff}^2$. During the reheating stage, the field amplitude decreases further, though only mildly. 
During the radiation-dominated era, after the contribution from the nonminimal coupling to $m_{\rm eff}^2$ drops below $m^2$, $\phi$ starts to roll down the potential toward the VEV $\pi f_B$. Numerically, we showed that the scalar field starts to oscillate around $\phi=\pi f_B$ before the epoch of BBN. If the oscillation of $\phi$ has continued until today, it can be the source of (a portion of) CDM for $f_B$ satisfying Eq.~(\ref{fBcon}). This relation does not apply to the case in which the energy density of the oscillating $\phi$ is converted to other particles by today. In Sec.~\ref{weaksec}, we derived the field profile for nonrelativistic stars with constant $\rho$ on weak gravitational backgrounds. Outside the star, the scalar field is given by Eq.~(\ref{phica}) with the effective coupling (\ref{beff}). For the mass satisfying the condition $m r_s \gg 1$, where $r_s$ is the radius of the star, the field stays very close to $\phi=\pi f_B$. In this case, the gravitational potentials $\Phi$ and $\Psi$ receive fifth-force corrections as in Eqs.~(\ref{Phi}) and (\ref{Psi}). From Solar-system constraints on the post-Newtonian parameter $\gamma_{\rm PPN}=\Phi/\Psi$, we obtained the upper limit (\ref{bebound}) on the product $\beta f_B$. With the mass scale $m=10^{-11}$~eV, the bound (\ref{bebound}) translates to $\beta \pi f_B/\Mpl \leq 0.3$ for the Sun. This is weaker by two orders of magnitude than the constraint (\ref{bebound2}) derived in the massless limit $m r_s \to 0$. In Sec.~\ref{scasec}, we numerically constructed NS solutions in the presence of a positive nonminimal coupling with the self-interacting potential. We showed the existence of scalar-field profiles with a significant difference between the field value $\phi_c$ at $r=0$ and the asymptotic value $\pi f_B$ at spatial infinity for static and spherically symmetric NSs. 
This is an outcome of the symmetry restoration toward $\phi=0$ in regions of high density induced by the positive nonminimal coupling $\beta$. As we observe in Fig.~\ref{fig3}, for larger $\beta$, the difference between $\phi_c$ and $\pi f_B$ tends to be more significant. The nonminimally coupled scalar field gives rise to modifications of the gravitational potentials $\Phi$ and $\Psi$ in comparison to GR. We computed the quantity $\eta_s=\Phi'(r_s)/\Psi'(r_s)-1$ by varying the central density $\rho_c$ in the range $\rho_c \geq 2\rho_0$. Taking the upper limit $f_B=0.3 \Mpl/(\pi \beta)$ constrained from Solar-system tests of gravity, we find that $\eta_s$ is a decreasing function of $\beta$ for given $\rho_c$. The increase of $\beta$ results in decreases of $f_B$ and $\phi(r)$ inside the star, so the parameter $\eta_s$ tends to be suppressed. As $\rho_c$ increases, $\eta_s$ also decreases due to the several combined effects explained in the main text. Still, $\eta_s$ can exceed ${\cal O}(0.01)$ in the coupling range $\beta=0.1 \sim 10$. In Fig.~\ref{fig5}, we plotted the relation between the ADM mass $M_s$ and the radius $r_s$ of NSs for $\beta=0.3$ and $\beta=1$ with $f_B=0.3 \Mpl/(\pi \beta)$. Unlike standard spontaneous scalarization, the deviation of $M_s$ and $r_s$ from their values in GR occurs for any central density in the range $2 \rho_0 \leq \rho_c \leq 18 \rho_0$. For $\beta=0.3$, the relative difference of the ADM mass from that in GR, defined by $\mu_M = 1-M_s/M_{\rm GR}$, is as large as $\mu_M=0.15$ for $\rho_c=2\rho_0$. As $\rho_c$ increases, $\mu_M$ tends to decrease, but it still takes a sizable value of $0.07$ even for $\rho_c=15 \rho_0$. With the increase of $\beta$, the theoretical lines in the $(M_s, r_s)$ plane, which lie in the region $M_s<M_{\rm GR}$, approach that in GR. 
For $\beta=1$ and $\rho_c=2\rho_0$, we found that $\mu_M=0.08$, so an appreciable deviation from GR is still present. In summary, we showed that our new model of NS scalarization associated with symmetry restoration induced by the nonminimal coupling leads to modified gravitational interactions in the vicinity of NSs, while remaining free from the problem of instabilities during the cosmological evolution. It will be of interest to investigate how to test our new scalarization scenario observationally, for instance in the gravitational waveforms emitted from NS binary coalescences and in the quasinormal ringdown signals after the coalescence. This will allow us to probe signatures of modifications of gravity induced by the nonminimally coupled scalar field with positive $\beta$. \section*{ACKNOWLEDGMENTS} MM was supported by the Portuguese national fund through the Funda\c{c}\~{a}o para a Ci\^encia e a Tecnologia (FCT) in the scope of the framework of the Decree-Law 57/2016 of August 29, changed by Law 57/2017 of July 19, and the Centro de Astrof\'{\i}sica e Gravita\c c\~ao (CENTRA) through Project~No.~UIDB/00099/2020. MM would also like to thank the Yukawa Institute for Theoretical Physics (under the Visitors Program of FY2022) and the Department of Physics of Waseda University for their hospitality. ST was supported by the Grant-in-Aid for Scientific Research Fund of the JSPS Nos.~19K03854 and 22K03642. \bibliographystyle{mybibstyle} \bibliography{bib}
Title: Unveiling the main sequence of galaxies at $z \geq 5$ with the James Webb Space Telescope: predictions from simulations
Abstract: We use two independent, galaxy formation simulations, FLARES, a cosmological hydrodynamical simulation, and SHARK, a semi-analytic model, to explore how well the James Webb Space Telescope (JWST) will be able to uncover the existence and parameters of the star-forming main sequence (SFS) at $z=5\to10$, i.e. shape, scatter, normalisation. Using two independent simulations allows us to isolate predictions (e.g., stellar mass, star formation rate, SFR, luminosity functions) that are robust to or highly dependent on the implementation of the physics of galaxy formation. Both simulations predict that JWST can observe $\ge 70-90\%$ (for SHARK and FLARES respectively) of galaxies up to $z\sim10$ (down to stellar masses of $\approx 10^{8.3}\,\rm M_{\odot}$ and SFRs of $\approx 10^{0.5}\,\rm M_{\odot}\, yr^{-1}$) in modest integration times and given current proposed survey areas (e.g. the COSMOS-Web $0.6\,\rm deg^2$ survey) to accurately constrain the parameters of the SFS. Although both simulations predict qualitatively similar distributions of stellar mass and SFR, there are important quantitative differences, such as the abundance of massive, star-forming galaxies, with FLARES predicting a higher abundance than SHARK; the early onset of quenching as a result of black hole growth in FLARES (at $z\approx 8$), not seen in SHARK until much lower redshifts; and the implementation of synthetic photometry, with FLARES predicting more JWST-detected galaxies ($\sim 90\%$) than SHARK ($\sim 70\%$) at $z=10$. JWST observations will distinguish between these models, leading to a significant improvement upon our understanding of the formation of the very first galaxies.
https://export.arxiv.org/pdf/2208.06180
\label{firstpage} \pagerange{\pageref{firstpage}--\pageref{lastpage}} \begin{keywords} cosmology:theory -- galaxies:star formation -- galaxies:high redshift -- infrared:JWST \end{keywords} \input{Sections/Introduction} \input{Sections/Simulations} \input{Sections/SMF-SFRF} \input{Sections/SFS} \input{Sections/CSFH-CSMH} \input{Sections/Discussion} \input{Sections/Conclusion} \section*{Acknowledgements} We thank the anonymous reviewer for their time and constructive comments. JCJD is supported by an Australian Government Research Training Program (RTP) Scholarship. We thank the entire \textsc{Flares} team for their support and feedback. CL has received funding from the ARC Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013. LJMD acknowledges support from the Australian Research Council's Future Fellowship scheme (FT200100055). CCL acknowledges support from the Royal Society under grant RGF/EA/181016. The Cosmic Dawn Centre is funded by the Danish National Research Foundation. This work was supported by resources provided by The Pawsey Supercomputing Centre with funding from the Australian Government and the Government of Western Australia. This work used the DiRAC@Durham facility managed by the Institute for Computational Cosmology on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). The equipment was funded by BEIS capital funding via STFC capital grants ST/K00042X/1, ST/P002293/1, ST/R002371/1 and ST/S002502/1, Durham University and STFC operations grant ST/R000832/1. DiRAC is part of the National e-Infrastructure. \section*{Data Availability} Figures, scripts and additional data are available upon reasonable request to the authors. \textsc{Flares} is hosted at \url{https://github.com/flaresimulations} and \textsc{Subfind}/photometry outputs are available at \url{https://flaresimulations.github.io/}. \textsc{Shark} is hosted at \url{https://github.com/ICRAR/shark}. 
\textsc{Shark} \textsc{Subfind}/photometry outputs are available upon reasonable request. \bibliographystyle{mnras} \bibliography{refs} \appendix \section{Effect of uncertainties on the SFS fitting} \label{apdx:uncertainties} To investigate the effect of measurement uncertainties on the ability to recover the SFS parameters of the true galaxy population, we refit the SFS of the JWST population with varied Gaussian uncertainties added to the stellar masses and star formation rates. \Cref{fig:relative} shows the relative error of the parameters of the SFS as a function of the assumed uncertainty on the stellar masses and star formation rates of the JWST-detected populations in the simulations. The relative error is \begin{equation} \mathrm{ Relative \, error = \left | \frac{X_{JWST} - X_{All}}{X_{All}}\right |, } \label{eq:rel_err} \end{equation} where $\mathrm{X_{JWST}}$ are the parameters calculated for the JWST population with the added uncertainties and $\mathrm{X_{All}}$ are the calculated parameters for the entire galaxy population in the simulations. When using the same prior distributions as those we used for the fitting in \Cref{fig:sfs_quantiles,fig:sfs_jwst_fit}, we found that for varied uncertainty combinations the relative error was largest in different parameters, making it difficult to see how additive normal noise affected the fitting. For this reason, we fit the parameters independently, keeping the remaining three parameters fixed, for each uncertainty combination. We achieve this by using a uniform prior on the desired parameter and narrow normal priors centred on the true population parameters for the remaining three parameters, cycling through this process until all parameters have been considered. So for 7 uncertainty combinations, 4 parameters and 3 selected redshifts we perform $\mathrm{7 \times 4 \times 3 = 84}$ MCMC fitting routines on each simulation. 
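The relative-error metric of Eq.~(\ref{eq:rel_err}) and the bookkeeping of the fitting routines can be sketched as follows. This is a minimal illustration, not the authors' pipeline; the function name and example values are illustrative:

```python
# Sketch of the relative-error metric of Eq. (rel_err): the fractional
# deviation of an SFS parameter fit to the JWST-detected population from
# the same parameter fit to the full simulated population.
def relative_error(x_jwst, x_all):
    """|X_JWST - X_All| / |X_All| for one SFS parameter."""
    return abs((x_jwst - x_all) / x_all)

# Bookkeeping of the fitting runs quoted in the text, per simulation:
n_uncertainty_combos = 7   # (sigma_M*, sigma_SFR) pairs
n_parameters = 4           # normalisation, turnover mass, two slopes
n_redshifts = 3
n_runs = n_uncertainty_combos * n_parameters * n_redshifts  # 84
```

For instance, a fitted normalisation of 1.1 against a true value of 1.0 gives a relative error of 0.1, i.e. at the edge of the 10\% margin discussed below.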
Both simulations predict a similar trend of increased relative error on all parameters of the SFS as the uncertainty on the stellar masses and star formation rates increases, as expected. Even with zero uncertainty there is a non-zero relative error, indicating systematic offsets between the JWST-detected and total galaxy populations in the simulations. This is perhaps unsurprising, as the JWST will miss faint galaxies, and so the relationship between the total and JWST populations is not one-to-one. There is, in general, a larger increase in the relative error when the uncertainty on the stellar mass is increased from 0.2 dex to 0.4 dex than when the uncertainty on the SFR is increased from 0.2 dex to 0.4 dex to 0.6 dex. We thus suggest that accurate constraints on the stellar mass are more important for describing the SFS than constraints on the SFR. The grey shaded regions show the 10\% margin. Beyond $\mathrm{\sim(0.2,0.4)}$ dex uncertainties on $\mathrm{(M_{\star}, SFR)}$, both simulations fail to recover the normalisation of the SFS to within 10\% at all redshifts shown. For almost any uncertainty combination, the simulations are able to recover the turnover mass to within $10$\% at all redshifts except ${z\sim6}$ in \textsc{Shark}. The intermediate and high stellar mass slopes require far more accurate constraints on the stellar mass and SFR. Beyond $\mathrm{\sim 0.2 \, dex}$ uncertainty on the stellar mass and $\mathrm{\lesssim 0.6 \, dex}$ on the SFR, the intermediate stellar mass slope cannot be recovered to within 10\% at any redshift except ${z=6}$ in \textsc{Flares}. For most uncertainty combinations used in \textsc{Flares} the high stellar mass slope can be recovered, except beyond $\mathrm{\sim(0.2,0.6)}$ dex uncertainties on $\mathrm{(M_{\star}, SFR)}$. On the other hand, for almost every uncertainty combination \textsc{Shark} cannot recover the high stellar mass slope at any redshift, suggesting that the high stellar mass slope is subtle in \textsc{Shark} by default. 
Interestingly, the apparent redshift dependencies of the recovery of these parameters appear to be reversed between the simulations, with \textsc{Flares} showing decreasing relative error for the same uncertainty combination between ${z=10\to6}$, whereas this holds between ${z=6\to10}$ in \textsc{Shark}. We do not comment any further on this in this work. Some larger combinations of uncertainties result in a lower relative error than smaller uncertainties. For example, the $\mathrm{\sim(0.4,0.6)}$ dex combination is closer to the true population value of the turnover at all redshifts in \textsc{Shark} than the $\mathrm{\sim(0.4,0.2)}$ combination is. This is more than likely a statistical effect or a symptom of the fitting routine as, intuitively, a larger uncertainty should result in worse agreement; we do not investigate this further here. Ultimately, these trends show that poorer constraints on $\mathrm{M_{\star}}$ and $\mathrm{SFR}$ result in a worse recovery of the true parameters of the SFS. These trends also provide an indication of the level of accuracy that would be needed to determine the true SFS to within 10\%. \bsp \label{lastpage}
Title: The stellar remnant of SN 1181
Abstract: We report observations and modelling of the stellar remnant and presumed double-degenerate merger of Type~Iax supernova SN 1181 AD. It is the only known bound stellar SN remnant and the only star with Wolf-Rayet features that is neither a planetary nebula central star nor a massive Pop I progenitor. We model the unique emission-line spectrum with broad, strong O VI and O VIII lines as a fast stellar wind and shocked, hot gas. Non-LTE wind modeling indicates a mass-loss rate of $\sim 10^{-6}\,\rm M_\odot yr^{-1}$ and a terminal velocity of $\sim$15,000 km s$^{-1}$, consistent with earlier results. O VIII lines indicate shocked gas temperatures of $T \simeq 4$ MK. We derive a magnetic field upper limit of $B<2.5$ MG, below earlier suggestions. The luminosity indicates a remnant mass of $1.2\pm0.2$ $\rm M_\odot$ with ejecta mass $0.15\pm0.05$ $\rm M_\odot$. Archival photometry suggests the stellar remnant has dimmed by $\sim$0.5 magnitudes over 100 years. A low Ne/O $<0.15$ argues against an O-Ne white dwarf in the merger. A cold dust shell is only the second detection of dust in a SN Iax and the first of cold dust. Our ejecta mass and kinetic energy estimates of the remnant are consistent with Type Iax extragalactic sources.
https://export.arxiv.org/pdf/2208.03946
\begin{document} \title{The stellar remnant of the type Iax SN 1181} \author[0000-0002-6394-8013]{Foteini Lykou} \affiliation{Konkoly Observatory, Research Centre for Astronomy and Earth Sciences, Konkoly-Thege Mikl\'os \'ut 15-17, 1121 Budapest, Hungary} \affiliation{Laboratory for Space Research, University of Hong Kong, 405B Cyberport 4, 100 Cyberport Road, Cyberport, Hong Kong} \author[0000-0002-2062-0173]{Quentin A. Parker} \correspondingauthor{Quentin A. Parker} \email{quentinp@hku.hk} \affiliation{Laboratory for Space Research, University of Hong Kong, 405B Cyberport 4, 100 Cyberport Road, Cyberport, Hong Kong} \affiliation{Department of Physics, The University of Hong Kong, Chong Yuet Ming Physics Building, Pokfulam Road, Hong Kong} \author[0000-0003-0869-4847]{Andreas Ritter} \affiliation{Laboratory for Space Research, University of Hong Kong, 405B Cyberport 4, 100 Cyberport Road, Cyberport, Hong Kong} \affiliation{Department of Physics, The University of Hong Kong, Chong Yuet Ming Physics Building, Pokfulam Road, Hong Kong} \author[0000-0002-3171-5469]{Albert A. Zijlstra} \affiliation{Jodrell Bank Centre for Astrophysics, The University of Manchester, Alan Turing Building, Oxford Road, M13 9PL, Manchester, U.K.} \affiliation{Laboratory for Space Research, University of Hong Kong, 405B Cyberport 4, 100 Cyberport Road, Cyberport, Hong Kong} \author[0000-0001-5094-8017]{D. John Hillier} \affiliation{Department of Physics and Astronomy \& Pittsburgh Particle Physics, Astrophysics, and Cosmology Center (PITT PACC), University of Pittsburgh, 3941 O’Hara Street, Pittsburgh, PA 15260, USA} \author[0000-0002-7759-106X]{Mart\'in A. 
Guerrero} \affiliation{Instituto de Astrof\'isica de Andaluc\'ia (IAA-CSIC), Glorieta de la Astronom\'ia S/N, 18008 Granada, Spain} \author[0000-0003-2385-0967]{Pascal Le D\^u} \affiliation{Kermerrien Observatory, F-29840 Porspoder, France} \keywords{Supernovae (1668), Circumstellar matter (241), Spectroscopy (1558), Photometry (1234), Plasma astrophysics (1261)} \section{Introduction} \label{sec:intro} A new class of thermonuclear supernovae (SNe), produced in low-mass (i.e., $M_* \leq 8$~M\solar) binary systems, has been identified in the last twenty years: the Type~Iax \citep[e.g.,][]{2013ApJ...767...57F}, also known as SN 2002cx-like. These are the least-understood supernovae. Two scenarios are proposed for their formation. In the single-degenerate scenario \citep[e.g.,][]{jordan2012,2015MNRAS.450.3045K}, a carbon-oxygen (CO) white dwarf (WD) that accreted material from a helium donor fails to detonate. In the double-degenerate scenario \citep[e.g.,][]{kashyap2018}, the SN is caused by deflagration of the accretion disk that forms after the merger of a CO WD with an oxygen-neon (ONe) WD companion. Type~Iax SNe are sub-luminous (as faint as $M=-13$ to $-14$ mag) with expansion velocities from $2000$ to $9000$\kms, and may leave a stellar remnant behind. The currently known Type Iax SNe population\footnote{See also \url{https://www.wiserep.org/}, where 44 objects are currently listed.} is only a few tens \citep{2013ApJ...767...57F, 2017hsn..book..375J}, so the full range of observed properties is weakly constrained. All are extra-galactic save two in our own Galaxy. One is SN Sgr A East near the Galactic Center as suggested by \citet{2021ApJ...908...31Z}. The other\footnote{Originally assigned as Type Iax by \citet{oskinova2020}. } is associated with the nebula Pa\,30, identified as the remnant of the historical SN~1181~AD \citep[][hereafter Paper I]{paper1}. 
Earlier studies show that the nebula -- Pa\,30 in \citet{kron2014} and IRAS\,00500+6713 in \citet{gvaramadze2019} -- emits strongly in X-rays \citep{oskinova2020}. The nebula appears circular, approximately 3\arcmin\ -- 4\arcmin\ across depending on the wavelength of the imagery. Faint, diffuse [O {\sc iii}] nebular emission is seen via deep narrow-band imaging \citep{kron2014, oskinova2020, paper1}. An [S {\sc ii}] shell is expanding at $1000\pm100$ km~s$^{-1}$ (\citetalias{paper1}). The stellar remnant, reported by \citet{gvaramadze2019} as J005311, is associated with the infrared source 2MASS J00531123+6730023. It is located at the center of Pa\,30 and has been argued to be the result of a double-degenerate CO\,+\,ONe WD merger \citep{gvaramadze2019,oskinova2020}. In this paper we analyze the stellar wind and nebula in order to constrain the progenitor system. The stellar wind and shocked hot gas are used to derive the stellar temperature, luminosity, mass, and abundances. The remnant mass is below the Chandrasekhar limit, so a further explosive event is ruled out. We constrain the magnetic field strength to less than 2\,MG. Indications of slow fading are presented, consistent with the young age of the remnant. The outer shell is shown to be hydrogen-rich and dusty, and may contain swept-up ISM or pre-merger ejecta. The results are compared to other Type Iax events. \section{Observations and Data Reduction}\label{sec:observations} The stellar remnant 2MASS J00531123+6730023 has {\it Gaia} J2000 position RA: 00$^h$~53$^m$~11.205$^s$, DEC: +67$^{\circ}$~30$'$~02.381$''$. The {\it Gaia} DR3 parallax is $0.4065\pm0.0259$\,mas, and the adopted distance is $2.3\pm0.1$\,kpc \citep{bailerjones2021}. The Galactic coordinates ($l$,$b$)\,=\,(123.1, 4.6) place it 180\,pc from the plane, so it is likely part of the old stellar disk. 
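The geometry quoted above can be cross-checked from the stated numbers. This is a minimal sketch; the naive inverse-parallax distance differs slightly from the adopted Bailer-Jones estimate, which accounts for the parallax uncertainty and a distance prior:

```python
import math

# Consistency check of the quoted geometry: Gaia DR3 parallax, adopted
# distance, and height above the Galactic plane at latitude b = 4.6 deg.
parallax_mas = 0.4065
naive_distance_kpc = 1.0 / parallax_mas     # ~2.46 kpc (simple inversion)
adopted_distance_kpc = 2.3                  # Bailer-Jones et al. (2021)

b_deg = 4.6                                 # Galactic latitude
height_pc = adopted_distance_kpc * 1e3 * math.sin(math.radians(b_deg))
# ~184 pc, consistent with the "180 pc from the plane" quoted in the text
```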
\subsection{OSIRIS/GTC spectroscopy} High signal-to-noise spectroscopic data of both star and nebula were obtained with the OSIRIS instrument on the 10-m Gran Telescopio Canarias (GTC) telescope in late 2016 as part of our planetary nebula (PN) follow-up program. The faint nebular spectrum of SN~1181 was presented in \citetalias{paper1}, with only the [S {\sc ii}] 6716 and 6731\,\AA\ doublet and [Ar {\sc iii}] 7136\,\AA\ emission lines detectable. Here, we focus on the spectra we obtained for the stellar core. OSIRIS covered the wavelength region from 3700 to 7000\,\AA. The slit width was 0.8\arcsec\ \citetalias{paper1}. The most prominent stellar feature is a remarkable ``blue bump'' below 4000\,\AA\ (Figure~\ref{fig:starspec}), with additional broad spectral features beyond 4200\,\AA. The narrow absorption lines are known diffuse interstellar bands (DIBs) at 4428, 4502, 4727, 4762, 5418, 5456, 5779, 5796, 6204, 6281, and 6613\,\AA, identified using the NASA catalogue\footnote{\url{https://leonid.arc.nasa.gov/DIBcatalog.html}} \citep{DIBs}. \subsection{SparsePak/WIYN fibre spectroscopy} We analyzed previously unextracted spectra of the stellar core from SparsePak, the fibre-based (fibre diameter 4.7\arcsec) pseudo-integral field unit on the 3.5-m WIYN telescope, taken in late 2014. The spectral observations of the surrounding nebula Pa\,30 are presented in \citetalias{paper1}. \subsection{Kermerrien Observatory spectroscopy} Low-resolution ($R\sim540$) optical spectroscopy of Pa\,30's central star by our amateur collaborators, using a 20-cm $F/D\sim5$ telescope at Kermerrien observatory in Porspoder, France, brought its remarkable spectrum and its ``blue bump'' to our attention in October 2018. The low signal-to-noise spectra were not properly flux-calibrated and were scaled and smoothed for comparison against our OSIRIS/GTC spectrum in Figure~\ref{fig:starspec}. 
\subsection{Archival data} \subsubsection{X-ray observations}\label{sec:xray} Faint X-ray emission was first recorded by the \emph{ROSAT} All-Sky Survey \citep{rosatfaint} within 12\arcsec\ of the central star at a PSPC count rate in the 0.1--2.4 keV band of $17 \pm 7$ cnts~ks$^{-1}$. Most counts ($>$70\%) have energies greater than 0.5 keV. A \emph{Swift} XRT \citep{evans2014} 1.94~ks observation (ID 00358336000, 26$^{\rm th}$ July 2009) serendipitously registered this source. Analysis of these observations confirms a point source detection at the star's position with a \emph{Swift} XRT count rate in the 0.2--5 keV band of $3.6 \pm 1.4$ cnts~ks$^{-1}$, together with possible diffuse emission. Recently, \citet{oskinova2020} presented deep, sensitive \emph{XMM-Newton} EPIC X-ray observations that detect a central point source while confirming diffuse nebula emission. Their spectral analysis of the diffuse emission describes an optically-thin, thermal plasma composed of carbon, oxygen, neon, and magnesium (see their Table~1). These abundances are attributed to carbon-burning ashes, suggesting Pa\,30 is a supernova remnant. The total mass of the X-ray-emitting nebula, estimated as $\simeq$0.1 $M_\odot$, strengthens this conclusion. The point source \emph{XMM-Newton} EPIC spectra are described by a hot plasma at temperatures of 1--100 MK. A non-thermal component, which would reduce the plasma temperature to $\sim$1--20 MK, cannot be excluded. The EPIC spectra are consistent with the \emph{Swift} and \emph{ROSAT} data, but their much higher quality allows a detailed spectral fit \citep{oskinova2020}. The un-absorbed X-ray luminosity in the 0.2--12 keV band implied by \citet{oskinova2020} is reduced to $6.6\times10^{32}$ erg~s$^{-1}$ at our adopted distance of $2.3\pm0.1$\,kpc.
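As a consistency check, the quoted X-ray luminosity can be compared with the stellar bolometric luminosity derived from the wind models later in this paper; the $L_{\rm bol}$ value below is the illustrated-model luminosity from Table~\ref{tab:models} and is an assumption for this sketch:

```python
import math

L_X = 6.6e32              # erg/s, 0.2--12 keV, at the adopted 2.3 kpc distance
L_SUN = 3.828e33          # erg/s, nominal solar luminosity
L_bol = 36000 * L_SUN     # illustrated wind-model luminosity (an assumption here)

log_ratio = math.log10(L_X / L_bol)
```

This gives $\log(L_{\rm X}/L_{\rm bol}) \approx -5.3$, within the $-4.8$ to $-6.1$ range quoted later for the shocked gas.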
To contribute high-energy points to the SED presented in Section~\ref{sec:shellphotom}, we re-processed the \emph{XMM-Newton} EPIC MOS1, MOS2, and {\it pn} data corresponding to Obs.ID.\, 0841640101 and 0841640201 using \emph{SAS} and relevant calibration files. MOS1, MOS2, and {\it pn} background-subtracted spectra of the central star of Pa\,30 were extracted using a circular aperture (radius=20\arcsec) with suitable background regions and calibration matrices computed using the corresponding \emph{SAS} tasks. The 0.2--7.0 keV extended emission image reveals a shell with a diameter of $\approx$4\arcmin\ with a filled morphology. \subsubsection{Infrared imaging}\label{sec:obsir} We mined the IRSA/IPAC archive\footnote{\url{https://irsa.ipac.caltech.edu/}} for available infrared imaging within 3\arcmin\ of Pa\,30. Apart from the WISE data presented previously \citep{gvaramadze2019, oskinova2020} and in \citetalias{paper1}, usable mid- and far-infrared images exist from AKARI \citep{2015PASJ...67...50D} and IRAS \citep{1984ApJ...278L...1N}. Only the central star is visible in the near-infrared, while the nebula is prominent beyond 10\micron. The far-infrared maps (i.e., AKARI and IRAS) do not have sufficient resolution to separate the central star from the nebula, but they resolve Pa\,30, which retains its apparent circular shape. The 12\micron\ WISE image shows a round nebula with a radius of 83\arcsec$\pm$4\arcsec, corresponding to a physical radius of $0.93\pm0.04$ pc, identical to the size of the [O {\sc iii}] shell \citepalias{paper1}. The extended X-ray shell seen by XMM-Newton is roughly 120\arcsec\ in radius. No nebular emission has been detected in submillimeter {\it Planck} maps. The infrared photometry from these datasets assumes unresolved sources, although the nebular component can exceed the size of the apertures. We re-evaluate the photometric measurements taking into account the observed nebular size.
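The conversion from the angular WISE radius to the quoted physical radius is the usual small-angle relation; a short sketch with values from the text:

```python
ARCSEC_PER_RADIAN = 206265.0

def physical_radius_pc(theta_arcsec, distance_pc):
    """Small-angle conversion of an angular radius to a physical radius."""
    return theta_arcsec / ARCSEC_PER_RADIAN * distance_pc

# 83 arcsec WISE radius at the adopted distance of 2.3 kpc
r_wise_pc = physical_radius_pc(83.0, 2300.0)
```

This reproduces the quoted $0.93\pm0.04$\,pc.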
In Section~\ref{sec:shellphotom}, we present estimates of the nebula's surface brightness. \subsubsection{Photometry}\label{phot} The central star of Pa\,30 is relatively bright, at $G \approx 15.4$ mag. It has been covered in many sky surveys, from which we retrieved all available optical photometric time-series data in order to investigate temporal variability (cf. Sect.~\ref{sec:varphot}). \subsubsection{Spectroscopy}\label{sec:ultra} In the following sections, we directly compare our OSIRIS/GTC and SparsePak/WIYN stellar spectra with those of \citet{gvaramadze2019} and \citet{2020RNAAS...4..167G}. These archival long-slit spectra were very kindly provided by these authors. We also used reduced archival ultraviolet spectra from the Space Telescope Imaging Spectrograph (STIS) on the Hubble Space Telescope (HST) under program GO15864 (P.I. G.~Graefener; epoch: 4 November 2020). % The spectral coverage was from 1150 to 3150\,\AA, with a slit width of 0.2\arcsec. These spectra provided a comparison against the {\sc cmfgen} models in Sect.~\ref{sec:nlte}. Finally, we complemented our observations with very recent {\it Gaia} DR3 BP/RP low-resolution spectra of $R=30$--$100$ \citep{Montegriffo2022} that extend beyond the OSIRIS/GTC coverage, i.e., from 3200\,\AA\ to 1\,$\mu$m, and cover the ``blue bump''. The {\it Gaia} telescope observes from above the atmosphere, so telluric lines are not present. The spectra were obtained from the UK Gaia Data Mining Platform\footnote{https://www.gaia.ac.uk/data/uk-gaia-data-mining-platform} and are encoded as Hermite functions. The spectra show instrumental distortions around the ``blue bump''. Between 4000 and 4500\,\AA, the feature wavelengths differ by 30\,\AA\ from our ground-based spectra, possibly due to the response of the Hermite functions to the sharp edge of the bump. The wavelength calibration appears correct for $\lambda>5000$\,\AA.
\subsection{Extinction} \label{sect:ext} \citetalias{paper1} adopted $A_V=2.4\,\pm\,0.1$~mag, but field stars now imply this may be an underestimate. Figure~\ref{fig:extinction} shows parallax and $A_0$ from {\it Gaia} DR3 field stars within a 5~arcmin radius (black points) of the central star of Pa\,30. Red points are stars within 2~arcmin. The curve is the model extinction from \citet{Vergely2022} obtained via the {\sc explore} platform\footnote{https://explore-platform.eu/sdas} at a spatial resolution of 25~pc (30~arcmin at the distance of Pa\,30). At the parallax of the central star (dashed line), the nominal extinction is $A_0=2.9$ mag, albeit with a large scatter between individual stars. The extinction is primarily due to clouds at around 400\,pc and at 800\,pc. The \citet{Vergely2022} curve ends at a distance of 3 kpc. {\it Gaia} DR3 finds some higher extinctions beyond 2.5\,kpc, but these are not included in the curve. The data indicate an extinction of $A_0=2.8\pm0.4$ mag. The central value is the same as that listed for our star in {\it Gaia} DR3, but this is fortuitous in view of the complexity of the spectrum. In this paper we will argue that the extinction for the central star is at the higher end of this range. \section{The stellar remnant}\label{sec:CS} \subsection{The stellar spectrum}\label{sec:spec} The central star of Pa\,30 has a unique spectrum (Figure~\ref{fig:starspec}) with very broad, very high excitation emission lines and a striking discontinuity bluewards of 4000\,\AA. There is a complete absence of hydrogen, helium and nitrogen lines. The broad lines are mostly identified with very high ionization stages of oxygen (Figure~\ref{fig:model_n38}). The sharp and significant ``blue bump'' that cuts on around 4000\,\AA\ is identified as O {\sc vi} 3811/34\,\AA. This feature turns over near the atmospheric cut-off, as confirmed in the recent spectrum of \cite{2020RNAAS...4..167G}.
This doublet has been previously noted in Pop~I WO stars \citep[e.g.,][]{crowther1998, gvaramadze2019}. The broadened emission at 5270/91\,\AA\ is identified as O {\sc vi}. Currently, stars exhibiting such Wolf-Rayet features are associated with either young massive Pop~I stars or old low-mass central stars of planetary nebulae (designated as [WO]) before they enter the white dwarf phase. The central star of Pa\,30 seems unique in arising from neither pathway if the double-degenerate merger and accretion-fed Type~Iax SN scenarios are correct \citep{gvaramadze2019,oskinova2020}. In addition to these strong, broad features, a number of weaker, individual lines are seen. Their profiles can be accurately fitted with Gaussians with FWHM of $0.015c$. The wings extend to velocities as high as 15,000\,km\,s$^{-1}$ ($\approx0.05c$). Several lines coincide with O~{\sc viii} and are tentatively identified as such. They are well centered on the systemic velocity. \subsection{The stellar wind {\sc cmfgen} models}\label{sec:nlte} The broad, prominent spectral features are indicative of a strong stellar wind. We modeled it using the non-LTE line-blanketed radiative transfer code {\sc cmfgen}\footnote{\url{http://kookaburra.phyast.pitt.edu/hillier/web/CMFGEN.htm}} adapted to WC and WO stars \citep{hillier1998,hillier2012a,hillier2012b}. Input parameters were our OSIRIS/GTC spectrum that includes the O {\sc vi} doublet at 3811/34\,\AA\ with the interstellar extinction derived in \citetalias{paper1}, and a wind velocity of 15,000\kms. A classic velocity law is used, \begin{equation} v(r)=v_{\infty}\left( 1-{R_*\over r}\right), \end{equation} \noindent where $v_{\infty}$ is the wind terminal velocity measured from the spectra and $R_*$ is the stellar radius. There is a consensus that stellar winds are clumped. This is taken into account in {\sc cmfgen} modeling via a volume filling factor, adopted here as $f=0.1$.
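The velocity law above (a $\beta=1$ law) can be written as a one-line function; the numbers below are purely illustrative, with $v_{\infty}=15{,}000$\,km\,s$^{-1}$ as adopted in the modeling:

```python
def wind_velocity(r_over_rstar, v_inf=15000.0):
    """Classic beta = 1 velocity law, v(r) = v_inf * (1 - R*/r), in km/s."""
    return v_inf * (1.0 - 1.0 / r_over_rstar)

v_surface = wind_velocity(1.0)    # zero at the stellar surface
v_outer = wind_velocity(100.0)    # approaches v_inf far out in the wind
```

The law rises from zero at $R_*$ and asymptotically approaches $v_{\infty}$ at large radii.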
For the spectral modeling the ratio $\dot{M}/\sqrt{f}$, where $\dot{M}$ is the mass-loss rate, can be considered an invariant. We explored a range of stellar luminosity and temperature, mass-loss rate, stellar radius, velocity law and chemical abundances, summarized in Table~\ref{tab:models}. Given the limited spectral coverage of our OSIRIS/GTC spectrum (and indeed of all available optical spectra, except that from {\it Gaia}; see above), we could not find a unique model that completely reproduces the observations \citep[unlike ][although they also stated large uncertainties, e.g., Table~\ref{tab:models}]{gvaramadze2019}. Nevertheless, many models could reproduce most spectral features. The results shown are best approximations and/or upper limits to the individual parameters. Most models tested could reproduce the visual and near-infrared photometry. Determining errors on the fits is difficult, and we refrain from giving error bars as the errors are not easily transformed into a standard deviation and strong coupling exists between some parameters. Potentially four species can have mass fractions $>$0.1, so changing the mass fraction of one species necessitates a corresponding change of at least one other species. Therefore, below, we provide viable ranges for the stellar parameters and indicate the properties used to constrain them. The three primary modeling constraints come from matching the mean flux level, the strength of the O\,{\sc vi} 3811/34\,\AA\ doublet, and the O\,{\sc v} 5572--5604\,\AA\ line strength. Only models with luminosities between 25,000 and 40,000 L$_\odot$ provide reasonable fits. Below the low-luminosity limit either the O\,{\sc vi} doublet is too weak and/or O {\sc v} is too strong, while at the upper limit either O\,{\sc vi} is too strong and/or O\,{\sc v} is too weak.
At very high temperatures the O\,{\sc vi} doublet strength is suppressed (but other O\,{\sc vi} lines remain strong) since the upper levels preferentially decay directly to the ground state. We hence set an upper limit at $\leq$230,000~K\footnote{This is the classical effective temperature defined at a Rosseland optical depth of 2/3. Due to the dense wind this is somewhat lower (around 10\%) than would be defined using the radius of the star at the sonic point.}. The peak near 5300\,\AA\ is due to O\,{\sc vi} and is offset to the red relative to the observations, perhaps due to the overlapping O\,{\sc v} line being too strong in the model (Figure~\ref{fig:model_n38}). The model matches bumps seen at $\sim$4500\,\AA\ and $\sim$4659\,\AA, but fails to predict the feature near 4250\,\AA, which is assigned to O\,{\sc viii}. An alternative suggestion for this emission is Ne\,{\sc viii}, which has approximately the same wavelength as the O\,{\sc viii} line. In a model with a slightly lower mass loss, the Ne\,{\sc viii} is clearly visible in the predicted spectra, though the overall fit to the spectrum is worse. As apparent from Figure~\ref{fig:model_n38}, the broad emission lines blend together, making it impossible to see the true continuum.% We provide some upper limits for mass fractions of key elements (He, C, O and Ne) assuming Solar abundance ratios for other elements. In these hot models helium has little influence on the spectra. From the apparent absence of the He\,{\sc ii} 4686\,\AA\ line we place an upper limit of 0.4 on $X_{\rm He}$. The O\,{\sc vi} features at 3811/34\,\AA\ and 5270/91\,\AA\ are clear indicators that the stellar atmosphere is oxygen-rich. Therefore we used an oxygen-rich model atmosphere with an oxygen mass fraction $<0.647$. This reproduces the O\,{\sc vi} doublets, as well as the O\,{\sc v} lines. All the strongest features can be reproduced, but not some of the weaker individual ones.
The lines at 4500 and 4658\,\AA\ sit on a broader O\,{\sc vi} feature whose strength depends on the choice of continuum, which is uncertain owing to the density of overlapping lines. Carbon-free and carbon-enhanced models were tested. The C\,{\sc iv} line can contribute to the 4658\,\AA\ line, but otherwise there is no easily distinguishable carbon line contribution. We set an upper limit of $X_{\rm C}\leq0.261$. We explored a range of neon abundances (Table~\ref{tab:models}) and find that an enhanced neon abundance could contribute to the spectral features at the 3600--4000\,\AA\ plateau (Ne\,{\sc vii}), at 4341\,\AA\ and at 6070\,\AA\ (Ne {\sc viii}). Inclusion of neon does not affect our ability to obtain a good fit to the oxygen lines. On the other hand, the 4341\,\AA\ line can also be identified with O {\sc viii} (see below). The fit shown in Figure~\ref{fig:model_n38} provides a `maximum' model in which as many features as possible are reproduced. Oxygen is well fitted, but carbon may be less certain and neon is likely significantly over-predicted. The plotted model has $X_{\rm Ne} =0.436$ and may be considered as the upper limit. A more carbon-rich composition cannot reproduce the spectrum. The ranges of chemical abundance of carbon, oxygen and neon in Table~\ref{tab:models} agree with those derived by \citet{oskinova2020} from the central star's X-ray spectrum.% \begin{table*}[htbp] \centering \caption{Range of stellar parameters for {\sc cmfgen} models. The last two columns show the values from \citet{gvaramadze2019} and from the hot gas model in Sect.~\ref{sec:cloudy}.
} \label{tab:models} \begin{tabular}{lccccc} \hline Parameter & Explored range & Best fit range & Illustrated model & \citet{gvaramadze2019} & Hot gas model \\ \hline $L_*$ ($\rm L_{\odot}$) & 10,000 -- 200,000 & 30,000 -- 50,000 & 36,000 & $39,810^{+20,144}_{-10,970}$ & \\ $T_*$ (K) & 145,000 -- 580,000 & 200,000 -- 250,000 & 237,000 & 211,000$^{+40,000}_{-23,000}$ & \\ $\dot{M}$ ($\rm M_{\odot}$/yr) & 7.5$\times$ 10$^{-7}$ -- 2.5$\times$ 10$^{-6}$ & $\leq$ 4$\times$ 10$^{-6}$ & 2.6 $\times$ 10$^{-6}$ & $3.5(\pm0.6)\times 10^{-6}$ & \\ $R_*$ ($\rm R_{\odot}$) & 0.04 -- 0.22 & $\leq 0.2$ & 0.155 & 0.15$\pm$0.04 & \\ v$_{\infty}$ (km~s$^{-1}$) & -- & $\sim$15,000 & 15,000 & 16,000$\pm$1,000 & \\ \hline Mass fractions & & & \\ \hline $X_{\rm H}$ & -- & -- & -- & -- & -- \\ $X_{\rm He}$ & 0.017 -- 0.135 & $<0.4$ & 0.017 & $<0.1$ & $\leq 0.44 $\\ $X_{\rm C}$ & 0.0 -- 0.261 & $\leq 0.26$ & 0.13 & $0.2\pm0.1$ & 0.13 \\ $X_{\rm O}$ & 0.414 -- 0.697 & $\leq 0.7 $ & 0.41 & $0.8\pm0.1$ & 0.39 \\ $X_{\rm Ne}$ & 0.011 -- 0.501 & $\leq 0.5 $ & 0.44 & 0.01 & 0.04 \\ \hline \end{tabular} \end{table*} The broad wind profile ($\approx$15,000\kms) provides a slightly better fit to the broad spectral profiles for both the O\,{\sc vi} doublets and the O\,{\sc v} feature at 5900\,\AA\ (Figure~\ref{fig:model_n38}). This value is close to the measured value of \citet[][16,000\kms]{gvaramadze2019}. Since the O\,{\sc v} emission emerges from the outer wind regions, a higher terminal velocity is needed to fit both oxygen features simultaneously. As found by \cite{gvaramadze2019}, radiation pressure appears insufficient to drive the wind. Though the OSIRIS/GTC spectra do not extend below 3600\,\AA, the {\sc cmfgen} models predict an O\,{\sc vi} line at 3435\,\AA, similar to \citet{gvaramadze2019}; this line has been observed in the spectra of \cite{2020RNAAS...4..167G}. After finalizing the model, we compared it to HST ultraviolet (UV) spectra of the stellar remnant (Sect.~\ref{sec:ultra}).
To fit the UV data we needed to adjust the reddening to $E(B-V)=1.05$. With the new reddening the fluxes no longer matched, so we recomputed the model with scaled parameters that would yield the same spectral shape, and this model is shown in Figure~\ref{fig:model_n38}.\footnote{% To a good approximation an accurate distance is not needed for the spectral fitting. For stars with dense stellar winds, models with the same abundances, effective temperature, and value of $\dot{M}/R_*^{3/2}$ produce similar spectra -- only differing in flux \citep{1989A&A...210..236S}. The parameters for one distance can be scaled to a new distance using the following scaling relations: $L \propto d^2$, $R_* \propto d$ and $\dot{M} \propto d^{3/2}$.} Given the bump's strength, the estimated reddening should be more accurate than that obtained using optical fluxes over a limited wavelength range. No further model modifications were made, and there is excellent agreement between the STIS/HST fluxes and the OSIRIS/GTC spectrum. Figure~\ref{fig:model_n38} shows that the {\sc cmfgen} model correctly predicts the O\,{\sc v} and Ne\,{\sc viii} blend at $\sim$2800\,\AA\ and the strong O\,{\sc v} emission at $\sim$1370\,\AA. The model also predicts strong O\,{\sc vi} resonance emission (1032, 1038\,\AA). The absorption shortward of 1500\,\AA\ is from the C\,{\sc iv} resonance doublet (1548, 1551\,\AA) being too strong in the model, possibly indicating a lower carbon abundance than adopted. The strong 2200\,\AA\ band and its Galactic variation make it difficult to judge the agreement between model and observation for features in the band. We computed additional models to better constrain the parameters and potentially improve the fit, but this did not lead to a change in the adopted parameters. One possible exception is the neon abundance -- a model with the abundance reduced by a factor of 2 might give a better fit to the UV spectrum.
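The footnote's distance-scaling relations can be applied mechanically; a sketch in which the new distance is hypothetical and the parameter values are the illustrated-model ones from Table~\ref{tab:models}:

```python
def rescale_wind_model(L, R_star, Mdot, d_old, d_new):
    """Rescale wind-model parameters to a new distance using the footnote's
    relations: L ~ d^2, R* ~ d, Mdot ~ d^(3/2)."""
    s = d_new / d_old
    return L * s**2, R_star * s, Mdot * s**1.5

# illustrated model at 2.3 kpc, rescaled to a hypothetical 3.0 kpc
L2, R2, M2 = rescale_wind_model(36000.0, 0.155, 2.6e-6, d_old=2.3, d_new=3.0)
```

The spectral shape is preserved under this rescaling; only the absolute flux changes.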
However, a better understanding of the interstellar 2200\,\AA\ band is required to draw firm conclusions. \subsection{Shocked gas: {\sc cloudy} models }\label{sec:cloudy} The X-ray spectrum indicates an additional high-temperature gas component \citep{oskinova2020} that is strong compared to what is typical for OB stars. The unabsorbed X-ray to bolometric luminosity ratio, $\log(L_{\rm X}/L_{\rm bol})$, is in the range of $-4.8$ to $-6.1$, one to two orders of magnitude larger than for OB stars, i.e., $\log(L_{\rm X}/L_{\rm bol}) = -6.912\,\pm\,0.153$ \citep{Sana2006}. \citet{oskinova2020} attribute the bulk of the remnant's thermal X-ray emission to the wind's outer regions. The wind model describes well the shape and many features of the optical spectrum. However, some weaker lines are not fitted, and the 4340\,\AA\ feature requires Ne\,{\sc viii} at a high neon abundance. This line also coincides with a high-excitation O\,{\sc viii} line. We therefore explore a spectral contribution from the high-temperature gas component, which is not included in the wind model. As a hydrogenic ion, O\,{\sc viii} has a restricted number of lines. The 4340\,\AA\ line coincides with the O\,{\sc viii} 8--9 transition and the 6069\,\AA\ line with the O\,{\sc viii} 9--10 transition. These transitions have a wavelength ratio of 5/7, and the two line profiles overplot accurately when the wavelengths are scaled by this ratio. The observed intensity ratio of the two lines (after subtraction of an estimated continuum) is reproduced by these O\,{\sc viii} lines for an extinction in agreement with the foreground extinction. This strengthens the identification as O\,{\sc viii}, although other lines may contribute to these features. The temperature required for O\,{\sc viii} is $>$1~MK, far above the photospheric or wind temperature. We ran exploratory photo-ionisation models using the {\sc cloudy} code \citep[ver.~17.03;][]{cloudy}.
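The 5/7 wavelength ratio quoted above follows directly from the Rydberg formula for a hydrogenic ion, since the ionic charge and the Rydberg constant cancel in the ratio; a quick check:

```python
def hydrogenic_wavelength(n_lo, n_hi):
    """Relative wavelength of a hydrogenic n_hi -> n_lo transition
    (Rydberg formula; Z and the Rydberg constant cancel in ratios)."""
    return 1.0 / (1.0 / n_lo**2 - 1.0 / n_hi**2)

wl_ratio = hydrogenic_wavelength(8, 9) / hydrogenic_wavelength(9, 10)
```

This gives $\approx 0.7153$, close to both $5/7 \approx 0.714$ and the observed $4340/6069 \approx 0.715$.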
In the `coronal mode', the gas temperature is pre-defined and all lines are assumed to be collisionally excited. There is no input radiation field. A single shell model was used, with a constant density and temperature and without considering optical depth effects. The models covered a range of temperatures and abundances. The hydrogen density was set to $\log n = 7.5$. All lines were assumed to have a Gaussian profile with a sigma of $v/c=\Delta \lambda/\lambda = 0.0092$, as found from line fitting. The {\sc cloudy} line output list was convolved with this Gaussian profile to obtain model spectra. The model spectra were multiplied by the extinction curve of \citet{ccm2} and scaled to the peak intensity of the 4340\,\AA\ line. The ratio of the integrated line strengths of the 4337\,\AA\ and 6064\,\AA\ lines was used to derive the extinction $A_V$, where all contributions to these features were taken into account. The broad wind features at 3800\,\AA\ and 5200\,\AA\ are not reproduced in this model: the hot gas models produced no significant O\,{\sc vi} emission. However, the weaker lines absent from the wind model could be reproduced with this hot gas. The model fit is shown in Figure~\ref{fig:cloudycorona} along with the observed spectra for a gas temperature of $T=4$\,MK. The best fit gave abundances (number ratios) of He/C/N/O/Ne = 0.73/0.07/0.02/0.16/0.01, where helium must be considered as an upper limit. These are within the wind model range. The brightest predicted lines between 3000\,\AA\ and 7000\,\AA\ are listed in Table \ref{tab:lines}. An estimated constant-value continuum was subtracted from the observed spectrum across the full wavelength range, with no attempt to correct for any continuum slope. The line strengths depend on the continuum choice. This introduces an uncertainty: the dominant broad wind features obliterate most of the continuum. We subtracted a constant level from the WIYN spectrum, which brought both the blue and red ends to zero.
For the GTC spectrum, a constant subtraction left an offset between the ends. We used the WIYN spectrum for the fit. The procedure indicated an extinction of $A_V=2.9\pm0.2$ mag from the WIYN data, in excellent agreement with that found in Sect.~\ref{sect:ext} and with the UV extinction indicated by the wind model. We adopted $A_V=3.0$ mag. The model is shown with the continuum-subtracted WIYN spectrum in Figure \ref{fig:cloudycorona}. \begin{table}[hbtp] \centering \caption{Brightest predicted ultra-high excitation (UHE) lines from the {\sc cloudy} models. The line intensity, $I_0$, is relative to the 4337\,\AA\ O {\sc viii} 8-9 transition and is not corrected for extinction. }\label{tab:lines} \begin{tabular}{clc} \hline $\lambda$ (\AA) & Ion & $I_0$ \\ \hline 4337 & O\,{\sc viii} 8-9 & 1.00 \\ 4498 & O\,{\sc viii} 11-14 & 0.12 \\ 4655 & O\,{\sc viii} 10-12 & 0.24 \\ 4681 & O\,{\sc viii} 12-16 & 0.06 \\ 4686 & He\,{\sc ii} 3-4 & 0.07 \\ 4795 & Ne\,{\sc ix} & 0.29 \\ 4940 & Ne\,{\sc x} 12-14 & 0.05 \\ 5243 & Ne\,{\sc x} 10-11 & 0.16 \\ 5288 & C\,{\sc vi} 7-8 & 0.05 \\ 5670 & O\,{\sc vii} & 0.16 \\ 5690 & O\,{\sc viii} 12-15 & 0.08 \\ 6060 & O\,{\sc viii} 11-13 & 0.15 \\ 6064 & O\,{\sc viii} 9-10 & 0.54 \\ 6481 & Ne\,{\sc ix} & 0.13 \\ 6894 & Ne\,{\sc x} 10-11 & 0.09 \\ 7080 & O\,{\sc viii} 13-16 & 0.05 \\ 7728 & O\,{\sc viii} 12-18 & 0.09 \\ 8203 & O\,{\sc viii} 10-11 & 0.30 \\ 8521 & Ne\,{\sc ix} & 0.04 \\ 8672 & O\,{\sc viii} 14-17 & 0.03 \\ 8857 & Ne\,{\sc x} & 0.04 \\ 9668 & O\,{\sc viii} 13-15 & 0.06 \\ \hline \end{tabular} \end{table} The {\sc cloudy} model fits the 4337, 4655, and 6064\,\AA\ lines with the O\,{\sc viii} 8-9, 10-12, and 9-10 transitions, respectively, though the observed 4655\,\AA\ line is stronger than predicted. Another O\,{\sc viii} line at 5790\,\AA\ coincides with a peak inside the broad wind band. This identification is not as secure because of the unknown optical depth in this wind band.
The 4800\,\AA\ feature is reproduced with Ne\,{\sc ix}. The He\,{\sc ii} line at 4686\,\AA\ contributes to the observed line at this wavelength together with the O\,{\sc viii} line. A good fit requires a helium abundance ratio of He/O\,=\,4 (by number). However, there are also C\,{\sc vi} and Ne\,{\sc x} lines at this wavelength which could contribute. Hence, this helium abundance is considered an upper limit. The 4940\,\AA\ feature is not reproduced. A weaker model line predicted at 5290\,\AA\ is due to C\,{\sc vi}, but it is likely obscured by the broad wind feature. Our {\sc cloudy} hot-gas model is able to reproduce most lines not present in the wind model, using abundances equivalent to those of the wind model, for a gas temperature of $T=3.5$--$4$\,MK. Importantly, all lines which are predicted to be detectable are seen in the spectrum, with the sole exception of the Ne\,{\sc x} line at 6900\,\AA, which is affected by a telluric feature. The carbon abundance is poorly constrained because of a lack of bright lines, though several fainter carbon lines do coincide with features in the spectra. Figure~\ref{fig:cloudycorona2} shows a model with the carbon abundance (by number) increased to C/O\,=\,4.5, and with a lower helium abundance. This produces a good fit to many of the lines. However, there is no other evidence for such a high carbon abundance, and the wind model requires oxygen-rich gas. We therefore do not adopt these abundances. Figure~\ref{fig:cloudy_wind} shows the same fit, but now the observed spectra are shown with the wind model subtracted. Only wavelength regions free of the broad wind features are shown. The wind model accounts for almost all of the continuum present at these wavelengths. A small excess was subtracted shortward of 5000\,\AA. This shows that the combination of the wind and hot gas models can explain a great deal of the observed spectra. The abundances in the {\sc cloudy} model were adopted from the wind model.
The nitrogen abundance is not constrained in this model because of a lack of bright lines. There are potential C\,{\sc vi} lines at 6195\,\AA\ and at 4683\,\AA\ which can fit lines at these locations; however, these lines need a C/O ratio well above unity to fit the corresponding features, while the wind model does not support C/O\,$>1$. In addition, the 5290\,\AA\ C\,{\sc vi} line becomes too strong at higher carbon abundance. The temperature is mainly constrained by the O\,{\sc viii} lines, which affect the 6064\,\AA\ line at temperatures below 3\,MK, and by the Ne {\sc x} line at 6894\,\AA, which becomes too strong for $T>4$\,MK. Because of the model's simplicity these constraints should be viewed with caution, along with the assumption of uniform temperature and density, and the fact that other wind lines may be present. The line widths and temperature indicate that the hot gas is closely related to the stellar wind, with line widths similar to those of the wind. The symmetric line profiles favor a physical location embedded in the outer wind. The gas can be heated by shocks as regions with different speeds collide. This is a common situation in OB stellar winds. For Wolf-Rayet winds (hydrogen-poor), the shocks can be considered approximately isothermal \citep{Hillier1993}. We use the relation \citep{Hillier1993}: \begin{equation} T_x \approx 5.8 \times 10^6\, {\mu \over 1+\gamma} \left( {v_{\rm s} \over 500\,{\rm km\,s^{-1}}} \right)^2 , \end{equation} \noindent where $v_{\rm s}$ is the differential velocity between wind and shock front, $\mu$ is the mean ionic weight, and $\gamma$ is the ratio of electrons to ions. Taking O\,{\sc viii} as the dominant ion, the factor $\mu/(1+\gamma)$ is of order unity. For $T_x \sim 5\,$MK, $v_{\rm s} \sim 500\,\rm km\,s^{-1}$. This is much less than the wind velocity but plausible for internal differential velocities. \subsection{Neon} Figure~\ref{fig:gaia_spec} shows the {\it Gaia} BP/RP spectrum.
It covers a much larger wavelength range to the red. The sensitivity is less than that of our other spectra, and there are calibration issues, as mentioned before. The top panel shows the wind model overplotted on the {\it Gaia} spectra. It shows that the wind feature at 7500--8000\,\AA\ is well reproduced. We use this `above atmosphere' spectrum to discuss the neon abundance. Table \ref{tab:lines} lists a number of expected neon lines. The chosen neon abundance was based on lines in our optical spectra. The predicted Ne\,{\sc x} line at 6894\,\AA\ is too strong, but this coincides with a strong telluric feature that was removed from the ground-based spectra. The bottom panel shows the {\it Gaia} spectrum with the wind model (red, dashed; shifted down for clarity). The lower red line shows the hot-gas model. The blue-dashed line shows the same model but with a three times higher neon abundance. The neon lines now disagree with the observed spectrum. From this we deduce an upper limit of Ne/O\,$<0.15$ by number. The wind model overpredicts the line at 8200\,\AA. In the wind model this is an Ne\,{\sc vii} complex, while the hot-gas model has an O\,{\sc viii} line at this location. The model that is shown has a high neon abundance. Again, this points to a lower neon abundance for the remnant. \subsection{Magnetic fields}\label{sec:mag} The presence of strong magnetic fields has been proposed by \citet{gvaramadze2019}. This is based on predictions of WD merger models, which yield $B \sim 200$\,MG \citep{ji2013}, and on the expectation that the stellar wind is magnetically driven from a very rapidly rotating star. For the SN~1181 stellar remnant, \cite{kashiyama2019} carried out magneto-hydrodynamic simulations that argue for a WD remnant with a strong magnetic field (20 -- 50 MG), still an order of magnitude lower than \citet{gvaramadze2019}.
Magnetic fields have been proposed for hot ($T>60$\,kK), hydrogen-poor (DO) white dwarfs with O\,{\sc viii} lines and other ultra-high excitation (UHE) lines in absorption. \citet{Reindl2019} attributes these coronal lines to gas trapped in an equatorial, magnetically-confined, shock-heated magnetosphere. The confinement requires \begin{equation} \eta ={B^2 R_\ast^2 \over \dot{M} v_w} >1, \end{equation} where $B$ is the magnetic field strength, $R_*$ the stellar radius, $v_w$ the wind speed, and $\dot{M}$ the mass-loss rate. For this star, the mass-loss rate from the {\sc cmfgen} models requires $B>0.1$\,MG for $\eta>1$, a modest magnetic field for such stars. The hydrogenic O\,{\sc viii} lines can provide strong constraints on the magnetic field strength due to Zeeman splitting of the line into $\pi$ and $\sigma$ components. For moderate fields, the central $\pi$ component remains unchanged in wavelength, whilst the two $\sigma$ components shift to either side. The relative strength of the $\pi$ component depends on the magnetic field's orientation angle: it is absent when the magnetic field points at the observer and is equal in intensity to the sum of the $\sigma$ lines if the magnetic field is perpendicular to the line of sight. Each of the three Zeeman components can be considered as a Gaussian of width derived from the temperature. The observed line is the sum of these three Gaussians. For shifts much smaller than the width of each line, this sum is itself a Gaussian widened by the shift. For larger shifts the line profile deviates. The shift $\Delta \lambda$ between the two $\sigma$ lines can be parametrized as $\Delta \lambda = \alpha B$, where $\alpha$ is a constant for each ion and transition. The value of $\alpha$ is tabulated by \citet[][their Table~1]{2002PPCF...44.1229B}.
For the transitions considered here, that is O\,{\sc viii} 8-9, O\,{\sc viii} 9-10, and C\,{\sc vi} 7-8, and for field strengths in Gauss, the $\alpha$ parameter is $1.8 \times 10^{-5}$, $3.4 \times 10^{-5}$, and $2.6 \times 10^{-5}$, respectively. The 6064\,\AA\ O\,{\sc viii} line has a FWHM of 93\,\AA. Since a separation between the $\sigma$ lines equal to the FWHM would have strongly distorted the line shape, we constrain $B<2.7$\,MG. The 4340\,\AA\ line, with a FWHM of 69\,\AA, gives a conservative upper limit of 3.8\,MG. We ran models adding the Zeeman components, assuming all three Zeeman lines have equal intensity. This ratio corresponds to a magnetic field inclined at 55 degrees to the line of sight. We then fit the profile of the 6064\,\AA\ line, with the intrinsic width as a fitting parameter. For $B=1.2$\,MG an acceptable fit to the line shape is found. For $B=1.5$\,MG the deviation of the line profile was clearly visible at 6064\,\AA, which developed a flat peak. Running the same test on the 4337\,\AA\ line confirms that it is somewhat less sensitive. We also ran the test with the central $\pi$ line twice as strong as each accompanying $\sigma$ line, corresponding to a magnetic field oriented along the line of sight. Here the line remains centrally peaked, which makes it easier to fit the profile. A notable deviation from the observed profile of the 6064\,\AA\ line was found for a field of 2.5\,MG. At this field strength, half the line width is due to Zeeman splitting. The O\,{\sc viii} line shapes thus give an upper magnetic field limit of $B<2.5$\,MG, easily within the requirement of the magnetically-confined toroidal structure proposed by \citet{Reindl2019}. Based on the available data we cannot rule out such a structure. This new field strength limit is far below that proposed by \citet[][200~MG]{gvaramadze2019} and below that of typical low-mass, magnetic WDs (10\,MG). Some caution is warranted here.
If the hot gas is located in the wind, then the surface magnetic field may be underestimated, because the field strength of a magnetic dipole falls off as $1/r^3$ with distance $r$. Also, the star has a very high luminosity and temperature, and therefore a larger radius than a typical WD. If it evolves as a WD it will contract significantly, and the field would strengthen. Hence, our limits are not inconsistent with known field strengths of magnetic WDs, which are of order 10\,MG, but they do not support values as high as the 200\,MG previously proposed. \section{Stellar variability}\label{sec:varphot} \subsection{Photometry} The stellar remnant ({\it Gaia} DR3 526287189172936320) has been observed by various terrestrial and space-based all-sky photometric surveys over many decades. Some offer short cadences to detect transient events (e.g., TESS 2-min and 30-min cadence), while others cover decades of sparse photometric sampling with long exposures (e.g., DASCH). Many have variable depth and sensitivity, large effective detector pixel sizes, and large photometric apertures. This can result in blending and contamination by nearby sources. A faint, foreground star ({\it Gaia} DR3 526287189166341504) is 2.3\arcsec\ eastwards of the stellar remnant (Figure~\ref{fig:iphas}). Though this star is clearly distinguished in short-exposure IPHAS \citep{Drew2005} images, it is $\sim4.5$ mag fainter in {\it Gaia} filters and well below the limits of earlier photographic imagery, so it should not affect the recorded optical photometry. A very bright star ({\it Gaia} DR3 526287257892412928) is $\sim$21\arcsec\ directly west. Because recent all-sky surveys use large photometric apertures (e.g., the ASAS-SN PSF is $\sim$15\arcsec, and the single-pixel aperture is $\sim$17\arcsec\ in OMC), the bright star ($G\sim11$ mag) can affect adjacent pixels and influence photometric results.
However, it is well separated from the stellar remnant in scanned data from all older photographic plates recorded in DASCH, and so it is unlikely that any variability measured for the stellar remnant is affected by the brighter star in 20$^{\rm th}$ century observations. Here, we show $B$ band results from various surveys, as this filter has the best temporal coverage. The data include photographic photometry from DASCH, the Ukrainian FON astrographic catalogue \citep{2016KPCB...32..260A} and the Palomar Observatory Sky Surveys (POSS-I and POSS-II); CCD data from the AAVSO Photometric All-Sky Survey \citep[APASS DR10, ][]{2018AAS...23222306H}, the INT Galactic Plane Survey \citep[IGAPS/UVEX; $g,r$;][]{2020A&A...638A..18M}, the Panoramic Survey Telescope and Rapid Response System \citep[Pan-STARRS1, $g,r,i$;][]{tonry2012}, the Asteroid Terrestrial-impact Last Alert System survey \citep[ATLAS, $o,c$;][]{2018PASP..130f4505T}, the Transiting Exoplanet Survey Satellite \citep[TESS;][]{tess}, and the Near-Earth Object Wide-field Infrared Survey Explorer Reactivation Mission \citep[NEOWISE, $3.6, 4.5\rm\mu m$; ][]{2014ApJ...792...30M}. The single-epoch IGAPS/UVEX $g,r$ photometry and the Pan-STARRS1 $g,r,i$ photometry were converted to $B$ using the transformations of \citet{tonry2012}. The optical light curves are shown in Figure~\ref{fig:LC} and the infrared light curve is shown in Figure~\ref{fig:neo}. The stellar remnant is faint and the optical light curve sparsely populated over much of its recorded history (Figure~\ref{fig:LC}), but evidence of dimming is seen. The photographic $B_j$ magnitude estimates, calibrated via the GSC~2.3.2 \citep{2008AJ....136..735L} and provided by DASCH for measurements between 1924.8 and 1950.7, show a mean $B_j$ magnitude of $15.6\pm0.2$ mag and a maximum magnitude variation of 0.85 mag from 37 data points, with a dimming slope of 0.02~magnitudes/year and an $R^2=0.38$ (which indicates a weak though positive correlation).
Exposures where the recorded plate limit was within 0.3~mag of the star's value are omitted as unreliable. Seven nearby stars, both fainter and brighter (including the bright star to the west), have much lower $R^2$ values (average of 0.047 with rms of 0.064) and are flat (average slope of 0.0007 mag/yr with rms of 0.007). See the top panel in Figure~\ref{fig:LC}. Overall, the temporal coverage is rather sparse apart from the ATLAS data\footnote{Which has a pixel size of $\sim2$\arcsec\ in two bands: the ``cyan" ({\it c}) band (4200-6500\AA) and the ``orange" ({\it o}) band (5600-8200\AA).}. For the ATLAS dataset (before binning), no significant peaks were found in Lomb-Scargle periodograms, so the source appears stable on short timescales in optical bands. We extracted archival photometry from the Wide-field Infrared Survey Explorer (WISE) 3-band Cryo data release \citep{2010AJ....140.1868W} and the latest NEOWISE data release. Figure~\ref{fig:neo} shows the averaged photometry per epoch in bands $W1$ (3.4\micron) and $W2$ (4.6\micron). Each epoch covers about six months. Frame inspection showed there may be bright-star contamination within the standard photometric aperture of NEOWISE (radius 8.25\arcsec). There is no contamination from the nebula, as it is not visible at these wavelengths. The $W1$ and $W2$ light curves show a modest decline of $\sim$0.01~mag/year over 11 years, half the rate seen at earlier epochs in the optical $B$-band. The wind model indicates some weak but no strong emission features in the WISE $W1$ and $W2$ filters: the emission is dominated by continuum. Since the Pa\,30 nebula is not visible at these wavelengths, we can exclude nebular lines such as [Fe {\sc ii}] and [Ar~{\sc vi}] or the 3.3\micron\ PAH band. Therefore the potential decline would likely be due to the continuum. \citet{kashiyama2019} propose that the central star (WD J005311 in their paper) is a highly-magnetic WD spinning at an angular frequency of 0.2 to 0.5 s$^{-1}$.
This would suggest very short light-curve variability at a period of 12.6 to 31.4 seconds. The current data do not trace this timescale. \subsection{Wind variability}\label{sec:varwind} In Section~\ref{sec:spec} we hinted at variability in the stellar wind. The SparsePak/WIYN data from 2014, our 2016 OSIRIS/GTC spectra, the 2017 spectrum by \citet{gvaramadze2019}, and the averaged spectra from 2020 by \citet{2020RNAAS...4..167G} are examined for long-term spectral variability (Figure~\ref{fig:specdiff}). The SparsePak/WIYN fibre is 4.7\arcsec\ wide and should include all stellar flux, assuming the fibre is well placed. Comparison of the synthetic photometry from this spectrum (using the {\it Python} package {\it pyphot}\footnote{\url{https://github.com/mfouesneau/pyphot}}) to the photometric measurements provided by the individual surveys justifies this assumption. Hence, all slit spectra are scaled to match the SparsePak/WIYN spectrum. To improve the signal-to-noise ratio, the SparsePak/WIYN and \citet{2020RNAAS...4..167G} spectra have been smoothed with mean filters of length 7 and 9 pixels, respectively. The most striking differences are intensity changes in the ``blue bump'', as seen in panels (a) and (b) of Figure~\ref{fig:specdiff}. The discontinuity in the \citet{gvaramadze2019} spectrum at 3850\,\AA\ and the different spectral shape further to the blue are to be taken with caution, since flux calibration is less reliable in the far blue. \cite{2020RNAAS...4..167G} found that the ``blue bump'' shows short-term ($<2$\,hour) variability, which was attributed to clumpiness in the stellar wind. Calibration differences among the different instruments do not allow a direct comparison of wind variability within the ``blue bump'', but the overall shape of the OSIRIS/GTC spectra appears consistent with the averaged spectra of \cite{2020RNAAS...4..167G}.
A decrease in the peak emission of the O\,{\sc vi} 5300\,\AA\ line between 2014 and 2017 (panel (e) in Figure~\ref{fig:specdiff}) is also seen, perhaps associated with a change in the stellar wind. Small changes in the continuum slope beyond 5500\,\AA\ can be due to instrumental and calibration effects between epochs, as the photometry appears to have remained constant throughout (cf. Sect.~\ref{sec:varphot}). Future photometric monitoring observations should be obtained at cadences of 10~seconds or less in the visual and at angular resolutions sufficient to allow separation of the target from the adjacent bright star (e.g., $\le 5$\arcsec). \section{The outer nebula} \label{sec:mass} \subsection{Hydrogen emission}\label{sec:hydrogen} Though no hydrogen lines were detected in the nebular spectrum from large-aperture telescopes \citepalias{paper1}, the outer Pa\,30 nebula is tentatively detected in deep, $\approx$1~Rayleigh sensitivity\footnote{Compared to the $\simeq$3 Rayleighs for IPHAS, where only the central star is visible.} H$\alpha$ imaging from the low angular resolution (pixel size 1.6\arcmin) Virginia Tech ``H$\alpha$" Spectral-line Survey \citep[VTSS;][]{vtss}, which revealed a single enhanced pixel at Pa\,30's position. The 5-$\sigma$ excursion within a 15\arcmin$\times$15\arcmin\ region indicates a low-surface-brightness hydrogen (H$\alpha$) shell of similar size to the [O\,{\sc iii}] shell found in \citetalias{paper1}. The VTSS survey limit corresponds to $5.661\times 10^{-18}$ erg\,s$^{-1}$\,cm$^{-2}$\,arcsec$^{-2}$ at H$\alpha$. The continuum-corrected surface brightness within a 1\,pc radius of Pa\,30 is $\approx 5.6$ Rayleigh $\approx 8.8\times10^{-13}$ erg\,s$^{-1}$\,cm$^{-2}$. Adopting the extinction law of \citet{howarth1983} for H$\alpha$, $c_{H\alpha}=0.99\,E(B-V)$, the dereddened H$\alpha$ brightness becomes $7.4\times 10^{-12}$\,erg\,s$^{-1}$\,cm$^{-2}$ for $A_V=2.9$, which translates to $L_{\rm H\alpha}\leq1.2\,\rm L_{\odot}$.
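The dereddening and luminosity above can be reproduced in a few lines of Python. Assumptions of this sketch: $R_V = 3.1$ (so $E(B-V) = A_V/3.1$), a distance of $\approx$2.3\,kpc (the value implied by the shell's angular-to-physical scale), and $c_{H\alpha}$ treated as the logarithmic (dex) extinction at H$\alpha$.

```python
import math

# Sketch of the dereddening and luminosity above. Assumptions of this sketch:
# R_V = 3.1 (so E(B-V) = A_V / 3.1) and d ~ 2.3 kpc, the distance implied by
# the shell's angular-to-physical scale; c_Halpha is treated as the
# logarithmic (dex) extinction at H-alpha.
A_V = 2.9
EBV = A_V / 3.1
c_halpha = 0.99 * EBV                      # Howarth (1983) scaling from the text
F_obs = 8.8e-13                            # observed flux, erg/s/cm^2
F_0 = F_obs * 10 ** c_halpha               # dereddened flux, ~7.4e-12

d_cm = 2.3 * 3.086e21                      # 2.3 kpc in cm (assumed distance)
L_SUN = 3.828e33                           # solar luminosity, erg/s
L_halpha = 4 * math.pi * d_cm ** 2 * F_0 / L_SUN   # ~1.2 L_sun
print(f"F_0 = {F_0:.1e} erg/s/cm^2, L_Halpha = {L_halpha:.1f} L_sun")
```

Both quoted numbers ($7.4\times10^{-12}$\,erg\,s$^{-1}$\,cm$^{-2}$ and $\leq1.2\,\rm L_\odot$) are recovered under these assumptions.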
Deeper narrow-band H$\alpha$ imagery is needed to confirm this tentative detection. The ionized mass of a nebula can be computed from \citep{pottasch1984}: \begin{equation} M_{\rm ion}\,[{\rm M_\odot}] = 11.06 \times F({\rm H}\beta) \, \frac{d^2 \,(T_{\rm e}/10^4)^{0.88}}{N_{\rm e}}, \end{equation} where $F({\rm H}\beta)$ is in units of $10^{-11}$ erg\,cm$^{-2}$\,s$^{-1}$, the distance $d$ in kpc, the electron temperature $T_e$ in K, and the electron density $N_e$ in cm$^{-3}$. The intrinsic H$\beta$ flux, derived from the H$\alpha$ flux assuming recombination Case B with a theoretical H$\alpha$ to H$\beta$ ratio of 2.85, is $F(\rm H \beta)\approx 7.2 \times10^{-12}$ erg~cm$^{-2}$~s$^{-1}$. Hence, an ionized mass of $\approx0.3\,\rm M_\odot$ is derived for an assumed temperature $T_{\rm e} = 10^4$\,K and an estimated density $N_{\rm e} = 120\,\rm cm^{-3}$. The latter is the average density from the [S\,{\sc ii}] emission in \citetalias{paper1}. \citet{oskinova2020} estimate 0.1\,M$_\odot$ for the H-free X-ray gas. The possible H$\alpha$ detection shows more mass could be present than seen in the X-ray and optical spectra. \subsection{Infrared emission}\label{sec:shellphotom} Pa\,30 is visible at mid- and far-infrared wavelengths, though it is not detected in the IRAS 12\micron\ maps. In the far-infrared, the shell is round, as indicated by AKARI 65 to 140\micron\ maps, extending from 88\arcsec\ up to 128\arcsec\ ($\pm5$\arcsec) from the central star, or a radius of about 0.98--1.42 pc at the adopted distance. Foreground interstellar emission confuses the images beyond 100\micron. We attribute the far-infrared emission to dust and the mid-infrared emission to lines. Given the infrared shell sizes and the uncertainties in archival point-source photometry, we have re-estimated surface brightnesses for all IRAS and AKARI images.
Images were downloaded from IRSA/IPAC at the same angular resolution (15\arcsec/pixel): HIRES maps for IRAS, and Far-Infrared Surveyor maps for AKARI. A 100\arcsec\ radius aperture was used for all maps. We use the conversion factor of \citet{ueta2019} to derive flux densities. The final photometry beyond 100\micron\ was corrected for interstellar contamination (Table~\ref{tab:dmass}). The IRAS 12\micron\ and AKARI 160\micron\ flux densities are upper limits (non-detections). The new measurements are given in Figure~\ref{fig:SED}, along with archival photometry from Pan-STARRS1, {\it Gaia}, IPHAS, 2MASS, and WISE. The IRAS and AKARI fluxes integrated over the nebula indicate that the shell flux peaks at $\lambda \sim$90\micron, while the foreground emission is colder. Using the aperture photometry results for the nebula and the VOSA tool, we derive a shell blackbody temperature of 60~K (cf. Figure~\ref{fig:SED}). \begin{table} \centering \caption{Pa~30 nebula surface photometry vs. {\sc cloudy} predictions.}\label{tab:dmass} \begin{tabular}{ccc} \hline Band ($\mu$m) & $F_{\nu}$ (Jy) & {\sc cloudy} model (Jy) \\ \hline IRAS/12$^\dagger$ & $\leq0.25$ & 0.25 \\ IRAS/25 & $1.41\,(\pm0.09)$ & 1.4 \\ IRAS/60 & $13.24\,(\pm0.26)$ & 16.6 \\ AKARI/65 & $11.72\,(\pm0.25)$ & -\\ AKARI/90 & $19.99\,(\pm0.32)$ & -\\ IRAS/100 & $7.25\,(\pm0.20)$ & 7.8 \\ AKARI/140 & $\leq6.39\,(\pm0.51)$ & -\\ AKARI/160$^\dagger$ & $\leq5.27\,(\pm0.66)$ & 2.0 \\ \hline \end{tabular} \tablecomments{($\dagger$) stands for upper limit. } \end{table} We ran a {\sc cloudy} model for the shell assuming it is ionized gas with normal abundances. We assume a blackbody star with $L_* =3\times 10^4\,\rm L_\odot$, $T_* = 2.3 \times 10^5\, \rm K$, surrounded by a shell with inner and outer radii of 0.8 and 1.2\,pc and density $n= 17\,\rm cm^{-3}$. Standard ISM abundances and dust content were used as defined in {\sc cloudy}, with a range of grain sizes and a dust-to-gas ratio of $3.8\times 10^{-3}$.
This gives a total gas mass of $\sim 2.1\,\rm M_\odot$ and a dust mass of $8 \times 10^{-3} \,\rm M_\odot$. The model predicts the broadband fluxes listed in Table \ref{tab:dmass}, in good agreement with the infrared measurements. At shorter wavelengths, the model flux is dominated by a few strong emission lines: [Ne\,{\sc vi}] at 7.6\micron, [Ne\,{\sc v}] at 14.7\micron\ and 25\micron, and [O\,{\sc iv}] at 25.9\micron. The longer-wavelength emission cannot be explained by emission lines and is from dust. The predicted H$\alpha$ flux is $2.2\times 10^{-11}\,\rm erg\,s^{-1}\,cm^{-2}$, which, after applying the extinction, gives 13 Rayleighs, 2.3$\times$ the VTSS H$\alpha$ surface brightness. The model thus explains the infrared photometry but over-predicts the apparent H$\alpha$ surface brightness. The H$\alpha$ over-prediction suggests that the shell may be hydrogen-poor. The presence of dust in the shell is strongly supported, though the derived dust mass depends on composition and grain size and may be lower than estimated. The origin of the dust remains open: it can be supernova dust, interstellar dust heated by the star, or dust from previous mass loss. The circumstellar shell could represent swept-up ISM or matter ejected in a mass-loss episode before the supernova explosion, such as a relic planetary nebula. \section{Discussion} \subsection{SN shell mass} \citet{oskinova2020} estimate the amount of hydrogen-free, X-ray-emitting nebular gas to be $0.1~\rm M_\odot$. Our ionized mass estimate from the tentative H$\alpha$ detection is an additional 0.3--2 $\rm M_\odot$ (Sec.~\ref{sec:hydrogen}). However, the relation of this gas to the supernova is unclear (see the next subsection) and its kinematics are not known. Consider the kinetic energy of the H-poor ejecta, $E_{\rm kin} = (1/2)\, M_{\rm ej} v^2_{\rm exp}$, where $v_{\rm exp}$ is the ejecta expansion velocity. In \citetalias{paper1}, we find the shocked emission has a radial velocity of 1100\kms.
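The kinetic-energy estimate can be sketched directly; this takes the hydrogen-free X-ray mass of 0.1\,M$_\odot$ quoted above as the (lower-limit) ejecta mass, with the 1700\kms\ case anticipating the outer-halo velocity discussed below.

```python
# Sketch of E_kin = (1/2) M_ej v^2 using the hydrogen-free X-ray mass of
# 0.1 Msun (a lower limit on the ejecta mass) quoted above.
M_SUN_G = 1.989e33  # solar mass in grams

def e_kin_erg(m_ej_msun, v_kms):
    """Kinetic energy in erg for an ejecta mass in Msun and velocity in km/s."""
    return 0.5 * m_ej_msun * M_SUN_G * (v_kms * 1e5) ** 2

e_low = e_kin_erg(0.1, 1100.0)   # shocked-shell radial velocity
e_high = e_kin_erg(0.1, 1700.0)  # outer-halo velocity discussed below
print(f"E_kin ~ {e_low:.1e} to {e_high:.1e} erg")  # ~1.2e48 to ~2.9e48 erg
```

This recovers the quoted $1\times10^{48}$ to $3\times10^{48}$ erg range.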
If the extended halo-like feature in the AKARI 90\micron\ data (radius $128\pm5$ arcsec) marks the farthest extent of the nebula, then the ejecta velocity would be $\sim$1700\kms, not much different from the shocked emission. Both values are consistent with the general decline of ejecta velocities more than 100 days post-eruption in Type Iax SNe \citep[e.g.,][]{2018PASJ...70..111K,2021arXiv211109491S}. Using the above ejecta mass and velocities, the kinetic energy of the ejecta $E_{\rm kin}$ ranges from $1\times10^{48}$ to $3\times10^{48}$ erg. These are conservative estimates, since this is a lower limit for the ejected mass, while the expansion velocity at the time of eruption could have been higher. We compare these tentative estimates against known extragalactic Type~Iax SNe in Figure~\ref{fig:mej}. These include theoretical models by \citet[][open diamonds]{lach2022}, the proposed range of values for Type~Iax by \citet[][black dashed bar]{2013ApJ...767...57F}, and a few examples of extragalactic sources: 2008ha and 2010ae \citep[gray; ][]{2014A&A...561A.146S}, 2007gd \citep[blue;][]{2010ApJ...720..704M}, 2014ck \citep[orange;][]{2016MNRAS.459.1018T}, 2019gsc \citep[magenta;][]{2020ApJ...892L..24S}, and 2020kg \citep[green;][]{2021arXiv211109491S}. Both the low $E_{\rm kin}$ and the ejecta mass estimate are consistent with the observed ranges for Type~Iax sources. However, they do not agree with the model predictions for pure deflagrations of Chandrasekhar-mass CO WDs of \citet{lach2022}. \subsection{Merger and ejected mass} Post merger, the star evolved on a track similar to that of post-AGB stars. Here, the stellar luminosity comes from residual hydrogen burning and follows the core mass--luminosity relation, originally proposed by \citet{Paczynski1970}. Using the star's luminosity, determined as $L_* = (40\pm 10)\times 10^3\,\rm L_\odot$, this relation gives a core mass (which is essentially the mass of the remnant star) of $M_* =1.20\pm0.17\,\rm M_\odot$.
The relation from \citet{VW1994} gives 0.1\,M$_\odot$ less. This mass fits well with WD merger products. The CO WD mass distribution peaks at approximately 0.6\,M$_\odot$, while helium WDs are approximately 0.45\,M$_\odot$. Two merging CO WDs, or a CO+He WD pair, can therefore reach the remnant mass. O-Ne WDs are more massive, and a merger involving one \citep{gvaramadze2019} would lead to a more massive remnant than observed. The moderate neon abundance also disfavours the involvement of such a WD. The mass ejected in the merger should be included in the progenitor masses. The minimum ejecta mass is that derived from the X-ray emission \citep{oskinova2020} of 0.1\,M$_\odot$. The outer shell may contain further mass ejected during the merger. The hydrogen mass in the shell cannot come from the merger, as WDs have very thin hydrogen layers, and should not be counted in the ejecta mass. The hydrogen is seen only in the VTSS detection and needs confirmation. If the outer shell is hydrogen-free, then it may add another 0.1\,M$_\odot$. The merger ejecta are estimated at $M_{\rm ej} = 0.15 \pm 0.05\,\rm M_\odot$. This leaves the question of where the outer shell originates. If there is hydrogen in the outer shell, it can be swept-up ISM or mass lost by the WD progenitor. If so, the faint outer shell is a relic planetary nebula, re-ionized by the luminous, hot post-merger star. \subsection{Dust} Pa\,30 is only the second detection of dust emission in a Type Iax SN and the only known case of cold dust. The other example is the recent Type Iax SN~2014dt in M~61. {\it Spitzer} observations by \citet{2016ApJ...816L..13F} found excess mid-infrared emission (3.6 and 4.5$\rm\mu m$) one year post-eruption in 2014dt, which could be pre-existing dusty circumstellar matter or newly formed dust ($\sim10^{-5}\,\rm M_\odot$ of carbonaceous material). Coincidentally, 2014dt has been suggested as a possibly bound remnant \citep{2018PASJ...70..111K}.
We note that cold dust, as in Pa\,30, would not be detectable in extragalactic SNe. \subsection{The lack of a kicked stellar remnant} Theoretical models of Type~Iax explosions predict a kicked stellar remnant \citep[e.g.,][]{jordan2012,kashyap2018}. However, the radial velocity of Pa\,30's stellar remnant, as well as the mean radial velocity of its surrounding nebula, is nearly zero, while the tangential velocity is low ($\sim30$\kms). The {\it Gaia} DR3 proper motion indicates the central star could have moved by at most $\sim2$\arcsec\ since 1181~AD. It clearly remains at the center of Pa\,30. These data are therefore already a challenge to existing models. The nearby faint star (Figure~\ref{fig:iphas}) cannot be a kicked companion, as this star is in the foreground \citep[$1.1^{+0.4}_{-0.3}$~kpc;][]{bailerjones2021} with a much higher proper motion (21.285 mas yr$^{-1}$). \citet{lach2022} estimate kick velocities between 6.9\kms\ and 369.8\kms\ for bound remnants in Type Iax explosions. These could be in the right range for Pa\,30. The only CO+ONe WD merger model is by \citet{kashyap2018}, with a predicted kick velocity of $\sim 90$\kms. However, an ONe WD may not be required here, because of the remnant mass and the moderate neon abundance. \section{Conclusions}\label{sec:conclusions} The firm association of Pa\,30 with the supernova of 1181~AD \citepalias{paper1} makes this system one of only two Galactic Type~Iax supernovae known, the other being Sgr~A East \citep{2021ApJ...908...31Z}. Pa\,30 is the only bound SN with a visible stellar remnant in the Galaxy. Its Wolf-Rayet-type optical spectrum is the only example known that is neither the central star of a planetary nebula nor the product of a high-mass Population I progenitor. This system, unique in several respects, is the nearest and youngest Type~Iax remnant known and is amenable to detailed study.
It is also the only example where a Type~Iax stellar and nebular remnant can be related to an observed eruption almost a millennium earlier. Most features of the optical spectrum can be reproduced with a stellar wind model, with $v_{\rm w} = 15,000\,\rm km\,s^{-1}$ and a mass-loss rate $\dot M \sim 10^{-6}\,\rm M_\odot\,yr^{-1}$. The wind appears H-free, is strongly deficient in helium relative to C, N, O, and Ne, and has C/O$\sim 0.25$. The wind cannot be driven by radiation pressure alone. Some broad O\,{\sc viii} lines not reproduced by the wind model can be well fitted with hot gas at $T\simeq 4$\,MK, either shocked gas embedded in the wind or gas close to the star. This gas, with the same abundances as the wind, is seen in lines of O\,{\sc viii} and possibly C\,{\sc vi}. The hot gas shows a low neon abundance, Ne/O$\,<0.15$. The infrared spectrum can be fitted with a circumstellar shell where the shorter wavelengths (10--30\micron) show line emission and the longer wavelengths show emission from cold dust, with $M_{\rm dust} \sim 8\,\times\, 10^{-3}\,\rm M_\odot$. The shell is also tentatively detected in wide-field H$\alpha$ imaging. This is only the second detection of dust in a Type Iax SN and the only detection of cold dust. The origin of this dust is open: possibilities are supernova dust, heated interstellar dust, or dust from previous mass loss from the system. On timescales of the order of a century, the visual light curves of the central star show evidence of $\sim0.5$ magnitude dimming between 1925 and 1950 around a value of $B_j\sim$15.8 mag. Recent mid-infrared data from WISE may also show a slight fading over 11 years, of about 0.1~magnitudes in $W1$ and $W2$. The abundances, ejecta mass, explosion luminosity and energetics are all consistent with a Type Iax supernova. There are two scenarios for this type of supernova: a pure deflagration of a Chandrasekhar-mass CO WD, and a double-degenerate merger.
The observed parameters and the presence of a highly evolved post-eruption star indicate that the stellar remnant is an example of the second scenario: a merger of two WDs. The luminosity indicates a mass of the remnant star of $1.2\pm0.2$\,M$_\odot$. This is below the Chandrasekhar mass and indicates that the object will not go supernova again \citep{gvaramadze2019}. The mass is consistent with a merger between two CO WDs or a high-mass CO+He WD pair. The low neon abundance is consistent with this. The ejecta indicate that $0.15\pm0.05$\,M$_\odot$ was lost during the merger. The outer shell, if indeed H-rich, cannot be explained purely as merger ejecta and could be swept-up ISM or a fossil planetary nebula from pre-merger mass loss. Merger products are expected to be fast rotators. Our observations cannot establish a short rotational period and this remains to be tested. Merger products are also assumed to be the progenitors of highly magnetic WDs. The absence of detectable Zeeman splitting places an upper limit of $B<2.5$~MG, far below the values previously predicted by \citet[][200~MG]{gvaramadze2019}, though the field may somewhat increase during future contraction of the remnant. The masses of the merger progenitor stars, the ejecta and circumstellar mass, the non-detection of a significant magnetic field, the fact that the remnant has remained at the center of its bound nebula post-eruption, and the possibility of modest photometric dimming of the stellar remnant are some of its key observed properties that merit detailed investigation to improve our understanding of Type~Iax evolutionary models. \begin{acknowledgments} We especially thank Dr.~G.~Gr\"afener and Dr.~P.~Garnavich for providing their spectra. We thank Dr.~Andr\'as P\'al for help with qualitative analysis of the TESS data. We thank Dr.~Laurence Sabin for designing the OSIRIS/GTC observations. QAP thanks the Hong Kong Research Grants Council for support under GRF grants 17326116 and 17300417.
FL and AR thank HKU for postdoctoral fellowships. AAZ thanks the Hung Hing Ying Foundation for the provision of a visiting professorship at HKU. MAG was funded under grant number PGC2018-102184-B-I00 of the Ministerio de Educaci\'on, Innovaci\'on y Universidades, cofunded with FEDER funds. This work was partially funded by Kepler/K2 grant J1944/80NSSC19K0112 \& HST GO-15889, and STFC grants ST/T000414/1, ST/T000198/1 and ST/S006109/1. Part of this work is based on data from (i) the OMC Archive at CAB (INTA-CSIC) pre-processed by ISDC; (ii) data products from the Near-Earth Object Wide-field Infrared Survey Explorer (NEOWISE) funded by NASA; (iii) the Swift public data archive; (iv) the NASA/IPAC Infrared Science Archive; (v) the AAVSO Photometric All-Sky Survey (APASS); (vi) data from the Asteroid Terrestrial-impact Last Alert System (ATLAS) project; (vii) extinction data from the EXPLORE platform funded by the EU; (viii) {\it Gaia} spectroscopy from the Gaia Data Mining Platform at the Royal Observatory Edinburgh, funded by STFC. For the purpose of open access, the authors have applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising. This publication is supported by multiple datasets which are openly available at locations cited in the References. \end{acknowledgments} \vspace{5mm} \facilities{GTC (OSIRIS), WIYN (SparsePak), XMM, Swift (XRT), WISE, AKARI, TESS, IRSA, Planck, IRAS, ROSAT, VOSA, HST (STIS), ATLAS, Gaia} \software{astropy \citep{2013A&A...558A..33A,2018AJ....156..123A}, cloudy \citep{2013RMxAA..49..137F}, pyphot (\url{https://github.com/mfouesneau/pyphot}), GaiaXPy (\url{https://gaia-dpci.github.io/GaiaXPy-website/})} \bibliography{Pa30_star_paper_V3}{} \bibliographystyle{aasjournal}
Title: A TESS search for donor-star pulsations in High-Mass X-ray Binaries
Abstract: Ground-based optical photometry of the counterparts of High-Mass X-ray Binaries (HMXBs) has revealed the presence of periodic modulations on timescales of ~0.3-0.5 d. More recent space-based observations with Corot and TESS of OB and Be stars have shown that pulsations caused by p and g modes are common in early-type stars. We have therefore undertaken a systematic search for variability in the optical counterparts of 23 HMXBs (mostly neutron star systems, but including one black hole, Cyg X-1) using TESS data, primarily in 2 min cadence mode. After removing the orbital period modulation in four systems, we find that all 23 sources show evidence for quasi-periodic variability on periods shorter than ~1 d. We compare their power spectra with those from observations of other OB and Be type stars. In two systems, V725 Tau and HD 249179 (which may not be a HMXB), we find evidence for an outburst, the former being simultaneous with an X-ray flare. We search for changes in the power spectra over the outburst duration, and compare them with outbursts seen in other Be systems.
https://export.arxiv.org/pdf/2208.02064
\outer\def\gtae {$\buildrel {\lower3pt\hbox{$>$}} \over {\lower2pt\hbox{$\sim$}} $} \outer\def\ltae {$\buildrel {\lower3pt\hbox{$<$}} \over {\lower2pt\hbox{$\sim$}} $} \newcommand{\Msun}{$M_{\odot}$} \newcommand{\lsun}{$L_{\odot}$} \newcommand{\Rsun}{$R_{\odot}$} \newcommand{\solar}{${\odot}$} \newcommand{\kep}{\sl Kepler} \newcommand{\ktwo}{\sl K2} \newcommand{\tess}{\sl TESS} \newcommand{\swift}{\it Swift} \newcommand{\Porb}{P_{\rm orb}} \newcommand{\nuorb}{\nu_{\rm orb}} \newcommand{\eplus}{\epsilon_+} \newcommand{\eminus}{\epsilon_-} \newcommand{\cd}{{\rm\ c\ d^{-1}}} \newcommand{\MdotL}{\dot M_{\rm L1}} \newcommand{\Mdot}{$\dot M$} \newcommand{\Mdotsolar}{\dot{M_{\odot}} yr$^{-1}$} \newcommand{\Ldisk}{L_{\rm disk}} \newcommand{\src}{KIC 9202990} \newcommand{\ergscm} {erg s$^{-1}$ cm$^{-2}$} \newcommand{\rchi}{$\chi^{2}_{\nu}$} \newcommand{\chisq}{$\chi^{2}$} \newcommand{\pcmsq} {cm$^{-2}$} \providecommand{\lum}{\ensuremath{{\cal L}}} \providecommand{\mg}{\ensuremath{M_{\rm G}}} \providecommand{\bcg}{\ensuremath{BC_{\rm G}}} \providecommand{\mbolsun}{\ensuremath{M_{{\rm bol}{\odot}}}} \providecommand{\teff}{\ensuremath{T_{\rm eff}}} \begin{keywords} stars: emission line Be -- stars: circumstellar matter -- stars: early type -- stars: oscillations -- X-rays: binaries \end{keywords} \section{Introduction} High-Mass X-ray Binaries (HMXBs) are intense X-ray sources consisting of a compact star, either a neutron star (NS) or a black hole (BH), and a giant or supergiant companion star which is close to filling its Roche lobe. Accretion onto the compact object takes place in the form of an accretion disc and/or via a stellar wind from the hot companion. They have orbital periods ranging from days to months, and many have a high binary eccentricity, which can cause short-duration X-ray outbursts on the orbital cycle as the companion star passes through periastron (Type I outbursts).
Longer duration outbursts lasting several orbital cycles (Type II outbursts) are seen less frequently (see \citet{OkazakiNegueruela2001} for an overview of the outburst models). For a detailed review of HMXBs see \citet{LewinvanderKlis2006}. The optical light curves of HMXBs over an interval of days and weeks can be complex, and are a combination of the ellipsoidal modulation of the secondary star with the effect of a tilted and precessing accretion disc (e.g. \citet{GerendBoyton1976}). On shorter timescales, observations by \citet{GutierrezSoto2011} of two HMXBs revealed short period modulations which were identified as being due to nonradial pulsations from the companion star in V635 Cas (0.30 d) and GSC 03588-00834 (0.45 d). Observations using wide-field imaging surveys, such as OGLE IV, have allowed the discovery of short period photometric modulations in other HMXBs, e.g. \citet{Schmidtke2014,Schmidtke2016,Schmidtke2019}. If the nature of these periodic signals can be effectively modelled they can give insight into the internal structure of the companion star in HMXBs. The discovery of pulsations from the secondary star in some HMXBs is not especially surprising since they are OB stars with spectral types ranging from supergiants (e.g. HDE\,226868, the optical companion to Cyg\,X-1) to dwarfs (e.g. V490\,Cep), which have previously shown pulsations. The slowly pulsating B-type stars (SPBs) display high-order $g$-modes (with periods $\geq$6\,hr) whilst the $\beta$\,Cep stars can show lower-order $p$ modes (periods $\sim$1--8\,hr) and $g$-modes (see \citet{DeCat2010} for a review of observations made using {\sl Corot}). More recently, \citet{LabadieBartz2022} made an analysis of 432 classical Be stars observed by {\tess} during its first year of operation. Almost all of these stars showed significant variability and their power spectra could be classified, with 85 percent showing closely spaced frequencies in their power spectra. 
The launch of satellites such as {\sl Corot} and {\tess} has provided a golden opportunity to search for pulsations in stars of all types, including X-ray binaries. Although none were known in the original 115 square degree {\sl Kepler} field \citep{Borucki2010}, the prototypical LMXB Sco\,X-1 was observed when {\kep} was repurposed as {\ktwo} with fields along the ecliptic being observed for 2 month blocks (see \citet{Scaringi2015,Hakala2015,Hynes2016}). {\tess} was launched in April 2018 and although it does not go as deep as {\kep}, it does provide high signal-to-noise photometry of sources down to $V\sim13-14$ (see \citet{Ricker2015} for more details). In its initial 2 year mission it observed $\sim$3/4 of the whole sky, with a gap along the ecliptic plane and an additional section of the northern hemisphere. Given that some HMXBs are optically bright, this provides an opportunity to search for short period pulsations, of the type identified by \citet{GutierrezSoto2011} and \citet{Schmidtke2014,Schmidtke2016,Schmidtke2019}. In this paper we present observations of the optical light curve of 23 HMXBs and identify those which show evidence for short-period, likely pulsational, variations. We have not attempted to do a full frequency analysis of the light curves, but rather determine how common pulsations are in the donor stars of HMXBs. Further, we identify two systems which show an optical outburst, one of which is simultaneous with a low energy X-ray outburst. We compare these findings with observations of other Be stars. \section{The HMXB sample} \label{sample} As we are searching for periodic variability on a timescale \ltae1 d, we restrict our sample to include HMXBs which have {\tess} 2 min cadence data or a calibrated light curve made using full-frame image data (see \S \ref{tess} for further details). 
We cross-matched all stars observed in Cycles 1--4\footnote{The sources in this paper were on the 2 min cadence list thanks to their inclusion on the following Guest Investigator programs: G011060/PI Paunzen; G011155/PI Huber; G011204/PI Pepper; G011224/PI Redfield; G011268/PI Scaringi; G011281/PI Silvotti; G022020/PI Dorn-Wallenstein; G022062/PI Prsa; G022071/PI Scaringi; G022172/PI Labadie-Bartz; G022184/PI Coley; G03156/PI Pope; G03186/PI Labadie-Bartz; G03221/PI Barlow; G04067/PI Wisniewski; G04074/PI Bowman; G04103/PI Huber.} with the HMXB catalogue of \citet{Liu2006}, finding 23 sources which are detailed in Table \ref{targetlist}. Four of our sample have supergiant donor stars, while more than half the sample have Be type donors. Although HD\,49798 is classed as an HMXB in \citet{Liu2006}, it appears to be an sdO/WD binary and we have therefore not included it in this study. HD 141926 is included in the catalogue of candidate Herbig Ae/Be stars \citep{Vieira2003} and hence its evolutionary state will differ from other stars in this sample. In \S \ref{subhd249179} we note that HD\,249179 might not be an HMXB: however, for reasons outlined there we decided to retain this source as part of our study. 
\begin{table*} \caption{Properties of HMXBs observed by {\tess} in 2 min cadence mode.} \label{targetlist} \resizebox{\textwidth}{!}{ \begin{tabular}{llrrrrrrrcrl} Name & Other Name & TIC & $T_{mag}^1$ & Sectors & Spectral& $P_{orb}$ & Distance$^2$ & $MG_{o}^3$ & $(BP-RP)_{o}^3$ & Primary & Power Spectra\\ & & & & & Type & (d) & (kpc) & & & & Type\\ \hline GP Vel & Vela X-1 & 191450569 & 6.3 & 8,9 & B0.5 Ib & 8.96 & 2.0$^{+0.1}_{-0.1}$ & -5.8 & 0.2 & NS & Mid \\ Hen 3-640 & 1A 1118-615& 468095832 & 10.7 & 10,37,38 & O9.5Ve & 24.0 & 3.0$^{+0.2}_{-0.1}$ & -2.0 & 0.9 & NS & Mid/Low\\ V801 Cen & 2S 1145-619 & 324268119 & 8.7 & 10,37,38 & B0.2IIIe & 187.5 & 2.1$^{+0.1}_{-0.1}$ & -4.0 & 0.0 & NS & Mid\\ BZ Cru & 1H 1249-637 & 433936219 & 5.2 & 11,37,38 & B0.5IIIe & - & 0.44$^{+0.03}_{-0.02}$ & -3.4 & 0.4 & NS & Mid\\ $\mu^{2}$ Cru & 1H 1255-567& 261862960 & 5.3 & 11,37,38 & B5Ve & - & 0.121$^{+0.003}_{-0.003}$ & -0.4 & -0.2 & & Mid \\ HD 141926 & 1H 1555-552 & 84513533 & 8.0 & 12,39 & B2IIIn & - & 1.4$^{+0.1}_{-0.1}$ & -2.7 & 0.5 & & Isolated \\ V884 Sco & 4U 1700-37 & 347468930 & 6.1 & 12 & O6.5Iaf & 3.41 & 1.6$^{+0.2}_{-0.1}$ & -5.3 & 0.1 & & Mid\\ HDE\,226868 & Cyg X-1 & 102604645 & 7.9 & 14 & O9.7 Iab & 5.6 & 2.2$^{+0.2}_{-0.1}$ & -4.3 & 0.7 & BH & Mid \\ GSC 03588-00834 & SAX J2103.5+4545 & 273066197 & 13.0 & 16 & B0Ve & 12.68 & 7.4$^{+1.5}_{-0.9}$ & -2.3 & 0.7 & NS & Mid\\ V490 Cep & 1H 2138+579 & 341320747 & 13.0 & 16,17 & B1-B2Ve & & 9.0$^{+2.1}_{-1.2}$ & -2.8 & 0.6 & NS & Mid \\ BD +53 2790 & 4U 2206+543 & 328546890 & 9.6 & 16,17 & O9.5Ve & 9.57 & 3.3$^{+0.2}_{-0.2}$ & -4.1 & -0.2 & NS & Mid/Low\\ V662 Cas & 2S 0114+650 & 54469882 & 9.8 & 18,24,25 & B0.5Ib & 11.6 & 5.1$^{+0.5}_{-0.4}$ & -4.5 & 0.8 & NS & Mid \\ V635 Cas & 4U 0115+634 & 54527515 & 13.3 & 18,24,25 & B0.2Ve & 24.3 & 7.0$^{+1.8}_{-1.0}$ & -1.7 & 1.3 & NS & Isolated\\ BQ Cam & EXO 0331+530 & 354185144 & 13.0 & 18,19 & O8.5Ve & 34.25 & 7.0$^{+2.2}_{-1.1}$ & -1.8 & 1.5 & NS & Mid \\ X Per & 4U 0352+309 
& 94471007 & 6.0 & 18,43,44 & B0Ve & 250.3 & 0.61$^{+0.03}_{-0.02}$ & -3.2 & 0.3 & NS & Mid \\ CI Cam & XTE J0421+560 & 418090700 & 9.9 & 19 & sgB[e] & 19.41 & 4.7$^{+0.7}_{-0.5}$ & -4.0 & 0.9 & & Isolated\\ LS V +44 17 & RX J0440.9+4431 & 410336237 & 10.0 & 19 & B0.2Ve & 150.0 & 2.6$^{+0.2}_{-0.1}$ & -2.8 & 0.5 & NS & Mid\\ V420 Aur & EXO 051910+3737.7 & 143681075 & 7.0 & 19 & B0 IVpe & & 1.4$^{+0.1}_{-0.1}$ & -4.2 & 0.1 & & Isolated\\ HD 109857$^4$ & 1H 1253-761 & 360632151 & 6.5 & 11,12,38,39 & B8V & & 0.21$^{+0.02}_{-0.02}$ & -0.3 & 0.1 & & Isolated \\ V725 Tau & 1A 0535+262 & 75078662 & 8.2 & 43-45 & O9/B0III/Ve & 111.0 & 1.91$^{+0.16}_{-0.12}$ & -3.7 & 0.5 & NS & Isolated\\ IGR J06074+2205 & & 45116246 & 11.8 & 43,44 & B0.5Ve & & 6.9$^{+1.9}_{-1.0}$ & -3.8 & -0.1 & & Isolated\\ BD+47 3129 & RX J2030.5+4751 & 187940144 & 8.7 & 41 & B0.5V-IIIe & & 2.39$^{+0.17}_{-0.13}$ & -3.9 & 0.2 & & Isolated\\ HD 249179 & 4U 0548+29 & 78499882 & 9.2 & 43-45 & B5ne & & 1.68$^{+0.16}_{-0.12}$ & -2.0 & -0.3 & & Mid\\ \hline \end{tabular}} {\footnotesize $^1$ Magnitude in the {\tess} pass-band; $^2$ From {\it Gaia} EDR3; $^3$ De-reddened; $^4$ Data taken in 30 min cadence. } \end{table*} For further insight as to the nature of the donor stars in these 23 HMXBs, we use the {\sl Gaia} EDR3 parallaxes \citep{Gaia2021} to infer their distances\footnote{ following the guidelines of \citet{BailerJones2015,Astra2016} and \citet{GaiaLuri2018}, which is based on a Bayesian approach.}. In practice we use a routine in the {\tt STILTS} package \citep{Taylor2006} and use a scale length L=1.35 kpc, which is appropriate for stellar populations in the Milky Way in general. From these distances we determine their absolute magnitudes in the Gaia $G$ band (a very broad optical filter), $M_{G}$, using the mean Gaia $G$ magnitude. The other key observables are the blue ($BP$) and red ($RP$) filtered magnitudes, which are derived from the Gaia Prism data. 
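The distance-to-absolute-magnitude step described above can be sketched as follows. This is a minimal illustration only, not the {\tt STILTS}-based pipeline used in the paper; the function name and example values are ours.

```python
import numpy as np

def abs_mag_g(g_mag, distance_pc, a_g=0.0):
    """Absolute Gaia G magnitude from apparent G, distance (pc) and extinction A_G."""
    return g_mag - 5.0 * np.log10(distance_pc / 10.0) - a_g

# A star of apparent G = 10 at 1 kpc with no extinction has M_G = 0.
m_g = abs_mag_g(10.0, 1000.0)
```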
We then deredden the $(BP-RP)$, $M_{G}$ values using the 3D-dust maps derived from Pan-STARRS1 data \citep{Green2019} and the relationships between $E(B-V)$, $E(BP-RP)$ and $A_{G}$ outlined in \citet{Andrae2018}. For those stars just off the edge of the Pan-STARRS1 field of view (stars with $\delta<-30^{\circ}$) we take the nearest reddening distance relationship. We include the distances to our targets and $MG_{o}$, $(BP-RP)_{o}$ in Table \ref{targetlist}. The nearest HMXB in our sample is $\mu^{2}$\,Cru which is only 120\,pc distant (we note that although $\mu^{2}$\,Cru is bright, $T_{mag}$=5.3, there is another star 35.3$^{''}$ distant which is $T_{mag}$=4.2, so some contamination in the light curve will be present). In contrast, V490\,Cep (1H\,2138+579) is $\sim$9\,kpc distant, although with a large uncertainty. We show the dereddened $MG_{o}$ and $(BP-RP)_{o}$ values for our sample in Figure \ref{gaia-hrd}, where the size of the symbols reflects the binary orbital period (if known). To give context, we also show the apparent $M_{G}$ and $(BP-RP)$ values for stars within 50\,pc and assume they show negligible reddening. What is immediately striking is the spread in position of our sample HMXBs in the Gaia HRD. Cyg\,X-1, the one system which we know has a BH primary, is near the upper part of the distribution. Those sources close to Cyg\,X-1 are V662\,Cas (a NS primary with a very slow rotation period of 2.78\,hr \citep{Hall2000}) and CI\,Cam, the nature of whose primary remains controversial. \section{The TESS data} \label{tess} {\sl TESS}, launched on 18th April 2018, has four telescopes, each with an aperture of 10.5\,cm, that together cover a 24$^{\circ}\times96^{\circ}$ sector of sky for $\sim$27\,d. In the prime mission, the majority of the sky was observed apart from a strip along the ecliptic plane. 
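The dereddening step can be sketched as below. The coefficients `k_g` and `k_bprp` here are illustrative placeholders only; the actual relations of Andrae et al. (2018) are colour- and extinction-dependent and should be substituted in practice.

```python
def deredden(m_g, bp_rp, ebv, k_g=2.0, k_bprp=1.3):
    """Apply simple linear extinction/reddening corrections.

    k_g and k_bprp are placeholder coefficients standing in for the
    colour-dependent relations of Andrae et al. (2018):
        M_G,0     = M_G - A_G,       with A_G      ~ k_g * E(B-V)
        (BP-RP)_0 = (BP-RP) - E(BP-RP), with E(BP-RP) ~ k_bprp * E(B-V)
    """
    return m_g - k_g * ebv, bp_rp - k_bprp * ebv
```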
In the first year of the extended mission, the southern ecliptic hemisphere was observed for a second time, whilst in the second year those regions not observed in the prime mission were covered with a second observation of northern fields. The {\tess} camera detectors have $21''\times21''$ pixels, with the PSF covering more than 2$\times$2 pixels, which can make blending an issue especially in crowded fields. For the first two years, $\sim$20,000 predefined targets were observed in each sector with a 2\,min cadence, with a 20\,s cadence being introduced in year 3. In years 1 and 2, full-frame images with a cadence of 30\,min are available, with the cadence increasing to 10\,min in year 3. All of our targets, with the exception of HD\,109857 (which has 30\,min cadence in sectors 11--12 and 10\,min in sectors 38--39), were observed in 2\,min cadence. We downloaded the calibrated light-curves with 2\,min cadence of our targets from the MAST data archive\footnote{\url{https://archive.stsci.edu/tess/}}. We used the data values for {\tt PDCSAP\_FLUX}, which are the Simple Aperture Photometry values, {\tt SAP\_FLUX}, after the removal of systematic trends common to all stars on that chip. Each photometric point is assigned a {\tt QUALITY} flag which indicates if the data have been compromised to some degree by instrumental effects. For HD\,109857, we obtained calibrated light-curves from full-frame images using the TESS-SPOC pipeline \citep{Caldwell2020}, again from the MAST data archive\footnote{\url{https://archive.stsci.edu/hlsp/tess-spoc}}. We removed those points which did not have {\tt QUALITY=0} and normalised each light-curve by dividing the flux of each point by the mean flux of the star in that sector. 
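The quality filtering and normalisation described above can be sketched as follows. Plain arrays stand in for the {\tt PDCSAP\_FLUX} and {\tt QUALITY} columns of a downloaded light-curve file; retrieval of the files themselves (e.g. with the {\tt lightkurve} package) is not shown.

```python
import numpy as np

def clean_and_normalise(time, flux, quality):
    """Keep cadences with QUALITY == 0 and finite flux, then divide by the sector mean."""
    good = (quality == 0) & np.isfinite(flux)
    t, f = time[good], flux[good]
    return t, f / f.mean()

# Toy example: the third cadence is flagged and dropped before normalisation.
t, f = clean_and_normalise(np.array([0.0, 1.0, 2.0, 3.0]),
                           np.array([1.0, 2.0, 99.0, 3.0]),
                           np.array([0, 0, 8, 0]))
# f is now [0.5, 1.0, 1.5]
```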
As an example of the light curves obtained by {\tess}, we show in Figure \ref{v635cas-cygx1} V635\,Cas, which has a clear short-period modulation with the same period (0.3 d) as found by \citet{GutierrezSoto2011}, and HDE\,226868 (the optical counterpart to Cyg\,X-1). The latter displays the signature of the 5.6\,d orbital period through the well-known ellipsoidal modulation of the donor star (hence its appearance as peaks every 2.8\,d). Even though Cyg\,X-1 is an extremely luminous ($\geq$ 10$^{37}$ erg s$^{-1}$) X-ray source, the donor is an OB I star, so the effects of X-ray heating are small. (As a benefit to the reader we show one sector of {\tess} data for each of our targets in Fig. \ref{lightcurves-A} to \ref{lightcurves-D}). Four sources showed evidence of orbital modulations (GP\,Vel, V884\,Sco, HDE\,226868 and V662\,Cas). We detrended these light curves using the {\tt flatten} routine in the {\tt lightkurve} Python package \citep{lightkurve2018} to search for shorter period variability. After some trial and error we chose a window length for the filter of 1\,d: we acknowledge that the resulting detrended light curves could contain features which are a result of this choice of window length. Nevertheless, we show in the lower panel of Figure \ref{v635cas-cygx1} the detrended light curve of HDE\,226868, which reveals significant short period variability in the light curve (we show the normalised and detrended light curves for GP Vel, V884 Sco and V662 Cas in Figure \ref{detrendappendix}). In sector 44, X\,Per shows a prominent sinusoidal variation on a timescale of $\sim$15\,d (see Fig. \ref{lightcurves-C}) which is much shorter than its orbital period of 250 d. We also removed the 15 d trend in X Per to search for shorter period variability. To search for periodic variations in the light curves, we used the generalised Lomb-Scargle periodogram \citep[LS,][]{Zechmeister2009,Press1992}, obtaining a power spectrum for each sector in which the star was observed. 
We show the power spectra obtained for each source in Figures \ref{power-A} to \ref{power-C} with the periods of the most prominent peaks shown in Table \ref{lsperiods} on a sector-by-sector basis. In their study of 432 classical Be stars made using {\tess} data \citet{LabadieBartz2022} classified them based on their power spectra. The classifications include: dominant low-frequency signals (periods $>$2\,d), typical of $g$ mode pulsations; closely spaced frequency groups in the mid-frequency range (0.16--2\,d; $p$ and $g$ modes); dominant high-frequency signals (0.06--0.16\,d; $p$ and $g$ modes); and single isolated frequencies -- sources can have more than one classification. We made an assessment of the power spectra shown in Figures \ref{power-A} to \ref{power-C} and classified them based on the criteria of \citet{LabadieBartz2022}. We find that 13/23 (57 percent) have mid frequency spectra; 8/23 (35 percent) have isolated spectra and 2/23 (9 percent) show both mid and low frequency spectra. In comparison, \citet{LabadieBartz2022} find that 32 percent of Be stars show isolated frequencies and 85 percent show frequency groups. Hen 3-640 was included in both this study and that of \citet{LabadieBartz2022}, with us classifying the power spectra as mid/low and \citet{LabadieBartz2022} indicating it shows low-frequency trends and isolated signals (including at high frequencies). Given the element of uncertainty in classifying the power spectra of the HMXBs in our sample, the light curves analysed here appear to be broadly similar to the classical Be stars reported in \citet{LabadieBartz2022}. In Figure \ref{gaia-hrd-power-type} we show the same Gaia HRD as we showed in Figure \ref{gaia-hrd} but here we have colour coded the sources depending on their classification shown in Table \ref{targetlist}. There appears to be no separation between sources showing mid frequency groups, isolated signals and those showing mid and low frequency signals. 
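The banding criteria above can be encoded directly. The period boundaries (in days) follow the Labadie-Bartz et al. (2022) scheme as summarised here; the helper name is ours, and a real classification would additionally weigh which band dominates the power.

```python
def classify_peaks(periods_d):
    """Map significant peak periods (in days) onto the frequency-band classes."""
    bands = set()
    for p in periods_d:
        if p > 2.0:
            bands.add("low")        # periods > 2 d: g modes
        elif p >= 0.16:
            bands.add("mid")        # 0.16-2 d: p and g mode groups
        elif p >= 0.06:
            bands.add("high")       # 0.06-0.16 d: p and g modes
    return bands

# Hen 3-640's tabulated sector periods (1.066 d, 3.06 d) span the mid and low bands.
bands = classify_peaks([1.066, 3.06])
```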
We now comment on some specific sources. GSC 03588-00834 (SAX J2103+4545) shows two eclipses in its {\tess} light curve separated by 16.6 d. Given that this is longer than the orbital period, this suggests that one of two relatively bright stars ($G$=13.9, $G$=14.3) within 21 arcsec (the {\tess} pixel size) is the source of the eclipses. The light curve also shows evidence of two dip-like features which may be related to the orbital period. Using a Weighted Wavelet Z-Transform \citep{Foster1996} to search for evidence of short period variations, we found evidence for a period which drifts in the three sectors of data between 0.33--0.45 d which is similar to the period reported by \citet{GutierrezSoto2011}. However, once we take into account the dilution from the two spatially nearby stars, the implied amplitude of this signal increases to 1.2 percent. We conclude that it is likely that a quasi-periodic signal on a timescale similar to that reported by \citet{GutierrezSoto2011} is present at times in GSC 03588-00834. We noted earlier that X Per shows a modulation on a timescale of $\sim$15 d in sector 44 whose signature we removed before obtaining a LS power spectrum. In Figure \ref{power-B} we see that power is concentrated at periods near 0.50-0.55 d in sectors 18 and 43, but in sector 44 the peak in the power spectrum is shifted to an isolated peak at $\sim$0.3 d (there is no evidence for an X-ray outburst at this time). In Figure \ref{power-A} we show the power spectra of Hen 3-640: in sector 10, peaks are seen between periods of 0.5-1.1 d but in sectors 37 and 38 there is a clear modulation at a period of $\sim$3 d. Similarly, in Figure \ref{power-A} we find that BZ Cru shows enhanced power between 0.6-0.8 d in sectors 37 and 38 compared to sector 11. For two sources, V725 Tau and HD 249179, we discuss their power spectra in greater detail in \S \ref{outburst}. 
\begin{table*} \begin{tabular}{lr} Object & Periods (d) \\ \hline GP Vel & [8] 0.521 [9] 0.656 \\ Hen 3-640 & [10] 1.066 [37] 3.06 [38] 3.05, 1.072 \\ V801 Cen & [10] 0.665, 0.357, 0.464 [37] 0.358, 0.463, 0.661 [38] 0.463, 0.358, 0.375 \\ BZ Cru & [11] 0.339, 0.104, 0.969 [37] 0.725, 0.637, 0.694 [38] 0.636, 0.339, 0.725 \\ $\mu^{2}$ Cru & [11] 1.614, 0.935, 0.989, 0.336 [37] 0.918, 1.005, 0.337, 1.066 [38] 1.598, 0.930, 0.337, 0.893 \\ HD 141926 & [12] 0.904, 0.953, 0.419 [39] 0.904, 0.407, 0.415 \\ V884 Sco & [12] 0.729, 0.763, 0.614 \\ HDE 226868 & [14] 0.661, 0.597, 0.578 \\ GSC 03588-00834 & [16] 0.448, 0.102 \\ V490 Cep & [16] 1.970, 0.538 [17] 1.963, 1.772 \\ BD +53 2790 & [16] 1.120, 0.981, 0.647 [17] 0.998, 0.834, 0.791 \\ V662 Cas & [18] 0.497, 0.352, 0.572 [24] 0.687, 0.741, 0.791 [25] 0.594, 0.613, 0.578 \\ V635 Cas & [18] 0.3003 [24] 0.3003 [25] 0.3010 \\ BQ Cam & [18] 0.419, 0.755 [19] 0.420, 0.756 \\ X Per & [18] 0.574, 0.277 [43] 0.575, 0.277 [44] 0.277, 0.529 \\ CI Cam & [19] 0.406 \\ LS V +44 17 & [19] 0.385, 0.460\\ V420 Aur & [19] 0.672, 1.446 \\ HD 109857 & [11] 0.568, 0.284 [12] 0.569, 0.285 [38] 0.571, 0.454, 0.285 [39] 0.570, 0.284, 0.453 \\ V725 Tau & [43] 0.468 [44] 0.469 [45] 0.468 \\ IGR J06074+2205 & [43] 0.434 [44] 0.435 \\ BD +47 3129 & [41] 1.225 \\ HD 249179 & [43] 0.283 [44] 0.490, 0.283 [45] 0.282 \\ \hline \end{tabular} \caption{The principal periods determined for our targets on a sector-by-sector basis (indicated by the square brackets) using the LS periodogram.} \label{lsperiods} \end{table*} \section{Outbursts} \label{outburst} {\tess} has been used to observe supernovae prior to their detection in ground-based transient surveys (e.g. \citet{Fausnaugh2021}) and also serendipitous outbursts of previously known Cataclysmic Variables (e.g. \citet{Court2019}). Moreover, outbursts have been seen in many classical Be stars. We therefore manually searched the {\tess} light curves in our sample for any evidence of outbursts. 
We found outbursts in two of them, V725\,Tau and HD\,249179. \subsection{V725\,Tau} V725\,Tau (1A\,0535+262/HD\,245770) is a prototypical transient X-ray pulsar which has been observed to show many X-ray outbursts, including a recent giant outburst (peaking at 1.2$\times10^{38}$ erg s$^{-1}$) in Nov 2020 (e.g. \citet{Kong2021}). Since then, observations made using MAXI showed a weaker outburst around 2021 June 20 (the field was not being observed by {\tess}) and an even weaker X-ray outburst a few months later, starting on Oct 13 (MJD=59500). In Figure \ref{outburst} (left) we show the simultaneous {\tess} and X-ray (MAXI) light curves of V725\,Tau during this much weaker outburst. The increase in optical flux suggests an outburst amplitude of $\sim$0.2 mag (there are no significant issues of dilution from nearby bright stars). Figure \ref{outburst} indicates there was a clear periodic ($\sim$0.5 d) signal in all three sectors (43--45) of {\tess} data. To investigate this further, we detrended each of these sectors so as to remove the large outburst variation, and show the resulting light curve in Figure \ref{V725-detrend}: it appears like a typical multi-periodic pulsation where modes come and go on a quasi-period of $\sim$6.4\,d, causing a change in the amplitude of the main pulsation. We applied a shifted time window version of the epoch folding periodogram \citep{Davies1990} to search for the evolution of the pulsations over the TESS observations. Firstly, long term variations were removed, after which we moved a 2\,d wide time window in steps of 0.05\,d through the observations and carried out the epoch folding analysis with 30 phase bins for each step. We show the resulting trailed periodogram in the upper left hand panel of Figure \ref{outburst}. There is no apparent change in the amplitude of the $\sim$0.5 d period during or immediately after the optical outburst. 
However, it is clear that we detect at least three epochs of increased pulsation amplitude, separated by $\sim$ 30\,d. An Analysis of Variance (AoV) periodogram \citep{Schwarzenberg1996} using the full detrended light curve finds a peak in the power spectrum at 13.6\,d: this is half of the reported period of $\sim$28\,d determined using radial velocities \citep{Hutchings1978}. Given that the orbital period is 111\,d, the origin of the 28\,d period is unclear. \subsection{HD 249179} \label{subhd249179} 4U\,0548+29 \citep{Forman1978} was associated with the B5ne star HD\,249179 by \citet{Wackerling1970} and classed as an HMXB in the catalogue of \citet{Liu2006}. However, \citet{Torrejon2001} made pointed observations of HD\,249179 using BeppoSAX and found no evidence for X-ray emission, concluding that HD\,249179 was not in fact the optical counterpart of 4U\,0548+29. Nevertheless, we have retained HD\,249179 in this study as a potential HMXB, since Be stars are known to undergo long intervals of weak or no X-ray emission, as a result of their highly eccentric orbits \citep{OkazakiNegueruela2001}. In Figure \ref{outburst} (right hand panels) we show the light curve of HD\,249179 during sector 45: in the second half, after a slight decline in flux, there is a rapid increase of $\sim$0.1 mag (this increases to $\sim$0.14 mag once dilution from nearby bright stars is taken into account). We believe this is the first recorded optical outburst from HD\,249179. The detrended light curve of HD\,249179 (Figure \ref{HD249179-detrend}) shows a quasi-periodic signal on a period of $\sim0.28$\,d. This light curve also highlights how the amplitude of the short period modulation increases by a factor of $\sim$3 at the same time as the start of the optical outburst. This is also demonstrated in the sliding epoch folding periodogram shown in the upper right hand panel of Figure \ref{outburst}. 
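The sliding-window epoch-folding analysis used for the trailed periodograms can be sketched as follows. A simplified statistic, the variance of the phase-binned mean profile, stands in for the Davies (1990) periodogram, and the function names and defaults (2 d window, 0.05 d step, 30 phase bins, matching the values quoted above) are ours.

```python
import numpy as np

def epoch_fold_stat(t, f, period, nbins=30):
    """Epoch-folding statistic: variance of the phase-binned mean profile.

    A large value means the light curve folds coherently at this trial period.
    (Simplified stand-in for the Davies 1990 periodogram statistic.)
    """
    phase = (t / period) % 1.0
    idx = np.minimum((phase * nbins).astype(int), nbins - 1)
    means = [f[idx == b].mean() for b in range(nbins) if np.any(idx == b)]
    return np.var(means)

def sliding_fold(t, f, periods, width=2.0, step=0.05, nbins=30):
    """Trailed periodogram: epoch folding evaluated in a sliding time window."""
    starts = np.arange(t.min(), t.max() - width, step)
    grid = np.empty((starts.size, len(periods)))
    for i, s in enumerate(starts):
        w = (t >= s) & (t < s + width)
        grid[i] = [epoch_fold_stat(t[w], f[w], p, nbins) for p in periods]
    return starts, grid

# Toy example: a 0.5 d sinusoid folds coherently at 0.5 d but not at 0.37 d.
t = np.arange(0.0, 10.0, 2.0 / 1440.0)
f = 1.0 + 0.01 * np.sin(2.0 * np.pi * t / 0.5)
starts, grid = sliding_fold(t, f, [0.37, 0.5], step=1.0)
```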
In addition there is some evidence that after the peak of the outburst has been reached, most of the power is transferred from the initial 0.28\,d period to twice that (i.e. from 3.6 d$^{-1}$ to 1.7 d$^{-1}$ in frequency). We speculate that material has been ejected from this Be star to its disc. \section{Discussion} We now discuss pulsations from isolated early type stars and previous observations of outbursts from Be type stars. \subsection{Pulsations from early type stars} \citet{Bowman2020} gives an overview of observations of high-mass stars including $\beta$\,Cep and SPB stars. {\tess} observations of nearly 100 OB stars, coupled with high resolution optical spectroscopy, were presented by \citet{Burssens2020}, who found that many of them showed pulsations and concluded that if the modes of pulsation could be determined then asteroseismic modelling would be possible. They identified a group of pulsators which show periods of a few days, such as SPBs, which were likely due to coherent $g$ mode pulsations. Furthermore, $\beta$\,Cep stars show periods of a few tenths of a day, which are likely due to coherent $p$ mode pulsations. In addition, there is a group of stars which appear to be hybrid pulsators, showing both $g$ and $p$ mode pulsations. \citet{Sharma2022} presented {\tess} observations of 119 B type stars which were members of the Sco-Cen association and found pulsations in 2/3 of the stars they sampled. Although they initially applied a cut-off at 0.4\,d, where stars were likely to be SPB if the main periods were $>$0.4\,d and $\beta$\,Cep if $<$0.4\,d, they conclude that it was not always possible to separate these stars purely on this criterion. Indeed there were stars which appeared to be hybrids showing power in both frequency ranges. However, based on these criteria, we find that 42\% of our sample were hybrids, 33\% were SPBs and 25\% were $\beta$\,Cep stars. 
This compares with 23\% hybrids, 28\% SPBs and 40\% $\beta$\,Cep stars as we have estimated from Table 1 of \citet{Sharma2022}. Given the indirect nature of the comparison, and that the \citet{Sharma2022} study covers all B spectral sub-types, whilst our sample is more concentrated on earlier sub-types or even giant/sub-giants, there is not a large difference in the fractions of types in these two studies. \subsection{Outbursts} Optical outbursts have been seen from many Be stars with timescales ranging from days and weeks to years (e.g. \citet{LabadieBartz2018}). The outburst causes material to be expelled from the star and forms (or maintains) an {\sl excretion} disc. Much of this material eventually returns to the star (see \citet{Rivinius2013} for details). Using {\tess} observations we have detected optical outbursts from V725\,Tau and HD\,249179. V725\,Tau is a {\em bona fide} HMXB with a neutron star as its primary. In contrast, there is some doubt that HD\,249179 is an HMXB; it may instead be an isolated Be star \citep{Torrejon2001}. The outburst from V725\,Tau showed no change in the period or amplitude of the short period modulation which is likely due to $p$-mode oscillations in the donor star. In contrast, we have found evidence that the period changes slightly during the outburst of HD 249179, with strong evidence that the amplitude increases significantly at the time when the outburst starts. The outburst which we detected in HD\,249179 is very similar to the outburst seen in HD\,49330 (B0.5IVe) made using {\sl CoRot} data \citep{Huat2009}. They found that the amplitudes of the $p$-modes (short period) and $g$-modes (long period) were directly correlated with the outburst: the amplitude of the $p$-modes decreased just before the outburst and increased after the start of outburst. This is almost identical to our findings for HD\,249179. Indeed, there is evidence that the combination of pulsation modes actually causes outbursts from Be stars (e.g. 
\citet{Baade2017}), with the increase in the amplitude of the pulsations during the outburst being explained by stochastically excited waves (e.g. \citet{Baade2018,Neiner2020}). More detailed work would be needed to identify the modes in HD\,249179 before a similar study could be undertaken on the {\tess} outburst data. It is not clear whether HD 249179 is a binary system. Unfortunately, there are no radial velocity data for it in Gaia DR3, presumably since it is in the Galactic plane ($b=1.9^{\circ}$) where blending of the RVS spectra can be a serious issue. Studies on the multiplicity of early type stars (see \citet{Sana2017} for an overview) suggest the multiplicity fraction of B4--9 type stars in the Milky Way is $\sim$1/5. It is therefore a matter of speculation whether the outburst seen in HD 249179 is due to tidal interaction with a companion star. \section{Conclusions} We have shown that each of the 23 optical counterparts to the HMXBs in this survey shows clear evidence of quasi-periodic modulations with periods in the range $\sim$0.1--1 d which fully supports the earlier ground-based observations of HMXBs by \citet{GutierrezSoto2011} and \citet{Schmidtke2014,Schmidtke2016,Schmidtke2019}. Many other massive stars, including Be stars, also show quasi-periodic pulsations with similar periods. To be able to derive fundamental parameters of the stars it is essential to identify the modes in the power spectrum. This is not normally achievable just from broad band photometry, but generally must be combined with high resolution spectra (e.g. \citet{Aerts2019}). However, this will be much more difficult to achieve for HMXBs where an accretion disc will make it difficult to separate the disc and stellar pulsation emission. Observations of massive eclipsing binaries (e.g. \citet{Southworth2020}) show that tidal forces can affect the pulsation frequencies. Given that some HMXBs, primarily those with Be donor stars, have a non-negligible eccentricity (e.g. 
Vela X-1 has $e\sim$0.09) it is possible that a more detailed study of the pulsation spectrum as a function of their orbital period may reveal some dependency on the orbital phase. Finally, in Cycle 5, the cadence of {\tess} full frame images will be 200 s which will allow many more HMXBs and early type stars to be searched for pulsations. \section{Acknowledgments} This paper includes data collected by the {\sl TESS} mission, for which funding is provided by the NASA Explorer Program. This work presents results from the European Space Agency (ESA) space mission {\sl Gaia}. {\sl Gaia} data is being processed by the {\sl Gaia} Data Processing and Analysis Consortium (DPAC). Funding for the DPAC is provided by national institutions, in particular the institutions participating in the {\sl Gaia} MultiLateral Agreement (MLA). The Gaia mission website is \url{https://www.cosmos.esa.int/gaia}. The Gaia archive website is \url{https://archives.esac.esa.int/gaia}. This research has made use of MAXI data provided by RIKEN, JAXA and the MAXI team. Armagh Observatory \& Planetarium is core funded by the Northern Ireland Executive through the Dept for Communities. We thank the anonymous referee for a detailed review of the manuscript. \section*{Data Availability} The {\tess} data are available from the NASA MAST portal\footnote{\url{https://archive.stsci.edu/}} and MAXI data are available from the RIKEN MAXI portal.\footnote{\url{http://maxi.riken.jp}} \vspace{4mm} \appendix \section{Light curves} The light curves of all the HMXBs in our sample, indicating the {\tess} sector from which they originated. Where more than one sector of data was obtained, we show the first sector. \section{Detrended light curves} For sources showing high amplitude variations in their light curve over the orbital period we detrended the signature of the orbital period to search for short period pulsations. 
In Figure \ref{v635cas-cygx1} we show the original and detrended light curve of HDE\,226868 (Cyg X-1). Here we show the original and detrended light curves of GP\,Vel, V884\,Sco and V662\,Cas. \section{Light Curves} The light curves of V725 Tau and HD 249179, which have been detrended to remove the signature of the outburst. The times of the optical outbursts are indicated for each source.
Title: Catastrophic Cooling in Superwinds. III. Non-equilibrium Photoionization
Abstract: Observations of some starburst-driven galactic superwinds suggest that strong radiative cooling could play a key role in the nature of feedback and the formation of stars and molecular gas in star-forming galaxies. These catastrophically cooling superwinds are not adequately described by adiabatic fluid models, but they can be reproduced by incorporating non-equilibrium radiative cooling functions into the fluid model. In this work, we have employed the atomic and cooling module MAIHEM implemented in the framework of the FLASH hydrodynamics code to simulate the formation of radiatively cooling superwinds as well as their corresponding non-equilibrium ionization (NEI) states for various outflow parameters, gas metallicities, and ambient densities. We employ the photoionization program CLOUDY to predict radiation- and density-bounded photoionization for these radiatively cooling superwinds, and we predict UV and optical line emission. Our non-equilibrium photoionization models built with the NEI states demonstrate the enhancement of C IV, especially in metal-rich, catastrophically cooling outflows, and O VI in metal-poor ones.
https://export.arxiv.org/pdf/2208.12030
\newcommand{\vdag}{(v)^\dagger} \newcommand\aastex{AAS\TeX} \newcommand\latex{La\TeX} \renewcommand{\vec}[1]{\mathbf{#1}} \newcommand{\uvec}[1]{\mathbf{\hat{#1}}} \shorttitle{Catastrophic Cooling in Superwinds. III. Non-equilibrium Photoionization} \shortauthors{Danehkar et al.} \graphicspath{{./}{figures/}} \begin{document} \title{Catastrophic Cooling in Superwinds. III. Non-equilibrium Photoionization} \correspondingauthor{A.~Danehkar} \email{danehkar@eurekasci.com} \author[0000-0003-4552-5997]{A.~Danehkar} \affiliation{Eureka Scientific, Inc., 2452 Delmer Street Suite 100, Oakland, CA 94602-3017, USA} \author[0000-0002-5808-1320]{M. S. Oey} \affiliation{Department of Astronomy, University of Michigan, 1085 S. University Ave, Ann Arbor, MI 48109, USA} \author[0000-0001-9014-3125]{W. J. Gray} \altaffiliation{~Private address.} \date[ ]{\footnotesize\textit{Received 2022 June 1; revised 2022 July 30; accepted 2022 August 3}} \keywords{\href{https://astrothesaurus.org/uat/572}{Galactic winds (572)}; \href{https://astrothesaurus.org/uat/1656}{Superbubbles (1656)}; \href{https://astrothesaurus.org/uat/2028}{Cooling flows (2028)}; \href{https://astrothesaurus.org/uat/1565}{Star forming regions (1565)}; \href{https://astrothesaurus.org/uat/694}{H II regions (694)}; \href{https://astrothesaurus.org/uat/1570}{Starburst galaxies (1570)}; \href{https://astrothesaurus.org/uat/459}{Emission line galaxies (459)}; \href{https://astrothesaurus.org/uat/979}{Lyman-break galaxies (979)}; \href{https://astrothesaurus.org/uat/978}{Lyman-alpha galaxies (978)} \vspace{4pt} \newline \textit{Supporting material:} interactive figure, machine-readable tables} \section{Introduction} \label{cooling:introduction} Observations of star-forming galaxies reveal the presence of galactic outflows, known as \textit{superwinds} \citep{Heckman1990}, with multiphase structures having regions with low temperatures ($10^{1{\rm -}3}$\,K) based on sub-millimeter and infrared observations
\citep{Ott2005,Weis2005,Bolatto2013,Leroy2015}, warm ($\approx 10^{4{\rm -}4.5}$\,K) wind regions as seen in optical and near-UV measurements \citep{Walsh1989,Izotov1999,James2009,James2013}, as well as hot ($10^{6.5{\rm -}8}$\,K) bubbles in X-ray observations \citep{dellaCeca1996,Strickland1997,Martin1999,Ott2005a}. % Historically, starburst-driven superwinds have been modeled using adiabatic assumptions \citep{Castor1975,Weaver1977,Chevalier1985,Canto2000}. % However, adiabatic models are not consistent with suppressed superwinds observed in several starburst galaxies such as M82 \citep{Smith2006,Westmoquette2014}, NGC\,5253 \citep{Turner2017}, NGC\,2366 \citep{Oey2017}, and extreme Green Peas \citep[GPs;][]{Jaskot2017}. In particular, cooling superwinds can produce virial temperatures below $10^4$\,K, and can stimulate star formation \citep{Fabian1984,Sarazin1988,Krause2016,Silich2017} and extensive molecular gas \citep[e.g.,][]{Veilleux2020}. Semianalytic solutions for radiative superwinds, which have been obtained with cooling functions \citep[e.g.,][]{Silich2003,Silich2004,Tenorio-Tagle2005,Tenorio-Tagle2007}, show that temperature profiles of superwinds have significant departures from the adiabatic solutions, and are therefore also known as \textit{catastrophic cooling} models. These semi-analytical results indicate that radiative cooling is contingent on the gas metallicity, mass loading, and kinetic heating efficiency. A semi-analytic analysis of such a model by \citet{Wuensch2011} also demonstrates that strong radiative cooling with low heating efficiency is more sensitive to the gas metallicity, while the metallicity effect is not significant with large mass loading. More recently, hydrodynamic simulations of superwinds have also been carried out by \citet{Danehkar2021}, which confirmed the parametric dependence of radiative cooling. 
Photoionization calculations have been conducted by \citet{Danehkar2021} under the assumption that gas is in collisional ionization equilibrium (CIE) and photoionization equilibrium (PIE). The radiative cooling functions have been calculated in CIE by a number of authors \citep[][]{Cox1969,Raymond1976,Shull1982,Sutherland1993,Bryans2006}. The cooling functions for gas in ionization equilibrium have been employed for modeling radiatively cooling superwinds \citep[see e.g.][]{Schneider2018,Lochhaas2018,Lochhaas2020}. However, the CIE assumption is not entirely correct when the time-scale of ionization or recombination of a gas exceeds its cooling time \citep{Gnat2007,Oppenheimer2013}. Time-dependent radiative cooling functions produce non-equilibrium ionization (NEI) states that have significant departures from CIE states \citep{Kafatos1973,Shapiro1976,Schmutzler1993,Gnat2007,Vasiliev2011,Oppenheimer2013}. At temperatures below $10^6$\,K, where the gas is transiting from pure CIE to PIE, NEI departures from CIE are predicted to be considerable \citep[][]{Vasiliev2011}. % A non-equilibrium chemistry network with cooling functions was incorporated into a module called \maihem \citep{Gray2015,Gray2016} for hydrodynamic simulations with \flash. The formation of radiatively cooling superwinds and their emission lines has been investigated using \maihem \citep[][hereafter Paper~I]{Gray2019a}. In \citet[][hereafter Paper~II]{Danehkar2021}, we explored the parameter space for radiatively cooling superwinds using a grid of hydrodynamic simulations from \maihem, made in the parameter space of the metallicity ($Z$), mass loading ($\dot{M}$), wind velocity ($V_{\infty}$), and ambient density ($n_{\rm amb}$), finding that cooling is enhanced by increasing the gas metallicity and mass-loading rate, and decreasing the wind velocity. 
We employed the physical conditions produced by our hydrodynamic simulations to create a grid of collisional ionization and photoionization (CPI) models, also predicting UV and optical emission lines under CIE conditions. Paper~I indicates that NEI states can generate enhanced emission from highly ionized UV lines such as \ionic{O}{vi} and \ionic{C}{iv}. Therefore, in the present work, we additionally incorporate NEI states predicted by \maihem into non-equilibrium photoionization (NPI) models in the same parameter space grid considered in Paper~II, which allows us to investigate the implications of NEI calculations for catastrophically cooling superwinds. In Section \ref{cooling:winds}, we briefly describe our \maihem hydrodynamic settings and results. In Section \ref{cooling:photoionization}, we explain how we build NPI models with NEI states generated by \maihem. UV emission lines produced by \cloudy and diagnostic diagrams are explained in Section~\ref{cooling:cloudy:results}. Applications of NPI models for observations are discussed in Section~\ref{cooling:observations}, followed by a summary and conclusions in Section~\ref{cooling:conclusion}. \section{Hydrodynamic Simulations} \label{cooling:winds} A complete description of the \maihem code and our adopted parameters is given in Paper~II. Here, we provide a brief summary.
We use a directionally unsplit hydrodynamic solver \citep{Lee2009,Lee2009a,Lee2013} with a hybrid Riemann solver \citep{Toro1994} and a second-order MUSCL--Hancock reconstruction scheme \citep{vanLeer1979} in the framework of the adaptive mesh hydrodynamics code \flash v4.5 \citep{Fryxell2000} to solve the continuity equation, Euler equation, and energy conservation equation of the fluid model of superwinds with negligible gravitational forces in one-dimensional spherical coordinates that are coupled to the radiative cooling and photo-heating functions of the \maihem package: \begin{align} \frac{d \rho _{w}}{d t}+\frac{1}{r^2} \frac{d}{dr} \left( \rho_{w} u_{w} r^2 \right) &=q_{m}, \label{eq_1} \\ \rho _{w}\frac{d u_{w}}{d t}+\rho_{w} u_{w} \frac{d u_{w}}{dr}+\frac{d P_{w}}{dr} &=-q_{m} u_{w}, \label{eq_2} \\ \rho _{w}\frac{d {E}_{w}}{d t}+ \frac{1}{r^2} \frac{d}{dr} \bigg[ \rho_{w} u_{w} r^2 \bigg( \frac{u_{w}^{2}}{2} + & \frac{\gamma}{\gamma -1} \frac{P_{w}}{\rho_{w}} \bigg) \bigg] \notag \\ &=q_{e}-q_{c}+q_{h}, \label{eq_3} \end{align} where $\rho_w$ is the fluid density, ${u}_w$ the fluid velocity, $P_{w}=\left( \gamma -1\right) \rho _{w}\epsilon _{w}$ the thermal pressure with an ideal gas equation of state, ${E}_{w}=\epsilon _{w}+\frac{1}{2}\left\vert u_{w}\right\vert ^{2}$ the total energy per unit mass, $\epsilon _{w}$ the internal energy per unit mass, $\gamma=5/3$ the ratio of specific heats, $q_{m} = \dot{M}_{\rm sc} / V_{\rm sc}$ the mass deposition rate per unit volume, $q_{e} = \dot{E}_{\rm sc} / V_{\rm sc}$ the energy deposition rate per unit volume, $\dot{M}_{\rm sc}$ the mass-loading rate, $\dot{E}_{\rm sc}$ the energy deposition rate, $V_{\rm sc}= \frac{4}{3} \pi R^3_{\rm sc}$ the SSC volume, $R_{\rm sc}$ the cluster radius, $q_{c}$ the radiative cooling rates per unit volume, and $q_{h}$ the photo-heating rate per unit volume. 
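As an illustrative sketch only (not the directionally unsplit FLASH/\maihem solver described above), the continuity equation can be discretized with a first-order upwind, conservative update on a one-dimensional spherical grid; the grid extent, velocity field, and source amplitude $q_m$ below are assumed round numbers:

```python
import numpy as np

PC = 3.086e18                                # cm
nr = 200
r = np.linspace(0.1, 10.0, nr + 1) * PC      # cell faces
rc = 0.5 * (r[1:] + r[:-1])                  # cell centers
dr = np.diff(r)

rho = np.full(nr, 1.0e-24)                   # g cm^-3
u = np.full(nr + 1, 1.0e7)                   # fixed face velocities, cm/s
q_m = np.where(rc < 1.0 * PC, 1.0e-36, 0.0)  # mass deposition inside R_sc only

dt = 0.1 * dr.min() / u.max()                # CFL-limited time step
for _ in range(50):
    rho_face = np.concatenate(([rho[0]], rho))   # upwind states (u > 0)
    flux = rho_face * u * r**2                   # rho * u * r^2 at faces
    rho += dt * (q_m - (flux[1:] - flux[:-1]) / (rc**2 * dr))

# Density stays positive under the CFL constraint
print(np.all(rho > 0.0))
```

The real solver is second-order (MUSCL--Hancock) with a hybrid Riemann solver and also advances the momentum and energy equations; this fragment only shows how the geometric $r^2$ factors and the source term $q_m$ enter a conservative update.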
In a steady state ($d/d t=0$), the fluid equations take the forms presented in Paper~II that were semianalytically studied by \citet{Silich2004}. Outside the SSC ($r > R_{\rm sc}$), $q_{m}=0$ and $q_{e}=0$, while $q_{c}$ and $q_{h}$ also vanish inside the SSC ($r < R_{\rm sc}$) with negligible radiative effects. The steady-state fluid equations reduce to the adiabatic solutions obtained by \citet{Chevalier1985} and \citet{Canto2000} in the absence of the radiative functions $q_{c}$ and $q_{h}$. As fully described in Paper~II, we set the boundary conditions for the density, temperature, and velocity of the outflow at the cluster boundary ($r=R_{\rm sc}$) according to the semianalytic solutions \citep{Chevalier1985,Canto2000,Silich2004} as follows: \begin{equation}% \begin{array} [c]{ccc}% \rho_{w} = \dfrac{ \dot{M}_{\rm sc}}{ 2 \pi R_{\rm sc}^2 V_{\infty}},~~~~ & T_{w} = \dfrac{ 1 }{4\gamma} \dfrac{\mu_{\rm p}}{ k_{\rm B}} V_{\infty}^2 , ~~~~ u_{w} = \dfrac{1}{2} V_{\infty} , \end{array} \end{equation} where $V_{\infty} = ( 2 \dot{E}_{\rm sc} / \dot{M}_{\rm sc} )^{1/2}$ is the actual wind velocity, $\mu_{\rm p}=\mu m_{\rm p}$ the mean mass per particle, $\mu \approx 0.61$ the mean atomic weight of particles for a fully ionized gas in units of the proton mass $m_{\rm p}$, and $k_{\rm B}$ the Boltzmann constant. We also set the initial conditions ($t=0$) of the density, temperature, and velocity of the ambient medium outside the SSC ($r> R_{\rm sc}$) as $\rho_{w} = \mu_{\rm p} n_{\rm amb}$, $T_{w} = T_{\rm amb}$, and $u_{w} = 0$, where $n_{\rm amb}$ is the number density of the ambient medium, and $T_{\rm amb}$ is the ambient temperature that is calculated in PIE by \cloudy for the density profile predicted by an initial \maihem run with $T_{\rm amb}=10^3$\,K. 
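As a quick numerical check of the boundary conditions above, using assumed fiducial round numbers (not the paper's tabulated grid values) of $\dot{M}_{\rm sc}=10^{-2}$\,M$_{\odot}$\,yr$^{-1}$, $V_{\infty}=500$\,km\,s$^{-1}$, and $R_{\rm sc}=1$\,pc:

```python
import math

# Physical constants (cgs)
M_SUN, YR, PC = 1.989e33, 3.156e7, 3.086e18
K_B, M_P = 1.381e-16, 1.673e-24
MU, GAMMA = 0.61, 5.0 / 3.0     # mean atomic weight (ionized gas); adiabatic index

# Assumed fiducial inputs (illustrative only)
mdot = 1.0e-2 * M_SUN / YR      # mass-loading rate, g/s
v_inf = 500.0e5                 # wind terminal speed, cm/s
r_sc = 1.0 * PC                 # cluster radius, cm

# Boundary conditions at r = R_sc from the expressions above
rho_w = mdot / (2.0 * math.pi * r_sc**2 * v_inf)
T_w = (1.0 / (4.0 * GAMMA)) * (MU * M_P / K_B) * v_inf**2
u_w = 0.5 * v_inf

print(f"rho_w = {rho_w:.2e} g cm^-3, T_w = {T_w:.2e} K, u_w = {u_w:.2e} cm/s")
```

For these inputs the boundary temperature comes out at a few $\times 10^6$\,K, i.e. the hot-wind regime expected at the cluster edge.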
The radiative cooling rate $q_{c}$ and photo-heating rate $q_{h}$ per volume are calculated by the \maihem module \citep{Gray2019} using the ion-by-ion cooling efficiencies $\Lambda_i$, the photo-heating efficiencies $\Gamma_i$, the number densities $n_i$ of ionic species $i$, and the number density of electrons $n_e$: \begin{equation}% \begin{array} [c]{cc}% q_{c} = \displaystyle\sum_{i}^{} n_i n_e \Lambda_i ,~~~~ & q_{h} = \displaystyle\sum_{i}^{} n_i \Gamma_i, \end{array} \end{equation} where the cooling efficiencies $\Lambda_i$ are calculated for a given wind temperature $T_{w}$ through interpolation on the ion-by-ion cooling rates from \citet[][]{Gnat2012} that are also extended down to 5000\,K by \citet{Gray2015}, and the photo-heating efficiencies $\Gamma_i = \int^{\infty}_{\nu_{0,i}} (4 \pi J_{\nu}/h\nu) h (\nu-\nu_{0,i}) \sigma_{i}(\nu) {\rm d}\nu$ ($\nu$ frequency, $\nu_{0,i}$ ionization frequency, and $h$ the Planck constant) are calculated for a given ionizing spectral energy distribution (SED) $J_{\nu}$ using the photoionization cross sections $\sigma_{i}(\nu)$ from \citet{Verner1995} and \citet{Verner1996} as implemented by \citet{Gray2016} and expanded for further ions in the species network of \citet{Gray2019}. The \maihem module for photo-heating calculations is supplied with the ionizing SED made by Starburst99 \citep{Leitherer1999,Leitherer2014} for the fiducial model at age 1\,Myr with total stellar mass of $M_\star =2.05\times10^6$\,M$_{\odot}$ and metallicity close to the gas metallicity $Z$. We produce a grid of hydrodynamic simulations in the parameter space of the metallicity $Z$, mass loading $\dot{M}_{\rm sc}$, wind velocity $V_{\infty}$, and ambient density $n_{\rm amb}$. 
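The parameter-space grid can be sketched as follows; the grid values and the $Z^{0.72}$ and $Z^{0.13}$ metallicity scalings are those quoted from \citet{Mokiem2007} and \citet{Vink2001}, and the dictionary layout is purely illustrative:

```python
from itertools import product

# Grid values quoted in this section (hatted metallicity Z/Z_sun)
z_hat_vals = [1.0, 0.5, 0.25, 0.125]
mdot_base = [1e-1, 1e-2, 1e-3, 1e-4]    # M_sun/yr, before Z scaling
vinf_base = [250.0, 500.0, 1000.0]      # km/s, before Z scaling
n_amb_vals = [1.0, 10.0, 1.0e2, 1.0e3]  # cm^-3

grid = [
    {
        "Z_hat": z,
        "Mdot": m * z**0.72,    # Mdot proportional to Z^0.72 (Mokiem et al. 2007)
        "V_inf": v * z**0.13,   # V_inf proportional to Z^0.13 (Vink et al. 2001)
        "n_amb": n,
    }
    for z, m, v, n in product(z_hat_vals, mdot_base, vinf_base, n_amb_vals)
]

print(len(grid))  # 4 x 4 x 3 x 4 = 192 models
```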
We consider the gas metallicity of $\hat{Z} \equiv Z/$Z$_{\odot}=1$, $0.5$, $0.25$, and $0.125$, where $Z/$Z$_{\odot}=1$ is associated with the baseline ISM abundances given in Table~2 of Paper~II, the mass-loading rate $\dot{M}_{\rm sc} = 10^{-1}$, $10^{-2}$, $10^{-3}$, and $10^{-4} \times \hat{Z}^{0.72}$ M$_{\odot}$\,yr$^{-1}$ according to $ \dot{M}_{\rm sc} \varpropto Z^{0.72}$ \citep{Mokiem2007}, the wind terminal speed $V_{\infty}= 250$, $500$, and $1000 \times \hat{Z}^{0.13}$ km\,s$^{-1}$ according to $V_{\infty} \varpropto Z^{0.13}$ \citep{Vink2001}, and the ambient density $n_{\rm amb}=1$, $10$, $10^2$, and $10^3$\,cm$^{-3}$. The C/O ratio is also parameterized by the metallicity \citep[described by][]{Danehkar2021}. The resulting density and temperature profiles for various parameters are shown as an interactive figure and animation in Figures 2 and 3 of Paper~II. In Paper~II, we classified our simulated superwinds according to adiabatic/radiative cooling and the presence/absence of the hot bubble, namely: the adiabatic wind (AW), adiabatic bubble (AB), catastrophic cooling (CC), and catastrophic cooling bubble (CB). Temperature and density profiles of one example of each of the four wind classification modes are plotted in Figure~\ref{fig:temp:dens:profiles}. Moreover, we assigned the adiabatic, pressure-confined (AP) and cooling, pressure-confined (CP) wind modes to those models whose bubble expansions are stalled by thermal pressures from the ambient media (see animation in Figure~3 of Paper~II). While models with adiabatic and radiative cooling (AW and CC) thereby suppress the hot bubble, there are many models with strong radiative cooling that still do have a bubble (CB). Two fully suppressed wind modes were also defined, namely no wind (NW) and momentum-conserving (MC) evolution. In the NW mode, the wind is completely inhibited where the supersonic outflow pressure analytically found by \citet{Canto2000} is less than the ambient pressure.
In the MC mode, radiative cooling caused by high mass deposition and low heating efficiency is able to completely suppress the wind before it is launched. The wind classification in the parameter space shown in Figure 4 of Paper~II indicates that radiative cooling is enhanced by an increase in $\dot{M}_{\rm sc}$ and $Z$, and a decrease in $V_{\infty}$. Our hydrodynamic simulations have been conducted without the gravity module. In particular, the Jeans length of the ionized ambient medium with density $10^3$ cm$^{-3}$ and temperature $10^4$\,K (corresponding to the sound speed $\approx 15$ km\,s$^{-1}$) is approximately equal to 100\,pc, which could be comparable to the size of the \ionic{H}{ii} region. Thermal pressure cannot resist gravitational collapse on scales larger than the Jeans length, while the self-gravity is negligible below it. Gravitational collapse occurs in cool, high-density clouds formed by radiatively cooling superwinds, which leads to star formation. Detailed handling of self-gravity and optical depth for the ambient diffuse radiation is computationally expensive \citep[see e.g.][]{Truelove1997,Wuensch2018,Wuensch2021} and is beyond the scope of this work. \section{Non-equilibrium Photoionization Modeling} \label{cooling:photoionization} We conduct non-equilibrium photoionization modeling for the physical conditions ($T_{w}$ and $n_{w}$) and NEI states produced by our \maihem simulations using \cloudy v17.02 \citep{Ferland1998,Ferland2013,Ferland2017}. The SED and ionizing luminosity of the SSC calculated by Starburst99 are provided as inputs into our \cloudy models to specify an ionizing source (the same as Paper II). Non-equilibrium calculations are now made using the outflow temperature $T_w$, outflow density $n_w$, and time-dependent NEI states obtained as a function of the radius $r$ derived from our \maihem simulations.
In Paper~II, we generated PI models based on the Starburst99 SEDs using only $n_w$, and CPI models with $T_w$ and $n_w$ produced by \maihem without the use of NEI states. Time-dependent calculations of the NEI states are performed by \maihem using the atomic data for recombination, collisional ionization, and photoionization. In non-equilibrium conditions, the number density of each ion $n_{i}$ of each chemical element $A$ evolves according to the time-dependent ionization balance equation \citep[see e.g.][]{Dopita2003}: \begin{align} \frac{d n_{A,i}}{d t} = & n_{\rm e} n_{A,i+1}\alpha^{A,i+1}_{\rm rec} - n_{\rm e} n_{A,i}\alpha^{A,i}_{\rm rec} \notag \\ & + n_{\rm e} n_{A,i-1} R^{A,i-1}_{\rm coll} - n_{\rm e} n_{A,i}R^{A,i}_{\rm coll} \notag \\ & + n_{A,i-1} \zeta^{A,i-1}_{\rm phot} - n_{A,i} \zeta^{A,i}_{\rm phot}, \label{eq_4} \end{align}% where $\alpha^{A,i}_{\rm rec}$ is the recombination coefficient of the ion ${i}$ of element $A$, including the radiative recombination rate \citep{Badnell2006} and dielectronic recombination rate \citep[see Table 1 in][]{Gray2015}, $R^{A,i}_{\rm coll}$ is the collisional ionization rate from \citet{Voronov1997}, and $\zeta^{A,i}_{\rm phot}= \int^{\infty}_{\nu_{0,i}} (4 \pi J_{\nu}/h\nu) \sigma_{i, A}(\nu) {\rm d}\nu$ is the photoionization rate of each ion determined from the specified SED field $J_{\nu}$ and the photoionization cross-section $\sigma_{i,A}(\nu)$ \citep{Verner1995,Verner1996}. Non-equilibrium ionization occurs in regions where the radiative cooling timescale $\tau_{\rm cool} \approx 3 k_{\rm B} T_{w}/ (n_{\rm e} \Lambda)$ is shorter than the collisional ionization timescale $\tau_{\rm CIE} \approx 1 / (n_{\rm e}\alpha^{A,i}_{\rm rec} + n_{\rm e}R^{A,i}_{\rm coll})$, where $\Lambda$ is the total radiative cooling efficiency.
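As a toy illustration of the structure of the ionization balance equation (not \maihem's solver or atomic data; all rate coefficients below are assumed round numbers), a three-stage element can be advanced with an explicit update whose pairwise source/sink terms conserve the total element density:

```python
import numpy as np

# Toy three-stage network; rates are assumed placeholders, not MAIHEM data
n = np.array([1.0, 0.0, 0.0])        # ion stage densities, cm^-3
n_e = 1.0                            # electron density, held fixed here
r_coll = np.array([1e-9, 1e-10])     # collisional ionization i -> i+1, cm^3/s
alpha_rec = np.array([1e-12, 1e-11]) # recombination i+1 -> i, cm^3/s
zeta = np.array([1e-10, 1e-12])      # photoionization i -> i+1, 1/s

dt, n_steps = 1.0e6, 1000            # s; dt * rates << 1 for stability
for _ in range(n_steps):
    ionize = n_e * n[:-1] * r_coll + n[:-1] * zeta   # out of stages 0, 1
    recomb = n_e * n[1:] * alpha_rec                 # out of stages 1, 2
    dn = np.zeros_like(n)
    dn[:-1] -= ionize
    dn[1:] += ionize
    dn[1:] -= recomb
    dn[:-1] += recomb
    n += dt * dn

# Every term appears once as a source and once as a sink, so the
# total element density is conserved to rounding error
print(abs(n.sum() - 1.0) < 1e-8)
```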
In the expanding wind region, where the density is low ($\lesssim 1$\,cm$^{-3}$), the condition $\tau_{\rm cool} < \tau_{\rm CIE}$ is met for \ionic{C}{iv} and \ionic{O}{vi} at temperatures below $10^{6}$\,K. Thus, these ions may be in NEI states because of radiative cooling, while most ions still remain in CIE ($\tau_{\rm cool} \gg \tau_{\rm CIE}$). Figure~\ref{fig:cool:time} shows the radiative cooling timescale and the collisional ionization timescales of different C ions (top panel) and O ions (bottom) plotted against the electron temperature. We calculated the radiative cooling timescale using the total cooling efficiency from \citet[][]{Gnat2012}, and the CIE timescales using the atomic data for recombination and collisional ionization. The figure compares the timescales for $n_{\rm e} =1$\,cm$^{-3}$ and the solar composition. It can be seen that the ionization timescales of \ionic{C}{v}, \ionic{C}{iv}, and \ionic{O}{vi} are longer than the radiative cooling timescale at temperatures below $10^6$\,K, where these ions are in NEI conditions. To build the NPI models, the density and temperature structures of the outflow extracted from our \maihem simulations are employed to calculate emissivities by performing \cloudy runs for all individual zones. The NEI states computed by \maihem with SED are also supplied as inputs to \cloudy. The ionizing SED and luminosity produced by Starburst99 are provided in our \cloudy model to include combined photoionization and non-equilibrium ionization, decreasing the luminosity with distance from the SSC as $r^{-2}$. Following Paper~II, the results predicted by the CPI model are applied to the ambient medium for the ionized, isothermal part of the shell starting outward from $\sim 1$--2\,pc after the interior boundary of the shell. The final emissivity profiles of emission lines are built by combining the NEI results of all the outflow zones and the CPI results of the ambient medium.
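The NEI criterion discussed above, $\tau_{\rm cool} < \tau_{\rm CIE}$, can be checked with round numbers; the cooling efficiency and rate coefficients below are assumed illustrative values for a low-density wind near $10^5$\,K, not the tabulated atomic data:

```python
K_B = 1.381e-16  # Boltzmann constant, erg/K

def tau_cool(T, n_e, lam):
    """Radiative cooling timescale, 3 k_B T / (n_e Lambda)."""
    return 3.0 * K_B * T / (n_e * lam)

def tau_cie(n_e, alpha_rec, r_coll):
    """Recombination/collisional-ionization adjustment timescale."""
    return 1.0 / (n_e * alpha_rec + n_e * r_coll)

# Assumed round numbers near T = 1e5 K in a low-density wind
n_e = 1.0                                 # cm^-3
t_c = tau_cool(1.0e5, n_e, 1.0e-21)       # Lambda ~ 1e-21 erg cm^3/s
t_i = tau_cie(n_e, 1.0e-11, 1.0e-12)      # slow rates for a closed-shell ion

# Cooling outpaces the ionization-state adjustment: NEI conditions
print(t_c < t_i)
```

For these inputs both timescales are of order $10^{10}$--$10^{11}$\,s, so modestly slower ionization/recombination rates are enough to leave an ion out of equilibrium as the gas cools.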
The NPI model for the outflow region is therefore created by zone-by-zone \cloudy calculations based on NEI states generated by \maihem that requires an individual \cloudy run for each zone. Typically, there are up to 1024 zones for each simulation. As described in Paper II, we generate the \cloudy models for the fiducial, $Z/$Z$_{\odot}=1$ model using ISM abundances from \citet{Savage1977} and O/H from \citet{Meyer1998}, along with their depletion factors, and $Z$-parameterized C/O ratio according to the metallicity--C/O correlations from \citet{Garnett1999}. The assumed baseline abundances of elements heavier than helium are also scaled down by \cloudy for $Z/$Z$_{\odot}=0.5$, $0.25$, and $0.125$. In our \cloudy models, we also incorporate typical ISM dust grains with $M_{\rm d}/M_{\rm Z}=0.2$ found by \citet{DeVis2019} for typical galaxies. Our \cloudy NPI models similarly use an SED generated by Starburst99 \citep{Levesque2012,Leitherer2014} for a fixed SSC mass of $2.05\times10^6$\,M$_{\odot}$ at all metallicities and age of 1\,Myr using Geneva population grids with stellar rotation \citep{Ekstroem2012,Georgy2012,Georgy2013}, Pauldrach/Hillier atmosphere models \citep{Hillier1998,Pauldrach2001}, and an initial mass function (IMF) with the Salpeter slope $\alpha= 2.35$ for stellar mass range 0.5--150 M$_{\odot}$. Details of our Starburst99 settings can be found in Paper II, together with the model outputs, including the predicted ionizing luminosity ($L_{\rm ion}$) required for our NPI calculations. \section{Emission Line Predictions} \label{cooling:cloudy:results} In this section, we describe the volume emissivities of emission lines calculated by \cloudy for the NPI (non-equilibrium photoionization) models, which are used to produce luminosities. Previously, we provided the results for the PI (pure photoionization) and CPI (collisional ionization plus photoionization) models in Paper II. 
For comparison with the CPI models (Paper II), Figure~\ref{fig:NEI:emissivity} presents the emissivities produced for different emission lines as a function of radius for both the CPI and NPI models (top-right panels). The CPI model also supplies the NPI model with the emissivity profiles of the ambient medium that were also calculated using the ambient temperature structure from the PI model as described in Paper~II. The emissivities for the outflow region of the NPI model are calculated using the SED ionizing source, hydrodynamic NEI states, and hydrodynamic physical conditions ($T_{w}$ and $n_{w}$). Figure~\ref{fig:NEI:emissivity} also shows the ionic fractions for both the CPI and NPI models (bottom-right panels). Temperature and density profiles are also presented in Figure~\ref{fig:NEI:emissivity} (left panels). In Paper II, we classified our models as optically thin if their ambient media beyond the shell are ionized (H$^+$) in the associated PI models. For the ambient media in the optically thick NPI models, we use the volume emissivity profile from the CPI \cloudy models up to $\sim 2$ pc after the shell inner boundary, where the temperature profile in our hydrodynamic simulations starts to be isothermal. The luminosity $L_{\lambda}$ of each emission line at wavelength $\lambda$ is determined by taking the integrals of the volume emissivity $\epsilon_{\lambda} (r)$ as follows (see Appendix~A in Paper II): \begin{equation} L_{\lambda} = \int^{2 \pi}_{\varphi=0} \int^{R_{\rm aper}}_{R=0} \left[ 2 \int^{R_{\rm max}}_{r=R} \frac{\epsilon_{\lambda} (r)}{ \sqrt{r^2 - R^2} } r dr \right] R d R d\varphi, \label{eq:20}% \end{equation} where $r$ is the radial distance of the volume emissivity from the center, $R$ the projected radius from the center, $R_{\rm aper}$ the boundary radius of the circular aperture used for the luminosity integration, and $R_{\rm max}$ the maximum radius of the boundary in the line of sight. 
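A minimal numerical sketch of the aperture integral above: substituting $s=\sqrt{r^2-R^2}$ removes the integrable singularity at $r=R$, and a uniform-emissivity test recovers $\epsilon_{0}\,\tfrac{4}{3}\pi R^{3}$. The function below is an assumed illustration, not the pipeline used in the paper:

```python
import numpy as np

def trapz(y, x):
    """Trapezoidal rule (kept local to avoid NumPy version differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def line_luminosity(eps, r_max, r_aper, n=400):
    """Luminosity of a spherically symmetric emissivity eps(r) within a
    circular aperture of radius r_aper and maximum line-of-sight radius
    r_max, using the substitution s = sqrt(r^2 - R^2)."""
    big_r = np.linspace(0.0, r_aper, n)
    los = np.zeros(n)
    for j, R in enumerate(big_r):
        s = np.linspace(0.0, np.sqrt(max(r_max**2 - R**2, 0.0)), n)
        r = np.sqrt(s**2 + R**2)
        los[j] = 2.0 * trapz(eps(r), s)      # line-of-sight integral
    return 2.0 * np.pi * trapz(los * big_r, big_r)

# Uniform-emissivity sanity check: L = eps0 * (4/3) * pi * R_str^3
eps0, r_str = 1.0e-20, 3.086e18              # erg s^-1 cm^-3; 1 pc in cm
L = line_luminosity(lambda r: np.full_like(r, eps0), r_str, r_str)
L_exact = eps0 * (4.0 / 3.0) * np.pi * r_str**3
print(abs(L / L_exact - 1.0) < 0.01)
```

Setting `r_aper` and `r_max` to the Str\"{o}mgren or shell radii reproduces the radiation-bounded and density-bounded configurations described next.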
We set $R_{\rm aper}=R_{\rm max}=R_{\rm Str}$ for radiation-bounded models, and $R_{\rm aper}=R_{\rm max}=R_{\rm shell}$ for density-bounded models, where $R_{\rm shell}$ is the exterior radius of the shell and $R_{\rm Str}$ is the Str\"{o}mgren radius. As in Paper~II, we also generate partially density-bounded models that are radiation-bounded in the line of sight, but for a density-bounded aperture: $R_{\rm aper}=R_{\rm shell}$ and $R_{\rm max}=R_{\rm Str}$. The emission-line luminosities derived for radiation-bounded, partially density-bounded, and density-bounded NPI models are listed in Table~\ref{tab:cloudy:output}. The tables for all the NPI model grids are provided in the machine-readable format in Appendix~\ref{appendix:a}. \subsection{UV Diagnostic Diagrams} \label{cooling:opt:diagnostics} We now generate the UV diagnostic diagrams using the emission line luminosities obtained from the emissivities predicted by our NPI models, which include various wind modes such as CC and CB. They cover the parameter space of metallicity $Z$, mass-loading $\dot{M}_{\rm sc}$, wind velocity $V_{\infty}$, and ambient density $n_{\rm amb}$. UV diagnostic diagrams have been used to identify star-forming and active galaxies \citep{Feltre2016,Gutkin2016,Hirschmann2019}. Figures~\ref{fig:uv:diag:radi} and \ref{fig:uv:diag:pden} show \ionf{O}{iii}] $\lambda\lambda$1661,1666/\heii $\lambda$1640 versus \ionic{C}{iv} $\lambda\lambda$1548,1551/\ionf{C}{iii}] $\lambda\lambda$1907,1909 and \ionic{C}{iv} $\lambda\lambda$1548,1551/\heii $\lambda$1640, for our radiation-bounded and partially density-bounded models, respectively. As seen in Figure~\ref{fig:uv:diag:radi}, the \ionf{O}{iii}]/\heii and \ionic{C}{iv}/\ionf{C}{iii}] ratios decrease with an increase in the metallicity. However, the radiation-bounded models with $Z/$Z$_{\odot}=1$ that experience strong radiative cooling produce higher values of the \ionic{C}{iv}/\ionf{C}{iii}] ratio.
Moreover, we notice some slight enhancements in the \ionic{C}{iv}/\ionf{C}{iii}] ratio in the models with $Z/$Z$_{\odot}=0.5$ and $\dot{M}_{\rm sc}$\,$\geqslant $\,$10^{-2} \hat{Z}^{0.72}$\,M$_{\odot}/$yr, where strong radiative cooling occurs. Cooling produces strong \ionic{C}{iv} emission within the free-expanding wind region, displacing some models relative to adiabatic ones. For the models with lower mass-loading rates ($\dot{M}_{\rm sc}$\,$\leqslant $\,$10^{-3} \hat{Z}^{0.72}$\,M$_{\odot}/$yr) and lower metallicity ($Z/$Z$_{\odot}\leqslant0.5$), radiative cooling is not strong enough to generate significant \ionic{C}{iv} emission. In the partially density-bounded models having radiatively cooled winds (Figure~\ref{fig:uv:diag:pden}), enhanced \ionic{C}{iv} emission is more pronounced, since these models are more strongly weighted toward emission produced in the free wind and the shell. On the other hand, \ionf{O}{iii}] and \ionf{C}{iii}] are less sensitive to strong radiative cooling, so they do not demonstrate any significant departures from the adiabatic wind models. Similarly, we present diagnostic diagrams for \ovi$\lambda\lambda$1032,1038/\heii $\lambda$1640 versus \civ $\lambda\lambda$1548,1551/\heii $\lambda$1640 in Figures~\ref{fig:uv3:diag:radi} and \ref{fig:uv3:diag:pden} for radiation-bounded and partially density-bounded models, respectively, which may be compared to those in Paper~II (CPI results plotted by lightly shaded colors). The highly ionized \ovi $\lambda\lambda$1032,1038 emission has been suggested as a diagnostic tool for catastrophic cooling winds \citep{Gray2019}. At first glance, it seems that the \ionic{C}{iv} $\lambda\lambda$1548,1551 emission rather than \ovi $\lambda\lambda$1032,1038 is significantly enhanced in the models with catastrophic cooling.
However, we also see in Figure~\ref{fig:uv3:diag:radi} that strong radiative cooling considerably increases the \ovi emission in the models with $\dot{M}_{\rm sc}$\,$= $\,$10^{-2}$ and $10^{-3} \times \hat{Z}^{0.72}$\,M$_{\odot}/$yr at $Z/$Z$_{\odot}= 0.5$ and $0.25$, while the \civ emission from the radiatively cooled winds is slightly increased in the models with $\dot{M}_{\rm sc}$\,$= $\,$10^{-2}\hat{Z}^{0.72}$\,M$_{\odot}/$yr, remaining similar to that from the adiabatic models for $\dot{M}_{\rm sc}$\,$= $\,$10^{-3}\hat{Z}^{0.72}$\,M$_{\odot}/$yr. While the \civ/\heii emission-line ratio expected from the NPI models may be employed to distinguish between radiatively cooling and adiabatic winds in metal-rich ($Z/$Z$_{\odot}\gtrsim 0.5$) regions, the non-equilibrium photoionized \ovi/\heii emission-line ratio could be used as a diagnostic of radiative cooling in metal-poor ($Z/$Z$_{\odot}\lesssim 0.5$) \hii\ regions typical of starburst galaxies. Previously, we also saw enhancement of radiation-bounded \ovi emission predicted by the CPI models for radiatively cooled outflows with mass-loading rates $\geqslant 10^{-2}$\,M$_{\odot}/$yr in metal-poor regions $Z/$Z$_{\odot}\leqslant0.5$ (Figure\,11 in Paper~II). The enhanced \ovi/\heii emission-line ratios produced by non-equilibrium photoionization processes in metal-poor \hii\ regions are similar to those seen in combined collisional ionization and photoionization processes, whereas the substantially enhanced \civ/\heii ratios in the NPI models are not seen in the CPI models presented in Paper II. As seen in Figure~\ref{fig:NEI:emissivity}, the \ovi\ emission is mostly generated in the central part of the free wind close to the SSC and in the hot bubble region in our NPI models.
The \ovi\ emissivity profile for NPI is different from that for the CPI model (see Figure~\ref{fig:NEI:emissivity}), while the enhanced \civ\ emissivity profile in NPI looks roughly similar to the CPI model, except for some sharp peaks in \ovi\ and \civ\ in the CPI model in the interface region between the bubble (CIE) and the shell (PIE). It can be seen in Figures~\ref{fig:uv3:diag:radi} and \ref{fig:uv3:diag:pden} that the \ovi\ emission from the adiabatic winds rises with increasing metallicity and mass deposition. This trend in the adiabatic winds of the NPI models, which is similar to that of the CPI models in Paper~II, is more noticeable in the partially density-bounded models in Figure~\ref{fig:uv3:diag:pden}. The enhanced \ovi\ emission at $Z/$Z$_{\odot}<1$ agrees with Paper~I, which suggested that \ovi\ could be enhanced in radiatively cooled winds, although not significantly for $Z/$Z$_{\odot}= 1$. However, the \civ\ emission appears to be a useful diagnostic of radiative cooling at $Z/$Z$_{\odot}= 1$. Thus, to identify strong radiative cooling, it is necessary to know the metallicity. Moreover, information on the ambient density and mass loading further helps to diagnose radiatively cooling superwinds. Figures~\ref{fig:uv:diag:radi}--\ref{fig:uv3:diag:pden} also include optically thick NPI models. In particular, the enhanced \ovi\ and \civ\ emission is also seen in some of the optically thick NPI models with strong radiative cooling. While \civ\ emission did not increase much in the CPI model in Paper II, substantially enhanced \civ\ emission is seen in the radiatively cooling wind models with $Z/$Z$_{\odot}= 1$, as well as those with $Z/$Z$_{\odot}= 0.5$ and $\dot{M}_{\rm sc} \geqslant 10^{-2}$\,M$_{\odot}/$yr. We note that our models do not account for radiative transfer, in particular, absorption and reemission by dust and other species.
We incorporate metal depletion into our hydrodynamic simulations and include dust grains and depletion in our \cloudy calculations using the NEI states and physical conditions produced by \maihem. Including dust grains as separate species with distinctive equations of state in \maihem hydrodynamic simulations would contribute to significant changes in the UV diagnostic diagrams, since dust grains can affect thermal structures and absorb radiation fields. Moreover, NEI cooling rates can also be affected by dust grains \citep[][]{Richings2014}. Currently, we assume typical ISM dust grains with $M_{\rm d}/M_{\rm Z}=0.2$, as found in typical galaxies \citep{DeVis2019}. \vfill\break \section{Applications for Observations} \label{cooling:observations} Our NPI models predict that radiatively cooling outflows could contribute to the higher excitation seen in extreme starbursts and GPs. In Figures~\ref{fig:uv:diag:radi}--\ref{fig:uv:diag:pden}, we plot UV observations of nearby and distant starburst galaxies \citep{Richard2011,Masters2014,Berg2016,Amorin2017,Senchyna2017}. Their physical properties are roughly similar to those considered for our models. The nearby dwarf galaxies analyzed by \citet{Berg2016} and \citet{Senchyna2017} have oxygen abundances of $12+\log($O/H$)\approx 7.6$ and $\lesssim 8.3$, respectively; and the more distant star-forming galaxies studied by \citet{Amorin2017} have a mean oxygen abundance of $12+\log($O/H$) \approx 7.6$. In particular, our models show that \ionic{O}{vi} $\lambda\lambda$1032,1038 emission is produced in radiatively cooling outflows, as suggested by \citet{Heckman2001}, \citet{Otte2003}, \citet{Hayes2016}, and \citet{Li2017}. 
As seen in Figures~\ref{fig:uv3:diag:radi}--\ref{fig:uv3:diag:pden}, particularly in the metal-poor ($Z/$Z$_{\odot}\leqslant 0.5$) models, the \ionic{O}{vi} $\lambda\lambda$1032,1038 emission lines predicted by the radiative cooling wind models are higher than those expected from the adiabatic wind models with the same metallicity and mass loading. Strong \ionic{O}{vi} emission is indeed sometimes seen in young starbursts. For example, \citet{Marques-Chaves2021} find clear P-Cygni emission in \ionic{O}{vi}, \ionic{Si}{iv}, and \ionic{C}{iv}, in a luminous LyC emitter at $z=3.2$. The strength of these features is hard to explain with a simple starburst stellar population. While the broad line profiles must be dominated by stars, it may be possible that a component from radiatively cooling outflows could contribute. Similarly, \ionic{O}{vi} $\lambda\lambda$1032,1038 emission lines with P Cygni profiles were also identified in Haro\,11 \citep{Bergvall2006,Grimes2007} and other starbursting blue compact galaxies \citep{Izotov2018a}. \ionic{O}{vi} emission was detected by \citet{Otte2003} toward a soft X-ray bubble in the star-forming galaxy NGC\,4631 that could be associated with cooling galactic outflows ($v_{\rm w} \sim 50$--100\,km\,s$^{-1}$). More recently, \ionic{O}{vi} imaging of the starburst SDSS J1156$+$5008 by \citet{Hayes2016} shows an extended halo, and its spectrum also contains \ionic{O}{vi} absorption outflowing with an average velocity of $380$\,km\,s$^{-1}$. Observations of the dwarf starburst galaxy NGC\,1705 studied by \citet{Heckman2001} revealed a low-speed outflow ($v_{\rm w} = 77$\,km\,s$^{-1}$) in \ionic{O}{vi} absorption, surrounded by a $\sim 10^4$\,K medium arising in a conductive interface between the hot superbubble and cool outer shell, which cannot be explained by a simple adiabatic superbubble model. The physical properties of these starburst galaxies are in the parameter ranges used by our models. 
Our models with catastrophically cooling winds at $Z/$Z$_{\odot}\geqslant 0.5$ also demonstrate prominent \ionic{C}{iv} emission lines relative to adiabatic wind models. Interestingly, \citet{Senchyna2017} found strong \civ\ emission in metal-poor star-forming regions associated with minimal stellar wind features and $12+\log ($O/H$)\lesssim 8.3$ ($Z/$Z$_{\odot} \lesssim 0.6$). Similarly, \citet{Berg2019a} detected intense \ionic{C}{iv} $\lambda\lambda$1548,1551 emission lines from two extreme UV emission-line galaxies demonstrating little-to-no outflows at $12+\log ($O/H$) \approx 7.5$ ($Z/$Z$_{\odot} \sim 0.1$), as well as from other nearby metal-poor high-ionization dwarf galaxies \citep{Berg2019b}. Our non-equilibrium photoionization models predict that such intense \civ emission could be prevalent in radiatively cooling winds with $Z/$Z$_{\odot}\gtrsim 0.5$, but if these objects have C-enhanced abundances, radiatively cooling \ionic{C}{iv} could contribute to the unusually strong emission at lower metallicities. With constraints on metallicity and mass loading, it becomes possible to distinguish between adiabatic and radiatively cooling outflows. \section{Summary and Conclusions} \label{cooling:conclusion} We have presented here our grid of non-equilibrium photoionization (NPI) models constructed using non-equilibrium ionization (NEI) states and physical conditions predicted by our hydrodynamic simulations previously presented in Paper II. We use the same ionizing SED associated with a total stellar mass of $2.05\times10^6$\,M$_{\odot}$, and the same parameter space of the metallicity ($\hat{Z} \equiv Z/$Z$_{\odot} = 1$, $0.5$, $0.25$, $0.125$), mass-loading rate ($\dot{M}_{\rm sc}$\,$=$\,$10^{-1},\ldots,10^{-4} \times \hat{Z}^{0.72} $\,M$_{\odot}\,$yr$^{-1}$), wind velocity ($V_{\infty}$\,$=$\,$250,500,1000 \times \hat{Z}^{0.13} $\,km\,s$^{-1}$), and ambient density ($n_{\rm amb}$\,$=$\,$1,\ldots,10^3$\,cm$^{-3}$) employed in Paper II. 
The non-equilibrium photoionization modeling is carried out using the same non-CIE approach implemented by \citet{Gray2019a}. Previously, we identified the parameter space (in $\hat{Z}$, $\dot{M}_{\rm sc}$, $V_{\infty}$, $n_{\rm amb}$) associated with strong radiative cooling, and classified them under different wind modes based on departures from the adiabatic solutions and the presence/absence of the hot bubble within 1\,Myr \citep[see \S\,4 in][]{Danehkar2021}. We found that low heating efficiency and high mass deposition are linked to strong radiative cooling effects, while the presence of a hot bubble is not a reliable indicator of either adiabatic or radiatively cooling outflows. We also employed physical conditions to predict emission lines for combined collisional ionization and photoionization (CPI) processes without considering non-equilibrium ionization conditions. In this paper, we utilize NEI states and physical properties generated by time-dependent non-equilibrium processes to calculate volume emissivities of UV and optical emission lines with \cloudy, and their luminosities for radiation-bounded and partially density-bounded models. We use the predicted line luminosities of UV lines studied in Paper~II to compile diagnostic diagrams for comparison with observations of star-forming regions. The radiation-bounded UV emission-line ratios predicted by our NPI models are generally located in our diagnostic diagrams where both nearby and distant starburst galaxies with the modeled physical properties are observed (see Figures~\ref{fig:uv:diag:radi}--\ref{fig:uv:diag:pden}), similar to what is seen for the CPI models in Paper II. However, we see some noticeable differences between the emission-line ratios produced by our NPI models and those by the CPI models that are due to NEI conditions. 
Our NPI models demonstrate that radiative cooling strongly enhances \ionic{C}{iv} emission in metal-rich ($Z/$Z$_{\odot}\geqslant 0.5$) regions and also increases \ovi\ emission in metal-poor ($Z/$Z$_{\odot}\leqslant 0.5$) regions that are typical of starbursts. However, some constraints on the metallicity and mass loading are necessary in order to diagnose superwinds with strong radiative cooling. The enhanced \ionic{C}{iv} $\lambda\lambda$1548,1551 and \ovi\ $\lambda\lambda$1032,1038 in catastrophically cooling outflows were previously suggested based on non-equilibrium ionization models by \citet{Gray2019a} and \citet{Gray2019}. Moreover, hydrodynamic simulations of starburst-driven galactic outflows by \citet{Cottle2018} could not adequately produce the observed \ionic{O}{vi}, implying a possible role for non-equilibrium ionization processes. Previously, time-dependent calculations of the NEI states by \citet{deAvillez2012} showed the production of \ovi\ in the thermally stable ($\sim 10^{\textrm{3.9--4.2}}$\,K) and unstable ($\sim 10^{\textrm{4.2--5}}$\,K) regimes, at temperatures below that required for \ovi\ in collisional ionization. Our photoionization calculations done with NEI states demonstrate the feasibility of intense \ovi\ produced by radiative cooling in metal-poor environments. The occurrence of catastrophic cooling and the formation of \ionic{C}{iv} and \ovi\ emission should also be investigated under different physical conditions such as compact and ultra-compact \hii\ regions, where cluster dimensions are much smaller than our typical SSC radius (1\,pc) and stellar masses are lower than our SSC mass. It is also necessary to explore different dust properties that are typically seen in distant star-forming galaxies. Future studies on the wide range of physical parameters will help us broaden our understanding of radiatively cooling outflows and their associated emission lines. 
\begin{acknowledgments} We are grateful to Richard W\"{u}nsch for careful review of the manuscript. The hydrodynamics code \flash used in this work was developed in part by the DOE NNSA ASC- and DOE Office of Science ASCR-supported Flash Center for Computational Science at the University of Chicago. Analysis and visualization of the \flash simulation data were performed using the yt analysis package \citep{Turk2011}. \end{acknowledgments} \begin{appendix} \section{Supplementary Material} \label{appendix:a} The interactive figure (192 images) of Figure~\ref{fig:NEI:emissivity} is available in the electronic edition of this article, and is archived on Zenodo (doi:\href{https://doi.org/10.5281/zenodo.6601127}{10.5281/zenodo.6601127}). This interactive figure is also hosted at: \url{https://galacticwinds.github.io/superwinds}. The 3 machine-readable tables with the emission-line data (including Table~\ref{tab:cloudy:output}) are available in the electronic edition of this article. Each file is named \verb|table_case_bound.dat|, such as \verb|table_NPI_radi.dat|, where \verb|case| is for the ionization case (\verb|NPI|: non-equilibrium photoionization), and \verb|bound| for the optical depth model (\verb|radi|: fully radiation-bounded, \verb|pden|: partially density-bounded, and \verb|dens|: fully density-bounded). Each file contains the following information: \begin{table} \begin{center} \caption[]{Emission line luminosities calculated by the NPI models on a logarithmic scale (unit in erg/s) with different optical depth configurations (see Appendix~\ref{appendix:a} for more information). 
\label{tab:cloudy:output}} \scriptsize \begin{tabular}{l|c|c|c} \hline\hline\noalign{\smallskip} \multicolumn{1}{l|}{Emission Line} & \multicolumn{1}{c|}{radiation-bound} & \multicolumn{1}{c|}{part.density-bound} & \multicolumn{1}{c}{density-bound} \\ \noalign{\smallskip} \tableline \noalign{\smallskip} Ly$\alpha$ $\lambda$1216 & 40.561 & 40.151 & 40.025 \\ H$\alpha$ $\lambda$6563 & 40.791 & 40.463 & 40.186 \\ H$\beta$ $\lambda$4861 & 40.352 & 40.024 & 39.748 \\ \ionic{He}{i} $\lambda$5876 & 39.443 & 39.118 & 38.843 \\ \ionic{He}{i} $\lambda$6678 & 38.876 & 38.550 & 38.275 \\ \ionic{He}{i} $\lambda$7065 & 39.121 & 38.818 & 38.532 \\ \ionic{He}{ii} $\lambda$1640 & 38.767 & 38.685 & 38.588 \\ \ionic{He}{ii} $\lambda$4686 & 36.311 & 36.238 & 36.153 \\ \ionic{C}{ii} $\lambda$1335 & 38.402 & 37.941 & 37.546 \\ \ionic{C}{ii} $\lambda$2326 & 38.177 & 37.360 & 35.930 \\ \ionic{C}{iii} $\lambda$977 & 38.574 & 38.347 & 38.117 \\ \ionf{C}{iii}] $\lambda$1909 & 39.878 & 39.261 & 38.669 \\ \ionic{C}{iii} $\lambda$1549 & 36.902 & 36.679 & 36.451 \\ \ionic{C}{iv} $\lambda$1549 & 39.529 & 39.290 & 39.177 \\ {[\ionf{N}{i}]} $\lambda$5200 & 36.092 & 35.172 & 31.005 \\ {[\ionf{N}{ii}]} $\lambda$5755 & 37.051 & 36.207 & 34.730 \\ {[\ionf{N}{ii}]} $\lambda$6548 & 38.179 & 37.337 & 35.992 \\ {[\ionf{N}{ii}]} $\lambda$6583 & 38.649 & 37.806 & 36.462 \\ \ionic{N}{iii} $\lambda$1750 & 39.343 & 38.679 & 37.966 \\ \ionic{N}{iii} $\lambda$991 & 39.062 & 38.820 & 38.584 \\ \ionic{N}{iv} $\lambda$1486 & 39.173 & 38.774 & 38.453 \\ \ionic{N}{v} $\lambda$1240 & 39.593 & 39.593 & 39.593 \\ \ionic{O}{i} $\lambda$1304 & 38.011 & 37.092 & 33.792 \\ {[\ionf{O}{i}]} $\lambda$6300 & 36.234 & 35.323 & 31.558 \\ {[\ionf{O}{i}]} $\lambda$6364 & 35.739 & 34.827 & 31.063 \\ {[\ionf{O}{ii}]} $\lambda$3726 & 38.609 & 37.842 & 36.955 \\ {[\ionf{O}{ii}]} $\lambda$3729 & 38.753 & 37.981 & 37.055 \\ {[\ionf{O}{ii}]} $\lambda$7323 & 37.364 & 36.815 & 36.385 \\ {[\ionf{O}{ii}]} $\lambda$7332 & 37.281 & 36.729 & 
36.296 \\ {\ionf{O}{iii}]} $\lambda$1661 & 38.791 & 38.244 & 37.778 \\ {\ionf{O}{iii}]} $\lambda$1666 & 39.262 & 38.715 & 38.250 \\ {[\ionf{O}{iii}]} $\lambda$2321 & 38.549 & 38.066 & 37.670 \\ {[\ionf{O}{iii}]} $\lambda$4363 & 39.148 & 38.665 & 38.270 \\ {[\ionf{O}{iii}]} $\lambda$4959 & 40.568 & 40.165 & 39.835 \\ {[\ionf{O}{iii}]} $\lambda$5007 & 41.043 & 40.640 & 40.310 \\ \noalign{\medskip} \multicolumn{4}{c}{ \ldots }\\ \noalign{\medskip} \noalign{\medskip} \hline \end{tabular} \end{center} \begin{list}{}{} \footnotesize \item[\textbf{Note:}]Table \ref{tab:cloudy:output} is published in its entirety in the machine-readable format. A portion is shown here for guidance regarding its form and content. \item[]Model parameters for this example are as follows: metallicity $Z/$Z$_{\odot}=0.5$, mass-loading rate $\dot{M}_{\rm sc} = 0.607 \times 10^{-2}$ M$_{\odot}$\,yr$^{-1}$, actual wind velocity $V_{\infty}= 457$ km\,s$^{-1}$, SSC radius $R_{\rm sc} = 1$ pc, stellar mass $M_{\star}= 2.05 \times 10^6$\,M$_{\odot}$, age $t=1$ Myr, ambient density $n_{\rm amb} = 100$ cm$^{-3}$, and ambient temperature $T_{\rm amb}$ estimated by \cloudy. \end{list} \end{table} \begin{itemize} \item[--] \verb|metal|: metallicity $\hat{Z} \equiv Z/$Z$_{\odot} = 1$, $0.5$, $0.25$, $0.125$. \item[--] \verb|dMdt|: mass-loading rate $\dot{M}_{\rm sc} = 10^{-1}$, $10^{-2}$, $10^{-3}$, $10^{-4} \times \hat{Z}^{0.72}$ M$_{\odot}$\,yr$^{-1}$. \item[--] \verb|Vinf|: wind terminal velocity $V_{\infty}= 250$, $500$, $1000 \times \hat{Z}^{0.13}$ km\,s$^{-1}$. \item[--] \verb|Rsc|: SSC radius $R_{\rm sc} = 1$ pc. \item[--] \verb|age|: current age $t = 1$ Myr. \item[--] \verb|Mstar|: total stellar mass $M_{\star}= 2.05 \times 10^6$\,M$_{\odot}$. \item[--] \verb|logLion|: ionizing luminosity $\log L_{\rm ion}$ (erg/s). \item[--] \verb|Namb|: ambient density $n_{\rm amb} = 1$, 10, $10^2$, $10^3$ cm$^{-3}$. \item[--] \verb|Tamb|: mean ambient temperature $T_{\rm amb}$ from \cloudy. 
\item[--] \verb|Rmax|: maximum radius $R_{\rm max}$ (pc) for the surface brightness integration. \item[--] \verb|Raper|: aperture radius $R_{\rm aper}$ (pc) for the luminosity integration. \item[--] \verb|Rshell|: shell exterior radius $R_{\rm shell}$ (pc). \item[--] \verb|Rstr|: Str\"{o}mgren radius $R_{\rm str}$ (pc) from \cloudy. \item[--] \verb|Rbin|: bubble interior radius $R_{\rm b,in}$ (pc). \item[--] \verb|Rbout|: bubble exterior radius $R_{\rm b,out}$ (pc) or shell interior radius. \item[--] \verb|Tbubble|: median temperature $T_{\rm bubble}$ of the bubble. \item[--] \verb|Tadi|: median adiabatic temperature $T_{\rm adi,med}$ of the expanding wind. \item[--] \verb|Twind|: median radiative temperature $T_{\rm w,med}$ of the expanding wind. \item[--] \verb|logUsp|: logarithmic ionization parameter $\log U_{\rm sph}$ in a spherical geometry from \cloudy. \item[--] \verb|thin|: optically thin (1) or thick (0) model. \item[--] \verb|mode|: the cooling/heating radiative/adiabatic modes: 1 (AW: adiabatic wind), 2 (AB: adiabatic bubble), 3 (AP: adiabatic, pressure-confined), 4 (CC: catastrophic cooling), 5 (CB: catastrophic cooling bubble), and 6 (CP: catastrophic cooling, pressure-confined), described in detail by \citet[][]{Danehkar2021}. \item[--] \verb|H_1_1216|, \verb|H_1_6563|, \ldots , \verb|Ar_5_7006|: luminosities of the emission lines Ly$\alpha$ $\lambda$1216 {\AA}, H$\alpha$ $\lambda$6563 {\AA}, \ldots , [\ionf{Ar}{v}] $\lambda$7006 {\AA}, respectively. \end{itemize} \vspace{1mm} \software{\flash \citep{Fryxell2000}, yt \citep{Turk2011}, \cloudy \citep{Ferland2017}, Starburst99 \citep{Leitherer2014}, NumPy \citep{Harris2020}, SciPy \citep{Virtanen2020}, Matplotlib \citep{Hunter2007}.} \end{appendix} { \small \begin{center} \textbf{ORCID iDs} \end{center} \vspace{-5pt} \noindent A.~Danehkar \orcidauthor{0000-0003-4552-5997} \url{https://orcid.org/0000-0003-4552-5997} \noindent M. S. 
Oey \orcidauthor{0000-0002-5808-1320} \url{https://orcid.org/0000-0002-5808-1320} \noindent W. J. Gray \orcidauthor{0000-0001-9014-3125} \url{https://orcid.org/0000-0001-9014-3125} }
Title: Late-time accretion in neutron star mergers: implications for short gamma-ray bursts and kilonovae
Abstract: We study the long-term (t >> 10 s) evolution of the accretion disk after a neutron star(NS)-NS or NS-black hole merger, taking into account the radioactive heating by r-process nuclei formed in the first few seconds. We find that the cumulative heating eventually exceeds the disk's binding energy at t ~ 10^2 s (\alpha/0.1)^{-1.8} (M/2.6 Msun)^{1.8} after the merger, where \alpha is the Shakura-Sunyaev viscosity parameter and M is the mass of the remnant object. This causes the disk to evaporate rapidly and the jet power to shut off. We propose that this is the cause of the steep flux decline at the end of the extended emission (EE) or X-ray plateau seen in many short gamma-ray bursts (GRBs). The shallow flux evolution before the steep decline is consistent with a plausible scenario where the jet power scales linearly with the disk mass. We suggest that the jets from NS mergers have two components -- a short-duration narrow one corresponding to the prompt gamma-ray emission and a long-lasting wide component producing the EE. This leads to a prediction that "orphan EE" (without the prompt gamma-rays) may be a promising electromagnetic counterpart for NS mergers observable by future wide-field X-ray surveys. The long-lived disk produces a slow ejecta component that can efficiently thermalize the energy carried by beta-decay electrons up to t ~ 100 d and contributes 10% of the kilonova's bolometric luminosity at these late epochs. We predict that future ground-based and JWST near-IR spectroscopy of nearby (< 100 Mpc) NS mergers will detect narrow (~0.01c) line features a few weeks after the merger, which provides a powerful probe of the atomic species formed in these events.
https://export.arxiv.org/pdf/2208.04293
\label{firstpage} \begin{keywords} gamma-ray bursts: general --- hydrodynamics --- neutron star mergers --- black hole - neutron star mergers --- nuclear reactions, nucleosynthesis, abundances --- gravitational waves --- accretion, accretion discs \end{keywords} \section{Introduction} The overall picture of GW170817/AT2017gfo, as inferred from its gravitational wave (GW) emission, $\gamma$-ray flash, kilonova, and broadband afterglow from a relativistic off-axis jet, confirmed the conjecture that binary neutron star (NS) mergers are a significant contributor to the r-process nucleosynthesis as well as the sources of short gamma-ray bursts \citep[GRBs,][]{1989Natur.340..126E, 1992ApJ...395L..83N, 2017ApJ...848L..13A, 2017ApJ...848L..14G, 2017Sci...358.1556C, 2017Sci...358.1559K, mooley18_VLBI_proper_motion, metzger19_kilonova_review, nakar20_GW170817_review, margutti21_GW170817_ARAA}. During the first few seconds after the merger (roughly corresponding to the prompt $\gamma$-ray emission phase in a short GRB), the accretion disk is efficiently cooled by neutrino emission at a high accretion rate of $0.01$ to $0.1\rm\,\msun\,s^{-1}$ \cite[e.g.][]{1999ApJ...518..356P,2001ApJ...557..949N, 2002ApJ...579..706D, kohri05_NDAF}, and the gas near the disk mid-plane maintains a low electron fraction of $Y_{\rm e}\sim 0.1$ since electrons are mildly degenerate \citep{2007ApJ...657..383C, siegel18_GRMHD_disk}. As a result of viscous evolution, the outer disk expands with time to larger radii. 
When the majority of the disk mass reaches a distance of a few hundred gravitational radii of the central object, where the temperature drops below a few MeV, some dramatic changes occur \citep{2007NJPh....9...17L, metzger08_onezone_model, 2008AIPC.1054...51B, metzger09_Ye_freeze}: (i) the disk is no longer neutrino-cooled and the heat generated by viscous dissipation and nuclear reactions is advected with the fluid motion; (ii) electrons become non-degenerate causing the electron fraction $Y_{\rm e}$ to increase; (iii) photo-disintegration becomes less efficient and free nucleons recombine to form He and heavier elements; (iv) the energy release of a few to $10\,$MeV/nucleon from nuclear recombination is sufficient to unbind a large fraction of the disk mass. Thus, we expect that the disk experiences a state transition with violent outflows, as has recently been shown in the general relativistic magnetohydrodynamic (GRMHD) simulations by \citet{siegel18_GRMHD_disk, fernandez19_long_GRMHD}. The dynamical evolution of the electron fraction $Y_{\rm e}$ is controlled by disk expansion on the viscous timescale $t_{\rm vis}$ and the timescale $t_{\rm weak}$ for pair capture on free nucleons (and hence the $n\leftrightarrow p$ conversion). If $t_{\rm vis}>t_{\rm weak}$, then the local fluid is close to $\beta$-equilibrium as the disk slowly expands and the electron fraction gradually increases to the equilibrium value of $Y_{\rm e}\simeq 0.5$ (no r-process nuclei will be generated in this case). However, simulations generally found $t_{\rm vis}\ll t_{\rm weak}$ at the time when He recombination occurs in the disk and subsequent nucleosynthesis starts, so the disk and outflow composition after the $\Ye$ freeze-out is on average at least mildly neutron-rich with $Y_{\rm e}\lesssim 0.4$ \citep[e.g.,][]{metzger09_Ye_freeze, fernandez13_disk_evolution_Ye}. 
Observationally, the 10 day-long, ``red'' kilonova emission in GW170817 requires a relatively massive (a few percent of $\msun$), neutron-rich ($\Ye\lesssim 0.25$) ejecta component, which is most likely ejected from the accretion disk \citep{shibata17_GW170817_Mej, metzger19_kilonova_review}. Global numerical simulations of the post-merger accretion disk evolution \citep[e.g.,][]{siegel18_GRMHD_disk, 2018ApJ...869L..35R, fernandez19_long_GRMHD, murguia-berthier21_GRMHD_NDAF} only covered a timescale $t\lesssim 10\rm\, s$ and hence have not been able to determine the final fate of the disk at $t\gg 10\rm\,s$. The goal of this paper is to explore this late stage of evolution. Since the material in the remaining disk after nuclear recombination is expected to be enriched with r-process nuclei (as a result of low $\Ye\lesssim 0.4$), heating due to radioactive decay may be dynamically important, for the following reason. The heating rate can be approximated as a power-law $q_{\rm h}\propto t^{-1.3}$ in the time interval of $10 \lesssim t \lesssim 10^4\rm\,s$ \citep[][see also Appendix \ref{sec:heating_rate}]{metzger10_kilonova, roberts11_kilonova_tidal_tail, 2015ApJ...815...82L, barnes16_heating_thermalization, 2017MNRAS.468...91H, 2018A&A...615A.132R}. The cumulative energy deposition from radioactive heating $\propto t q_{\rm h}\propto t^{-0.3}$ will eventually exceed the specific binding energy $\propto GM/r_{\rm d}\propto t^{-2/3}$ (since the disk radius $\rd$ increases as $t^{2/3}$ from Kepler's third law). When this occurs, the disk material becomes unbound and is expected to quickly evaporate. 
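The competition described above can be made quantitative with a rough order-of-magnitude sketch; the heating normalization $q_0$ and the disk aspect ratio $\theta = H/r_{\rm d}$ below are illustrative assumptions, not values taken from this paper:

```python
import math

# Order-of-magnitude sketch: the disk evaporates when the cumulative
# r-process heating, ~ t*q_h with q_h = q0*(t / 1 s)^-1.3, overtakes the
# specific binding energy GM/r_d, with r_d = (GM)^(1/3)*(alpha*theta^2*t)^(2/3)
# following from t_vis ~ t.  q0 and theta are illustrative assumptions.
G, MSUN = 6.674e-8, 1.989e33            # cgs units

def t_evap(alpha=0.1, M=2.6 * MSUN, theta=0.5, q0=1e18):
    """Solve q0*t^-0.3 = (GM)^(2/3) * (alpha*theta^2)^(-2/3) * t^(-2/3).

    q0 [erg/g/s] is an assumed heating rate at t = 1 s; the solution
    scales as alpha^(-20/11) ~ alpha^-1.8 and M^(20/11) ~ M^1.8.
    """
    rhs = (G * M) ** (2 / 3) * (alpha * theta**2) ** (-2 / 3) / q0
    return rhs ** (30 / 11)             # seconds

t_fid = t_evap()                        # ~1e2 s for these fiducial choices
slope = math.log(t_evap(alpha=0.05) / t_fid) / math.log(0.5)   # -> -20/11
```

Equating the two declining power laws in this way reproduces the scaling $t\propto \alpha^{-20/11}\,M^{20/11}$, i.e. the $\alpha^{-1.8}$ and $M^{1.8}$ dependences quoted in the abstract.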
On the observational side, a significant fraction ($\gtrsim 1/4$) of short GRBs show in their $\gamma$/X-ray lightcurves an extended emission (EE) component or a plateau/shallow-decay phase lasting for about $10^2\rm\, s$ followed by a rapid decline steeper than $t^{-3}$ \citep{norris06_sGRB_EE, 2006ApJ...647.1213O, perley09_sGRB_EE, sakamoto11_BAT_catalog, gompertz13_sGRB_EE, 2013MNRAS.430.1061R, kaneko15_sGRB_EE}. Some low-redshift apparently longer-duration GRBs from which a supernova can be ruled out (and hence a NS merger scenario is favored) are dominated by an EE-like episode lasting for about $10^2\rm\, s$, e.g., GRB 060614, GRB 211211A, GRB 211227A \citep{gehrels06_GRB060614, galyam06_GRB060614, lu22_GRB211227A, rastinejad22_GRB211211A, yang22_GRB211211A}. The EE and X-ray plateau require a long-lived central engine over a timescale that is about two orders of magnitude longer than the duration of the prompt $\gamma$-ray emission in short GRBs \citep{berger14_sGRB_review}. We note that such a shallow-decay phase followed by a sharp drop is also found in the X-ray lightcurves of some long GRBs, although the duration is of the order $10^4\rm\, s$ \citep[e.g.,][]{2007ApJ...670..565L, troja07_longGRB_EE, lyons10_longGRB_plateau}. The origin(s) of the EE and the X-ray plateau followed by a sudden flux decline are still debated. Long GRBs are generally believed to be produced by the collapse of rapidly rotating massive stars \citep[collapsars,][]{1993ApJ...405..273W, 1999ApJ...524..262M, 2006ARA&A..44..507W}, and in this situation a long-lasting X-ray emitting jet might be produced by continuous fallback of an extended stellar envelope \citep{kumar08_longGRB_disk_evolution}, although it is unclear whether the energy injection from the disk wind and jet during the prompt emission phase could completely unbind the outer stellar envelope \citep{2010ApJ...713..800L, gottlieb22_MHDjet_cocoon}. 
For short GRBs from NS-NS or NS-BH mergers, it has been proposed that a fraction of the bound gas can fall back on longer timescales and fuel the accretion \citep[e.g.,][]{rosswog07_fallback, lee09_disk_phase_transition, cannizzo11_fallback_disk, gibson17_fallback_accretion}. Along this line of thought, \citet{kisaka15_fallback_power} propose that the magnetic field topology near the black hole (BH) evolves due to interactions with the fallback material in a way such that the \citet{blandford77_BZjet} jet power stays constant with time during the EE phase. However, mechanical feedback from the disk wind/jet \citep{fernandez15_wind_fallback_interaction} as well as radioactive heating by r-process nuclei \citep{metzger10_heating_fallback, desai19_fallback_heating, ishizaki21_fallback_heating} can suppress\footnote{\citet{desai19_fallback_heating} argue that fallback accretion producing EE is likely restricted to NS-BH systems, while in this work we propose that NS-NS mergers could have long-lived accretion disks and hence could power EE. This may be tested by future GW data.} the fallback rate of marginally bound material (those with fallback time longer than $\sim$1 second). More detailed calculations on the hydrodynamic interactions between the marginally bound material and existing accretion disk are needed to reliably test the viability of the fallback accretion model (especially to determine if the model can produce a sharp X-ray flux cutoff at $t\sim 10^2\rm\,s$). Another popular model for the EE and X-ray plateau is based on the dipole spindown power of a millisecond magnetar --- strongly magnetized NS \citep[e.g.,][]{metzger08_magnetar_EE, metzger11_magnetar_GRB}. One scenario to explain the steep flux decline is that the central object is a supramassive NS temporarily supported by rigid-body rotation until $t\sim10^2\rm\, s$ when it collapses into a BH \citep{2014ApJ...785...74L, 2015ApJ...805...89L, 2016PhRvD..93d4065G}. 
In this scenario, there is no physical explanation for why the collapse to a BH due to angular momentum loss by GW and magnetic torques would preferentially occur on a timescale of $10^2\rm\, s$ in most of the observed systems. In fact, if a supramassive NS is indeed formed in a large fraction of NS mergers, a very broad distribution of the collapse time is expected \citep{beniamini21_SMNS_collapse_time}. Within the magnetar framework, another scenario to explain the steep flux decline at the end of the EE/plateau is that the magnetization of the NS wind rapidly increases as the neutrino-driven mass loss rate from the NS drops on a timescale of $\sim 10^2\rm\, s$. A potential issue with any NS-spindown-powered scenario is that the large amount ($\gtrsim10^{52}\rm\, erg$) of energy injection into the surrounding medium would produce exceedingly bright radio emission which is severely constrained by increasingly stringent upper limits \citep[e.g.,][]{shroeder20_radio_upper_limits, bruni21_radio_upper_limits}. Another potential issue is that if the spindown energy is used to generate the EE/plateau, then the prompt emission (which has much higher luminosity and shorter duration) still requires an ultra-relativistic jet driven by hyper-accretion (of $0.01$--$0.1\msun$) onto the NS, which has so far not been numerically demonstrated. Based on the angular momentum transport inside the merger remnant (which is, in most cases, initially a hypermassive NS supported by differential rotation), most NS mergers likely form a BH within the first second \citep{margalit22_remnant_fate}. In this work, we explore an alternative model for the late-time central engine activity in GRBs based on long-lived disk accretion, under the assumption that a fraction of the disk mass survives the violent outflow phase of nuclear recombination. In the most energetically efficient scenario, only a small disk mass ($>10^{-4}\msun$) is needed to power the observed EE and X-ray plateau. 
The more challenging features to explain are the duration of the EE/X-ray plateau and the shallow flux decay before the steep decline. In our model, the jet shuts off when the disk evaporates as a result of the cumulative radioactive heat exceeding the binding energy at late time, and the shallow flux decay is because jet power is linearly proportional to the disk mass (but NOT the accretion rate). A schematic picture of our model is shown in Fig. \ref{fig:sketch}. If this model is correct, the late-time disk evaporation provides slower ejecta ($v\sim 0.01c$) which will produce narrow line features in the kilonova spectra observable at $t\gtrsim 20\rm\, d$. Potential detections of such narrow lines by the \textit{James Webb Space Telescope} (JWST) provide another motivation for this work. This paper is organized as follows. In \S \ref{sec:disk_model}, we calculate the disk evolution under two possible scenarios (\S\ref{sec:without_wind} and \S\ref{sec:with_wind}) that differ in the prescriptions of MHD winds from advection-dominated accretion flows. In \S \ref{sec:jet}, we estimate the jet power based on the \citet{blandford77_BZjet} formalism assuming the remnant is a rapidly spinning BH and compare it with GRB observations. In \S \ref{sec:kilonova}, we estimate the contribution to late-time kilonova emission by the slow ejecta and discuss JWST detectability. We discuss observational constraints on our model in \S \ref{sec:discussion}. A summary is provided in \S \ref{sec:summary}. \section{Model}\label{sec:disk_model} We assume that the merger remnant has collapsed to a black hole (BH) early on (at $t< 1\rm\, s$ after the merger) and that it only affects the outer disk evolution through Newtonian gravity. Our fiducial value for the remnant mass is $M=2.6\msun$, as expected from the mass distribution of the Galactic double NS population after accounting for the mass loss due to GW, neutrinos and baryonic ejecta \citep{farrow19_Galactic_BNS_masses}. 
We approximate the radial distribution of the disk mass as a ``ring'' located at the disk outer radius $\rd$ where the surface density distribution peaks, and calculate the time evolution of the thermodynamic quantities near the mid-plane at radius $\rd$ --- pressure $P$, density $\rho$, temperature $T$, etc. As a result of angular momentum transport by viscous stresses, the disk mass $M_{\rm d}$ and radius $\rd$ evolve as \begin{equation} \label{eq:evolution} \dotMd = -\Md/\tvis,\ \dot{r}_{\rm d} = {2\rd/ \tvis}, \end{equation} where the viscous timescale is given by \begin{equation} \label{eq:tvis} t_{\rm vis} = \rd^2/\nu = (\alpha \theta^2 \OmgK)^{-1}, \end{equation} $\Omega_{\rm K} = \sqrt{GM/\rd^3}$ is the Keplerian angular frequency, $\nu=\alpha c_{\rm s}H$ is the kinematic viscosity \citep{1973A&A....24..337S} most likely due to MHD turbulence \citep{balbus98_MRI_turbulence}, $c_{\rm s} = \sqrt{P/\rho} = H\Omega_{\rm K}$ is the isothermal sound speed, $H$ is the disk scaleheight, and $\theta=H/\rd$ is the dimensionless scaleheight. The mass evolution in eq. (\ref{eq:evolution}) is due to accretion towards smaller radii as well as unbound outflow driven by viscous heating \citep{yuan14_ADAF_review} --- the fractional contributions from these two components are uncertain but our results are qualitatively insensitive to which dominates $\Md/\tvis$. The radius evolution in eq. (\ref{eq:evolution}) is due to angular momentum conservation and the expression roughly holds even when a large fraction of the disk mass loss $\dotMd$ is due to wind, provided that the wind has the same specific angular momentum as that in the disk (i.e., the wind does not provide an additional torque on the remaining gas in the disk). In the case where the wind carries less (more) specific angular momentum, the disk expands faster (slower) than in our model and hence the disk mass and accretion rate drop faster (slower) \citep[see e.g.,][]{kumar08_longGRB_disk_evolution}. 
For given initial conditions $\rd(t=t_0)=r_{\rm d,0}$ and $\Md(t=t_0)=M_{\rm d,0}$, one can analytically integrate eq. (\ref{eq:evolution}) using the viscous time in eq. (\ref{eq:tvis}), under the assumption that the dimensionless disk height $\theta$ stays constant. The well-known result is \citep[see e.g.,][]{metzger08_onezone_model} \begin{equation}\label{eq:self_similar} \begin{split} \Md(t) &= M_{\rm d,0} \lrsb{1 + 3(t-t_0)/t_{\rm vis,0}}^{-1/3},\\ \rd &= r_{\rm d,0} \lrsb{1 + 3(t-t_0)/t_{\rm vis,0}}^{2/3},\\ \tvis &= 3(t-t_0) + t_{\rm vis,0}, \end{split} \end{equation} where $t_{\rm vis,0} = \alpha^{-1}\theta^{-2}\sqrt{r_{\rm d,0}^3/GM}$ is the initial viscous time. We emphasize that $M_{\rm d,0}$ is the disk mass at the end of nuclear recombination (but NOT right after the merger). The energy release due to nuclear recombination is capable of driving the majority of the initial disk mass unbound in the first few seconds, so we expect $M_{\rm d,0}\ll 0.1\msun$. The global GRMHD simulations by \cite{siegel18_GRMHD_disk, fernandez19_long_GRMHD} show that roughly 1\%--10\% of the initial disk mass stays bound at the end of helium recombination, although there is so far no self-consistent, long-term simulation that includes dynamically coupled nucleosynthesis beyond He with MHD accretion. Taking their results at face value, and for an initial torus mass of the order $0.1\msun$, we expect $M_{\rm d,0}$ to be in the range $10^{-3}$--$10^{-2}\msun$. In reality, the disk height at a given time is determined by the vertical pressure gradient which in turn is determined by the balance between heating and cooling rates. 
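For concreteness, the self-similar solution of eq. (\ref{eq:self_similar}) can be evaluated with a few lines of Python. The parameter values below ($\alpha=0.1$, $\theta=0.5$, $M=2.6\msun$, $t_0=5$ s with $t_{\rm vis,0}=3t_0$, and $M_{\rm d,0}=3\times10^{-3}\msun$) are the fiducial choices used later in this section, inserted here purely for illustration:

```python
import math

# Physical constants (cgs) and fiducial parameters (illustrative assumptions)
G, MSUN = 6.674e-8, 1.989e33
M = 2.6 * MSUN                # remnant BH mass
alpha, theta = 0.1, 0.5       # viscosity parameter and dimensionless scaleheight
t0 = 5.0                      # s, start of the viscous evolution
tvis0 = 3.0 * t0              # s, initial viscous time (choice used in Sec. 2.1)
Md0 = 3e-3 * MSUN             # initial disk mass

# Initial radius implied by t_vis,0 = alpha^-1 theta^-2 sqrt(r_d,0^3 / GM)
rd0 = (alpha * theta**2 * tvis0)**(2.0/3.0) * (G * M)**(1.0/3.0)

def self_similar(t):
    """Md(t), rd(t) and tvis(t) from the analytic self-similar solution."""
    x = 1.0 + 3.0 * (t - t0) / tvis0
    return Md0 * x**(-1.0/3.0), rd0 * x**(2.0/3.0), 3.0 * (t - t0) + tvis0
```

At $t-t_0=t_{\rm vis,0}$ the disk mass has only dropped by a factor of $4^{1/3}\approx1.6$ while the radius has grown by $4^{2/3}\approx2.5$, illustrating how slowly a freely spreading disk loses mass.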
Since the disk material achieves local thermodynamic equilibrium in the vertical direction on a sound-crossing timescale which is much shorter than the viscous time, the energy conservation equation can be written as \begin{equation} \label{eq:energy} q_{\rm vis} + q_{\rm h} = q_{\rm adv} + q_{\rm w}, \end{equation} where the heating terms on the LHS are the viscous heating rate per unit mass $q_{\rm vis}$ and radioactive heating rate $q_{\rm h}$, and the cooling terms on the RHS are the rate at which heat is advected towards smaller radii by the mass inflow $q_{\rm adv}$ and the wind cooling rate $q_{\rm w}$ which is important if a large fraction of $\dotMd=-\Md/\tvis$ is due to viscously driven wind. We note that, due to our incomplete understanding of geometrically thick or advection-dominated disks \citep[see][for a review]{yuan14_ADAF_review}, the wind cooling term $q_{\rm w}$ is rather uncertain, and an attempt to treat this term is presented in \S\ref{sec:with_wind} where we demonstrate that our qualitative result (that the disk rapidly evaporates at late time due to radioactive heating) is robust to uncertainties in the wind prescription. In the following, we estimate each of the heating/cooling rates. Due to the radial pressure gradient, the disk rotates at a sub-Keplerian frequency roughly given by \begin{equation} \label{eq:omega} \Omega \simeq {\OmgK\over 1 + \theta^2}. \end{equation} Viscous heating occurs as a result of frictional angular momentum exchange between adjacent annuli and the heating rate is given by \begin{equation} \label{eq:qvis} q_{\rm vis} \simeq {9\over 4} \nu \Omega^2 \simeq {9\over 4} {GM/\rd \over (1+\theta^2)^2 \tvis}. 
\end{equation} The radial mass inflow advects heat to smaller radii \citep{narayan94_adaf1} and hence contributes to an advective cooling rate approximately given by \begin{equation} \label{eq:qadv} q_{\rm adv} = v_{\rm r}T{\partial s\over \partial r} \simeq {3\over 2} \lrb{{U\over P} - 1} \theta^2 {GM/\rd \over t_{\rm vis}}, \end{equation} where $s$ is the specific entropy, and $v_{\rm r} = -3\nu/(2\rd)$ is the radial velocity of the mass inflow. To obtain the second expression in eq. (\ref{eq:qadv}), we have made use of $T\d s = \d h - \d P/\rho$, $h=(U+P)/\rho\propto c_{\rm s}^2 \propto r^{-1}$ being the specific enthalpy ($U$ being the thermal energy density) and $\partial h/\partial r\simeq -h/\rd$, and we have taken an approximate radial pressure scaling of $P\propto r^{-2}$ (and hence $\partial P/\partial r\simeq -2P/\rd$) in between extreme cases of no wind loss ($P\propto r^{-2.5}$) and strongest wind loss ($P\propto r^{-1.5}$) \citep{blandford99_ADIOS}. We adopt the $\mathtt{Helmholtz}$\footnote{We used a convenient Python implementation provided by M. Coleman at \href{https://github.com/msbc/helmeos}{https://github.com/msbc/helmeos}.} \citep{timmes00_helmeos} equation of state $P(\rho, T)$ and $U(\rho, T)$ for a mean atomic mass number $\bar{A}=100$ and mean charge number $\bar{Z}=0.4\bar{A}$. For the parameter space in this work, the pressure is dominated by electrons, positrons and radiation while the contribution from ions is negligible, so our results are practically unaffected by the choice of $\bar{A}$ and $\bar{Z}$. Finally, the radioactive heating is due to the decay of the freshly synthesized r-process elements inside the disk. 
For an initial $\Ye\lesssim 0.4$, the heating rates after the end of the r-process in the time range $10\lesssim t\lesssim 10^4\rm\, s$ are insensitive to the initial conditions \citep[e.g.,][]{metzger10_kilonova} and can be approximated by the following power-law (see Appendix \S \ref{sec:heating_rate}) \begin{equation} \label{eq:qh} q_{\rm h}\approx 4\times10^{16} (t/\mr{s})^{-1.3}\rm\, erg\,s^{-1}\,g^{-1}, \end{equation} where we have subtracted 30\% of the total heating rate due to neutrinos accompanied by $\beta$-decays \citep{barnes16_heating_thermalization}. As the disk viscously expands following the scaling of $\rd\propto t^{2/3}$, the viscous heating rate has temporal scaling $q_{\rm vis}\propto t^{-5/3}$. The advective and wind cooling rates have the same scaling as the viscous heating rate. Thus, at sufficiently late time, the radioactive heating rate must exceed the viscous heating rate, and the solution to eq. (\ref{eq:energy}) will be $\theta\gtrsim 1$ --- which is unphysical. We propose that the excessive heating will rapidly evaporate all the disk material. When the disk material becomes unbound, we modify the mass conservation equation in eq. (\ref{eq:evolution}) by adding an evaporation term $\dot{M}_{\rm evap}=-\Md\OmgK$ to capture rapid mass removal, where $\OmgK^{-1}$ is the sound-crossing time in the vertical direction. In the following, we try two different approaches to quantify the ``boundedness'' of the disk material --- the first approach ignores the wind cooling term $q_{\rm w}$ in the energy conservation equation and the second takes into account the possibility that the disk is cooled by a viscously driven wind --- and the results are qualitatively similar. We note here a limitation of our simplified one-zone disk model. When the outer disk starts to evaporate as a result of excessive heating, the inner disk material at smaller radii $r\ll \rd$ can remain bound despite the loss of pressure confinement and will evaporate later. 
Thus, the rate at which the total disk mass evaporates with time will be somewhat slower than $\dot{M}_{\rm evap}=-\Md\OmgK$. The self-consistent solution can only be captured by higher dimensional hydrodynamic simulations of the system, which are left for future work. \subsection{Sound Speed-limited Disk without Wind}\label{sec:without_wind} In this subsection, we ignore the wind cooling term by setting $q_{\rm w}=0$ and this allows the scaleheight of a disk without radioactive heating (in the limit $q_{\rm h}\ll q_{\rm vis}$) to be moderately large, $\theta\sim 0.5$. A well-known consequence is that the Bernoulli number (see eq. \ref{eq:Bernoulli_number}) of the fluid elements in the outer disk is positive \citep{narayan94_adaf1, blandford99_ADIOS}. In this scenario, we impose a mass evaporation term when the sound speed $\cs$ exceeds the orbital speed $\Omega\rd$, and the mass conservation equation becomes \begin{equation} \label{eq:evolution_mass_cs} \dotMd = -{\Md/\tvis} - \Md\OmgK\, \mr{Sig}\lrb{\cs - \Omega\rd \over \Omega \rd}, \end{equation} where, instead of an abrupt transition at $\cs=\Omega \rd$, we use a Sigmoid function to achieve numerical smoothness, \begin{equation} \label{eq:sigmoid} \mr{Sig}(x) = {1\over 1 + \exp(-x/\delta)}, \end{equation} and $\delta=1/300$ describes the sharpness of the transition. We then numerically integrate the disk radius evolution in eq. (\ref{eq:evolution}) and mass evolution in eq. (\ref{eq:evolution_mass_cs}) (with the disk temperature given by eq. \ref{eq:energy}), for an initial time $t_0=5\rm\, s$ after the merger (at the end of the nucleosynthesis) and an initial disk radius such that $t_{\rm vis,0}(r_{\rm d,0}) = 3t_0$. The late-time evolution at $t\gg t_0$ depends weakly on the choices of initial conditions. 
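The smoothed switch of eq. (\ref{eq:sigmoid}) and the right-hand side of eq. (\ref{eq:evolution_mass_cs}) can be sketched as follows; this is a minimal illustration that takes $\tvis$, $\OmgK$, $\cs$ and $\Omega\rd$ as precomputed inputs rather than solving the full energy balance of eq. (\ref{eq:energy}):

```python
import math

DELTA = 1.0 / 300.0   # sharpness of the transition, as in eq. (sigmoid)

def sig(x, delta=DELTA):
    """Sigmoid switch: ~0 for x<0, ~1 for x>0, transition width ~delta."""
    z = max(min(x / delta, 700.0), -700.0)   # clamp to avoid overflow in exp
    return 1.0 / (1.0 + math.exp(-z))

def dMd_dt(Md, tvis, OmgK, cs, Omg_rd):
    """RHS of eq. (evolution_mass_cs): viscous mass loss plus the evaporation
    term that switches on once the sound speed exceeds the orbital speed."""
    return -Md / tvis - Md * OmgK * sig((cs - Omg_rd) / Omg_rd)
```

Far below the threshold the evaporation term is exponentially suppressed and the usual $\dotMd=-\Md/\tvis$ evolution is recovered.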
For our fiducial case, the initial disk mass is taken to be $M_{\rm d,0}=3\times 10^{-3}\msun$, and the disk evaporation time depends weakly on the choice of $M_{\rm d,0}$ (which affects the equation of state through the disk density). The results of the disk evolution are shown by the solid lines in Fig. \ref{fig:disk_evolution}, for a number of choices of the viscous parameter $\alpha$ in the range between 0.03 and 0.2. We find that the disk mass initially evolves slowly with time $\Md\propto t^{-1/3}$ until a critical time when the disk sound speed exceeds its rotational speed, and then, according to our model prescription, the disk rapidly evaporates on a sound-crossing timescale of $\OmgK^{-1}$. The disk accretion rate initially evolves as $\dotMd\propto t^{-4/3}$ and then quickly drops after the critical time for this phase transition. We define the evaporation time $\tevap$ as the time when $\d\log\Md/\d\log t$ first drops below $-3$ (as a signature of a rapid X-ray flux drop). This depends strongly on the viscous parameter but weakly on the initial disk mass $M_{\rm d,0}$, as shown in Fig. \ref{fig:tevap_nofw}. Since disk evaporation occurs at roughly a fixed ratio between the cumulative heating and the disk binding energy $q_{\rm h} t/(GM/\rd)$, we obtain $\tevap^{-0.3}\rd(\tevap)/M\simeq \rm const.$ and hence the following scaling \begin{equation} \tevap\simeq 50\mr{\,s}\, \lrb{\alpha\over 0.1}^{-1.8}\lrb{M\over 2.6\msun}^{1.8}, \end{equation} where we have used $\rd\propto t^{2/3} M^{1/3} \alpha^{2/3}$ (from eq. \ref{eq:tvis}) and the normalization is taken from the $M_{\rm d,0}=3\times 10^{-3}\msun$ case. Note that the dependence of $\tevap$ on the BH mass $M$ is not shown in Fig. \ref{fig:tevap_nofw}. 
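The heating-rate competition and the evaporation-time scaling above are easy to reproduce numerically. The sketch below assumes the late-time scalings $\rd=(3t\alpha\theta^2\sqrt{GM})^{2/3}$ and $\tvis=3t$ together with the fiducial parameters; it is a consistency check, not a replacement for the full integration:

```python
import math

G, MSUN = 6.674e-8, 1.989e33
M = 2.6 * MSUN
alpha, theta = 0.1, 0.5   # fiducial values (assumptions)

def q_h(t):
    """Radioactive heating rate per unit mass (erg/s/g), eq. (qh)."""
    return 4e16 * t**(-1.3)

def q_vis(t):
    """Viscous heating rate, eq. (qvis), using the late-time self-similar
    scalings r_d = (3 t alpha theta^2 sqrt(GM))^(2/3) and t_vis = 3t."""
    rd = (3.0 * t * alpha * theta**2 * math.sqrt(G * M))**(2.0/3.0)
    return 2.25 * (G * M / rd) / ((1.0 + theta**2)**2 * 3.0 * t)

def t_evap_nowind(alpha, M_msun=2.6):
    """Evaporation time (s) from the scaling relation of Sec. 2.1."""
    return 50.0 * (alpha / 0.1)**(-1.8) * (M_msun / 2.6)**1.8
```

Since $q_{\rm h}\propto t^{-1.3}$ decays more slowly than $q_{\rm vis}\propto t^{-5/3}$, radioactive heating inevitably becomes relatively more important with time.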
\subsection{Bernoulli-limited Disk with Wind}\label{sec:with_wind} In this subsection, we describe another scenario where a fraction of the viscously driven mass loss rate $\Md/\tvis$ is taken away by the disk wind and the wind tries to cool the remaining disk material such that its Bernoulli number is maintained at a negative (i.e., energetically bound) value $B_{\rm e,0}<0$ \citep[see][for a similar treatment but in a different context]{margalit16_NSWD_merger}. The dimensionless Bernoulli number is defined as \begin{equation} \label{eq:Bernoulli_number} B_{\rm e} = {(U+P)/\rho + \Omega^2\rd^2/2 - GM/\rd \over GM/\rd} \approx {U\over P} \theta^2 - 0.5, \end{equation} where we have ignored a small kinetic energy term associated with the radial velocity of the disk material. We consider $B_{\rm e,0}=-0.1$ as a fiducial value, which is motivated by recent numerical simulations of adiabatic accretion flows \citep[][their ``most realistic'' Model B]{yuan12_Be_ADAF} \citep[see also the ``SANE'' model of][]{narayan12_ADAF}. We note that the initial conditions of these simulations are not necessarily representative of NS mergers, and more work (with larger numerical domains and longer runs with more realistic magnetic fields) is needed to obtain a more reliable $B_{\rm e,0}$. The overall expectation is that the closer $B_{\rm e,0}$ is to zero, the easier it is for extra radioactive heating to unbind the disk and the sooner the disk evaporates. We assume the wind cooling rate to be in the following form \begin{equation} \label{eq:qw} q_{\rm w} = f_{\rm w} {GM/\rd \over \tvis}, \end{equation} where $f_{\rm w}$ is a strength parameter of order unity that depends on the detailed wind mass loss rate fraction (out of $\Md/\tvis$) and the asymptotic specific energy of the wind. 
To maintain $B_{\rm e}=B_{\rm e,0}$, the wind cooling strength parameter for a disk without radioactive heating (in the limit $q_{\rm h}\ll q_{\rm vis}$) can be found to be \begin{equation} \label{eq:fw} f_{\rm w} = {9\over 4} {1\over (1+\theta_0^2)^2} - {3\over 2}\lrb{{U\over P} - 1}\theta_0^2, \end{equation} where $\theta_0$ is the characteristic disk height given by (from eq. \ref{eq:Bernoulli_number}) \begin{equation}\label{eq:Be0} \theta_0^2 = (B_{\rm e,0} + 0.5) P/U. \end{equation} For our fiducial choice of $B_{\rm e,0}=-0.1$, the characteristic disk height is $\theta_0\simeq 0.4$, which is slightly smaller than that in the $q_{\rm w}=0$ case. One can plug the wind cooling rate given by eqs. (\ref{eq:qw}) and (\ref{eq:fw}) into the energy conservation equation (\ref{eq:energy}) to solve for the disk height $\theta$ at a given time. At sufficiently late time, the Bernoulli number must become positive as a result of excessive radioactive heating and we still expect the disk to evaporate on a sound-crossing time. Thus, we impose a mass evaporation term and the mass conservation equation becomes \begin{equation} \label{eq:evolution_mass_Be} \dotMd = -{\Md/\tvis} - \Md\OmgK\, \mr{Sig}\lrb{B_{\rm e}}, \end{equation} where the Sigmoid function is given by eq. (\ref{eq:sigmoid}). We then numerically integrate the disk radius evolution in eq. (\ref{eq:evolution}) and mass evolution in eq. (\ref{eq:evolution_mass_Be}), for an initial time $t_0=5\rm\, s$ since the merger and initial disk radius such that $t_{\rm vis,0} = 3t_0$ (same as in \S\ref{sec:without_wind}). We take a fiducial initial disk mass of $M_{\rm d,0}=3\times 10^{-3}\msun$, which only weakly affects the disk evaporation time. The results of the disk evolution are shown by the dotted lines in Fig. \ref{fig:disk_evolution}. For the case of $B_{\rm e,0}=-0.1$, the overall disk evolution is qualitatively similar to the windless case with $q_{\rm w}=0$ in \S \ref{sec:without_wind}. 
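For a radiation-pressure-dominated disk one has $U/P\simeq3$, and eqs. (\ref{eq:fw}) and (\ref{eq:Be0}) can then be checked directly (here $U/P$ is an assumed input rather than being taken from the full equation of state):

```python
import math

def theta0(Be0, U_over_P=3.0):
    """Characteristic scaleheight from eq. (Be0)."""
    return math.sqrt((Be0 + 0.5) / U_over_P)

def f_wind(Be0, U_over_P=3.0):
    """Wind-cooling strength parameter from eq. (fw)."""
    th2 = theta0(Be0, U_over_P)**2
    return 2.25 / (1.0 + th2)**2 - 1.5 * (U_over_P - 1.0) * th2
```

For $B_{\rm e,0}=-0.1$ this yields $\theta_0\simeq0.37$ and $f_{\rm w}\simeq1.35$, consistent with the value $\theta_0\simeq0.4$ quoted above.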
The disk evaporation occurs slightly later than in the windless model, but the time of this transition depends on our choice of $B_{\rm e,0}$. The evaporation time $\tevap$, as defined by the earliest time when $\d\log\Md/\d\log t=-3$ (same as in \S\ref{sec:without_wind}), as a function of $\alpha$ and $B_{\rm e,0}$ is shown in Fig. \ref{fig:tevap_fw}. Since disk evaporation occurs roughly at a fixed ratio of $q_{\rm h} t/(|B_{\rm e,0}|GM/\rd)$, we obtain $\tevap^{-0.3}\rd(\tevap)/(|B_{\rm e,0}|M)\simeq \rm const.$ and hence the following scaling \begin{equation} \tevap\simeq 100\mr{\,s}\, \lrb{\alpha\over 0.1}^{-1.8}\lrb{M\over 2.6\msun}^{1.8} \lrb{|B_{\rm e,0}|\over 0.1}^{2.7}, \end{equation} where the normalization approximately reproduces the numerical values in Fig. \ref{fig:tevap_fw}. We note that $\tevap$ strongly depends on the critical Bernoulli number $B_{\rm e,0}$ in our model. This means that the model is not strongly predictive without a careful calibration against numerical simulations which can provide a better physical understanding of the viscously driven disk wind (as parameterized by $f_{\rm w}$ in the current model). Nevertheless, our very simple model is able to reproduce the duration of the EE and X-ray plateau (or the time of the rapid X-ray flux drop) with reasonable parameters of $\alpha\simeq 0.1$ and $B_{\rm e,0}\simeq -0.1$ that are consistent with existing global MHD simulations of adiabatic accretion flows \citep{narayan12_ADAF, liska20_disk_dynamo} \citep[however, some other simulations predict lower $\alpha$, e.g.,][]{davis10_alpha_shearbox, hawley11_alpha_GRMHD}. Observations of fully ionized geometrically thin (i.e., radiatively efficient) disks indicate $\alpha\sim 0.1$ but it is unclear if this applies to thick disks. 
Better calibration of our model can be obtained by performing GRMHD simulations with more realistic initial conditions (better corresponding to NS mergers) and longer runs lasting for many viscous timescales of the outer disk. A final note is that since the evaporation time scales with the BH mass as $\tevap\propto M^{1.8}$, it is likely that the steep flux decline at the end of long GRB X-ray plateaus (at $t\sim 10^4\rm\, s$) is simply due to more massive BHs in those cases. \section{Jet Power}\label{sec:jet} The properties of the outer disk provide the boundary conditions for the inner regions of the disk, which extend from $\rd$ all the way down to the horizon. The inner boundary condition, for the case of a BH remnant, is provided by freefall at the event horizon. In this section, we assume that the jet is powered by the Poynting flux associated with the rotating B-field lines that are frame-dragged by the BH spin \citep{blandford77_BZjet}. It should be noted that the physical processes related to GRB jet launching are still poorly understood, so the model to be presented in this section is only a possibility that is not ruled out by observations \citep[see][for a review of other possibilities]{kumarzhang15_GRB_review}. In the \citet{blandford77_BZjet} picture, the jet power is roughly given by \citep{tchekhovskoy10_BZpower} \begin{equation} L_{\rm BZ} \simeq {c\over 96\pi^2 \rg^2} \PhiBH^2 \omega_{\rm BH}^2, \end{equation} where $\PhiBH$ is the (poloidal) magnetic flux accumulated on the BH horizon and $\omega_{\rm BH}$ is the angular frequency of the horizon due to frame dragging \begin{equation} \omega_{\rm BH} = {a\over 1 + \sqrt{1-a^2}}. 
\end{equation} For nearly equal mass NS binaries, the typical spin of the merger remnant is $a\sim 0.7$ \citep{gonzalez07_merger_kick_spin, kiuchi09_BH_remnant, rezzolla10_torus_mass, sekiguchi16_BH_remnant, dietrich17_bNS_ejecta_mass}, provided that a large fraction of the angular momentum of the binary system is retained by the remnant BH (although this depends on the NS equation of state and the uncertain evolution of the merger remnant). Thus, we take $\omega_{\rm BH}=0.4$ as a fiducial value. After the merger, the magnetic field strength is expected to be dominated by the toroidal component, and it has been shown that turbulent motions in the outer disk may generate a large-scale poloidal component in a process similar to the $\alpha\omega$-dynamo \citep{liska20_disk_dynamo}. Motivated by this idea, we assume that, when the system reaches a saturated state, the squared ratio between the poloidal B-field component and the total (toroidal-dominated) B-field is \begin{equation} \Bp^2/B^2 = \xi. \end{equation} Note that here $\Bp$ represents the volume- and time-averaged net poloidal field that is coherent on a lengthscale of the disk scaleheight $H=\theta \rd$, so the total poloidal flux in the outer disk is given by \begin{equation} \Phid \simeq \Bp H^2 = \xi^{1/2}\theta^2 B \rd^2 = \xi^{1/2} \theta^2 \lrb{P_{\rm B}\over P}^{1/2} \sqrt{8\pi P} \rd^2, \end{equation} where $P_{\rm B} = B^2/8\pi$ is the total magnetic pressure. For a radiation-dominated system in virial equilibrium, the total pressure can be estimated by $P\simeq U/3\sim \rho (GM/2\rd)/3$. Since the density in the outer disk is $\rho\simeq \Md/(2\pi \rd^2 H)$, we obtain the total poloidal flux \begin{equation} \Phid \simeq \xi^{1/2} \theta^{3/2} \lrb{P_{\rm B}\over P}^{1/2} \sqrt{2GM\Md/3}. 
\end{equation} Over a viscous timescale, a significant fraction of the poloidal flux in the outer disk may accumulate onto the BH horizon as a result of accretion, so one may expect the magnetic flux on the horizon to also scale as $\PhiBH \propto \Md^{1/2}$, provided that the ratio $\PhiBH/\Phid$ is roughly a constant of order unity. In reality, $\PhiBH$ is the net flux accumulated over the accretion history up to the current time. For toroidal-dominated fields in the disk, we expect the poloidal fields arriving at the horizon to undergo cancellations such that the long-term averaged poloidal flux is negligible whereas the instantaneous value roughly tracks $\Phid$ in the outer disk. Based on the above arguments, the jet power can be written in the following form \begin{equation}\label{eq:LBZ} \begin{split} L_{\rm BZ}\simeq &\, 5\times10^{48}\mr{\,erg\,s^{-1}} {\Md \over 3\times 10^{-3}\msun} {2.6\msun\over M} \lrb{\omega_{\rm BH}\over 0.4}^2 \\ &\lrb{\PhiBH\over 0.3 \Phid}^2 \lrb{\theta\over 0.5}^3 \lrb{\xi P_{\rm B}\over 0.01P}, \end{split} \end{equation} where we have adopted some fiducial values of the ratios $\PhiBH/\Phid=0.3$ and $\xi P_{\rm B}/P=0.01$. These values have large uncertainties, meaning that our eq. (\ref{eq:LBZ}) does not have strong predictive power and may only serve as a consistency check. An alternative model for the magnetic flux threading the horizon is provided by \citet{kisaka15_fallback_power}, who proposed that the BH's field topology evolves due to interactions with the fallback gas. 
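The horizon frequency and the fiducial scaling of eq. (\ref{eq:LBZ}) can be encoded as follows; the ratio parameters are the assumed fiducial values quoted above, so the function serves as a consistency check rather than a prediction:

```python
import math

def omega_bh(a):
    """Dimensionless angular frequency of the horizon for spin parameter a."""
    return a / (1.0 + math.sqrt(1.0 - a * a))

def L_BZ(Md_msun, M_msun=2.6, omega=0.4, phi_ratio=0.3, theta=0.5,
         xi_pb_over_p=0.01):
    """Blandford-Znajek jet power (erg/s) from the scaling of eq. (LBZ)."""
    return (5e48 * (Md_msun / 3e-3) * (2.6 / M_msun) * (omega / 0.4)**2
            * (phi_ratio / 0.3)**2 * (theta / 0.5)**3 * (xi_pb_over_p / 0.01))
```

A remnant spin of $a\simeq0.7$ gives $\omega_{\rm BH}\simeq0.41$, which justifies the fiducial choice of $\omega_{\rm BH}=0.4$.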
A jet power of $L_{\rm BZ}\lesssim 10^{49}\rm\, erg\, s^{-1}$ lasting for $10^2\rm\, s$ would give a kinetic energy of $E_{\rm K}\lesssim 10^{51}\rm\, erg$, which is consistent with the observational constraints from the afterglow\footnote{We note that it is often difficult to obtain the beaming-corrected GRB jet kinetic energy from the sparse afterglow observations in most cases \citep{berger14_sGRB_review, kumarzhang15_GRB_review} \citep[but see][]{beniamini15_prompt_efficiency}: one needs to measure the synchrotron cooling frequency to break the degeneracy between $\epsilon_{\rm B}$ (fraction of internal energy in the shock-heated region shared by B-fields) and the pre-shock gas density, and one also needs to detect the ``jet break'' in the late-time (faint) phase of the afterglow to constrain the opening angle of the jet. } and the fluence ratio between the EE and the prompt $\gamma$-ray emission (see eq. \ref{eq:fluence_ratio}). On the other hand, the jet power is limited by the maximum value expected from the limit of a Magnetically-Arrested Disk (MAD) where the magnetic pressure near the horizon is comparable to the ram pressure of the inflow \citep{narayan03_MAD, tchekhovskoy11_MAD_efficiencies, narayan22_MAD_efficiencies}, and this gives \begin{equation}\label{eq:LMAD} L_{\rm MAD}\sim \dotMBH c^2\simeq 1.8\times10^{49}\mr{\,erg\,s^{-1}} {\dotMBH\over 10^{-5}\msun\rm\,s^{-1}}, \end{equation} where $\dotMBH$ is the mass accretion rate onto the BH. However, due to possible wind mass loss, we expect the accretion rate at the horizon $\dotMBH$ to be a fraction $\eta_{\rm acc}<1$ of the mass inflow rate at the outer disk, i.e., \begin{equation} \dotMBH = \eta_{\rm acc}\Md/\tvis. \end{equation} The accretion fraction $\eta_{\rm acc}$ is currently highly uncertain due to the lack of global simulations that reach inflow equilibrium in a sufficiently large radial dynamical range, especially in the case where the BH launches a strong jet. 
A physical testbed is provided by tidal disruption events (TDEs) with super-Eddington accretion and relativistic jets \citep{bloom11_jetted_TDE, cenko12_jetted_TDE}, where the observed X-ray lightcurve roughly follows the theoretically expected mass fallback rate of $\dot{M}_{\rm fb}\propto t^{-5/3}$. In these jetted TDE cases, the total jet energy of the order $10^{53}\rm\, erg$ \citep[as inferred from the radio afterglow,][]{decolle_jetted_TDE} indicates that a large fraction $\eta_{\rm acc}\gtrsim 0.1$ of the stellar mass is indeed accreted by the BH. For the fiducial parameters in eq. (\ref{eq:LBZ}), the jet power is sub-MAD with $L_{\rm BZ}\lesssim L_{\rm MAD}$ for an accretion fraction of $\eta_{\rm acc}\gtrsim 0.1$ \citep[in contrast to the MAD case proposed by][]{barkov11_MAD_jet_EE}. The shallow flux decay in the EE and X-ray plateau is consistent with the case that the BH always has sub-MAD magnetic flux such that $L_{\rm BZ}< L_{\rm MAD}$ --- in this case the jet power would be proportional to the disk mass $L_{\rm BZ}\propto \Md\propto t^{-1/3}$ (eq. \ref{eq:LBZ}) instead of the accretion rate (which would otherwise give $L_{\rm BZ}\propto t^{-4/3}$). If the large-scale poloidal B-fields in the outer disk are only gradually amplified on a long timescale of $\gtrsim 10\rm\, s$, then the decay of the jet power may be even shallower than $t^{-1/3}$. \section{Late-time Kilonova Emission}\label{sec:kilonova} In this section, we discuss the imprint on the late-time kilonova emission of a long-lived disk of mass $M_{\rm d,0}\in (10^{-3}, 10^{-2})\msun$ after nuclear recombination. The key prediction is that the evaporation of the disk at time $t_{\rm evap}\sim 10^2\rm\, s$ (due to radioactive heating) generates a tail of very slow ejecta that produces narrow line features in the kilonova spectrum --- these are potentially detectable by JWST and future Extremely Large Telescopes. In the self-similar solution of eq. 
(\ref{eq:self_similar}), the late-time outer disk radius is given by $\rd \simeq (3t\alpha\theta^2\sqrt{GM})^{2/3}$ and hence the typical asymptotic velocity of the disk wind is \begin{equation} \begin{split} v(t) &\simeq \sqrt{GM\over \rd} \simeq \lrb{GM\over 3t\alpha \theta^2}^{1/3}\\ &\simeq 0.01c\, \lrb{t\over 100\mr{\,s}}^{-1/3} \lrsb{{M\over 2.6\msun} {0.1\over \alpha} \lrb{0.5\over \theta}^2}^{1/ 3}. \end{split} \end{equation} Since the disk mass evolves as $\Md\propto t^{-1/3}$, we expect a slow ejecta tail of mass $\Mej \simeq (\tevap/t_0)^{-1/3}M_{\rm d,0}\simeq M_{\rm d,0}/3$ and velocity $v \simeq 0.01 c$ (for typical values $\tevap\sim 100\rm\, s$ and $t_0\sim 5\rm\, s$) as a result of late-time disk evaporation. In the following, we discuss the late-time kilonova signatures from this slow ejecta, which has not been considered in the literature before. We consider a representative model for the late-time disk evolution with $M_{\rm d,0}=3\times10^{-3}\msun$, $t_0= 5\rm\, s$, $\alpha=0.1$ and including wind cooling (this case corresponds to the green dotted line in Fig. \ref{fig:disk_evolution}). Starting from the time evolution of disk mass $\Md(t)$, radius $\rd(t)$ and viscous time $\tvis(t)$, we assume that a constant fraction $\eta_{\rm acc}=0.5$ of the accretion rate $\Md/\tvis$ is consumed by the BH and that the rest is ejected as wind at a mean velocity $v(t) = \sqrt{GM/\rd(t)}$. Our results are insensitive to the precise value of $\eta_{\rm acc}$. This allows us to calculate the mean velocity distribution of the late-time disk wind \begin{equation} {\d M_{\rm late} \over \d v} = \lrb{\abs{\d \Md\over \d t} - \eta_{\rm acc}{\Md\over \tvis} } \abs{\d t \over \d v}, \end{equation} which is shown by the blue dashed line in the upper panel of Fig. \ref{fig:dMdlgv}. 
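The wind-velocity estimate above reduces to a one-line function (fiducial parameters assumed):

```python
G, MSUN, C = 6.674e-8, 1.989e33, 2.998e10   # cgs constants

def v_wind(t, alpha=0.1, theta=0.5, M_msun=2.6):
    """Asymptotic wind velocity sqrt(GM/rd) in cm/s, with rd from the
    late-time self-similar solution rd = (3 t alpha theta^2 sqrt(GM))^(2/3)."""
    GM = G * M_msun * MSUN
    return (GM / (3.0 * t * alpha * theta**2))**(1.0/3.0)
```

Since $v\propto t^{-1/3}$, increasing the time by a factor of 8 halves the wind velocity; at $t=100$ s the fiducial parameters give $v\simeq0.012c$.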
In reality, the disk wind at a given time likely has a broader velocity distribution above and below the mean value, so we use a Gaussian filter to smooth out the mean velocity distribution by a standard deviation of $\sigma_{\log v}=0.15$ (corresponding to a factor of $\sqrt{2}$ in both directions), and the result is shown by the blue solid line in the upper panel of Fig. \ref{fig:dMdlgv}. Our qualitative results do not depend on the detailed smoothing procedure, since the mass in the slow ejecta is given by the disk evolution. The peak kilonova emission is dominated by the much faster ($v\gtrsim 0.1c$) ejecta that is produced at earlier time ($t\lesssim\,$a few seconds) by tidal disruption and disk outflow driven by nuclear recombination. We schematically represent this faster ejecta component by the following power-law velocity distribution \citep[following][]{metzger19_kilonova_review} \begin{equation}\label{eq:dMdlgv} u{\d M_{\rm early}\over \d u} = {2M_{\rm ej,early}\over \Gamma(p/2)} \lrb{u\over u_{\rm min}}^{-p} \mr{e}^{-u_{\rm min}^2/u^2}, \end{equation} where $u=\gamma\beta$ is the four-velocity ($\beta=v/c$ and $\gamma$ is the Lorentz factor), $M_{\rm ej,early}=\int \d u (\d M_{\rm early}/\d u)$ is the total baryonic mass of the early ejecta, $u_{\rm min}$ is the minimum four-velocity below which the distribution cuts off, $p$ is the power-law index describing the steepness of the velocity distribution, and $\Gamma(x)$ is the Gamma function. We take $u_{\rm min}=0.1$ and $p=2.5$ such that most of the kinetic energy is contained in the slower parts near $u_{\rm min}$, as motivated by the disk simulations of \citet{siegel18_GRMHD_disk, fernandez19_long_GRMHD}. The highly relativistic ejecta with $u\gtrsim 1$ as well as the ultra-relativistic jets are unimportant for our purpose here since we focus on the late-time kilonova emission. 
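As a consistency check, the distribution in eq. (\ref{eq:dMdlgv}) integrates to $M_{\rm ej,early}$; a simple log-spaced trapezoidal quadrature (pure Python, parameters from the text) confirms the normalization to a fraction of a per cent:

```python
import math

M_EJ, U_MIN, P_IDX = 0.05, 0.1, 2.5   # parameters quoted in the text

def dM_du(u):
    """dM/du for the early ejecta, i.e. eq. (dMdlgv) divided by u."""
    return (2.0 * M_EJ / math.gamma(P_IDX / 2.0)
            * (u / U_MIN)**(-P_IDX) * math.exp(-(U_MIN / u)**2) / u)

# Trapezoidal quadrature over log10(u) in [-3, 2]; the integrand vanishes
# rapidly outside this range, so the truncation error is negligible.
n, lo, hi = 4000, -3.0, 2.0
h = (hi - lo) / n
f = [dM_du(10.0**(lo + h * i)) * 10.0**(lo + h * i) * math.log(10.0)
     for i in range(n + 1)]
total = h * (0.5 * f[0] + 0.5 * f[-1] + sum(f[1:-1]))
```

The exponential factor cuts the distribution off below $u_{\rm min}$, while the $u^{-p}$ tail carries little mass, so `total` recovers $M_{\rm ej,early}=0.05\msun$.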
The early ejecta mass is taken to be $M_{\rm ej,early}=0.05\msun$, as motivated by the kilonova lightcurve of GW170817 \citep[see][for a number of best-fit models]{wu19_late_time_lightcurve}. The resulting velocity distribution of the early ejecta is shown by the solid red line in the upper panel of Fig. \ref{fig:dMdlgv}. With the velocity distribution, we consider the whole ejecta to be in homologous expansion like the Hubble flow. The time $t_{\rm ph}(u)$ at which the photosphere recedes to a certain velocity layer is obtained by solving \begin{equation} \int_u^\infty \d u {\d M\over \d u} {\kappa \over 4\pi \beta^2 c^2 t_{\rm ph}^2} = 1, \end{equation} where $\kappa$ is the Planck-mean opacity. The realistic opacity depends on the chemical composition, electron temperature, and ionization states of each species \citep{barnes13_high_opacity, tanaka13_high_opacity, kasen17_kilonova_modeling, fontes20_opacity, tanaka20_opacity, hotokezaka21_nebular_emission, pognan22_nebular_opacity}. We take a constant opacity of $\kappa=3\rm\, cm^2\,g^{-1}$, as representative for intermediate electron fraction $\Ye\in(0.25, 0.35)$ \citep[see][]{tanaka20_opacity}. For the longer expansion times of $\mc{O}(1)\rm\,s$ appropriate to the disk viscous evolution, the mass fractions of Lanthanides and Actinides at a given $\Ye$ are somewhat lower than for the shorter expansion times typically considered for NS merger dynamical ejecta or disk winds (see Fig. \ref{fig:yfinal}). A proper calculation of the late-time slow ejecta's opacity will require modeling of the disk's expansion history coupled to r-process nucleosynthesis. The results of $t_{\rm ph}(u)$ are shown in the middle panel of Fig. \ref{fig:dMdlgv}. We find that the late-time slow ejecta from disk evaporation extends the photospheric phase by about 10 days as compared to the case without a long-lived disk. How much does the slow ejecta contribute to the late-time continuum and line emission? 
We provide some crude estimates in the following. Let us assume that the late-time heating is entirely due to $\beta$-decays, although $\alpha$-decays and fission might also contribute \citep[provided that the nucleosynthesis produces nuclei beyond the third r-process peak, see][]{wu19_late_time_lightcurve}. After injection, a $\beta$-electron loses energy due to adiabatic expansion $\dot{E}_{\rm e,ad} = -(1+1/\ge)\Ee/t$ and collisional losses $\dot{E}_{\rm e,coll}=-K_{\rm st}\be \rho c$, and these two terms can be combined such that the electron four-velocity $\ue\equiv \ge \be$ evolves according to \begin{equation}\label{eq:electron_cooling} {\d \ue \over \d t} = -{\ue \over t} - {K_{\rm st}\rho \over \me c}, \end{equation} where $t$ is the time since explosion, $\be=v_{\rm e}/c$ is the dimensionless speed ($\ge=1/\sqrt{1-\be^2}$ being the Lorentz factor), $\rho$ is the local ejecta density, $\me$ is the electron rest mass, and $K_{\rm st}$ is the stopping coefficient given by \begin{equation} K_{\rm st} \simeq (1.2 \ue^{-1.3} + 0.5\ue^{0.5})\, \mr{MeV\,cm^2\, g^{-1}}. \end{equation} The above stopping coefficient is a numerical fit based on analytical expressions including ionization losses, Coulomb interactions with other electrons, and Bremsstrahlung by heavy nuclei \citep[e.g.,][]{barnes16_heating_thermalization, waxman19_thermalization, hotokezaka20_thermalization}. There is a critical time $t_{\rm th}$ at which the $\beta$-electrons thermally decouple from the local ejecta, and it is given by \begin{equation} {\ue\over t_{\rm th}}\simeq {K_{\rm st}\rho(t_{\rm th}) \over \me c}, \end{equation} where the ejecta density of a layer at bulk four-velocity $u$ is given by \begin{equation} \rho(u, t) = {\d M/\d v\over 4\pi v^2 t^3} = {\d M/\d u \over \gamma u^2 4\pi c^3 t^3}. 
\end{equation} Thus, the $\beta$-electron decoupling time is given by \begin{equation} t_{\rm th}(u) \simeq \lrb{{K_{\rm st}\over \ue \me c^2 4\pi c^2} {\d M /\d u \over \gamma u^2}}^{1/2}, \end{equation} where we take $\ue \simeq 2$ and $K_{\rm st}\simeq 1.2\mr{\,MeV\, cm^2\,g^{-1}}$ (corresponding to a typical injection kinetic energy of $\Ee\simeq 0.5\,\rm MeV$). At sufficiently late time $t\gtrsim 10\rm\, d$ when photon trapping is negligible, the kilonova bolometric luminosity (ignoring the X-ray and $\gamma$-ray photons) is equal to the sum of the heating rates $q_{\rm e}(u, t)$ contributed by all fluid elements, i.e., \begin{equation} L_{\rm bol}(t) = \int \d u {\d M\over \d u} q_{\rm e}(u, t). \end{equation} Following \citet{hotokezaka20_thermalization}, we take the solar r-process abundance pattern as the final composition of stable elements with minimum and maximum atomic mass numbers $A_{\rm min}=85$ and $A_{\rm max}=209$, and hence the heating rate from $\beta$-electrons can be approximated by \begin{equation}\label{eq:qe} q_{\rm e}(u, t) \simeq 4\times10^9\mr{erg\over s\,g} \lrb{t/\mr{day}}^{-1.3} \lrb{1 + {t\over t_{\rm th}(u)}}^{-1.5}, \end{equation} where the asymptotic behavior of $q_{\rm e}\propto t^{-2.8}$ after thermal decoupling arises as follows \citep[see][]{waxman19_thermalization}. At late time $t\gg t_{\rm th}$, the heating rate $q_{\rm e}=\int \d \ue \lrb{\d N/\d \ue} K_{\rm st}\be \rho c$ is dominated by the electrons near velocity $\ue\sim t_{\rm th}/t$ (these were injected at time $t\sim t_{\rm th}$), and the ejecta expansion $\rho\propto t^{-3}$ plus the weak dependence of $K_{\rm st}\be$ on electron velocity give rise to a roughly power-law behavior $q_{\rm e}\propto t^{-2.8}$. The contributions to the bolometric luminosity by the early/fast and late/slow ejecta components are shown in Fig. \ref{fig:Lbol}. 
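The quadrature for $L_{\rm bol}$ can be sketched as follows, combining the heating rate of eq. (\ref{eq:qe}) with the decoupling time $t_{\rm th}(u)$ above. The slow-ejecta mass ($10^{-2}\msun$), its velocity range, and the exponential $\d M/\d u$ are illustrative assumptions, not the paper's fitted distribution:

```python
import numpy as np

MSUN = 1.989e33; C = 2.998e10; DAY = 86400.0
MEC2 = 0.511              # electron rest energy, MeV
K_ST = 1.2                # MeV cm^2/g, stopping coefficient at u_e ~ 2
UE = 2.0                  # injection four-velocity

# illustrative exponential velocity distribution for the slow ejecta
M_ej = 1e-2 * MSUN
u = np.linspace(0.005, 0.05, 400)        # four-velocity ~ v/c (non-rel.)
dMdu = np.exp(-u / 0.01)
dMdu *= M_ej / np.trapz(dMdu, u)

# beta-electron decoupling time of each layer (gamma ~ 1)
t_th = np.sqrt(K_ST / (UE * MEC2 * 4 * np.pi * C**2) * dMdu / u**2)

def L_bol(t):
    """Bolometric luminosity: mass-weighted integral of q_e(u, t)."""
    q_e = 4e9 * (t / DAY)**-1.3 * (1 + t / t_th)**-1.5   # erg/s/g
    return np.trapz(dMdu * q_e, u)

for t in [30 * DAY, 100 * DAY]:
    print(f"t = {t/DAY:.0f} d: L_bol ~ {L_bol(t):.2e} erg/s")
```

The slowest, densest layers here have $t_{\rm th}$ of hundreds of days, so they thermalize efficiently out to $t\sim 100$ d, consistent with the behavior described in the text.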
We find that the slow ejecta from the late-time evaporation of the disk contributes $\sim\!10\%$ of the bolometric luminosity at very late time $t\gtrsim 100\rm\, d$. Such a small contribution is difficult to identify photometrically. The \textit{Spitzer} mid-infrared (mean wavelength $\lambda\approx 4.5\rm\,\mu m$) detections of the GW170817 kilonova at $t=43$ and $74\rm\, d$ \citep{kasliwal22_spitzer_GW170817} are almost certainly driven by the dominant fast ejecta, since the IRAC Channel 2 photometric band is wide ($\Delta \lambda \sim 1\rm\,\mu m$). However, the contribution from the slow $v\sim 0.01 c$ ejecta could be disentangled via spectroscopy. For a 10 ks exposure time, JWST low-resolution ($\lambda/\Delta\lambda \sim 100$) fixed-slit spectroscopy using NIRSpec with the PRISM disperser and CLEAR filter (0.6--5.3$\rm \mu m$) has a $5\sigma$ line flux sensitivity\footnote{This is for $\rm SNR=5$ at the line center, based on the JWST Exposure Time Calculator at \href{https://jwst.etc.stsci.edu}{https://jwst.etc.stsci.edu}} in this wavelength range of $3\times10^{-18}\rm\, erg\, s^{-1}\, cm^{-2}$ for a narrow line of width $\Delta v = 0.01c$ and $2\times10^{-17}\rm\, erg\, s^{-1}\, cm^{-2}$ for a broad line of width $\Delta v = 0.1c$. These line flux limits are shown as horizontal dotted lines in Fig. \ref{fig:Lbol} for a source at a distance of 100 Mpc. At time $15\lesssim t\lesssim 30\rm\, d$, the faster ejecta is optically thin, and this allows the observer to see the emission from the slower ejecta in the deep interior. It is possible to identify narrow absorption lines (NALs) at wavelengths where the flux contribution from the broad emission lines from the faster ejecta is minimized (so as to reduce the dilution of the NALs). In addition to JWST, the slower ejecta's photospheric continuum at $\lambda\lesssim 2\rm\, \mu m$ will also be accessible to ground-based spectroscopic observations by future Extremely Large Telescopes. 
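The quoted line-flux sensitivities translate into luminosity limits via $L = 4\pi d^2 F$; the short sketch below performs this conversion for the fiducial 100 Mpc distance used in the text:

```python
import numpy as np

MPC = 3.086e24   # cm
d = 100 * MPC    # fiducial source distance from the text

# 5-sigma line-flux sensitivities quoted above (10 ks, NIRSpec PRISM)
F_narrow = 3e-18   # erg/s/cm^2, line width 0.01c
F_broad = 2e-17    # erg/s/cm^2, line width 0.1c

for label, F in [("narrow (0.01c)", F_narrow), ("broad (0.1c)", F_broad)]:
    L = 4 * np.pi * d**2 * F     # minimum detectable line luminosity
    print(f"{label}: detectable line luminosity L > {L:.1e} erg/s")
```

These limits (a few $\times 10^{36}$ and $\times 10^{37}\rm\, erg\,s^{-1}$) are the horizontal dotted lines of Fig. \ref{fig:Lbol}, to be compared against the slow-ejecta contribution to $L_{\rm bol}$.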
At later time $t\gtrsim 30\rm\, d$, the slower ejecta becomes optically thin as well, and since the emission shifts to longer wavelengths, space-based observations are required. The late-time spectrum is expected to show narrow emission lines (NELs) whose strengths are not affected by broad emission lines from the faster ejecta. Identification of atomic lines will provide a powerful diagnostic of the chemical composition of the ejecta \citep[see][for potential identification of Strontium in the early kilonova spectra]{watson19_strontium_line}. Finally, we note that, even without the late-time disk evaporation, the early ejecta (launched at $t\lesssim\,$a few seconds) may already contain a slow tail at $v\ll 0.1c$ as a result of energy exchange between fluid elements due to pressure gradients, which may produce a power-law instead of an exponential cut-off at the low-velocity end of the velocity distribution in eq. (\ref{eq:dMdlgv}). In this case, the total mass of low-velocity ejecta near $v\sim 0.01c$ would be even higher than considered in this section and the predicted late-time narrow line features would be more prominent. \section{Discussion --- Constraints on Two-component Jet}\label{sec:discussion} Our model predicts that the relativistic jet from a NS merger has two components: one is launched in the first $\mc{O}(1)\rm\, s$ and produces the prompt $\gamma$-ray emission; the other one is produced by long-lived BH accretion which lasts for $\mc{O}(10^2)\rm\, s$ and is responsible for the EE/X-ray plateau. 
These two components should in principle have different opening angles $\Omega_{\gamma}$ and $\Omega_{\rm EE}$, although it is currently difficult to predict these two values from our model and hence they can only be constrained by observations --- observers at different viewing angles would see a diverse set of phenomena in the $\gamma$-ray or hard X-ray band at early time (when the jet undergoes internal dissipation) and in the broad-band afterglow emission at late time (when the jet interacts with the interstellar medium). In this section, we discuss the constraints on our model from the observed fluence (or isotropic-equivalent energy) ratio between the EE and the prompt $\gamma$-ray emission in the \swift BAT band (15--150 keV), which spans a wide range from $\mc{F}_{\rm EE}/\mc{F}_{\gamma}\lesssim 0.1$ to $\sim\!30$ \citep[see Fig. 4 of][]{perley09_sGRB_EE}. This wide range is perhaps not surprising, as there are a number of highly uncertain factors that could cause variations of this ratio from burst to burst. Assuming that our line of sight is inside the beaming cones of both the EE and the prompt $\gamma$-ray emission, we write \begin{equation}\label{eq:fluence_ratio} {\mc{F}_{\rm EE}\over \mc{F}_{\gamma}} = {\epsilon_{\rm EE}\over \epsilon_{\gamma}} {\Omega_{\gamma}\over \Omega_{\rm EE}} {L_{\rm j, EE}\over L_{\rm j,\gamma}} {T_{\rm EE} \over T_\gamma}, \end{equation} where $L_{\mr{j},i}$, $T_{i}$, $\Omega_i$ and $\epsilon_i$ are the jet luminosity, duration, beaming solid angle, and radiative efficiency in the \swift BAT band for the two components ($i=\mr{EE}$ or $\gamma$). Apart from $T_{\rm EE}/T_\gamma\sim 10^2$, we briefly discuss the other factors in the following. Compared to the prompt emission, the EE usually has a softer spectrum such that a larger fraction of the energy may be emitted below $\sim\! 20\rm\,keV$ where \swift BAT loses effective area significantly, so we might expect $\epsilon_{\rm EE}/\epsilon_\gamma < 1$. 
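As a quick numerical illustration of eq. (\ref{eq:fluence_ratio}), the snippet below evaluates the fluence ratio for a few assumed combinations of the four factors (the specific values are ours, chosen only to show that plausible variations span the observed range from $\lesssim 0.1$ to $\sim\!30$):

```python
# Illustrative evaluation of the fluence ratio; all factors other than
# T_EE/T_gamma ~ 100 are assumed values, not fits to any burst.
T_ratio = 1e2          # T_EE / T_gamma
for eps_ratio, omega_ratio, L_ratio in [(0.3, 0.1, 0.01),   # weak EE
                                        (1.0, 0.5, 0.1),    # moderate
                                        (1.0, 1.0, 0.3)]:   # strong EE
    F_ratio = eps_ratio * omega_ratio * L_ratio * T_ratio
    print(f"F_EE/F_gamma = {F_ratio:g}")   # 0.03, 5, 30
```

Even with the fixed $T_{\rm EE}/T_\gamma\sim 10^2$, modest burst-to-burst changes in the other three factors reproduce the observed spread.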
By modeling the afterglow emission, one can constrain the total isotropic equivalent kinetic energy $E_{\rm K,iso}$, provided that the magnetization of the shocked emitting region is known (by measuring the synchrotron cooling frequency). Under the \textit{assumption} that the jet responsible for the prompt emission dominates $E_{\rm K,iso}$, previous works obtained $\epsilon_\gamma\sim 0.1$ for some GRBs with bright afterglows \citep{beniamini15_prompt_efficiency, beniamini16_prompt_efficiency}, but since the relative fractional contributions from the EE and prompt emission to $E_{\rm K,iso}$ are currently unknown, this exercise does not strongly constrain the ratio $\epsilon_{\rm EE}/\epsilon_\gamma$ \citep[but see][for a model-dependent constraint]{matsumoto20_EE_efficiency}. Observational constraints on $\Omega_{\gamma}$ can be obtained as follows. Under the (perhaps reasonable) assumption that each NS merger generates a short GRB, one can compare the NS merger rate of $10^2$--$10^3\rm\, Gpc^{-3}\, yr^{-1}$ \citep{abbott21_LIGO_merger_rate} to the on-axis short GRB rate of $2$--$6\rm\, Gpc^{-3}\,yr^{-1}$ \citep[obtained by modeling their luminosity function, e.g.,][]{wanderman15_sGRB_luminosity_function} --- this roughly gives $\Omega_\gamma/4\pi \sim 10^{-2}$ but with large uncertainties. On the other hand, one can measure the jet opening angle $\theta_{\rm j}$ by searching for the ``jet break'' signature \citep{sari99_jet_lateral_expansion} in the multiband afterglow lightcurves. However, despite several likely detections, many short GRBs do not show a jet break before the afterglow fades below the detection threshold. \citet{berger14_sGRB_review} reviewed these observations and concluded that the mean opening angle is $\lara{\theta_{\rm j}}\gtrsim 10^{\rm o}$, corresponding to $\Omega_{\rm j}/4\pi \gtrsim 1.5\times10^{-2}$. 
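The two beaming-fraction estimates above are simple enough to verify directly: the rate-based estimate is the ratio of the on-axis short GRB rate to the NS merger rate, and the jet-break estimate follows from the solid-angle fraction $\Omega_{\rm j}/4\pi = 1-\cos\theta_{\rm j}$ of a double-sided cone:

```python
import numpy as np

# Rate-based estimate: f_b = (on-axis sGRB rate) / (NS merger rate)
R_sgrb = np.array([2.0, 6.0])      # Gpc^-3 yr^-1 (luminosity-function fits)
R_merger = np.array([1e3, 1e2])    # Gpc^-3 yr^-1 (bracketing the LIGO range)
print("Omega_gamma/4pi ~", R_sgrb / R_merger)   # brackets ~1e-2

# Jet-break estimate: double-sided jet of half-opening angle theta_j
# covers a solid-angle fraction 1 - cos(theta_j)
theta_j = np.radians(10.0)         # mean lower limit from Berger (2014)
print(f"Omega_j/4pi >~ {1 - np.cos(theta_j):.1e}")
```

The second print reproduces the quoted $\Omega_{\rm j}/4\pi\gtrsim 1.5\times10^{-2}$, and the rate ratio brackets $\sim\!10^{-2}$, consistent with the text.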
Here, the total jet solid angle may be understood as $\Omega_{\rm j} = \max(\Omega_\gamma, \Omega_{\rm EE})$ (although this depends on the isotropic-equivalent energies of the two components). We conclude that the case of $\Omega_{\rm EE} > \Omega_\gamma$ is allowed by current observations\footnote{\citet{barkov11_MAD_jet_EE} proposed a two-component jet model with $\Omega_\gamma > \Omega_{\rm EE}$ (their Fig. 3), where the prompt $\gamma$-ray jet component is produced by the neutrino annihilation mechanism (with a wider beaming angle) and the EE jet component is produced by the Blandford-Znajek process (with a narrower beaming angle). In this paper, we present arguments suggesting the opposite --- $\Omega_{\rm EE}>\Omega_\gamma$. This can be tested by the data from future $\gamma$/X-ray surveys on the rate of short GRBs and EEs (including ``orphan EEs''). } and even favored by some short GRBs lacking a jet break up to several weeks after the burst \citep[e.g.,][]{grupe06_no_jet_break, berger13_no_jet_break}. Recently, \citet{laskar22_late_jet_break} reported the detection of a very late jet break from the short GRB 211106A at observer's time $\sim\!30\rm\, d$, which implies $\theta_{\rm j}\sim 15^{\rm o}$ or $\Omega_{\rm j}/4\pi\sim 0.03$. The afterglow emission from a two-component jet model has been considered by \citet{huang04_2component_jet, peng05_2component_jet} who show that such a model explains the late-time afterglow data in many GRBs better than a single-component jet. The picture consists of two jet components with (true) kinetic energies $E_1$ and $E_2$ and opening angles $\theta_1 <\theta_2$ --- the second component usually has a lower isotropic equivalent energy $E_2/\theta_2^2 < E_1/\theta_1^2$. 
For an observer on the jet axis with viewing angle $\theta_{\rm obs}<\theta_1$, the wider jet component with lower isotropic equivalent energy can dominate the afterglow emission after the jet-break time of the narrower component \citep{peng05_2component_jet}. For an observer far from the jet axis $\theta_{\rm obs}>\theta_2$, as is the case for GW170817 \citep{mooley18_VLBI_proper_motion}, the wider jet component decelerates faster and its emission comes into our view earlier than that of the narrower jet component. The afterglow emission from component $i=1, 2$ (at a given observer's frequency of e.g., 10 GHz) reaches a peak flux of $F_{\mr{p}, i}\propto E_i^{(p+1)/4}$ at time $t_{\mr{p}, i}\propto E_i^{1/3} (\theta_{\rm obs}-\theta_i)^2$ \citep{nakar02_offaxis_afterglow, gottlieb19_offaxis_afterglow}. We see that the emission from the wider jet component likely peaks only slightly earlier than that from the narrower jet --- $t_{\rm p,2}/t_{\rm p,1}$ is of order unity due to the weak dependence on $E_i$. However, the peak flux $F_{\rm p}$ has a strong dependence on the jet energy --- and we find that the afterglow data of GW170817 \citep{margutti21_GW170817_ARAA} cannot rule out a two-component jet with $E_2/E_1\lesssim 1/3$. The intriguing possibility of $\Omega_{\rm EE} > \Omega_\gamma$ allows for the detection of ``orphan EE'' without a prompt $\gamma$-ray spike (in this case $\mc{F}_{\rm EE}/\mc{F}_{\gamma}\rightarrow \infty$). Some supernova-less apparently long-duration GRBs \citep{gehrels06_GRB060614, rastinejad22_GRB211211A, yang22_GRB211211A} may indeed be such cases, although the rate of these events is highly uncertain. Other possible cases are the so-called X-ray Flashes (XRFs), which have much softer spectra than GRBs \citep{sakamoto05_XRF}. 
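The peak-time and peak-flux scalings quoted above can be made concrete for an off-axis observer; the angles, energy ratio, and electron index below are illustrative assumptions (not fits to GW170817):

```python
# Off-axis afterglow peak scalings: t_p ∝ E^{1/3} (θ_obs - θ_j)^2 and
# F_p ∝ E^{(p+1)/4}.  All numbers are illustrative assumptions.
p = 2.2                  # electron power-law index
E2_over_E1 = 1.0 / 3.0   # wide-to-narrow kinetic energy ratio
theta_obs, theta1, theta2 = 30.0, 5.0, 10.0   # degrees

t_ratio = E2_over_E1**(1.0 / 3.0) * ((theta_obs - theta2) /
                                     (theta_obs - theta1))**2
F_ratio = E2_over_E1**((p + 1.0) / 4.0)
print(f"t_p2/t_p1 ~ {t_ratio:.2f}")   # of order unity
print(f"F_p2/F_p1 ~ {F_ratio:.2f}")
```

For these numbers the wider component peaks only a factor of $\sim\!2$ earlier, while its peak flux is suppressed by $\sim\!(1/3)^{0.8}\approx 0.4$, illustrating why a subdominant second component is hard to exclude in the data.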
Furthermore, the population of ultra-long GRBs with durations of $10^3$--$10^4\rm\,s$ \citep{gendre13_ULGRB_111209A, levan14_ULGRB, greiner15_ULGRB_111209A} may be the ``orphan EE'' from normal long GRBs from standard collapsars (without requiring an extended blue supergiant). Since \swift BAT is less likely to trigger on faint-long events than on bright-short ones and is not sensitive below $\sim\!20\rm\, keV$, it does not provide a very strong constraint on $\Omega_{\rm EE}$ by itself. Nevertheless, the case of $\Omega_{\rm EE}/4\pi \gtrsim 0.1$ can be ruled out by current non-detections, because that would produce $\gtrsim 1$ event within 400 Mpc every year in the $1.4\rm\,sr$ field of view of \swift BAT (for a NS merger rate of $300 \rm\, Gpc^{-3} \,yr^{-1}$). For a modest isotropic equivalent energy of $10^{50}\rm\, erg$, each of these events would produce a fluence of $5\times 10^{-6}\rm\, erg\,cm^{-2}$, which is easily detectable by BAT \citep[see Fig. 11 of][]{levan14_ULGRB}. A physical reason that may lead to $\Omega_{\rm EE} > \Omega_\gamma$ is as follows. In NS-NS mergers, there is a significant amount of baryonic material (e.g., the dynamical ejecta) near the rotational axis of the system. Thus, the prompt $\gamma$-ray jet must push its way through the surrounding gas --- a high-pressure cocoon surrounding the jet is produced in this process and the cocoon provides additional collimation for the early jet \citep{ramirez-ruiz02_cocoon, bromberg11_cocoon, nagakura14_jet_collimation, duffel18_jet_propagation, geng19_HDjet_propagation, gottlieb21_HDjet_cocoon, gottlieb22_MHDjet_cocoon}. In the case of a successful jet break-out, the dynamical ejecta gas near the rotational axis is evacuated as a result of the lateral expansion of the shocked gas. Then, the late EE jet is likely less collimated due to a weaker cocoon (though the late-time disk wind can still provide some collimation). 
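The rate argument above is a one-line volume-times-rate estimate; the sketch below reproduces it with the numbers quoted in the text:

```python
import numpy as np

R_merger = 300.0                 # NS merger rate, Gpc^-3 yr^-1
d = 0.4                          # horizon distance, Gpc (400 Mpc)
fov = 1.4 / (4 * np.pi)          # Swift BAT field-of-view fraction
f_beam = 0.1                     # assumed Omega_EE / 4pi to be tested

V = 4.0 / 3.0 * np.pi * d**3                 # Gpc^3 within 400 Mpc
rate = R_merger * V * fov * f_beam           # detectable orphan EEs / yr
print(f"expected BAT orphan-EE rate ~ {rate:.1f} per year")
```

This gives $\sim\!1$ event per year for $\Omega_{\rm EE}/4\pi = 0.1$, so the absence of such detections disfavors beaming fractions this large, as stated in the text.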
Detailed hydrodynamic simulations including a long-lived jet component are needed to provide a better prediction on $\Omega_\gamma$ and $\Omega_{\rm EE}$. In this scenario, it is likely that NS-BH mergers will have more similar $\Omega_\gamma$ and $\Omega_{\rm EE}$, because they lack the polar ejecta that provides most of the confining pressure in NS-NS mergers. Finally, the jet luminosity ratio $L_{\rm j, EE}/L_{\rm j, \gamma}$ likely differs substantially from source to source due to the variation in disk mass $M_{\rm d,0}$ at the end of nuclear recombination. The global GRMHD simulations by \cite{siegel18_GRMHD_disk, fernandez19_long_GRMHD} show that of the order of 1\%--10\% of the disk mass right after the merger stays bound at the end of helium recombination. This suggests that the disk mass ratio alone would give $L_{\rm j, EE}/L_{\rm j, \gamma}\sim 0.01$ to $0.1$. The other factors in eq. (\ref{eq:LBZ}), $\PhiBH/\Phid$ and $\xi P_{\rm B}/P$ (which depend on poorly understood MHD processes), may also cause fluctuations in the jet luminosity ratio. Based on the above considerations, we conclude that currently existing observations are consistent with the scenario that the observed EE is powered by a long-lived accretion disk with mass $M_{\rm d,0}\in (10^{-3},10^{-2})\msun$ that remains after nuclear recombination. The short GRBs without EE detections may be due to a softer spectrum that falls below the energy band of \swift BAT, a wide beaming angle $\Omega_{\rm EE}$ that dilutes the EE flux, a very low leftover mass $M_{\rm d,0}$, inefficient accumulation of magnetic flux at the BH horizon, or a combination of these factors. Fortunately, the ECLAIRs telescope onboard the SVOM mission is sensitive down to 4 keV \citep{godet14_ECLAIRS} and will likely detect a large sample of EEs either accompanied by a short GRB or without a prompt $\gamma$-ray spike (i.e. ``orphan EE'', provided that $\Omega_{\rm EE}>\Omega_\gamma$). 
Other wide-field X-ray surveys like Einstein Probe \citep{yuan15_Einstein_Probe} and THESEUS \citep{cielfi21_Theseus} will also detect many EEs. These instruments will provide more accurate measurements of the EE spectrum, a better understanding of the statistical distribution of $\mc{F}_{\rm EE}/\mc{F}_{\gamma}$, as well as a tighter constraint on $\Omega_{\rm EE}$ in the near future. \section{Summary}\label{sec:summary} This work is motivated by (i) the need to understand the long-term ($t\gg 10\rm\, s$) evolution of the accretion disk in NS mergers, (ii) the puzzling observations of many short GRBs showing extended emission (EE) and X-ray plateau lasting for $t\sim 10^2\rm\, s$ followed by a sharp drop in $\gamma/$X-ray flux, (iii) future JWST spectroscopic observations of kilonovae especially at late time ($t\gtrsim 15\rm\, d$). We construct a model for the late-time disk evolution taking into account the radioactive heating by r-process nuclei formed earlier in the nuclear recombination phase (the first few seconds), under the assumption that a fraction (1\% to 10\%) of the initial disk mass is left bound at the end of nucleosynthesis. As the disk viscously spreads in radius, its binding energy $\propto t^{-2/3}$ drops faster than the cumulative heating by radioactive decays $\propto t^{-0.3}$ (provided that $\Ye\lesssim 0.4$ right before r-process nucleosynthesis). There is a critical time $\tevap$ when the disk material is overheated to become unbound and hence will quickly evaporate. We propose that the jet power rapidly drops and hence the EE and X-ray plateau ends abruptly at the evaporation time $\tevap$. 
Our crude semi-analytic disk evolution model predicts $\tevap\sim 10^2\rm\, s (\alpha/0.1)^{-1.8} (M/2.6\,\msun)^{1.8}$ in the scenario without cooling by the disk wind (\S\ref{sec:without_wind}), and for the case with wind cooling (\S\ref{sec:with_wind}), an evaporation time of $10^2\rm\, s$ is obtained at a critical Bernoulli number of $B_{\rm e,0}=0.1$ (which determines the wind cooling rate by eq. \ref{eq:Be0}) and $\alpha=0.1$. Long GRBs from the collapse of massive stars likely produce more massive BHs and hence their X-ray plateaus should last longer ($\tevap\sim 10^4\rm\, s$ requires $M\sim 30\,\msun$). However, the strong dependence of $\tevap$ on $\alpha$ and $B_{\rm e,0}$ means our model needs to be carefully calibrated against GRMHD simulations in order to be strongly predictive. We also propose a new scenario for the jet power that can potentially explain the shallow decay in the lightcurves of the EE and X-ray plateau. Recent simulations by \citet{liska20_disk_dynamo} showed that the $\alpha\omega$-dynamo operating in the outer disk produces poloidal B-fields that are coherent on the lengthscale of the vertical pressure scaleheight. We further demonstrate that if the magnetic pressure stays at a fixed fraction of the total pressure over the viscous evolution of the disk, then the total poloidal magnetic flux in the outer disk scales with the disk mass as $\Phid\propto \Md^{1/2}$. This motivates us to consider a jet launched by the \citet{blandford77_BZjet} mechanism (based on frame-dragging of the B-field lines threading the horizon) with a total power that scales as $L_{\rm BZ}\propto \Phid^2\propto \Md$. 
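The evaporation-time scaling quoted above is easy to evaluate for different central masses; the helper below implements the no-wind scaling and compares a NS-merger remnant to a collapsar-mass BH:

```python
def t_evap(alpha, M):
    """Evaporation time in seconds from the no-wind scaling quoted in the
    text: t_evap ~ 1e2 s (alpha/0.1)^-1.8 (M / 2.6 Msun)^1.8."""
    return 1e2 * (alpha / 0.1)**-1.8 * (M / 2.6)**1.8

# NS-NS merger remnant vs a collapsar-mass black hole, both at alpha = 0.1
print(f"M = 2.6 Msun: t_evap ~ {t_evap(0.1, 2.6):.1e} s")
print(f"M = 30  Msun: t_evap ~ {t_evap(0.1, 30.0):.1e} s")
```

For $M=30\msun$ this gives $\sim\!10^4$ s, consistent with the statement that collapsar-mass BHs should show correspondingly longer X-ray plateaus; the steep $\alpha^{-1.8}$ dependence also makes clear why the calibration against GRMHD simulations mentioned above matters.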
Before the radioactively driven evaporation, the disk mass evolves slowly with time, $\Md\propto t^{-1/3}$, so this produces a slowly evolving jet power $L_{\rm BZ}\propto t^{-1/3}$ that is roughly consistent with observations, provided that the leftover disk mass at the end of the nuclear recombination is roughly in the range of $10^{-3}\lesssim M_{\rm d,0} \lesssim 10^{-2}\msun$. However, the leftover disk mass $M_{\rm d,0}$ at the end of r-process nucleosynthesis is currently rather uncertain. This uncertainty can be reduced by future simulations that dynamically couple MHD with nucleosynthesis beyond He. Making further predictions for the $\gamma$/X-ray lightcurves requires a better understanding of the processes that determine the magnetic flux threading the BH horizon as well as the beaming angle and X-ray emission mechanism of jets. Existing afterglow observations suggest that the beaming angle of the EE may be wider than that of the prompt $\gamma$-ray emission. We propose a possible physical reason for this: the earlier prompt jet is strongly collimated by the cocoon pressure, whereas the EE jet propagates in the cavity produced by the expanding cocoon and is hence less collimated. This means that ``orphan EEs'' (without prompt $\gamma$-rays) may be promising electromagnetic counterparts to NS-NS or NS-BH mergers, and they are observable by future wide-field X-ray surveys, e.g., ECLAIRs \citep{godet14_ECLAIRS}, Einstein Probe \citep{yuan15_Einstein_Probe}, and THESEUS \citep{cielfi21_Theseus}. Possible existing examples of orphan EEs are the supernova-less apparently long-duration ($10^2\rm\,s$) GRBs whose ``prompt emission'' properties are similar to EE. Further potential support for our two-component jet model is provided by the population of ultra-long GRBs whose ``prompt emission'' lasts for $10^4\rm\,s$ --- these could be orphan EEs from normal long GRBs from collapsars. 
Alternative scenarios for the EE/X-ray plateau are based on the spindown of a supra-massive NS or fallback accretion. These models, as well as our proposal here, all have large theoretical uncertainties. A major difference is that the energy release in the EE phase is modest, $\lesssim 10^{51}\rm\, erg$, in our model based on long-lived disk accretion, whereas we expect a typical energy injection of $\gtrsim 10^{52}\rm\, erg$ in the supra-massive NS model. A larger energy injection produces much brighter afterglow emission when the kilonova ejecta is decelerated by the surrounding medium. Another prediction of our model is that short GRBs due to BH-NS mergers should also show an EE/X-ray plateau followed by a rapid decline, and since the BH is likely more massive ($M\gtrsim 5\msun$), the plateau lasts longer ($\gtrsim 300$ seconds) than in the NS-NS merger case. These predictions are testable by future multi-messenger observations of BH-NS and NS-NS mergers. Finally, the late-time disk evaporation produces a tail of very slow ($\sim\!0.01c$) ejecta following the faster ($\gtrsim 0.1c$) ejecta generated by the tidal disruption and the early disk wind. We show that the late/slow ejecta, due to its higher density, can efficiently thermalize the energy injection from $\beta$-decay electrons (as well as the nuclear fragments from $\alpha$-decay and fission) up to $t\sim 100\rm\, d$, which increases its contribution to the bolometric luminosity to of order $10\%$ at these late epochs. We predict that JWST NIRSpec observations of nearby ($\lesssim 100\rm\, Mpc$) NS mergers will be able to detect narrow ($\Delta v\sim 0.01c$) line features a few weeks after the merger. This will potentially provide a powerful probe of the detailed chemical composition of the r-process enriched ejecta (which is otherwise much harder to achieve if all lines are very broad and significantly overlap with each other). 
\section*{Acknowledgments} We thank Brian Metzger for suggesting looking into possible late-time line features in kilonovae. We thank Paz Beniamini for helpful discussion on the afterglow emission from two-component jets. We also thank Dan Kasen, Jim Stone, Todd Thompson, Bradley Meyer, Adam Burrows, Andrei Beloborodov, Matt Coleman, Bing Zhang, Pawan Kumar, Ryan Chornock, and Raf Margutti for useful discussions. WL was supported by the Lyman Spitzer, Jr. Fellowship at Princeton University. EQ was supported in part by a Simons Investigator grant from the Simons Foundation. \section*{Data Availability} The data underlying this article will be shared on reasonable request to the corresponding author. {\small \bibliographystyle{mnras} \bibliography{refs} } \appendix \section{Radioactive Heating}\label{sec:heating_rate} Past numerical studies \citep[e.g.,][]{metzger09_Ye_freeze, siegel18_GRMHD_disk, fernandez19_long_GRMHD} showed that as the post-NS-merger disk evolves with time, the electron fraction of the disk material freezes at a moderately low value $\Ye\lesssim 0.4$. Observations of the kilonova from GW170817 suggest that the majority of the ejecta mass likely comes from a neutron-rich ($\Ye\lesssim 0.25$, as required by the long-lasting red kilonova emission component) accretion disk outflow \citep[][and refs therein]{metzger19_kilonova_review}. We model the nucleosynthesis \textit{inside} the accretion disk using $\mathtt{SkyNet}$ \citep{lippuner17_skynet}. The system is initially in nuclear statistical equilibrium (NSE) at temperature $T_0=6\rm\, GK$ and specific entropy $s_0\in (8, 350)\kB/\mp$ (in units of the Boltzmann constant per proton mass). The specific entropy determines the initial density (for a radiation pressure-dominated gas, we have $s_0\propto T^3/\rho$) and the range of $s_0$ considered here spans from (electron-)degenerate to non-degenerate gas \citep[see Fig. 14 of][]{fernandez19_long_GRMHD}. 
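The mapping from specific entropy to initial density can be sketched explicitly for the radiation-dominated limit, $s = (4/3)\, a T^3/(\rho\, \kB/\mp)$. This ignores electron degeneracy and is therefore only indicative at the low-$s_0$ end of the range, where the gas is degenerate:

```python
A_RAD = 7.566e-15   # radiation constant, erg cm^-3 K^-4
KB = 1.381e-16      # Boltzmann constant, erg/K
MP = 1.673e-24      # proton mass, g
T0 = 6e9            # initial NSE temperature, 6 GK

def rho_from_entropy(s0):
    """Initial density (g/cm^3) for a radiation-dominated gas of specific
    entropy s0 in units of kB per baryon; degeneracy corrections are
    neglected, so low-s0 values are only indicative."""
    return 4.0 * A_RAD * T0**3 / (3.0 * s0 * KB / MP)

for s0 in (8, 350):
    print(f"s0 = {s0:3d} kB/mp -> rho0 ~ {rho_from_entropy(s0):.1e} g/cm^3")
```

The endpoints of the quoted entropy range thus correspond to initial densities differing by almost two orders of magnitude, spanning the degenerate-to-non-degenerate transition.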
We follow the adiabatic evolution of the system as it undergoes expansion on a timescale of $\tau=0.1$ to $1\rm\, s$. The results are shown in Figs. \ref{fig:heating_rates} (time-dependent heating rate) and \ref{fig:yfinal} (composition at $t=10^9\rm\,s$). We find that the heating rates in the time range $10\lesssim t\lesssim 10^4\rm\, s$ do not depend sensitively on the initial conditions, provided that the initial $\Ye\lesssim 0.4$. The (neutrino-subtracted) heating rate can be approximated by eq. (\ref{eq:qh}). We also find that for a longer expansion time $\tau$ (and hence neutron captures occurring on a longer timescale), fewer Lanthanides and Actinides are produced. \label{lastpage}
Title: Reheating with Effective Potentials
Abstract: We consider reheating for a charged inflaton which is minimally coupled to electromagnetism. The evolution of such an inflaton induces a time-dependent mass for the photon. We show how the massive photon propagator can be expressed as a spatial Fourier mode sum involving three different sorts of mode functions, just like the constant mass case. We develop accurate analytic approximations for these mode functions, and use them to approximate the effective force exerted on the inflaton $0$-mode. This effective force allows one to simply compute the evolution of the inflaton $0$-mode and to follow the progress of reheating.
https://export.arxiv.org/pdf/2208.11146
\begin{titlepage} \begin{flushright} UFIFT-QG-22-02 \end{flushright} \vskip 2.5cm \begin{center} {\bf Reheating with Effective Potentials} \end{center} \vskip 1cm \begin{center} S. Katuwal$^{1*}$, S. P. Miao$^{2\star}$ and R. P. Woodard$^{1\dagger}$ \end{center} \begin{center} \it{$^{1}$ Department of Physics, University of Florida,\\ Gainesville, FL 32611, UNITED STATES} \end{center} \begin{center} \it{$^{2}$ Department of Physics, National Cheng Kung University, \\ No. 1 University Road, Tainan City 70101, TAIWAN} \end{center} \vspace{1cm} \begin{center} ABSTRACT \end{center} We consider reheating for a charged inflaton which is minimally coupled to electromagnetism. The evolution of such an inflaton induces a time-dependent mass for the photon. We show how the massive photon propagator can be expressed as a spatial Fourier mode sum involving three different sorts of mode functions, just like the constant mass case. We develop accurate analytic approximations for these mode functions, and use them to approximate the effective force exerted on the inflaton $0$-mode. This effective force allows one to simply compute the evolution of the inflaton $0$-mode and to follow the progress of reheating. \begin{flushleft} PACS numbers: 04.50.Kd, 95.35.+d, 98.62.-g \end{flushleft} \vspace{2cm} \begin{flushleft} $^{*}$ e-mail: sanjib.katuwal@ufl.edu \\ $^{\star}$ email: spmiao5@mail.ncku.edu.tw \\ $^{\dagger}$ e-mail: woodard@phys.ufl.edu \end{flushleft} \end{titlepage} \section{Introduction} Scalar-driven inflation is supported by the slow roll of the inflaton down its potential. At the end of inflation the inflaton begins oscillating, and its kinetic energy is transferred to ordinary matter during the process of reheating. The efficiency of this transfer obviously depends on the way the inflaton is coupled to ordinary matter. Ema et al. have shown that the most efficient coupling is that of a charged inflaton to electromagnetism \cite{Ema:2016dny}. 
What happens is that the evolution of a charged inflaton induces a time-dependent photon mass which oscillates around zero during reheating. The temporal and longitudinal components of the photon diverge as the mass goes to zero, which makes reheating very efficient. The process has been previously studied by discretizing space, carrying out a finite Fourier transform, and then numerically evolving the nonlinear system of the inflaton plus electromagnetism \cite{Bezrukov:2020txg}. However, the energy transfer is broadly distributed over so many modes that there is little point in including nonlinear effects in the photon field, provided that its response to the inflaton $0$-mode is known to all orders. In that case, one merely sums the contribution from each photon mode's wave vector, which can be accomplished by varying the inflaton effective potential. The goal of this paper is to develop a good analytic approximation for the massive photon propagator in a time-dependent inflaton background, and then use it to compute the quantum-induced, effective force in the equation for the inflaton $0$-mode. In this way reheating can be studied by numerically solving a nonlocal equation for the inflaton $0$-mode. This paper consists of five sections, of which the first is this Introduction. In section 2 we derive a spatial Fourier mode sum for the massive photon propagator which is valid when the mass becomes time-dependent. Section 3 develops analytic approximations for the temporal and longitudinal modes, checking them against explicit numerical analysis for a simple model of inflation. In section 4 we discuss how these approximations can be used to estimate the quantum-induced effective force which controls the process of reheating. Section 5 gives our conclusions. 
\section{The Massive Photon Propagator} The purpose of this section is to generalize the massive photon propagator from its known form for a constant mass \cite{Katuwal:2021kry} to the case of a time-dependent mass. The Lagrangian is, \begin{eqnarray} \lefteqn{\mathcal{L} = -\frac14 F_{\mu\nu} F_{\rho\sigma} g^{\mu\rho} g^{\nu\sigma} \sqrt{-g} } \nonumber \\ & & \hspace{2cm} - \Bigl( \partial_{\mu} \!-\! i q A_{\mu}\Bigr) \varphi \Bigl( \partial_{\nu} \!+\! i q A_{\nu} \Bigr) \varphi^* g^{\mu\nu} \sqrt{-g} - V(\varphi \varphi^*) \sqrt{-g} \; , \qquad \label{Lagrangian} \end{eqnarray} where $\varphi$ is the inflaton and $F_{\mu\nu} \equiv \partial_{\mu} A_{\nu} - \partial_{\nu} A_{\mu}$ is the electromagnetic field strength. We work on a general homogeneous, isotropic and spatially flat geometry in $D$-dimensional, conformal coordinates, with Hubble parameter $H$ and first slow roll parameter $\epsilon$, \begin{equation} ds^2 = a^2 \Bigl[-d\eta^2 + d\vec{x} \!\cdot\! d\vec{x}\Bigr] \qquad , \qquad H \equiv \frac{\partial_0 a}{a^2} \quad , \quad \epsilon \equiv - \frac{\partial_0 H}{a H^2} \; . \label{geometry} \end{equation} The section first reviews the constant mass case, and then makes the generalizations necessary to incorporate a time-dependent mass. \subsection{Constant Mass} When the photon's mass is constant its propagator $i[\mbox{}_{\mu} \Delta_{\rho}](x;x')$ is transverse, \begin{equation} \partial_{\mu} \Bigl\{ \!\sqrt{-g(x)} \, g^{\mu\nu}(x) \, i\Bigl[\mbox{}_{\nu} \Delta_{\rho}\Bigr](x;x')\Bigr\} = 0 = \partial'_{\rho} \Bigl\{\! \sqrt{-g(x')} \, g^{\rho\sigma}(x') \, i\Bigl[\mbox{}_{\mu} \Delta_{\sigma}\Bigr](x;x') \Bigr\} . 
\label{transverse} \end{equation} Its propagator equation reflects this transversality \cite{Katuwal:2021kry, Tsamis:2006gj}, \begin{eqnarray} \lefteqn{ \sqrt{-g} \Bigl[ \square^{\mu\nu} - R^{\mu\nu} - M^2 g^{\mu\nu}\Bigr] i \Bigl[\mbox{}_{\nu} \Delta_{\rho}\Bigr](x;x') } \nonumber \\ & & \hspace{3.5cm} = \delta^{\mu}_{~\rho} i\delta^D(x \!-\! x') + \sqrt{-g(x)} \, g^{\mu\nu}(x) \partial_{\nu} \partial'_{\rho} i\Delta(x;x') \; . \qquad \label{constMprop} \end{eqnarray} Here $\square^{\mu\nu}$ is the vector d'Alembertian, $R^{\mu\nu}$ is the Ricci tensor and $i\Delta(x;x')$ is the propagator of a massless, minimally coupled scalar, \begin{equation} \partial_{\mu} \Bigl[\sqrt{-g} \, g^{\mu\nu} \partial_{\nu} i\Delta(x;x') \Bigr] = i\delta^D(x \!-\! x') \; . \label{MMCSprop} \end{equation} The solution to (\ref{transverse}-\ref{constMprop}) can be expressed as a spatial Fourier mode sum over three sorts of polarizations \cite{Katuwal:2021kry}, \begin{eqnarray} \lefteqn{ i\Bigl[\mbox{}_{\mu} \Delta_{\rho}\Bigr](x;x') = \int \!\! \frac{d^{D-1}k}{(2\pi)^{D-1}} \sum_{\lambda = t,u,v} s_{\lambda} \Biggl\{ \theta(\Delta \eta) \mathcal{A}_{\mu}(x;\vec{k},\lambda) \mathcal{A}_{\rho}^*(x'; \vec{k},\lambda) } \nonumber \\ & & \hspace{6cm} + \theta(-\Delta \eta) \mathcal{A}_{\mu}^*(x;\vec{k},\lambda) \mathcal{A}_{\rho}(x';\vec{k},\lambda) \Biggr\} , \qquad \label{constMmodesum} \end{eqnarray} where $\Delta \eta \equiv \eta - \eta'$. Longitudinal photons correspond to $\lambda = t$ and have $s_t = -1$ with, \begin{equation} \mathcal{A}_{\mu}(x;\vec{k},t) = \frac{\partial_{\mu}}{M} \Bigl[ t(\eta,k) e^{i\vec{k} \cdot \vec{x}} \Bigr] \;\; , \;\; \Bigl[ \mathcal{D} \partial_0 + k^2\Bigr] t = 0 \;\; , \;\; t \cdot \partial_0 t^* - \partial_0 t \cdot t^* = \frac{i}{a^{D-2}} , \label{tmodes} \end{equation} where $\mathcal{D} \equiv \partial_0 + (D-2) a H$. 
Temporal photons correspond to $\lambda = u$ and have $s_u = +1$ with, \begin{eqnarray} \mathcal{A}_{\mu}(x;\vec{k},u) = \frac{\overline{\partial}_{\mu}}{M} \Bigl[ u(\eta,k) e^{i\vec{k} \cdot \vec{x}} \Bigr] & , & \overline{\partial}_0 \equiv k \;\; , \;\; \overline{\partial}_m \equiv \frac{-i k_m}{k} \mathcal{D} \; , \qquad \\ \Bigl[ \partial_0 \mathcal{D} + k^2 + a^2 M^2\Bigr] u = 0 & , & u \cdot \partial_0 u^* - \partial_0 u \cdot u^* = \frac{i}{a^{D-2}} . \qquad \label{umodes} \end{eqnarray} Transverse spatial photons correspond to $\lambda = v$ and have $s_v = +1$ with, \begin{eqnarray} \mathcal{A}_{\mu}(x;\vec{k},v) = \epsilon_{\mu}(\vec{k},v) \, v(\eta,k) e^{i\vec{k} \cdot \vec{x}} & , & \epsilon_0 = 0 \;\; , \;\; k_m \epsilon_m = 0 \; , \qquad \label{vmodesA} \\ \Bigl[ \partial^2_0 + (D \!-\! 4) a H \partial_0 + k^2 + a^2 M^2\Bigr] v = 0 & , & v \cdot \partial_0 v^* - \partial_0 v \cdot v^* = \frac{i}{a^{D-4}} , \qquad \label{vmodesB} \end{eqnarray} where the sum over the $(D-2)$ spatial polarizations gives, \begin{equation} \sum_{v} \epsilon_i(\vec{k},v) \times \epsilon^*_j(\vec{k},v) = \delta_{ij} - \frac{k_i k_j}{k^2} \; . \label{polsum} \end{equation} \subsection{Time-Dependent Mass} To understand the case of a time-dependent mass we must consider the vector and scalar field equations, \begin{eqnarray} \lefteqn{ \frac{\delta S}{\delta A_{\mu}} = \partial_{\nu} \Bigl[ \sqrt{-g} \, g^{\nu\rho} g^{\mu\sigma} F_{\rho\sigma} \Bigr] } \nonumber \\ & & \hspace{2.5cm} + iq \Bigl[ \varphi \!\cdot\! \Bigl( \partial_{\nu} \!+\! i q A_{\nu}\Bigr) \varphi^* - \Bigl( \partial_{\nu} \!-\! i q A_{\nu}\Bigr) \varphi \!\cdot\! \varphi^* \Bigr] g^{\mu\nu} \sqrt{-g} \; , \qquad \label{vector} \\ \lefteqn{ \frac{\delta S}{\delta \varphi^*} = \Bigl(\partial_{\mu} \!-\! i q A_{\mu}\Bigr) \Bigl[ \sqrt{-g} \, g^{\mu\nu} \Bigl(\partial_{\nu} \!-\! 
i q A_{\nu}\Bigr) \varphi \Bigr] - \varphi V'(\varphi \varphi^*) \sqrt{-g} \; .} \label{scalar} \end{eqnarray} The $0$-th order inflaton is $\varphi_0(\eta)$, which is real and obeys the equation, \begin{equation} \partial_0 \Bigl[ a^{D-2} \partial_0 \varphi_0\Bigr] + a^{D} \varphi_0 V'(\varphi_0^2) = 0 \; . \label{0order} \end{equation} The first order perturbations are $A_{\mu}(x)$ and the real fields $\alpha(x)$ and $\beta(x)$, \begin{equation} \varphi(x) = \varphi_0(\eta) + \alpha(x) + i \beta(x) \; . \end{equation} The first order contribution to the vector equation (\ref{vector}) is, \begin{equation} \partial_{\nu} \Bigl[ \sqrt{-g} \, g^{\nu\rho} g^{\mu\sigma} F_{\rho\sigma} \Bigr] - 2 q^2 \varphi_0^2 \Bigl[ A_{\nu} - \partial_{\nu} \Bigl( \frac{\beta}{q \varphi_0}\Bigr) \Bigr] \sqrt{-g} \, g^{\nu\mu} = 0 \; . \label{vector1} \end{equation} The photon mass is $M^2 \equiv 2 q^2 \varphi_0^2$. Note from equation (\ref{vector1}) that antisymmetry of the field strength tensor implies, \begin{equation} \partial_{\mu} \Bigl[ M^2 \sqrt{-g} \, g^{\mu\nu} \Bigl(A_{\nu} - \partial_{\nu} \Bigl( \frac{\beta}{q \varphi_0} \Bigr) \Bigr) \Bigr] = 0 \; . \label{betaeqn} \end{equation} This constraint is identical to the imaginary part of the first order contribution to the scalar equation (\ref{scalar}). The analogous real part is, \begin{equation} \partial_{\mu} \Bigl[ \sqrt{-g} \, g^{\mu\nu} \partial_{\nu} \alpha\Bigr] - \sqrt{-g} \Bigl[ V'(\varphi_0^2) + 2 \varphi_0^2 V''(\varphi_0^2)\Bigr] \alpha = 0 \; . \label{alphaeqn} \end{equation} Relations (\ref{vector1}) and (\ref{betaeqn}) demonstrate that the Higgs mechanism continues to function when the scalar background $\varphi_0$ depends upon spacetime. To simplify the subsequent analysis, we will absorb (``eat'') the imaginary part of the scalar perturbation into the vector field as usual, \begin{equation} A_{\mu} - \partial_{\mu} \Bigl( \frac{\beta}{q \varphi_0} \Bigr) \longrightarrow A_{\mu} \; . 
\label{eating} \end{equation} We can also use the conformal coordinate relation $g_{\mu\nu} = a^2 \eta_{\mu\nu}$ to provide simple expressions for (\ref{vector1}) and (\ref{betaeqn}), \begin{equation} \partial_{\nu} \Bigl[ a^{D-4} F^{\nu\mu} \Bigr] - M^2 a^{D-2} A^{\mu} = 0 \qquad \Longrightarrow \qquad \partial_{\mu} \Bigl[ M^2 a^{D-2} A^{\mu} \Bigr] = 0 \; , \label{vector2} \end{equation} where $F^{\nu\mu} \equiv \eta^{\nu\rho} \eta^{\mu\sigma} F_{\rho\sigma}$ and $A^{\mu} \equiv \eta^{\mu\nu} A_{\nu}$. The $3+1$ decomposition of the constraint on the right hand side of (\ref{vector2}) is, \begin{equation} \Bigl[ \mathcal{D} + \frac{2 \partial_0 M}{M}\Bigr] A_0 - \partial_m A_m = 0 \qquad , \qquad \mathcal{D} \equiv \partial_0 + (D\!-\!2) a H \; . \label{constraint} \end{equation} Relation (\ref{constraint}) permits us to $3+1$ decompose the left hand side of (\ref{vector2}) to, \begin{eqnarray} \Bigl[ \partial_0 \Bigl( \mathcal{D} + \frac{2 \partial_0 M}{M} \Bigr) - \nabla^2 + a^2 M^2\Bigr] A_0 &\!\!\! = \!\!\!& 0 \; , \label{vector3A} \\ 2 \Bigl( a H + \frac{\partial_0 M}{M}\Bigr) \partial_m A_0 + \Bigl[ \partial_0^2 + (D\!-\!4) a H \partial_0 - \nabla^2 + a^2 M^2\Bigr] A_m &\!\!\! = \!\!\!& 0 \; . \label{vector3B} \end{eqnarray} Equations (\ref{constraint}-\ref{vector3B}) are satisfied by three polarizations of spatial plane waves whose associated mode functions are $t(\eta,k)$, $u(\eta,k)$ and $v(\eta,k)$. Our notation is that a ``tilde'' over a differential operator such as $\partial_0$ or $\mathcal{D}$ indicates the addition of $\partial_0 M/M$, whereas a ``hat'' denotes subtraction of the same quantity, \begin{equation} \widetilde{\mathcal{D}} \equiv \mathcal{D} + \frac{\partial_0 M}{M} \qquad , \qquad \widehat{\partial}_0 \equiv \partial_0 - \frac{\partial_0 M}{M} \; . 
\label{notation} \end{equation} What we term {\it Longitudinal photons} have the form, \begin{equation} \mathcal{A}_0(x;\vec{k},t) = \frac{\widehat{\partial}_0 t(\eta,k)}{M(\eta)} \, e^{i \vec{k} \cdot \vec{x}} \qquad , \qquad \mathcal{A}_m(x;\vec{k},t) = \frac{i k_m t(\eta,k)}{M(\eta)} \, e^{i \vec{k} \cdot \vec{x}} \; , \label{tAs} \end{equation} where the mode function $t(\eta,k)$ obeys,\footnote{Although $\mathcal{A}_{\mu}(x;\vec{k},t)$ satisfies (\ref{constraint}), it does not quite obey equations (\ref{vector3A}-\ref{vector3B}), but rather the relation $\partial_{\nu} [a^{D-4} \mathcal{F}^{\nu\mu}(x;\vec{k},t)] = 0$.} \begin{equation} \Bigl[ \widetilde{\mathcal{D}} \widehat{\partial}_0 + k^2\Bigr] t = 0 \qquad , \qquad t \cdot \partial_0 t^* - \partial_0 t \cdot t^* = \frac{i}{a^{D-2}} \; . \label{teqn} \end{equation} {\it Temporal photons} take the form, \begin{equation} \mathcal{A}_0(x;\vec{k},u) = \frac{k u(\eta,k)}{M(\eta)} \, e^{i \vec{k} \cdot \vec{x}} \qquad , \qquad \mathcal{A}_m(x;\vec{k},u) = - \frac{i k_m \widetilde{\mathcal{D}} u(\eta,k)}{k M(\eta)} \, e^{i \vec{k} \cdot \vec{x}} \; , \label{uAs} \end{equation} where the mode function $u(\eta,k)$ obeys, \begin{equation} \Bigl[ \widehat{\partial}_0 \widetilde{\mathcal{D}} + k^2 + a^2 M^2 \Bigr] u = 0 \qquad , \qquad u \cdot \partial_0 u^* - \partial_0 u \cdot u^* = \frac{i}{a^{D-2}} \; . \label{ueqn} \end{equation} The tendency for longitudinal and temporal photons to diverge when the mass $M(\eta)$ passes through zero is obvious from expressions (\ref{tAs}) and (\ref{uAs}). In contrast, the time-dependent mass makes no change at all in relations (\ref{vmodesA}-\ref{polsum}) for the {\it Transverse spatial photons}, and these polarizations remain finite as the mass passes through zero. A time-dependent mass makes no change in mode sum (\ref{constMmodesum}) for the propagator. 
However, the propagator obeys a revised version of the constraint equation (\ref{transverse}), \begin{equation} \partial^{\mu} \Bigl\{a^{D-2} M^2 i \Bigl[\mbox{}_{\mu} \Delta_{\rho}\Bigr](x;x') \Bigr\} = 0 = \partial^{\prime \rho} \Bigl\{ {a'}^{D-2} {M'}^2 i\Bigl[ \mbox{}_{\mu} \Delta_{\rho}\Bigr](x;x') \Bigr\} \; . \label{newconstraint} \end{equation} The propagator equations analogous to (\ref{constMprop}-\ref{MMCSprop}) can be given in terms of the massive photon kinetic operator, \begin{equation} \mathcal{D}^{\mu\nu} \equiv \partial_{\alpha} \Bigl[ a^{D-4} \Bigl( \eta^{\mu\nu} \partial^{\alpha} - \eta^{\alpha\nu} \partial^{\mu}\Bigr) \Bigr] - a^{D-2} M^2 \eta^{\mu\nu} \; . \label{kineticop} \end{equation} The revised versions of (\ref{constMprop}-\ref{MMCSprop}) are, \begin{eqnarray} \mathcal{D}^{\mu\nu} i\Bigl[\mbox{}_{\nu} \Delta_{\rho}\Bigr](x;x') & \!\!\! = \!\!\!& \delta^{\mu}_{~\rho} i\delta^D(x \!-\! x') + \frac{a^{D-2} M}{M'} \widehat{\partial}^{\mu} \widehat{\partial}'_{\rho} i \Delta_{t}(x;x') \; , \qquad \label{newprop} \\ \frac1{M} \partial^{\mu} \Bigl[ a^{D-2} M \widehat{\partial}_{\mu} i\Delta_{t}(x;x') \Bigr] &\!\!\! = \!\!\!& i\delta^D(x \!-\! x') \; . \qquad \label{newtprop} \end{eqnarray} \section{Approximating the Amplitudes} The purpose of this section is to develop analytic approximations for the crucial mode functions $t(\eta,k)$ and $u(\eta,k)$. We begin by giving a dimensionless formulation of the problem. This formalism is then employed to derive good analytic approximations, first for the longitudinal amplitude and then for the temporal amplitude. At each stage these approximations are checked against explicit numerical evolution in a simple model of inflation. 
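Before proceeding, one can verify numerically that the Wronskian normalization in (\ref{teqn}) is maintained by the evolution. Expanding $\widetilde{\mathcal{D}} \widehat{\partial}_0 t + k^2 t = 0$ gives $t'' + (D\!-\!2) a H t' + [k^2 - (\partial_0 M/M)' - (D\!-\!2) a H (\partial_0 M/M) - (\partial_0 M/M)^2] t = 0$, whose real coefficients force the Wronskian to scale exactly as $i/a^{D-2}$. The sketch below evolves the mode with fourth-order Runge-Kutta in an arbitrary toy background, $a(\eta) = 1 + \eta^2$ and $M(\eta) = 2 + \sin\eta$ (test functions of our own choosing, not the inflationary background), and confirms the normalization is preserved.

```python
import math

D, k = 4.0, 1.0

# Toy background in conformal time eta (arbitrary test functions):
a  = lambda e: 1.0 + e * e
aH = lambda e: 2.0 * e / (1.0 + e * e)              # (da/d eta)/a
r  = lambda e: math.cos(e) / (2.0 + math.sin(e))    # (dM/d eta)/M for M = 2 + sin(eta)
rp = lambda e: (-math.sin(e) / (2.0 + math.sin(e))
                - (math.cos(e) / (2.0 + math.sin(e))) ** 2)  # d/d eta of r

def accel(e, t, tp):
    # t'' from the expanded t-mode equation; all coefficients are real
    c = k * k - rp(e) - (D - 2.0) * aH(e) * r(e) - r(e) ** 2
    return -(D - 2.0) * aH(e) * tp - c * t

# Wronskian-normalized initial data at eta = 0 (where a = 1): W = i/a^{D-2} = i
t, tp = 1.0 / math.sqrt(2.0 * k), -1j * math.sqrt(k / 2.0)
e, h = 0.0, 1.0e-4
for _ in range(10000):                      # RK4 evolution to eta = 1
    k1t, k1p = tp, accel(e, t, tp)
    k2t, k2p = tp + 0.5 * h * k1p, accel(e + 0.5 * h, t + 0.5 * h * k1t, tp + 0.5 * h * k1p)
    k3t, k3p = tp + 0.5 * h * k2p, accel(e + 0.5 * h, t + 0.5 * h * k2t, tp + 0.5 * h * k2p)
    k4t, k4p = tp + h * k3p, accel(e + h, t + h * k3t, tp + h * k3p)
    t  += h * (k1t + 2.0 * k2t + 2.0 * k3t + k4t) / 6.0
    tp += h * (k1p + 2.0 * k2p + 2.0 * k3p + k4p) / 6.0
    e  += h

# a^{D-2} (t dt*/d eta - dt/d eta t*) should still equal i
W = (t * tp.conjugate() - tp * t.conjugate()) * a(e) ** (D - 2.0)
```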
\subsection{Dimensionless Formulation} It is best to change the evolution variable from conformal time $\eta$ to the number of e-foldings from the start of inflation, $n \equiv \ln[a(\eta)]$, \begin{equation} \partial_0 = a H \frac{\partial}{\partial n} \qquad , \qquad \partial_0^2 = a^2 H^2 \Bigl[ \frac{\partial^2}{\partial n^2} + (1 \!-\! \epsilon) \frac{\partial}{\partial n}\Bigr] \; . \end{equation} We can also use factors of $8 \pi G$ to make the inflaton, the Hubble parameter and the scalar potential dimensionless, \begin{equation} \psi(n) \equiv \sqrt{8\pi G} \, \varphi_0(\eta) \quad , \quad \chi(n) \equiv \sqrt{8\pi G} \, H(\eta) \quad , \quad U(\psi^2) \equiv (8\pi G)^2 V(\varphi^2_0) \; . \label{dimgeom} \end{equation} This gives dimensionless forms for the classical Friedmann equations, and for the inflaton evolution equation, \begin{eqnarray} \frac12 (D \!-\! 2) (D \!-\! 1) \chi^2 &\!\!\! = \!\!\!& \chi^2 {\psi'}^2 + U(\psi^2) \; , \qquad \label{Friedmann1} \\ -\frac12 (D\!-\!2) \Bigl[ (D\!-\! 1) - 2 \epsilon\Bigr] \chi^2 &\!\!\! = \!\!\!& \chi^2 {\psi'}^2 - U(\psi^2) \; , \qquad \label{Friedmann2} \\ 0 &\!\!\! = \!\!\!& \chi^2 \Bigl[ \psi'' + (D\!-\!1\!-\!\epsilon) \psi'\Bigr] + \psi U'(\psi^2) \; . \qquad \label{inflatoneqn} \end{eqnarray} Factors of $8\pi G$ can be extracted to give similar dimensionless forms for the time-dependent mass $M^2(\eta) \equiv 2 q^2 \varphi_0^2(\eta)$ and the wave number $k^2$, \begin{equation} \mu^2(n) \equiv 8\pi G M^2(\eta) = 2 q^2 \psi^2(n) \qquad , \qquad \kappa^2 \equiv 8 \pi G k^2 \; . \label{dimparams} \end{equation} We define the dimensionless Longitudinal and Temporal amplitudes as, \begin{equation} \mathcal{T}(n,\kappa) \equiv \ln\Bigl[ \frac{\vert t(\eta,k)\vert^2}{ \sqrt{8\pi G}}\Bigr] \qquad , \qquad \mathcal{U}(n,\kappa) \equiv \ln\Bigl[\frac{\vert u(\eta,k)\vert^2}{\sqrt{8\pi G}}\Bigr] \; . 
\label{Amps} \end{equation} By combining the mode equations and Wronskians (\ref{teqn}) and (\ref{ueqn}) for each mode we can infer a single nonlinear relation for the associated amplitudes \cite{Romania:2011ez,Romania:2012tb,Brooker:2015iya}, \begin{eqnarray} \mathcal{T}'' + \frac12 {\mathcal{T}'}^2 + (D \!-\! 1 \!-\! \epsilon) \mathcal{T}' + \frac{2 \kappa^2 e^{-2n}}{\chi^2} + \frac{2 \mu_t^2}{\chi^2} - \frac{e^{-2 [\mathcal{T} + (D-1) n]}}{2 \chi^2} &\!\!\! = \!\!\! & 0 \; , \qquad \label{Teqn} \\ \mathcal{U}'' + \frac12 {\mathcal{U}'}^2 + (D \!-\! 1 \!-\! \epsilon) \mathcal{U}' + \frac{2 \kappa^2 e^{-2n}}{\chi^2} + \frac{2 \mu_u^2}{\chi^2} - \frac{e^{-2 [\mathcal{U} + (D-1) n]}}{2 \chi^2} &\!\!\! = \!\!\! & 0 \; , \qquad \label{Ueqn} \end{eqnarray} where a prime denotes differentiation with respect to $n$ and the two masses are, \begin{eqnarray} \frac{\mu^2_t}{\chi^2} &\!\!\! \equiv \!\!\!& -(D\!-\!1\!-\!\epsilon) \frac{\mu'}{\mu} - \frac{\mu''}{\mu} \; , \qquad \label{tmass} \\ \frac{\mu^2_u}{\chi^2} &\!\!\! \equiv \!\!\!& (D\!-\!2) (1\!-\! \epsilon) + \frac{\mu^2}{\chi^2} - (D\!-\!3\!+\!\epsilon) \frac{\mu'}{\mu} + \Bigl( \frac{\mu'}{\mu}\Bigr)' - \Bigl( \frac{\mu'}{\mu}\Bigr)^2 \; . \qquad \label{umass} \end{eqnarray} Because $\mu^2(n) = 2 q^2 \psi^2(n)$ we can use the inflaton $0$-mode equation (\ref{inflatoneqn}) to simplify the $t$-mode mass, \begin{equation} \frac{\mu^2_{t}}{\chi^2} = -\frac{[\psi'' + (D\!-\!1\!-\!\epsilon) \psi']}{ \psi} = \frac{U'(\psi^2)}{\chi^2} \; . \label{tmasssimp} \end{equation} In order to follow the amplitudes numerically one must use a specific model of inflation. For simplicity we have chosen the quadratic mass model, $U = c^2 \psi^2$, even though its prediction for the tensor-to-scalar ratio is disfavored by the data \cite{Planck:2018vyg, Tristram:2020wbi}. 
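A useful consistency check on the amplitude equation (\ref{Teqn}) (our own check, independent of the model just chosen) comes from the exact de Sitter limit: constant $\chi$, $\epsilon = 0$, $\mu_t = 0$ and $D = 4$, where the closed-form solution is the familiar massless, minimally coupled norm, $e^{\mathcal{T}} = \frac{\chi^2}{2\kappa^3} (1 + \kappa^2 e^{-2n}/\chi^2)$. The sketch below confirms numerically that the terms of (\ref{Teqn}) then cancel.

```python
import math

chi, kappa, D = 1.0, 1.0, 4.0   # de Sitter test values, not the slow roll numbers

def T(n):
    # Closed form: e^T = (chi^2 / 2 kappa^3)(1 + kappa^2 e^{-2n}/chi^2)
    return math.log(chi**2 / (2.0 * kappa**3)
                    * (1.0 + kappa**2 * math.exp(-2.0 * n) / chi**2))

def residual(n, h=1.0e-4):
    # Finite-difference the terms of the amplitude equation with eps = mu_t = 0
    Tp  = (T(n + h) - T(n - h)) / (2.0 * h)
    Tpp = (T(n + h) - 2.0 * T(n) + T(n - h)) / h**2
    return (Tpp + 0.5 * Tp**2 + (D - 1.0) * Tp
            + 2.0 * kappa**2 * math.exp(-2.0 * n) / chi**2
            - math.exp(-2.0 * (T(n) + (D - 1.0) * n)) / (2.0 * chi**2))
```

Note that $C(0) = 1$ in (\ref{Cdef}), so the $n \rightarrow \infty$ limit $\ln[\chi^2/2\kappa^3]$ of this solution is consistent with (\ref{TIRform}) for constant $\mu$.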
The Slow Roll Approximation gives analytic expressions for this model which are accurate until almost the end of inflation, \begin{equation} \psi(n) \simeq \sqrt{\psi_0^2 \!-\! 2n} \quad , \quad \chi(n) \simeq \frac{c}{\sqrt{3}} \sqrt{\psi_0^2 \!-\! 2n} \quad ,\quad \epsilon(n) \simeq \frac1{\psi_0^2 \!-\! 2n} \; , \label{slowroll} \end{equation} where $\psi_0$ is the initial value of the dimensionless inflaton $0$-mode. About 56 e-foldings of inflation result from the choice $\psi_0 = 10.6$. To estimate the constant $c$, note that modes which experience 1st horizon crossing at e-folding $n_1$ (that is, $\kappa = \chi(n_1) e^{n_1}$) have the following approximate scalar power spectrum and spectral index, \begin{equation} \Delta^2_{\mathcal{R}}(n_1) \simeq \frac1{8\pi^2} \frac{\chi^2(n_1)}{\epsilon(n_1)} \qquad \Longrightarrow \qquad 1 - n_s \simeq 2 \epsilon + \frac{\epsilon'}{\epsilon} \; . \label{CMB} \end{equation} Hence the observed scalar spectral index is consistent with $\psi_0 = 10.6$, and the observed scalar amplitude with the choice of $c = 7.1 \times 10^{-6}$ \cite{Planck:2018vyg,Tristram:2020wbi}. We must also choose a specific value for the charge $q$. Using $q^2 = 1/137$ would cause the classical potential $U = c^2 \psi^2$ to be completely overwhelmed by the 1-loop Coleman-Weinberg correction of $\Delta U \simeq \frac{3 \mu^4}{64 \pi^2} \ln(\mu^2/s^2)$, where $s$ is the dimensionless renormalization scale \cite{Miao:2015oba}. Choosing the much smaller value of $q = 1.2 \times 10^{-6}$ reduces the 1-loop correction to a negligible tenth of a percent effect at the start of inflation. Once we have a specific model it is possible to understand the magnitudes of the various terms. Figure~\ref{Earlygeom} shows the dimensionless scalar, the dimensionless Hubble parameter and the first slow roll parameter while inflation is occurring ($\epsilon < 1$). The slow roll approximations (\ref{slowroll}) are excellent during this period. 
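The calibration just described amounts to a few lines of arithmetic. In the sketch below the slow roll forms (\ref{slowroll}) and relation (\ref{CMB}) are evaluated for modes which cross the horizon fifty e-foldings before the slow roll end of inflation; that choice of $n_1$, and the use of $\epsilon' = 2\epsilon^2$, are our own illustrative assumptions.

```python
import math

psi0, c, q = 10.6, 7.1e-6, 1.2e-6     # parameter choices quoted in the text

# Slow roll end of inflation: epsilon(n) = 1/(psi0^2 - 2n) reaches 1
n_end = (psi0**2 - 1.0) / 2.0          # about 55.7, i.e. roughly 56 e-foldings

# Observables for modes which cross the horizon 50 e-foldings before the end
n1 = n_end - 50.0
eps1 = 1.0 / (psi0**2 - 2.0 * n1)
chi1_sq = (c**2 / 3.0) * (psi0**2 - 2.0 * n1)

Delta2_R = chi1_sq / (8.0 * math.pi**2 * eps1)  # scalar amplitude, near 2e-9
n_s = 1.0 - 4.0 * eps1      # since eps' = 2 eps^2, 1 - n_s = 2 eps + eps'/eps = 4 eps
mu2_over_chi2 = 6.0 * q**2 / c**2     # inflationary value of mu^2/chi^2
```

These reproduce the quoted numbers to rounding: about $56$ e-foldings, $\Delta^2_{\mathcal{R}} \approx 2 \times 10^{-9}$, $n_s \approx 0.96$, and $\mu^2/\chi^2$ close to the $0.16$ used below.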
\noindent Figure~\ref{Lategeom} shows the same three quantities through the end of inflation (which occurs at $n_e \simeq 56.7$) under the assumption that the classical relations (\ref{Friedmann1}-\ref{inflatoneqn}) are not corrected by the quantum effects we seek to incorporate. During this phase the inflaton oscillates around $\psi = 0$ with decreasing amplitude and increasing frequency, while the first slow roll parameter oscillates in the range $0 \leq \epsilon \leq 3$. Because $\epsilon = {\psi'}^2$, the first slow roll parameter vanishes at extrema of $\psi(n)$, and it reaches its maximum (of $\epsilon(n) = 3$) when $\psi(n) = 0$. Of course the dimensionless Hubble parameter is monotonically decreasing; this decrease is rapid when $\epsilon \simeq 3$, and slow when $\epsilon \simeq 0$. The slow roll approximation (\ref{slowroll}) tells us that $\psi' \simeq -1/\psi$ and $\chi(n) \simeq c/\sqrt{3} \times \psi(n)$. Setting $D=4$, and using our values of $c = 7.1 \times 10^{-6}$ and $q = 1.2 \times 10^{-6}$, gives the mass hierarchy, \begin{equation} \frac{\mu^2_u}{\chi^2} \simeq 2 + \frac{6 q^2}{c^2} \simeq 2.16 \quad > \quad \frac{\mu^2}{\chi^2} \simeq \frac{6 q^2}{c^2} \simeq 0.16 \quad > \quad \frac{\mu^2_t}{\chi^2} \simeq \frac{3}{\psi^2} \; . \label{masses} \end{equation} Figure~\ref{Earlymass} shows the various masses through inflation. \noindent As one can just see from the larger $n$ values of Figure~\ref{Earlymass}, the hierarchy of equation (\ref{masses}) becomes inverted after the end of inflation. Figure~\ref{Latemass} shows the behavior after the end of inflation. During this phase $\mu^2_u/\chi^2$ is mostly tachyonic, and actually diverges at points where $\psi(n) = 0$. On the other hand, $\mu^2/\chi^2$ oscillates between $0$ and the small value of $0.16$, while $\mu^2_t/\chi^2$ grows monotonically to large, positive values. The $u$-mode mass is the most important of the three, and its evolution is the most complex. 
Figure~\ref{Zoomumass} shows its behavior in more detail. \subsection{Approximating the Longitudinal Amplitude} Equation (\ref{Teqn}) for $\mathcal{T}(n,\kappa)$ contains six terms. The ultraviolet regime is defined by the condition $\kappa \gg \chi(n) e^{n}$. In this regime equation (\ref{Teqn}) is dominated by the 4th and 6th terms, $2\kappa^2 e^{-2n}/\chi^2$ and $-e^{-2 [\mathcal{T} + (D-1)n]}/2 \chi^2$, and the amplitude takes the form, \begin{eqnarray} \lefteqn{\mathcal{T}(n,\kappa) = \ln\Bigl[\frac1{2\kappa}\Bigr] - (D\!-\!2) n } \nonumber \\ & & \hspace{2.7cm} + \Bigl[ \frac12 (D\!-\!2) (D \!-\! 2\epsilon) - \frac{2 \mu_t^2}{\chi^2}\Bigr] \Bigl( \frac{\chi e^n}{2 \kappa}\Bigr)^2 + O\Biggl( \Bigl(\frac{\chi e^n}{2 \kappa}\Bigr)^4 \Biggr) \; . \qquad \label{TUVexp} \end{eqnarray} Figure~\ref{EarlyT} compares numerical evolution of the exact equation (\ref{Teqn}) with the ultraviolet form (\ref{TUVexp}) for wave numbers which experience first horizon crossing at $n_1 = 10$, $n_1 = 20$, and $n_1 = 30$. The agreement is excellent up to horizon crossing. After 1st horizon crossing the 4th and 6th terms of (\ref{Teqn}) effectively drop out and the relation simplifies to, \begin{equation} \mathcal{T}'' + \frac12 {\mathcal{T}'}^2 + (D \!-\! 1 \!-\! \epsilon) \mathcal{T}' - 2 (D\!-\!1\!-\!\epsilon) \frac{\mu'}{\mu} - 2 \frac{\mu''}{\mu} \simeq 0 \; . \label{Teqnsimp} \end{equation} This is an equation for $\mathcal{T}'$, and it is easy to see that a particular solution is, \begin{equation} \mathcal{T}' = 2 \frac{\mu'}{\mu} \; . \label{Tprime} \end{equation} Integrating (\ref{Tprime}), and using the tensor power spectrum to infer the integration constant to all orders in the slow roll approximation \cite{Brooker:2015iya}, implies,\footnote{The integration constant in relation (\ref{TIRform}) suffices for smooth inflationary potentials. 
When features are present the constant can be supplemented by known corrections which depend nonlocally on the expansion history before first horizon crossing \cite{Brooker:2017kij}.} \begin{equation} \mathcal{T}(n,\kappa) \simeq \ln\Biggl[ \frac{\chi_1^2 C(\epsilon_1)}{2 \kappa^3} \times \frac{\mu^2(n)}{\mu^2_1} \Biggr] \; , \label{TIRform} \end{equation} where the function $C(\epsilon)$ is, \begin{equation} C(\epsilon) = \frac1{\pi} \Gamma^2\Bigl( \frac12 \!+\! \frac1{1 \!-\! \epsilon} \Bigr) \Bigl[2 (1 \!-\! \epsilon)\Bigr]^{\frac2{1-\epsilon}} \; . \label{Cdef} \end{equation} Figure~\ref{LateT} compares the exact numerical result with the infrared form (\ref{TIRform}) for modes which experience horizon crossing at $n_1 = 10$, $n_1 = 20$, and $n_1 = 30$. Agreement is excellent. Although one can see from Figure~\ref{LateT} that the approximate solution (\ref{TIRform}) is highly accurate, it cannot be exact for two reasons: \begin{enumerate} \item{We neglected the 4th and 6th terms in simplifying equation (\ref{Teqn}) to reach (\ref{Teqnsimp}); and} \item{Just because (\ref{Tprime}) is {\it a} solution to (\ref{Teqnsimp}) does not mean it is {\it the} solution.} \end{enumerate} To find the {\it general} solution to (\ref{Teqnsimp}) we substitute $\mathcal{T}' = 2 \mu'/\mu + f(n)$, \begin{equation} f' + 2 \frac{\mu'}{\mu} f + \frac12 f^2 + (D \!-\! 1 \!-\! \epsilon) f \simeq 0 \; . \label{feqn1} \end{equation} Now divide by $\mu^2 e^{(D-1) n} \chi f^2$ to reach the form, \begin{equation} \frac{\partial}{\partial n} \Biggl[ \frac1{\mu^2 e^{(D-1) n} \chi f}\Biggr] = \frac1{2 \mu^2 e^{(D-1)n} \chi} \; . \label{feqn2} \end{equation} Integrating equation (\ref{feqn2}) from some point $n_2$ gives the general solution, \begin{eqnarray} \lefteqn{f(n) = f_2 \Biggl[ e^{(D-1)(n-n_2)} \Bigl[ \frac{\chi(n)}{\chi_2}\Bigr] \Bigl[ \frac{\mu(n)}{\mu_2}\Bigr]^2 } \nonumber \\ & & \hspace{4cm} + \frac12 f_2 \int_{n_2}^{n} \!\!\!\!\! 
dn' \, e^{(D-1)(n-n')} \Bigl[ \frac{\chi(n)}{\chi(n')}\Bigr] \Bigl[ \frac{\mu(n)}{\mu(n')} \Bigr]^2 \Biggr]^{-1} \; . \qquad \label{fsol} \end{eqnarray} Careful consideration of (\ref{fsol}) reveals that $\mathcal{T}(n,\kappa)$ actually has a finite limit as the mass vanishes. To see this, assume $n$ is such that $\mu(n) \rightarrow 0$, and expand the integral of (\ref{fsol}) for small $\mu(n)$, \begin{eqnarray} \lefteqn{\int_{n_2}^{n} \!\!\!\!\! dn' \, e^{(D-1)(n-n')} \Bigl[ \frac{\chi(n)}{\chi(n')}\Bigr] \Bigl[ \frac{\mu(n)}{\mu(n')} \Bigr]^2 \!\!= -\frac{\mu(n)}{\mu'(n)} } \nonumber \\ & & \hspace{2cm} - \frac12 \Bigl[D\!-\!1 \!-\! \epsilon(n) \!+\! \frac{\mu''(n)}{\mu'(n)} \Bigr] \Bigl[ \frac{\mu(n)}{\mu'(n)}\Bigr]^2 \ln[\mu^2(n)] + O(1) \; . \qquad \label{singexp} \end{eqnarray} Near the point where $\mu(n) \rightarrow 0$ we therefore have, \begin{equation} f(n) \longrightarrow -\frac{2 \mu'(n)}{\mu(n)} + \Bigl[D \!-\! 1 \!-\! \epsilon(n) \!+\! \frac{\mu''(n)}{\mu'(n)}\Bigr] \ln[\mu^2(n)] + O(1) \; . \label{fexp} \end{equation} Hence we have, \begin{equation} \mathcal{T}'(n,\kappa) \longrightarrow \Bigl[D\!-\!1 \!-\! \epsilon(n) \!+\! \frac{\mu''(n)}{\mu'(n)} \Bigr] \ln[\mu^2(n)] + O(1) \; . \label{Tpfull} \end{equation} Although expression (\ref{Tpfull}) diverges as $\mu(n)$ goes to zero, the singularity is integrable, which means that $\mathcal{T}(n,\kappa)$ remains finite. As we see from Figure~\ref{f2plot}, the constant $f_2$ in expression (\ref{fsol}) represents the difference between the actual value of $\mathcal{T}'(n_2,\kappa)$ and its approximate form (\ref{Tprime}) $2 \mu'(n_2)/\mu(n_2)$. Because the approximate form is quite accurate, $f_2$ is a very small number, about $f_2 \sim -10^{-7}$. The fact that $f_2$ drops out of the asymptotic form (\ref{fexp}) means that the ultimate finiteness of $\mathcal{T}(n,\kappa)$ is a robust conclusion. 
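The general solution (\ref{fsol}) can be validated against direct integration of (\ref{feqn1}) in a toy background of our own choosing (constant $\chi$, $\epsilon = 0$, $D = 4$, $\mu(n) \propto e^{-n}$ and an exaggerated $f_2$, none of which are the paper's values). For this background the denominator of (\ref{fsol}) collapses to $e^{(n - n_2)} + \frac12 f_2 [e^{(n - n_2)} - 1]$.

```python
import math

D, eps = 4.0, 0.0
n2, f2 = 0.0, -0.3        # exaggerated toy initial data, not f_2 ~ -1e-7

# Toy background: constant chi and mu(n) = e^{-n}, so mu'/mu = -1
dlog_mu = -1.0

def rhs(f):
    # Equation (feqn1): f' = -2 (mu'/mu) f - f^2/2 - (D - 1 - eps) f
    return -2.0 * dlog_mu * f - 0.5 * f * f - (D - 1.0 - eps) * f

def f_rk4(n, steps=4000):
    # Fourth-order Runge-Kutta integration from n2 to n
    h = (n - n2) / steps
    f = f2
    for _ in range(steps):
        k1 = rhs(f)
        k2 = rhs(f + 0.5 * h * k1)
        k3 = rhs(f + 0.5 * h * k2)
        k4 = rhs(f + h * k3)
        f += h * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
    return f

def f_closed(n):
    # Equation (fsol) specialized to this background
    d = math.exp(n - n2)
    return f2 / (d + 0.5 * f2 * (d - 1.0))
```

The closed form and the direct integration agree to high accuracy, which supports the quadrature form of the general solution.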
However, $\mu(n)$ must be {\it very} close to zero before the integral (\ref{singexp}) begins to dominate over the first term in the denominator of (\ref{fsol}), which has a relative enhancement of $e^{(D-1)(n-n_2)}$. If we take $n_2 = 56$ and define $n_* \simeq 57.25$ as the first zero of $\mu(n)$, the point $n_f$ at which expressions (\ref{fexp}-\ref{Tpfull}) become valid approximately obeys, \begin{equation} n_* - n_f \simeq \frac12 f_2 \, e^{-3 (n_* - n_2)} \Bigl( \frac{\chi_2}{\chi_*} \Bigr) \Bigl(\frac{\mu_2}{\mu_*'} \Bigr)^2 \simeq 10^{-9} \; . \label{fdomn} \end{equation} We can therefore estimate the minimum value of $\mathcal{T}(n,\kappa)$ as, \begin{equation} \mathcal{T}_{\rm min} \simeq \ln\Biggl[ \frac{\chi_1^2 C(\epsilon_1)}{2 \kappa^3} \!\times\! \frac14 f_2^2 \, e^{-6 (n_* - n_2)} \Bigl( \frac{\chi_2}{\chi_*}\Bigr)^2 \Bigl( \frac{\mu_2}{\mu_1}\Bigr)^2 \Bigl( \frac{\mu_2}{\mu_*'}\Bigr)^2 \Biggr] \; . \label{Tmin} \end{equation} \subsection{Approximating the Temporal Amplitude} Equation (\ref{Ueqn}) for $\mathcal{U}(n,\kappa)$ contains the same six terms as (\ref{Teqn}). In the ultraviolet it is also dominated by the 4th and 6th terms, $2\kappa^2 e^{-2n}/\chi^2$ and $-e^{-2 [\mathcal{U} + (D-1)n]}/2 \chi^2$. Hence the ultraviolet expansion of $\mathcal{U}(n,\kappa)$ takes the same form as (\ref{TUVexp}), \begin{eqnarray} \lefteqn{\mathcal{U}(n,\kappa) = \ln\Bigl[\frac1{2\kappa}\Bigr] - (D\!-\!2) n } \nonumber \\ & & \hspace{2.7cm} + \Bigl[ \frac12 (D\!-\!2) (D \!-\! 2\epsilon) - \frac{2 \mu_u^2}{\chi^2}\Bigr] \Bigl( \frac{\chi e^n}{2 \kappa}\Bigr)^2 + O\Biggl( \Bigl(\frac{\chi e^n}{2 \kappa}\Bigr)^4 \Biggr) \; . \qquad \label{UUVexp} \end{eqnarray} Figure~\ref{EarlyU} compares the exact numerical solution with the ultraviolet form (\ref{UUVexp}) for modes which experience first horizon crossing at $n_1 = 10$, $n_1 = 20$ and $n_1 = 30$. 
Agreement is excellent up to first horizon crossing, just as it was in the analogous comparison of Figure~\ref{EarlyT} for $\mathcal{T}(n,\kappa)$. The 4th and 6th terms of (\ref{Ueqn}) drop out after first horizon crossing, and the relation simplifies to, \begin{equation} \mathcal{U}'' + \frac12 {\mathcal{U}'}^2 + (D\!-\!1\!-\! \epsilon) \mathcal{U}' + \frac{2 \mu^2_u}{\chi^2} \simeq 0 \; . \label{Ueqnsimp} \end{equation} Recall from Figure~\ref{Earlymass} that $\mu_u^2(n)/\chi^2(n)$ is approximately constant during inflation. This means that equation (\ref{Ueqnsimp}) can be roughly solved as, \begin{equation} \mathcal{U}'(n,\kappa) \simeq -(D\!-\!1\!-\!\epsilon) + \sqrt{(D \!-\! 1 \!-\! \epsilon)^2 - \frac{4 \mu_u^2}{\chi^2}} \; . \label{approxUprime} \end{equation} With the appropriate integration constant we therefore have, \begin{eqnarray} \lefteqn{\mathcal{U}(n,\kappa) \simeq \ln\Biggl[ \frac{\chi_1^2 C(\epsilon_1)}{ 2 \kappa^3} \!\times\! \frac{\chi_1}{\chi(n)} \Biggr] - (D \!-\! 1) (n \!-\! n_1) } \nonumber \\ & & \hspace{5cm} + \int_{n_1}^{n} \!\!\!\! dn' \sqrt{[D\!-\! 1 \!-\! \epsilon(n')]^2 - \frac{4 \mu_u^2(n')}{\chi^2(n')} } \; . \qquad \label{UIRform} \end{eqnarray} Figure~\ref{LateU} compares this approximation with the numerical evolution for modes which experience horizon crossing at $n_1 = 10$, $n_1 = 20$ and $n_1 = 30$. After the end of inflation $\mu^2(n)/\chi^2(n)$ falls off whereas the derivative terms in $\mu^2_u/\chi^2$ become large and tachyonic. This means we can neglect $\mu^2(n)/\chi^2(n)$, \begin{equation} \frac{\mu_u^2}{\chi^2} \simeq (D\!-\!2) (1\!-\! \epsilon) - (D\!-\!3\!+\! \epsilon) \frac{\mu'}{\mu} + \Bigl( \frac{\mu'}{\mu}\Bigr)' - \Bigl( \frac{\mu'}{\mu}\Bigr)^2 \; . \label{umasssimp2} \end{equation} We now make the substitution, \begin{equation} \mathcal{U}'(n,\kappa) = -\frac{2 \mu'(n)}{\mu(n)} - 2 (D\!-\!2) + g(n) \; , \label{Uansatz} \end{equation} in equation (\ref{Ueqnsimp}) to find, \begin{equation} g' - \Bigl(D \!-\! 
3 \!+\! \epsilon \!+\! 2 \frac{\mu'}{\mu}\Bigr) g + \frac12 g^2 = 0 \; . \label{geqn} \end{equation} Multiplying by $e^{(D-3)n} \mu^2(n)/[\chi(n) g^2(n)]$ makes the $g$-dependent terms a total derivative, and permits us to write the general solution as, \begin{eqnarray} \lefteqn{ g(n) = g_2 \Biggl[ e^{-(D-3) (n-n_2)} \Bigl[ \frac{\chi(n)}{\chi_2} \Bigr] \Bigl[ \frac{\mu_2}{\mu(n)}\Bigr]^2 } \nonumber \\ & & \hspace{4cm} + \frac12 g_2 \!\! \int_{n_2}^{n} \!\!\!\!\! dn' \, e^{-(D-3) (n-n')} \Bigl[ \frac{\chi(n)}{\chi(n')}\Bigr] \Bigl[ \frac{\mu(n')}{ \mu(n)}\Bigr]^2 \Biggr]^{-1} \; , \qquad \label{gsol} \end{eqnarray} where the constant $g_2$ is determined to interpolate between (\ref{approxUprime}) and (\ref{Uansatz}), \begin{equation} g_2 = \frac{2 \mu_2'}{\mu_2} + (D\!-\!3\!+\!\epsilon_2) + \sqrt{(D\!-\! 1 \!-\! \epsilon_2)^2 - \frac{4 \mu_2^2}{\chi_2^2}} \; . \label{g2def} \end{equation} Note that, whereas $f(n)$ diverges as $\mu(n)$ approaches zero, $g(n)$ goes to zero like $\mu^2(n)$. Integrating equation (\ref{Uansatz}), and using (\ref{UIRform}) to supply the integration constant, gives, \begin{eqnarray} \lefteqn{ \mathcal{U}(n,\kappa) \simeq \ln\Biggl[ \frac{\chi_1^2 C(\epsilon_1)}{ 2 \kappa^3} \!\times\! \frac{\chi_1}{\chi_2} \!\times\! \frac{\mu_2^2}{\mu^2(n)} \Biggr] - (D\!-\!1) (n_2 \!-\! n_1) - 2 (D\!-\!2) (n \!-\! n_2) } \nonumber \\ & & \hspace{3.2cm} + \int_{n_1}^{n_2} \!\!\!\!\! dn' \sqrt{ [D\!-\! 1 \!-\! \epsilon(n')]^2 - \frac{4 \mu_u^2(n')}{\chi^2(n')} } + \int_{n_2}^{n} \!\!\!\!\! dn' \, g(n') \; . \qquad \label{UPostform} \end{eqnarray} Because $g(n)$ vanishes as $\mu(n) \rightarrow 0$, the $-\ln[\mu^2(n)]$ divergence of $\mathcal{U}(n,\kappa)$ is robust. Note that this is not even affected by neglecting $\mu^2(n)/\chi^2(n)$ in (\ref{umasssimp2}). Figure~\ref{UPost} compares the numerical solution with our analytic approximation (\ref{UPostform}). 
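The constant-mass root (\ref{approxUprime}) used during inflation can also be verified directly: with $\mu_u^2/\chi^2$ frozen at its inflationary value and $\mathcal{U}'' \approx 0$, equation (\ref{Ueqnsimp}) becomes the quadratic $\frac12 y^2 + (D\!-\!1\!-\!\epsilon) y + 2\mu_u^2/\chi^2 = 0$ for $y = \mathcal{U}'$. A sketch with the $D = 4$, $\epsilon \approx 0$ values quoted earlier:

```python
import math

D, eps = 4.0, 0.0
m2 = 2.16                      # inflationary value of mu_u^2/chi^2 quoted in the text

A = D - 1.0 - eps
disc = A * A - 4.0 * m2        # 9 - 8.64 = 0.36 > 0, so the root is real
y = -A + math.sqrt(disc)       # equation (approxUprime): y = U' = -2.4

residual = 0.5 * y * y + A * y + 2.0 * m2   # the quadratic with U'' set to zero
```

The negative root $\mathcal{U}' = -2.4$ means the temporal amplitude decays steadily during inflation.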
\section{Quantum-Correcting the Inflaton $0$-Mode} The purpose of this section is to use the photon propagator to quantum-correct the classical equation for the inflaton $0$-mode from (\ref{0order}) to, \begin{equation} \partial_0 \Bigl[a^{D-2} \partial_0 \varphi_0\Bigr] + a^{D} \varphi_0 V'(\varphi_0^2) + q^2 \varphi_0 a^{D-2} \eta^{\mu\nu} i \Bigl[\mbox{}_{\mu} \Delta_{\nu}\Bigr](x;x) = 0 \; . \label{new0modeeqn} \end{equation} We begin by deriving exact expressions for the $t$-mode and $u$-mode contributions to the trace of the photon propagator. We then use a variant of the work-energy theorem to show how reheating occurs. \subsection{The Effective Force} The $t$-mode contribution to the coincidence limit of the trace of the photon propagator in equation (\ref{new0modeeqn}) is, \begin{equation} \sqrt{-g} \, g^{\mu\nu} i\Bigl[ \mbox{}_{\mu} \Delta_{\nu}\Bigr]_{t}(x;x) = a^{D-2} \!\! \int \!\! \frac{d^{D-1} k}{(2\pi)^{D-1}} \Biggl\{ \frac1{M^2} \, \widehat{\partial}_0 t \!\cdot\! \widehat{\partial}_0 t^* - \frac{k^2}{M^2} \, t \!\cdot \! t^* \Biggr\} \; . \label{ttrace1} \end{equation} The $t$-mode equation (\ref{teqn}) can be exploited to write the product of time derivatives in terms of the norm-squared, \begin{equation} \partial_0 t \!\cdot\! \partial_0 t^* = \Bigl[ \frac12 \partial_0^2 + \frac12 (D\!-\!2) a H \partial_0 + k^2 - (D\!-\!2) a H \frac{\partial_0 M}{M} - \frac{\partial_0^2 M}{M} \Bigr] (t t^*) \; . \label{tID1} \end{equation} Using this identity we can re-express the $t$-mode contribution (\ref{ttrace1}) as, \begin{eqnarray} \lefteqn{\sqrt{-g} \, g^{\mu\nu} i\Bigl[ \mbox{}_{\mu} \Delta_{\nu}\Bigr]_{t} = \frac{a^{D-2}}{M^2} \!\! \int \!\! \frac{d^{D-1}k}{(2\pi)^{D-1}} \Biggl\{ \frac12 \partial_0^2 + \frac12 (D\!-\!2) a H \partial_0 - \frac{\partial_0 M}{M} \partial_0 } \nonumber \\ & & \hspace{4cm} - (D\!-\!2) a H \frac{\partial_0 M}{M} - \frac{\partial_0^2 M}{M} + \Bigl( \frac{\partial_0 M}{M}\Bigr)^2 \Biggr\} (t t^*) \; . 
\qquad \label{ttrace2} \end{eqnarray} Converting to dimensionless form, and employing equation (\ref{Teqn}) to eliminate second derivatives, gives the compact form, \begin{eqnarray} \lefteqn{\sqrt{-g} \, g^{\mu\nu} i\Bigl[ \mbox{}_{\mu} \Delta_{\nu}\Bigr]_{t}(x;x) = \frac{e^{D n} \chi^2(n)}{(8\pi G)^{\frac{D}2 -1} \mu^2(n)} \!\! \int \!\! \frac{d^{D-1} \kappa}{(2\pi)^{D-1}} \, e^{\mathcal{T}(n,\kappa)} } \nonumber \\ & & \hspace{1.3cm} \times \Biggl\{ \Bigl[ \frac12 \mathcal{T}'(n,\kappa) - \frac{\mu'(n)}{\mu(n)} \Bigr]^2 - \frac{\kappa^2 e^{-2n}}{\chi^2(n)} + \frac1{4 \chi^2} e^{-2 [\mathcal{T}(n,\kappa) + (D-1)n]} \Biggr\} . \qquad \label{ttrace3} \end{eqnarray} The $u$-mode contribution to the photon trace in (\ref{new0modeeqn}) is, \begin{equation} \sqrt{-g} \, g^{\mu\nu} i\Bigl[ \mbox{}_{\mu} \Delta_{\nu}\Bigr]_{u}(x;x) = a^{D-2} \!\! \int \!\! \frac{d^{D-1} k}{(2\pi)^{D-1}} \Biggl\{ -\frac{k^2}{M^2} \, u \!\cdot\! u^* + \frac{1}{M^2} \widetilde{D} u \!\cdot\! \widetilde{D} u^* \Biggr\} \; . \label{utrace1} \end{equation} We can eliminate the norm-square of $\partial_0 u$ using the $u$-mode equation (\ref{ueqn}), \begin{eqnarray} \lefteqn{\partial_0 u \!\cdot\! \partial_0 u^* = \Biggl[ \frac12 \partial_0^2 + \frac12 (D\!-\!2) a H \partial_0 + k^2 + a^2 M^2 + (D\!-\!2) a^2 H^2 (1 \!-\! \epsilon) } \nonumber \\ & & \hspace{3.2cm} - (D\!-\!2) a H \frac{\partial_0 M}{M} + \partial_0 \Bigl( \frac{\partial_0 M}{M}\Bigr) - \Bigl(\frac{\partial_0 M}{M} \Bigr)^2 \Biggr] (u u^*) \; . \qquad \label{uID1} \end{eqnarray} Substituting (\ref{uID1}) in (\ref{utrace1}), and taking apart the factors of $\widetilde{D} = \partial_0 + (D-2) a H + \partial_0 M/M$ gives, \begin{eqnarray} \lefteqn{\sqrt{-g} \, g^{\mu\nu} i\Bigl[ \mbox{}_{\mu} \Delta_{\nu}\Bigr]_{u} = \frac{a^{D-2}}{M^2} \!\! \int \!\! 
\frac{d^{D-1}k}{(2\pi)^{D-1}} \Biggl\{ \frac12 \partial_0^2 + \frac32 (D\!-\!2) a H \partial_0 + \frac{\partial_0 M}{M} \partial_0 + a^2 M^2 } \nonumber \\ & & \hspace{.7cm} + (D\!-\!2) (D\!-\!1\!-\!\epsilon) a^2 H^2 + (D\!-\!2) a H \frac{\partial_0 M}{M} + \partial_0 \Bigl(\frac{\partial_0 M}{M} \Bigr) \Biggr\} (u u^*) \; . \qquad \label{utrace2} \end{eqnarray} The final, dimensionless form is very similar to (\ref{ttrace3}), \begin{eqnarray} \lefteqn{\sqrt{-g} \, g^{\mu\nu} i\Bigl[ \mbox{}_{\mu} \Delta_{\nu}\Bigr]_{u}(x;x) = \frac{e^{D n} \chi^2(n)}{(8\pi G)^{\frac{D}2 -1} \mu^2(n)} \!\! \int \!\! \frac{d^{D-1} \kappa}{(2\pi)^{D-1}} \, e^{\mathcal{U}(n,\kappa)} } \nonumber \\ & & \hspace{-0.1cm} \times \Biggl\{ \Bigl[ \frac12 \mathcal{U}'(n,\kappa) + \frac{\mu'(n)}{\mu(n)} + D\!-\!2 \Bigr]^2 - \frac{\kappa^2 e^{-2n}}{\chi^2(n)} + \frac1{4 \chi^2} e^{-2 [\mathcal{U}(n,\kappa) + (D-1)n]} \Biggr\} . \qquad \label{utrace3} \end{eqnarray} \subsection{Reheating} The inflaton $0$-mode equation (\ref{new0modeeqn}) takes the dimensionless form, \begin{equation} e^{n} \chi \frac{\partial}{\partial n} \Bigl[ e^{(D-1) n} \chi \psi'\Bigr] = - e^{D n} \psi \Biggl[ U'(\psi^2) + \frac{Q^2 \chi^2}{\mu^2} \!\! \int \!\! \frac{d^{D-1}\kappa}{(2\pi)^{D-1}} \Biggl\{ \qquad \Biggr\} \Biggr] \equiv \mathcal{F} \; , \label{force} \end{equation} where the term inside the curly brackets is the sum of the $t$ and $u$ contributions from expressions (\ref{ttrace3}) and (\ref{utrace3}), and $Q^2 \equiv q^2/(8\pi G)^{ \frac{D}2 -2}$ is the dimensionless charge. Multiplying both sides of (\ref{force}) by $e^{(D-2)n} \chi \psi'$ and integrating gives a curious generalization of the famous work-energy theorem of introductory physics, \begin{equation} e^{(D-1)n} \chi \psi' \frac{\partial}{\partial n} \Bigl[ e^{(D-1) n} \chi \psi' \Bigr] = \frac12 \frac{\partial}{\partial n} \Bigl[ e^{(D-1) n} \chi \psi'\Bigr]^2 = e^{(D-2) n} \chi \psi' \!\times\! \mathcal{F} \; . 
\label{WEthm1} \end{equation} We now integrate (\ref{WEthm1}) from the beginning of reheating (at $n = n_i$) to the end (at $n = n_f$), and use the fact that $\psi'(n_f) = 0$ at the end of reheating, \begin{equation} 0 - \frac12 \Bigl[ e^{(D-1) n_i} \chi_i \psi'_i\Bigr]^2 = \int_{n_i}^{n_f} \!\!\!\! dn \, e^{(D-2) n} \chi(n) \psi'(n) \mathcal{F}(n) \; . \label{WEthm2} \end{equation} Equation (\ref{WEthm2}) clearly implies that the product of $\psi'(n) \times \mathcal{F}(n)$ must be {\it negative} in order to suck the energy out of the inflaton $0$-mode. The classical contribution from $\psi' \times -\psi U'(\psi^2)$ is positive, so reheating must be driven by the quantum corrections from the $t$-modes and the $u$-modes, each of which has two positive and one negative contribution. From expressions (\ref{ttrace3}) and (\ref{utrace3}) we see that the desired negative contribution can only come from the $-\kappa^2 e^{-2n}/\chi^2(n)$ terms, however, it is not clear if the dominant effect comes from $t$-modes or $u$-modes. It is also not clear whether the largest contributions come from super-horizon modes (with $\kappa < \chi(n_e) e^{n_e}$, where $n_e$ denotes the end of inflation) or sub-horizon modes (with $\kappa > \chi(n_e) e^{n_e}$). Note that discretization can only recover the longest wavelength sub-horizon modes. Let us first examine sub-horizon modes, for which $\chi e^{n}/\kappa$ is small. In this case the ultraviolet expansions (\ref{TUVexp}) and (\ref{UUVexp}) imply that the multiplicative exponentials agree to leading order, \begin{equation} e^{\mathcal{T}(n,\kappa)} = \frac{e^{-(D-2) n}}{2 \kappa} \Biggl\{ 1 + O\Bigl( \frac{\chi^2 e^{2n}}{\kappa^2}\Bigr) \Biggr\} \;\; , \;\; e^{\mathcal{U}(n,\kappa)} = \frac{e^{-(D-2) n}}{2 \kappa} \Biggl\{ 1 + O\Bigl( \frac{\chi^2 e^{2n}}{\kappa^2}\Bigr) \Biggr\} \; . 
\end{equation} Substituting the same ultraviolet expansions into the curly bracketed parts of (\ref{ttrace3}) and (\ref{utrace3}) gives, \begin{eqnarray} \lefteqn{ \Bigl[ \frac12 \mathcal{T}' - \frac{\mu'}{\mu}\Bigr]^2 - \frac{\kappa^2 e^{-2n}}{\chi^2} + \frac{e^{-2 [\mathcal{T} + (D-1) n]}}{4 \chi^2} } \nonumber \\ & & \hspace{2cm} = -\frac12 (D\!-\!2) (1 \!-\! \epsilon) - (1\!-\!\epsilon) \frac{\mu'}{\mu} - \Bigl( \frac{\mu'}{\mu}\Bigr)' + O \Bigl( \frac{\chi^2 e^{2n}}{ \kappa^2}\Bigr) \; , \qquad \\ \lefteqn{ \Bigl[ \frac12 \mathcal{U}' + \frac{\mu'}{\mu}+ D\!-\!2 \Bigr]^2 - \frac{\kappa^2 e^{-2n}}{\chi^2} + \frac{e^{-2 [\mathcal{U} + (D-1) n]}}{4 \chi^2} } \nonumber \\ & & \hspace{2cm} = \frac12 (D\!-\!2) (1 \!-\! \epsilon) + (1\!-\!\epsilon) \frac{\mu'}{\mu} + \Bigl( \frac{\mu'}{\mu}\Bigr)' + O \Bigl( \frac{\chi^2 e^{2n}}{ \kappa^2}\Bigr) \; . \qquad \end{eqnarray} Hence there is perfect cancellation between the sub-horizon $t$-mode and $u$-mode contributions at leading order. Super-horizon modes cannot show the same cancellation because $\mathcal{T}(n,\kappa)$ approaches a large, negative constant (\ref{Tmin}) as $\mu(n)$ goes to zero, whereas $\mathcal{U}(n,\kappa)$ diverges like $\mathcal{U}_* + \ln[\mu_2^2/\mu^2(n)]$.\footnote{The constant $\mathcal{U}_*$ can be found from expression (\ref{UPostform}) by extracting the factor of $\ln[\mu_2^2/\mu^2(n)]$ and then setting $n = n_*$ in the remainder.} This means that the multiplicative exponentials take the form, \begin{equation} e^{\mathcal{T}(n,\kappa)} \longrightarrow e^{\mathcal{T}_{\rm min}} \qquad , \qquad e^{\mathcal{U}(n,\kappa)} \longrightarrow e^{\mathcal{U}_*} \Bigl[ \frac{\mu_2}{\mu(n)} \Bigr]^2 \; . \end{equation} The curly bracketed terms which involve explicit factors of $\mu'/\mu$ wind up depending on the functions $f(n)$ and $g(n)$, given in expressions (\ref{fsol}) and (\ref{gsol}), respectively, \begin{eqnarray} \Bigl[ \frac12 \mathcal{T}' - \frac{\mu'}{\mu}\Bigr]^2 &\!\!\! 
= \!\!\!& \frac14 f^2(n) \longrightarrow \Bigl[\frac{\mu'(n)}{\mu(n)}\Bigr]^2 \; , \\ \Bigl[ \frac12 \mathcal{U}' + \frac{\mu'}{\mu}+ D\!-\!2 \Bigr]^2 &\!\!\! = \!\!\!& \frac14 g^2(n) \longrightarrow 0 \; . \end{eqnarray} This means that the $t$-modes contribute positively, while the $u$-modes make a negative contribution, \begin{eqnarray} e^{\mathcal{T}} \!\times\! \Biggl\{ \qquad \Biggr\} &\!\!\! \longrightarrow \!\!\!& e^{\mathcal{T}_{\rm min}} \Bigl[ \frac{\mu'(n)}{\mu(n)}\Bigr]^2 \; , \label{tlead} \\ e^{\mathcal{U}} \!\times\! \Biggl\{ \qquad \Biggr\} &\!\!\! \longrightarrow \!\!\!& e^{\mathcal{U}_*} \Bigl[ \frac{\mu_2}{\mu(n)} \Bigr]^2 \!\times\! -\frac{\kappa^2 e^{-2n_*}}{\chi_*^2} \; . \qquad \label{ulead} \end{eqnarray} How large the relative coefficients are depends on the integration constant $f_2$, for which we do not yet have an analytic form. Whether (\ref{tlead}) or (\ref{ulead}) dominates, it is significant that both terms diverge like $1/\mu^2(n)$. Because the effective force contains another factor of $1/\mu^2(n)$, this means that the quantum correction diverges like $1/\mu^4(n)$ near the point $n_*$ at which $\mu(n)$ vanishes. The measure factor in (\ref{WEthm2}) softens this somewhat, but not enough, \begin{equation} Q^2 \psi'(n) \psi(n) dn = \frac14 d\mu^2 \; . \label{measure} \end{equation} The integral (\ref{WEthm2}) therefore diverges before $\mu(n) = 0$, which presumably brings reheating to an end. \section{Conclusions} Ema et al. have shown that coupling a charged inflaton to electromagnetism provides the most efficient reheating \cite{Ema:2016dny}. The mechanism is that the inflaton's evolution induces a time-dependent photon mass through the Higgs mechanism. Nothing special changes about the transverse spatial polarizations, but inverse powers of the mass appear in the longitudinal-temporal polarizations (\ref{tAs}) and (\ref{uAs}), which result from the photon having ``eaten'' the phase of the inflaton field. 
These factors diverge when the inflaton passes through zero. The effect is strengthened by factors of $\mu'(n)/\mu(n)$ which appear in the mass terms (\ref{tmass}-\ref{umass}) of the two modes. Our paper represents an effort to improve on previous excellent numerical studies of this process based on discretizing space \cite{Bezrukov:2020txg}. Although that method can accommodate arbitrarily strong photon fields, it is of course limited to a finite range of sub-horizon modes. In contrast, we use the trace of the coincident photon propagator to study the inflaton $0$-mode equation (\ref{new0modeeqn}). Our expressions (\ref{ttrace2}-\ref{ttrace3}) and (\ref{utrace2}-\ref{utrace3}) for the longitudinal and temporal contributions to this trace are exact. They can be used to include the effects of super-horizon modes, and of arbitrarily short wavelength modes. In fact, our use of dimensional regularization means that the far ultraviolet can be included as well, through the use of expansions (\ref{TUVexp}) and (\ref{UUVexp}). We have also derived good analytic approximations for the amplitudes. Before first horizon crossing these are (\ref{TUVexp}) and (\ref{UUVexp}), respectively. After first crossing the $t$-mode amplitude is well approximated by expression (\ref{TIRform}) until close to the point at which $\mu(n)=0$. However, expression (\ref{Tpfull}) shows that the $t$-mode amplitude remains finite when $\mu(n) = 0$. Two forms are required to approximate the $u$-mode amplitude after first horizon crossing, owing to its dependence on the complicated behavior of the $u$-mode mass term (\ref{umass}), which is evident from Figures~\ref{Latemass} and \ref{Zoomumass}. During inflation, the near constancy of $\epsilon(n)$ and $\mu^2_{u}(n)/\chi^2(n)$ results in expression (\ref{UIRform}) giving a good approximation. After the end of inflation the better approximation is provided by expression (\ref{UPostform}). 
Because this last form becomes exact as $\mu(n) \rightarrow 0$, we know that the $u$-mode amplitude diverges like $-\ln[\mu^2(n)]$, which provides an extra factor of $1/\mu^2(n)$ in the trace of the photon propagator (\ref{utrace3}). The obvious next step is to exploit the powerful analytic expressions we have derived to make a detailed numerical study of reheating in a realistic model, such as Starobinsky inflation \cite{Starobinsky:1980te}, Higgs inflation \cite{Bezrukov:2007ep}, or a hybrid model \cite{Bezrukov:2020txg}. Such an analysis would begin by renormalizing equation (\ref{new0modeeqn}), and then focus on determining whether the dominant effect for $\mu(n) \rightarrow 0$ comes from sub-horizon or super-horizon modes, and whether it is the $t$-modes or the $u$-modes which contribute more strongly. Another key issue is whether or not the effect is so strong that the inflaton is precluded from making even a single oscillation. Right now, it seems as if the strongest effect comes from super-horizon $u$-modes, and this contribution is so strong that the inflaton $0$-mode is prevented from passing through zero. Finally, our extension of the vector propagator to include time-dependent masses in cosmological backgrounds has two obvious applications in addition to reheating. The first of these is the study of quantum corrections to the expansion history of classical inflation \cite{Miao:2015oba,Liao:2018sci, Kyriazis:2019xgj,Miao:2019bnq,Miao:2020zeh,Sivasankaran:2020dzp,Katuwal:2021kry}. Another obvious application is for the study of phase transitions in the early universe \cite{Ema:2016dny}. \centerline{\bf Acknowledgements} This work was partially supported by Taiwan MOST grant 110-2112-M-006-026; by NSF grants PHY-1912484 and PHY-2207514; and by the Institute for Fundamental Theory at the University of Florida.
Title: An adaptive algorithm for detecting double stars in astrometric surveys
Abstract: The paper develops a method for detecting optical binary stars based on the use of astrometric catalogs in combination with machine learning (ML) methods. A computational experiment was carried out on the example of the HIPPARCOS mission catalog and the Pan-STARRS (PS1) catalog by applying the suggested method. It has shown that the reliability of predicting a stellar binarity reaches 90-95%. We note the prospects and effectiveness of creating a proprietary research platform - Cognotron.
https://export.arxiv.org/pdf/2208.03269
\newcommand{\vdag}{(v)^\dagger} \newcommand\aastex{AAS\TeX} \newcommand\latex{La\TeX} \submitjournal{AJ} \shorttitle{Adaptive algorithm for detecting double stars} \shortauthors{Sazhin et al.} \graphicspath{{./}{figures/}} \begin{document} \title{An adaptive algorithm for detecting double stars in astrometric surveys} \author{Mikhail V. Sazhin} \affiliation{Sternberg Astronomical Institute of Lomonosov Moscow State University,\\ Universitetsky pr., 13, Moscow 119234, Russia} \author[0000-0002-3428-0106]{Valerian Sementsov} \affiliation{Sternberg Astronomical Institute of Lomonosov Moscow State University,\\ Universitetsky pr., 13, Moscow 119234, Russia} \author{Sergey Sorokin} \affiliation{Tver State University, Zhelyabova 33, Tver, Russia} \author{Dan Lubarskiy} \affiliation{Scienteco, Inc., Boston, MA} \author{Alexander Raikov} \affiliation{Institute of Control Sciences, Russian Academy of Sciences} \keywords{Binary stars (154) --- Fundamental parameters of stars (555) --- % Astroinformatics (78) --- Convolutional neural networks (1938) --- Random Forests (1935)} \section{Introduction} \label{sec:intro} The purpose of this work is to increase the accuracy of estimating the ratio of the number of binary to single stars by applying artificial intelligence (AI) methods to the classical astronomical techniques. Binary stars form dynamical systems that revolve under mutual gravitational attraction around a common center of mass. Binary stars are subdivided into visual binary stars, spectroscopic binary stars, and eclipsing variable stars \cite[pp. 17-21]{Hilditch2001}. Methods for their detection are, therefore, divided into astrometric, spectroscopic, and photometric. The development of classical methods of astronomical measurements calls for increasingly complex models, which may include more than ten free parameters per object. 
As a result, the proportion of discovered stars demonstrating non-linear motions is constantly increasing and, accordingly, so is the risk of an erroneous estimate of the ratio of binary to single stars. In addition, the classical methods of processing astrometric measurements suffer from the ``curse of dimensionality'': the computational cost of determining the nonlinear and free parameters grows exponentially with their number. In this paper, we propose to supplement the classical methods of estimating the aforementioned ratio with ML methods. The latter make it possible to work with a set of poorly defined parameters and reduce the growth of the computational volume with the number of parameters from exponential to polynomial. However, as is well known, modern ML methods do not always yield an explanation of the results obtained. For the purposes of this paper, this is not an obstacle, since the verification of the results derived by ML, as in the classical case, is carried out against direct observations. Applications of ML have an appealing and constantly increasing potential, primarily due to the development of ML methods and analytical methods for processing big data, as well as the growth of computing power. New versions of ML are being introduced, and the computing capabilities for their implementation are increasing. Methods are developing toward the ability to self-learn, adapt to the dynamics of the external environment, solve interdisciplinary tasks, plan, analyze, give explanations, and account for the subjective, non-local and wave aspects of the behavior of the research objects \cite{Wang2018,Raikov2021}. As an example of a successful application of ML in astronomy, one can mention the work \cite{becker2020}, presenting a real-time classifier of astronomical events for the Automatic Learning System for the Rapid Classification of Events (ALeRCE). 
The article \cite{CarrascoDavis2021} discusses photometric and spectroscopic observations of rapidly variable sources formed after the explosion of an astronomical object. To classify signals from the nuclei of galaxies, supernovae, asteroids, etc., a convolutional neural network (CNN) is used. At the same time, metadata is added to astronomical images in the form of a priori known functions and indicators, which helps achieve a high level of accuracy ($\approx94$\%). There are two types of classification methods --- based on a template and based on a light curve. The first is able to distinguish a richer taxonomy of events; the second uses only the first event alert. The classification of events by templates is based on the use of a CNN \cite{CarrascoDavis2021}. The CNN input requires images and metadata about the properties of objects from various catalogs. The standard shape of the event template is $63\times63$ pixels. The list of alert metadata includes about 15 indicators. The authors of \cite{ALeRCE} proposed a template-based classification method for distinguishing five different classes of events within the framework of ALeRCE, enabling the detection and alerting of supernova explosions (SNe) and the separation of SNe from other complex classes of events. Template classification is necessary for morphological differentiation of galactic nuclei, SNe, stars, asteroids and false alerts. It exploits the rotational invariance of the images. Modern astronomical instruments are able to assess the level of chaos caused by the explosion of objects \cite{Reyes2018}, estimate the size of a companion star \cite{Jiang2017}, and recognize, annotate and classify big data obtained from survey telescopes. The article is structured as follows. First, the HIPPARCOS catalog is discussed, together with an overview of previous work on the detection of binaries in this catalog. 
At the same time, the general principles of identifying unresolved binary stars in an astrometric survey are discussed. Following this, a computational experiment applying ML methods to astronomical research is described. Taking into account the features of ML, significant and control indicators of binarity are distinguished, and these indicators are used to classify single and binary stars. The stability of the constructed classification against changes in the observational selection of the training sample is checked. In conclusion, an estimate of the proportion of binary to single stars in the catalog is given. \section{Ground-based observations and cataloging of binary stars} \label{sec:bin-catalogs} The idea of the existence of physical binary and multiple star systems in the Universe was first considered by John Michell \cite{michell1767}. He applied what were then new statistical methods to the study of stars and demonstrated that many more stars occur in pairs or groups than a random distribution can explain. For the Pleiades cluster, Michell estimated the probability of such a close group of stars arising by chance at $5\cdot10^{-5}$. He concluded that stars in such binary or multiple star systems attract each other, which is the first argument for the very existence of binary stars and star clusters. His work on binary stars may have influenced William Herschel's research on the same topic, which took shape in the first catalog of binary stars \cite{hersh1785}. \subsection{Modern astrometric, photometric, and spectral methods for determining the binarity of stars} \label{subsec:bin-methods} Initially, it was proposed to identify objects with a small angular separation on the sky as binary stars (the so-called optical double stars). Later this was supplemented by other methods, primarily astrometric ones (see e.g. \cite{makkap2005}). 
Long-term observations of optical and astrometric binaries make it possible, in some cases, to determine the orbits of the components. At present, hundreds of such objects are known \cite{soder1999, hart2001}. This is the only direct method of determining the physical mass of stars. Astrometric methods are well suited to sufficiently wide star pairs. For closer, unresolved components, astrophysical methods (photometric or spectroscopic) are more effective. In most cases, the binarity of an object is revealed using criteria such as the Rayleigh criterion \cite{arenou2005,griffin1986}. Optical binary stars were discovered by applying the Rayleigh criterion: if the distance between the components exceeds the half-width of the point spread function (PSF), earlier methods consider the star to be optically binary. It is essential that for more than a hundred years the accuracy of determining the astrometric parameters of stars has been much better than the PSF width. This has driven the development of astrometric methods for detecting binary stars based on the features of their proper motions. Our work is also aimed at refining criteria for detecting the binarity of astrometric binary stars that would be orders of magnitude more sensitive than the Rayleigh criterion. \subsection{Binary star catalogs, binarity in stellar surveys} \label{subsec:bin-catalogs} Catalogs of binary stars have been published since the end of the XVIII century \cite{hersh1785}. The development of the situation is summarized in Table~\ref{tab:catadouble}, compiled from publications \cite{burnham1906, aitken1932, IDS1963, Lipaeva2014, wds1984, wds1997, wds2001, wds2021, CCDM1994, TDS2002}. 
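The classical Rayleigh-type test described above reduces to a comparison of the component separation with the diffraction limit $\theta \approx 1.22\,\lambda/D$. A minimal sketch follows; the wavelength and aperture values are illustrative assumptions, not taken from the paper:

```python
import math

def rayleigh_limit_arcsec(wavelength_m, aperture_m):
    """Diffraction limit theta = 1.22 * lambda / D, returned in arcseconds."""
    theta_rad = 1.22 * wavelength_m / aperture_m
    return math.degrees(theta_rad) * 3600.0

def optically_resolved(separation_arcsec, wavelength_m, aperture_m):
    """Classical test: the pair counts as an optical double only if the
    separation exceeds the diffraction limit of the instrument."""
    return separation_arcsec > rayleigh_limit_arcsec(wavelength_m, aperture_m)

# Illustrative: a 0.29 m aperture (roughly that of HIPPARCOS) at 550 nm
limit = rayleigh_limit_arcsec(550e-9, 0.29)   # about 0.48 arcsec
```

The point of the paper is precisely that astrometric errors are far smaller than this limit, so a criterion built on error anomalies can be much more sensitive than this threshold test.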
It is easy to see that the growth of the number of discovered binary systems is quite moderate: at the beginning of the XX century, astronomers used visual catalogs containing a total of about a million stars \cite{SD1886,CpD1896,CpD1897,CpD1900,BD1903}; in the middle of the century, the photographic Carte du Ciel with about 4.5 million stars \cite{Eichhorn1957, CdC1972,CdC1983} became available; the development of space astronomy required catalogs of tens of millions of objects \cite{GSC1990a,GSC1990b,GSC1990c,GSC12_2001}; and in the XXI century, electronic versions of photographic catalogs with a volume of about a billion stars were introduced to the scientific community \cite{GSC2_2008,USNOB_2003}. During this period, the volume of double star catalogs has grown only a few times. \begin{table} \caption{Catalogs of double stars}\label{tab:catadouble} \begin{center} \begin{tabular}{|l|r|r|r|p{2.7in}|} \hline ID & Year & \multicolumn{2}{c|}{Total number of} & References\\ ~ & ~ &\multicolumn{1}{p{1in}|}{multiple systems} &\multicolumn{1}{p{1in}|}{individual components} & \\ \hline Herschel& 1785& 434& ~ & \cite{hersh1785} \\ BDS & 1906& 13665 & ~ & \cite{burnham1906}\\ ADS & 1932 & 17180 & ~ &\cite{aitken1932}\\ IDS & 1961 & 56572 (29965) & 69819 (36861) & \cite{IDS1963,Lipaeva2014}\\ WDS & 1994& 73610 & 154333 & \cite{wds1984,wds1997,wds2001,wds2021}\\ CCDM &1994,2002 &34031& 74861&\cite{CCDM1994}\\ Tycho-3&2001 &32631 &103259 & \cite{TDS2002} \\ \hline \end{tabular} \end{center} \end{table} The above situation is typical for survey catalogs limited by the faintest observable stellar magnitude. Most of the stars in such a catalog \cite{kharchenko_2001} are only slightly brighter than the detection limit, and identifying binarity in this case is problematic. 
A survey of star catalogs compiled according to a somewhat different principle, the identification of all objects within a given volume of space \cite{Gliese1991}, significantly changes the statistics \cite{imf_2010, Duquennoy1991a,Duquennoy1991b}. Dimmer objects of late spectral classes begin to prevail in samples of the stars nearest to the Sun, and the proportion of stars with signs of binarity approaches 50\%. \subsection{Theoretical models of the emergence of multiple stars, the percentage of binarity among the stars in the vicinity of the Solar System} \label{subsec:bin-theory} There is no common opinion yet in the research on the modes of collapse of protostellar clouds. For the appearance of single stars, theoretical models are well developed, and a sufficiently convincing initial mass function is obtained, which then gives reasonable results in further population calculations. According to \cite{kroupa2002}, the following holds for different masses of stars: \begin{equation*} \xi (M)={\begin{cases}k_{0}\left({\frac {M}{m_{0}}}\right)^{-\alpha_{0}}&,\quad m_{0}<M\leqslant m_{1}\\k_{1}\left({\frac {M}{m_{1}}}\right)^{-\alpha_{1}}&,\quad m_{1}<M\leqslant m_{2}\\k_{2}\left({\frac {M}{m_{2}}}\right)^{-\alpha_{2}}&,\quad m_{2}<M\end{cases}} \end{equation*} \noindent where $m_0 = 0.01m_\odot$, $m_1 = 0.08m_\odot$, $m_2 = 0.5m_\odot$, $\alpha_0 = 0.3$, $\alpha_1 = 1.3$, $\alpha_2 = 2.3$. The last segment is the initial mass function of \cite{salpeter1955}. There is no generally accepted model of binary star formation yet. There are several theoretical models of the collapse of protostellar clouds with an initial rotation. In such models gas compression must lead to the formation of a toroidal structure. This structure then decays into separate protostars, which form a multiple star system (in the simplest case, a binary one). 
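The broken power law above can be evaluated with a short function (masses in units of $m_\odot$). The paper lists only the exponents and break masses; fixing $k_1$ and $k_2$ by continuity at the breaks is our added assumption, standard for the Kroupa IMF:

```python
def kroupa_imf(m, k0=1.0, m0=0.01, m1=0.08, m2=0.5,
               a0=0.3, a1=1.3, a2=2.3):
    """Broken power-law IMF xi(M); k1 and k2 are fixed by requiring
    continuity of xi at the break masses m1 and m2 (our assumption)."""
    k1 = k0 * (m1 / m0) ** (-a0)   # matches segment 1 at m = m1
    k2 = k1 * (m2 / m1) ** (-a1)   # matches segment 2 at m = m2
    if m0 < m <= m1:
        return k0 * (m / m0) ** (-a0)
    elif m1 < m <= m2:
        return k1 * (m / m1) ** (-a1)
    elif m > m2:
        return k2 * (m / m2) ** (-a2)
    raise ValueError("mass at or below the lower cutoff m0")
```

With this normalization the function is continuous across the breaks and falls off steeply (Salpeter slope) above $0.5\,m_\odot$.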
Taking into account various physical mechanisms (for example, the degree of influence of the magnetic field) leads to significantly different results, none of which is fully confirmed by observations. Our study of binary star statistics is carried out under conditions \cite{Gliese1991} of an extremely high probability of object binarity (about 50\%) and in the absence of a generally accepted astrophysical theory. \section{HIPPARCOS main catalog} \label{sec:hipparcos} The HIPPARCOS spacecraft became the first specialized astrometric satellite. High-precision optical measurements require a long-focus instrument, which therefore has a small field of view. \subsection{Observation technique and the structure of the catalog. Sources of information and data on binarity} \label{subsec:hip-technic} The task of carrying out high-precision measurements over the entire celestial sphere determined the instrument design: two fields of view with a diameter of about $0.9^\circ$ each, spaced from each other by $\approx58^\circ$. The device rotated with a period of 120 minutes around an axis perpendicular to the plane in which the entrance pupils lie \cite{hipp1989a}, \nocite{hipp1989b} and the axis itself (according to the observation plan) slowly precessed along a cone of $43^\circ$ around the direction to the Sun. All these movements combined led to uniform coverage of the celestial sphere by the observations and, on the other hand, did not allow the satellite's solar panels to deviate significantly from the direction of the Sun and lose power. To increase the stability of the measurements, it was not simply the passage of a star through the field of view of the photodetector that was recorded, but its passage through a grid of 2660 slits. The recorded coordinates thus ended up being one-dimensional: they were tied to a great circle that remained the same for several revolutions of the satellite. 
During the processing stage, the parameters of the circles were linked to each other for the entire celestial sphere; then the spherical coordinates of individual stars, their parallaxes and proper motions were calculated \cite{hipp1989c}. The results of the satellite observations \cite{hip1997a} showed a coordinate accuracy about 100 times better than that of ground-based observations. During the experiment, this made it possible to determine the coordinates, parallaxes and proper motions with a high accuracy of about 1 milliarcsecond. That is, an experiment of 3.5 years yielded a result comparable to the astrometric activity of an entire century, and in some cases (in terms of high-precision parallaxes) even surpassed it. We have to note that, strictly speaking, this holds only for the objects of the HIPPARCOS program, subject to a posteriori confirmation of the accepted source model. The binarity of objects in the main HIPPARCOS (HIP) catalog was initially established by their membership in the Catalog of Components of Double and Multiple Stars (CCDM) \cite{CCDM1994}. At the same time, new optical binaries reliably detected during the observations and separated by a Rayleigh-type criterion were added to this catalog (about 5\% of its volume). \subsection{Further work on the detection of double stars in HIP}\label{subsec:hip-detection} After the publication of the main HIPPARCOS catalog and the supplementary Tycho catalog, their in-depth study began. The latter catalog was obtained not from observations with the main HIPPARCOS photodetector, but from the signals of the service star sensors, originally intended to detect the appearance of a program object in the field of view immediately before the start of the main measurements. 
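To put the quoted 1 mas accuracy in context, a short sketch (with illustrative numbers, not values from the paper) converts a parallax and its error into a distance and a relative distance error via the standard relation $d\,[\mathrm{pc}] = 1/p\,[\mathrm{arcsec}]$:

```python
def parallax_to_distance_pc(parallax_mas):
    """d [pc] = 1 / p [arcsec]; the parallax is given in milliarcseconds."""
    return 1.0 / (parallax_mas / 1000.0)

def relative_distance_error(parallax_mas, sigma_mas=1.0):
    """To first order, sigma_d / d = sigma_p / p."""
    return sigma_mas / parallax_mas

# Illustrative: a star with a 10 mas parallax lies at 100 pc; with a
# 1 mas measurement error its distance is known to about 10%.
d = parallax_to_distance_pc(10.0)
err = relative_distance_error(10.0)
```

This is why milliarcsecond astrometry turned parallaxes, previously reliable only for the very nearest stars, into a precision tool out to hundreds of parsecs.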
This additional material allows us to obtain the coordinates of about a million stars with an accuracy one to one and a half orders of magnitude worse than in the main HIP catalog. Considerable research effort was aimed at improving the accuracy of the Tycho catalog's proper motions and increasing its size based on satellite observation records and Carte du Ciel data, resulting in the Tycho-2 catalog \cite{Tycho-2}. Later, this information formed the basis of an additional Tycho catalog of binary stars \cite{TDS2002} and led to the discovery of new binary stars in the main HIP catalog \cite{makkap2005}. We will use the latter list below to test the operation of machine learning algorithms. \subsection{General principles of detection of unresolved binaries in an astrometric survey} \label{subsec:hip-general} It is apparent that many different criteria for detecting the binarity of an object by measuring any one parameter (the ellipsoid of the visible image, the deviation of proper motion from a straight line, anomalous photometry, bifurcation of spectral lines) have already been used, generally speaking, by the authors of the observations or their closest followers, and cannot yield anything new. The general principle of detecting binarity in these cases is approximately the same: the Rayleigh criterion in one case, or image ellipticity significant in comparison with the point spread function (PSF) in another, are similar to each other and do not make it possible to detect binarity if the separation (or anomalous proper motion) of the components is less than the errors, the width of the spectral line, or the PSF itself. On the other hand, it is obvious that an optical separation of the components of the assumed binary at distances greater than the errors of the coordinate measurements should somehow manifest itself in the results of the observations. 
Our work consists in verifying that anomalies in the errors of the measured values appear for unresolved, as well as for resolved, binary stars. The proposed method relies on using the widest possible set of data for each object. The next section shows how various artificial intelligence algorithms are trained on a training sample and how significant indicators of duality, as well as indicators of negligible significance, are identified. Additional checks are also performed: on the sampling effect, on the influence of the chosen machine learning method, on the influence of observational selection in the training sample, and on the dependence on imperfections in the reduction procedure of the main catalog. In addition, some parameters of stars that should not depend on duality are analyzed in order to verify the algorithms used. \section{Application of ML methods for detection of binary star systems}\label{sec:ML} In this work we present the results of applying modern ML methods and AI systems to the data of the HIP catalog to increase the quality of the data received via the astrometric satellite. More specifically, we used neural networks and decision-tree-based models. \subsection{Feature extraction}\label{subsec:ML-features} The HIP catalog contains many characteristics of stars from the main list (there are 77 data fields in the catalog \cite{hip1997a}). From them, we excluded fields such as references to other catalogs or data sources. We also excluded astronomical coordinates: on the one hand, the catalog covers only a small volume of the Galaxy in a close neighborhood of the Sun, where the structure of the Galaxy is not prominent; on the other hand, exact values of star coordinates are enough to uniquely identify a star, so the ML models would simply memorize which stars are binary and which are not, without trying to extract dependencies from other fields. 
The spectral class field of a star is presented in the HIP catalog as a text field, which prevents its direct use as an input to ML models. To use information from this field, the text format was converted to a synthetic (manually created) feature vector that describes the spectral data in a format suitable for ML models. The complete set of features of this vector is shown in Appendix A. Thus, the catalog data was mapped into a space of 73 features, including 34 numeric values taken directly from the HIP catalog and 39 features derived from the spectral classification fields. The table showing all catalog fields used in the analysis is given in Appendix B. A non-empty value of the CCDM field of the HIP catalog was used as the target feature. \subsection{Data analysis models}\label{subsec:ML-models} In the AI research platform we experimented with two kinds of models to process the data: an ensemble of neural networks and an ensemble of XGBoost (eXtreme Gradient Boosting \cite{XGBoost2016}) decision-tree models. Neural network ensembles were built from feed-forward networks composed of two fully-connected hidden layers of 200 and 100 neurons with the ReLU (Rectified Linear Unit) activation function, followed by an output layer of a single neuron with a sigmoid activation function. Before being fed to the neural network, the data was passed through a normalization layer. The networks were trained using the Adam (Adaptive Moment Estimation) \cite{Adam2015} algorithm, optimizing a binary cross-entropy loss. Neural networks were implemented and trained using the Keras \cite{Keras2015} library, included in the Tensorflow deep learning framework \cite{tensorflow2015}. To create an ensemble of neural networks, we split the original HIP catalog dataset into 25 random subsets, while preserving the proportion of the target binarity feature in each subset. This was done using the StratifiedShuffleSplit function from the sklearn library \cite{sklearn2011}. 
Then, each single subset was used as the training set for a neural network, while the remaining 24 subsets were jointly used as the test set. These 25 networks were grouped into an ensemble, which was used to calculate the final output value for all stars in the HIP catalog. An ensemble of decision-tree models was trained using the same technique. Base models were created using the XGBClassifier implementation of the extreme gradient boosting algorithm from the XGBoost library \cite{XGBoost2016}. The following parameters were used for each of the 25 classifiers: {\tt learning\_rate=0.01}; {\tt n\_estimators=1811}; {\tt max\_depth=6}; {\tt min\_child\_weight=4}; {\tt gamma=0.4}; {\tt subsample=0.9}; {\tt colsample\_bytree=0.8}; {\tt objective= 'binary:logistic'}; {\tt nthread=4}; {\tt scale\_pos\_weight=1}; {\tt seed=27}. The parameters essential for learning were found using cross-validation. Since there is reason to believe that not all multiple systems are marked in the HIPPARCOS catalog, to identify candidates for binary systems we propose an approach that is often used in data analysis to identify labeling errors: train a classifier on the initial (incompletely labeled) data set and then consider the objects with the maximum error as candidates for incorrect labeling \cite{Brodley1999,Zhu2003,Angelova2005,Huang_2019}. In the context of our task this means that we are looking for stars which are not marked as binary in the HIPPARCOS catalog but, at the same time, show a high probability of being binary according to the ML models trained on the datasets from the catalog. For such an object, despite the fact that it was used in the training process as ``not a binary star'', i.e. the ML model was instructed that the probability of its binarity equals 0, the ML algorithm nevertheless insists on its duality, based on the patterns it derives from the data. 
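The candidate-selection step described above can be sketched in a few lines of Python. This is an illustrative sketch only: the function name, the toy data and the probability threshold are assumptions, not part of the published pipeline.

```python
def mislabel_candidates(hip_ids, labels, probabilities, threshold=0.6):
    """Flag stars labeled 'not binary' (label 0) for which the trained
    ensemble nevertheless predicts a high probability of binarity.
    These are the candidates for labeling errors, i.e. possible
    previously undetected double stars.  (Illustrative sketch; the
    threshold value is an assumption.)"""
    return [hip_id
            for hip_id, label, p in zip(hip_ids, labels, probabilities)
            if label == 0 and p > threshold]

# Toy example: stars 101 and 104 are labeled non-binary in the catalog
# but receive high ensemble-averaged probabilities of binarity.
ids = [101, 102, 103, 104]
labels = [0, 1, 0, 0]             # 1 = marked binary via a non-empty CCDM field
probs = [0.90, 0.95, 0.20, 0.70]  # ensemble-averaged model outputs
candidates = mislabel_candidates(ids, labels, probs)  # -> [101, 104]
```

In an iterative scheme, such candidates would then be verified independently, the labels corrected, and the models retrained.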
\subsection{Importance of features required for classifying a star as binary}\label{subsec:ML-ranging} The XGBoost library provides means to access various statistics of a model, including feature importances for a trained classifier. Fig. \ref{fig:1-features} shows that various statistical parameters from the catalog are significant, while, for example, the spectral parameters are not. This means that the ML algorithms confirmed the result, well known from the catalogs of double and multiple stars, that the duality of a star is only loosely correlated with the spectral characteristics of the pair. Thus, we can conclude that the training dataset used is consistent in this regard. The importance score shown on the x-axis of Figure \ref{fig:2-robustness} represents the number of times a feature was used as a decision variable in the trees, averaged over the ensemble members. Our computational experiments show that this score is consistent across ensemble members and resilient to changes in the random stages of algorithm training, such as the split of the ensemble members into individual training groups or the seed parameter of XGBClassifier. \subsection{Robustness of the classification algorithm}\label{subsec:ML-robustness} The training set for the ML algorithms consists of the subset of catalog stars \cite{hip1997a} assigned a non-empty value of the CCDM \cite{CCDM1994} field, i.e. recognized as double. The very procedure for establishing binarity based on the anomalous proximity of neighboring stars (see above) is subject to a very strong observational selection: binarity according to the catalogs \cite{CCDM1994, wds2021} turns out to be much more probable for bright stars (of which there are few) than for the more numerous faint stars. In order to test the influence of this effect on the work of the ML algorithms, an independent training run was carried out, in which the photometric values were excluded from the model input. The specific parameters that were used at this stage are given in the table in Appendix B. 
The star color indices and photometric errors were retained as input parameters: the former --- to control the adequacy of the classification algorithm, the latter --- as one of the indicators of the possible duality of the object. The results of the HIPPARCOS consortium published in \cite{hip1997a} included the solution of a system of nonlinear reduction equations for determining the kinematic parameters of stars. The system of equations was solved by iteration. One iteration in 1995--1996 took about half a year of calculations, and it was this iteration that was subsequently published. Ten years later, one of the members of the consortium repeated the processing of the preserved observational material, somewhat improving the reduction scheme and achieving convergence of the iterations (by that time, one iteration took about a week of computing time). The resulting new HIPPARCOS reduction was published \cite{hip_new2007a, hip_new2007b} and showed slightly better accuracy, especially for bright stars. It has not been accepted as a coordinate standard, but it is actively used in scientific research. For testing the approach proposed in this paper, this new reduction is important, since it allows one to check the stability of the ML algorithms against the non-Gaussian nature of the errors of the parameters being analyzed. Since convergence of the solution of the system of nonlinear equations was not achieved in the HIP catalog (it was only possible to verify the absence of noticeable divergence), the errors of the quantities being determined will inevitably have a non-zero mean, which means that they will be non-Gaussian. The training of the ML models was carried out using data from the original HIP catalog together with data from its new reduction. A specific list of fields taken from the new reduction \cite{hip_new2007b} is given in the table in Appendix C. 
Also, two Boolean fields were added, indicating the presence of a 7- and 9-component solution (the presence of the star in the \textbf{hip7p.dat} and \textbf{hip9p.dat} tables from \cite{hip_new2007b}). The specific values of these solutions were not used, since they are available only for a small part of the catalog stars, and it is unlikely that they could be used effectively by the ML models. The machine learning experiment based on the material of \cite{hip_new2007b} showed that the main statistical characteristics of the catalog remain stable for the purpose of classifying objects as binary stars (Fig. \ref{fig:2-robustness}). The resilience of the results to the sampling effect was also tested. \subsection{Prediction of duality probability of HIP catalog stars by ML models}\label{subsec:ML-predict} Since the output of the proposed models is the probability of duality for each star of the HIPPARCOS catalog, it is natural to consider stars for which this probability exceeds a certain threshold as candidates for binary stars. Figure \ref{fig:3-prediction} shows how many new candidates for binary systems can be identified using the proposed models for different threshold values. The solid lines correspond to binary system candidates identified in comparison with the labeling of the original HIP catalog, and the dotted lines correspond to those found in \cite{makkap2005}. The graph at the bottom of the figure shows what the percentage of multiple systems in the catalog would be at different values of the probability threshold. Since the outputs of the two models --- the ensemble of neural networks and the ensemble of decision trees --- differ from each other, a natural next step is to group these models together into one joint ensemble including models of both types. In Fig. \ref{fig:3-prediction}, along with the results of the ensemble of neural networks and the ensemble of decision trees, the results of the combined model are also presented. 
The data in Fig. \ref{fig:3-prediction} correspond to the results obtained when training the models using data from both the original HIP catalog \cite{hip1997b} and the new reduction \cite{hip_new2007b}, with the set of variables that limits the effect of observational selection. The methodology used in this work does not imply the identification of all binary star candidates in one run. Solving a data analysis problem under conditions of partially incorrect labeling \cite{Brodley1999,Zhu2003,Angelova2005,Huang_2019} implies an iterative process: after identifying the most probable labeling errors (in our case, previously unidentified double stars), it is necessary to verify the corresponding objects, correct the labeling and repeat the training. This cycle can be repeated several times until a satisfactory result is achieved. This article presents the first iteration of this process. \subsection{Comparison of ML methods' results with other publications}\label{subsec:ML-compare} The comparison was carried out against the most extensive work on the identification of additional double stars in the HIP catalog --- the paper by Makarov and Kaplan \cite{makkap2005}, where the astrometric method was used with additional information from the Tycho-2 catalog \cite{Tycho-2}. Figure \ref{fig:4-compare}A shows the probabilities computed by our ML models for stars identified as double in \cite{makkap2005}. The axes of the graph correspond to the probabilities of duality calculated by the ensemble of neural networks and by the ensemble of tree classifiers. Figure \ref{fig:4-compare}B shows, along with the stars identified in \cite{makkap2005}, stars that are designated as double neither in \cite{makkap2005} nor in the original HIPPARCOS catalog but have binarity probability estimates of $>60$\% from the ML models. 
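A minimal sketch of the joint-ensemble and thresholding steps. Since the text does not specify the combination rule, the sketch assumes the joint ensemble simply averages the outputs of the two component ensembles; the function names and toy numbers are illustrative assumptions.

```python
def combined_probability(p_nn, p_xgb):
    """Average the outputs of the two ensembles (assumed combination
    rule; the paper does not state how the joint ensemble is formed)."""
    return [(a + b) / 2.0 for a, b in zip(p_nn, p_xgb)]

def count_new_candidates(probs, already_binary, threshold):
    """Count stars above the probability threshold that are not already
    marked as binary in the catalog labeling."""
    return sum(1 for p, marked in zip(probs, already_binary)
               if p > threshold and not marked)

# Toy example with three stars.
p_nn = [0.90, 0.40, 0.80]        # neural-network ensemble outputs
p_xgb = [0.70, 0.60, 0.80]       # decision-tree ensemble outputs
marked = [False, False, True]    # already flagged as binary via CCDM
p_joint = combined_probability(p_nn, p_xgb)   # roughly [0.8, 0.5, 0.8]
n_new = count_new_candidates(p_joint, marked, threshold=0.7)  # -> 1
```

Sweeping `threshold` over a grid of values and recording `n_new` reproduces the kind of candidate-count curve shown in the prediction figure.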
Analyzing graph \ref{fig:4-compare}(A), one can see that most of the multiple stars identified in \cite{makkap2005} received low probabilities of duality when assessed using the ML system. On the other hand, plot \ref{fig:4-compare}(B) shows that most of the stars proposed as candidates for duality by the ML models were not identified in \cite{makkap2005}. Thus, we can conclude that the patterns identified by machine learning models and the corresponding candidate stars differ from the results of \cite{makkap2005}. In this regard, it may be of interest to include the binary stars identified in \cite{makkap2005} in the training set, to train and study the resulting models. \subsection{Observation-based model verification} \label{subsec:observ} To verify the obtained results of applying ML to the objects of the HIPPARCOS catalog, the lists of the most probable double stars were compared to the double stars from the Pan-STARRS catalog of objects \cite{PS1}. The search for neighbors was carried out in the vicinity of the HIPPARCOS star with a radius of $5''$. This is the size of the working area of the photodetector that took the measurements. 
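The neighbor search around each HIPPARCOS star can be sketched as follows. The function names are illustrative assumptions; the angular separation is computed with the standard Vincenty formula, which is numerically stable at arcsecond scales.

```python
import math

def angular_separation_arcsec(ra1, dec1, ra2, dec2):
    """Angular separation between two sky positions (input in degrees,
    output in arcseconds), using the Vincenty formula."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    dra = ra2 - ra1
    num = math.hypot(
        math.cos(dec2) * math.sin(dra),
        math.cos(dec1) * math.sin(dec2)
        - math.sin(dec1) * math.cos(dec2) * math.cos(dra),
    )
    den = (math.sin(dec1) * math.sin(dec2)
           + math.cos(dec1) * math.cos(dec2) * math.cos(dra))
    return math.degrees(math.atan2(num, den)) * 3600.0

def neighbours_within(target, stars, radius_arcsec=5.0):
    """Return all catalog entries within `radius_arcsec` of `target`.
    `target` and each star are (ra_deg, dec_deg) tuples; the default
    radius matches the 5'' search radius used in the text."""
    return [s for s in stars
            if angular_separation_arcsec(target[0], target[1],
                                         s[0], s[1]) <= radius_arcsec]
```

For example, a star offset from the target by 3 arcsec in declination is returned as a neighbour, while one offset by 10 arcsec is not.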
\begin{table} \caption{Identification of double star candidates in PS1} \label{table:panstarrs} \begin{center} \begin{tabular}{|l|r|r|r|r|} \hline selection criteria & number of objects & \multicolumn{1}{p{1.2in}|}{multiplicity found (components found)} & \multicolumn{1}{p{1.2in}|}{components not found} & star not found\footnote{the catalog PS1 was obtained from the observations of the telescope installed in the Hawaiian Islands, part of the sky is not available for observation} \\ \hline NN, $p>0.8$ & 214 & 142 (895*) & 10\footnote{2 spectroscopic binaries, 3 stars with large proper motion, 1 against the background of the galaxy} & 72 \\ \hline XGB, $p>0.7$ & 109 & 69 (430*) & 5\footnote{3 spectroscopic binaries } & 50 \\ \hline \end{tabular} \end{center} \end{table} Thus, the developed mechanism gives a 90--95\% probability of correct prediction of duality (recall that, a priori, the probability of duality of a randomly selected star is about 50\%). \section{Conclusions} \label{sec:conclusions} The methodological approach developed in the Cognotron research platform and presented in this article, together with the experiments performed, shows that the use of machine learning methods on the data of the HIPPARCOS catalog makes it possible to extract additional information and identify double and multiple star systems that could not previously be detected by classical methods. This is the result of the discovery, by machine learning methods, of complex relationships between the astrometric and photometric characteristics and the errors of these characteristics. Classical methods are based on the analysis of isolated characteristics, or of small groups of characteristics, and are limited by the accuracy of their measurements. 
The combination of a larger number of characteristics and their errors in the analysis, which is achieved by using machine learning methods, is equivalent to using a larger number of accumulated light quanta during long-term observation, which makes it possible to increase the accuracy of detecting binary stars. The disadvantage of the proposed method is that it does not allow introducing a strict criterion for the duality of stars. In classical methods, criteria of this kind are formulated on the basis of known physical laws prior to the analysis. However, it is not theoretically possible to formulate such a criterion describing the relationship between several dozen characteristics. Machine learning methods in this case rely on the automated extraction of dependencies from labeled data. But in the case at hand there is no correctly labeled training sample, and any patterns are extracted from data in which a significant part of the binary stars are not labeled as such. An immediate consequence of this situation is that the output values of the data analysis models proposed in this paper cannot be considered as probabilities of stellar duality. Since the proportion of binary stars in the training sample is underestimated, it should be expected that the output values will also be underestimated compared to the true probabilities. The procedure for detecting binary stars also becomes more complicated. To fully use the potential of the method proposed in this paper, it is necessary to implement an iterative procedure, during which the binary star candidates proposed by the ML models are independently verified, the labeling of the training data is changed based on this verification, new models are trained that offer the next set of candidates, and so on. This article, in fact, presents the results of one such iteration. 
It is important that this work demonstrates that during such an iteration it is indeed possible to select objects with a high probability of duality (90\% of the objects were confirmed according to the PS1 catalog). This means that: \begin{itemize} \item in the HIPPARCOS data, there are indeed significant dependencies indicating the duality of stellar systems; \item existing machine learning methods allow such dependencies to be detected and used to identify double stars that were not detected by classical methods. \end{itemize} The significant features identified by the ML algorithms for the double-star classification are of high interest as well. In the process of training and verifying the ML models, it turned out that the machine learning algorithms quite reliably identify a group of significant features associated with the statistical characteristics of the observed values. It is also shown that the identification of duality only loosely depends on the spectral characteristics of the pair. The methods turned out to be resilient to observational selection in the training set itself. In addition, the parameters of objects in the HIPPARCOS catalog which, according to the studies of other authors, are not related to the multiplicity/duality of stars (for example, various spectral characteristics) showed low significance, which is additional evidence of the effectiveness of this method. The application of the ML-based approach proposed in this paper to the data and catalogs of other missions (for example, GAIA \cite{GAIA_EDR3_2021}) is also possible and promising, but so far seems premature. Combining data across multiple missions is of theoretical interest, but may be accompanied by difficulties due to the differences in the values obtained by different instruments and due to different observation schemes. In addition, their statistical characteristics will also differ, which may introduce difficulties for the ML algorithms. 
An additional complication arises from the difference in the operating ranges of stellar magnitudes, although an intersection does take place. Such a combination is also complicated by the fact that, according to pre-flight plans, the publication of data on relatively complex objects will be carried out at the final stages of the experiment. Despite the difficulties and obstacles that arise, the ML approach proposed in this paper makes fuller use of the measurements than the classical approach and can help extract new knowledge and stimulate the generation of new ideas. A specific AI platform named ``Cognotron'' was used in our study. \newpage \appendix \section{} \label{sec:appA} % \startlongtable \begin{deluxetable*}{llp{2in}p{1in}} \tablecaption{Spectrum Description fields\label{tab:spec}} \tabletypesize{\footnotesize} \tablewidth{0pt} \tablehead{ \colhead{Field} & \colhead{Range} & \colhead{Description} & \colhead{Notation in the field SpType} } \decimalcolnumbers \startdata SpectralClass & 0...8 & Spectral class & O, B, A, F, G, K, M, L, T, C, S, SC, WN, WC, WO, WR, R, N, DA, DB, DC, DO, DZ, DQ, DG, DF, CN\\ \hline Luminosity\_I & Boolean & Luminosity Class I & I, Ia, Iab, Ib\\ \hline Luminosity\_II & Boolean & Luminosity Class II & II, IIa, IIb\\ \hline Luminosity\_III & Boolean & Luminosity Class III & III, IIIa, IIIb\\ \hline Luminosity\_IV & Boolean & Luminosity Class IV & IV, IVa\\ \hline Luminosity\_V & Boolean & Luminosity Class V & V, Va, Vb\\\hline Luminosity\_VI & Boolean & Luminosity Class VI & VI\\\hline Luminosity\_uncertain & Boolean & Luminosity class is not precisely defined & :\\\hline subdwarf & Boolean & Subdwarf & sd\\\hline WhiteDwarf & Boolean & White dwarf & DA, DB, DC, DO, DZ, DQ, DG, DF\\\hline W & Boolean & Wolf-Rayet star & WN\\\hline Carbon & Boolean & C-type star (Carbon star) & C\\\hline S & Boolean & S-type star (Zirconium star) & SC\\\hline R & Boolean & Spectral class R & R\\\hline N & Boolean & Spectral class N & 
N\\\hline MN & Boolean & ~ & MN\\\hline nebulous & 0, 1, 2 & Wide spectrum lines & n, nn, n:\\\hline enhanced\_metal & Boolean & Strong Metal Lines & m, m:\\\hline peculiar & Boolean & Spectrum Peculiarities & p, p:, +pec\\\hline shell & Boolean & Shell Star & sh, +shell, shell\\\hline emission & Boolean & Emission lines & e, e:, eq:, E\\\hline weak\_lines & Boolean & Weak lines & w, wk, wl\\\hline sharp\_lines & Boolean & Narrow lines & s, ss, s:\\\hline variable & Boolean & Variable spectrum & v, va, var\\\hline weak\_helium & Boolean & Weak lines of Helium & Hewk\\\hline NIIandHeIIEmission & Boolean & N~III emission, absence or weak absorption of He~II & (f)\\\hline HeIIabsorbtionNIIemission & Boolean & Strong He~II absorption, weak N~III emission & ((f))\\\hline composite & Boolean & A spectrally double star & comp\\\hline undescribed\_peculiarities & Boolean & Other features & ., ..., .., +..., +.., +., +....\\\hline SB & Boolean & Spectroscopic binary & SB, SB1, SB:\\\hline Sr & Boolean & Spectral lines of Strontium & Sr, Sr:\\\hline Cr & Boolean & Spectral lines of Chromium & Cr\\\hline Eu & Boolean & Spectral lines of Europium & Eu\\\hline Mn & Boolean & Spectral lines of Manganese & Mn\\\hline Hg & Boolean & Spectral lines of Mercury & Hg, Hg:\\\hline Si & Boolean & Silicon spectral lines & Si\\\hline Li & Boolean & Lithium spectral lines & Li\\\hline Del & Boolean & Spectrum like~Delta Delphini & delDel, dDel, deltaDel\\\hline Nova & Boolean & Nova Star & Nova\\\hline \enddata \end{deluxetable*} \newpage \section{} \label{sec:appB} % \startlongtable \begin{deluxetable*}{llcc} \tablecaption{Used parameters from the HIPPARCOS catalog\label{tab:hip}} \tabletypesize{\footnotesize} \tablewidth{0pt} \tablehead{ \colhead{Name} & \colhead{Description in the HIPPARCOS catalog} & \colhead{Used in the full set} & \colhead{Used in a set}\\ \colhead{} & \colhead{} & \colhead{} & \colhead{with a restriction of}\\ \colhead{} & \colhead{} & \colhead{} & 
\colhead{observational selection} } \decimalcolnumbers \startdata Name & & & \\\hline Catalog & Catalogue (H=Hipparcos) & & \\\hline HIP & Identifier (HIP number) & & \\\hline Proxy & Proximity flag & & \\\hline RAhms & Right ascension in h m s, ICRS (J1991.25) & & \\\hline DEdms & Declination in deg ' $''$, ICRS (J1991.25) & & \\\hline Vmag & Magnitude in Johnson V & + & \\\hline VarFlag & Coarse variability flag &+ &+ \\\hline r\_Vmag & Source of magnitude & & \\\hline RAdeg & alpha, degrees (ICRS, Epoch=J1991.25) & & \\\hline DEdeg & delta, degrees (ICRS, Epoch=J1991.25) & & \\\hline AstroRef & Reference flag for astrometry & & \\\hline Plx & Trigonometric parallax &+ &+ \\\hline pmRA & Proper motion mu\_alpha*cos(delta), ICRS &+ &+ \\\hline pmDE & Proper motion mu\_delta, ICRS &+ &+ \\\hline e\_RAdeg & Standard error in RA*cos(Dedeg) &+ & \\\hline e\_DEdeg & Standard error in DE &+ & \\\hline e\_Plx & Standard error in Plx & + & +\\\hline e\_pmRA & Standard error in pmRA & + & +\\\hline e\_pmDE & Standard error in pmDE & + & +\\\hline DE\_RA & Correlation, DE/RA*cos(delta) & + & +\\\hline Plx\_RA & Correlation, Plx/RA*cos(delta) & + & +\\\hline Plx\_DE & Correlation, Plx/DE & + & +\\\hline pmRA\_RA & Correlation, pmRA/RA*cos(delta) & + & +\\\hline pmRA\_DE & Correlation, pmRA/DE & + & +\\\hline pmRA\_Plx & Correlation, pmRA/Plx & + & +\\\hline pmDE\_RA & Correlation, pmDE/RA*cos(delta) & + & +\\\hline pmDE\_DE & Correlation, pmDE/DE & + & +\\\hline pmDE\_Plx & Correlation, pmDE/Plx & + & +\\\hline pmDE\_pmRA & Correlation, pmDE/pmRA & + & +\\\hline F1 & Percentage of rejected data & + & +\\\hline F2 & Goodness-of-fit parameter & + & +\\\hline H31 & HIP number (repetition) & & \\\hline BTmag & Mean BT magnitude & + & \\\hline e\_BTmag & Standard error on BTmag & + & \\\hline VTmag & Mean VT magnitude & + & \\\hline e\_VTmag & Standard error on VTmag & + & \\\hline m\_BTmag & Reference flag for BT and VTmag & & \\\hline B\_V & Johnson B-V colour & + & + \\\hline 
e\_B\_V & Standard error on B-V & + & +\\\hline r\_B\_V & Source of B-V from Ground or Tycho & & \\\hline V\_I & Colour index in Cousins' system & + & + \\\hline e\_V\_I & Standard error on V-I & + & + \\\hline r\_V\_I & Source of V-I & & \\\hline CombMag & Flag for combined Vmag, B-V, V-I & & \\\hline Hpmag & Median magnitude in Hipparcos system & + & \\\hline e\_Hpmag & Standard error on Hpmag & + & +\\\hline Hpscat & Scatter on Hpmag & + & +\\\hline o\_Hpmag & Number of observations for Hpmag & ~ & ~ \\\hline m\_Hpmag & Reference flag for Hpmag & ~ & ~ \\\hline Hpmax & Hpmag at maximum (5th percentile) & ~ & ~ \\\hline HPmin & Hpmag at minimum (95th percentile) & ~ & ~ \\\hline Period & Variability period (days) & + & +\\\hline HvarType & Variability type & ~ & ~ \\\hline moreVar & Additional data about variability & ~ & ~ \\\hline morePhoto & Light curve Annex & ~ & ~ \\\hline CCDM & CCDM identifier & ~ & ~ \\\hline n\_CCDM & Historical status flag & ~ & ~ \\\hline Nsys & Number of entries with same CCDM & ~ & ~ \\\hline Ncomp & Number of components in this entry & ~ & ~ \\\hline MultFlag & Double/Multiple Systems flag & ~ & ~ \\\hline Source & Astrometric source flag & ~ & ~ \\\hline Qual & Solution quality & ~ & ~ \\\hline m\_HIP & Component identifiers & ~ & ~ \\\hline theta & Position angle between components & ~ & ~ \\\hline rho & Angular separation between components & ~ & ~ \\\hline e\_rho & Standard error on rho & ~ & ~ \\\hline dHp & Magnitude difference of components & ~ & ~ \\\hline e\_dHp & Standard error on dHp & ~ & ~ \\\hline Survey & Flag indicating a Survey Star & ~ & ~ \\\hline Chart & Identification Chart & ~ & ~ \\\hline Notes & Existence of notes & ~ & ~ \\\hline HD & HD number {\textless}III/135{\textgreater} & ~ & ~ \\\hline BD & Bonner DM {\textless}I/119{\textgreater}, {\textless}I/122{\textgreater} & ~ & ~ \\\hline CoD & Cordoba Durchmusterung (DM) {\textless}I/114{\textgreater} & ~ & ~ \\\hline CPD & Cape Photographic DM 
{\textless}I/108{\textgreater} & ~ & ~ \\\hline V\_I\_red & V-I used for reductions & ~ & ~ \\\hline SpType & Spectral type \footnote{the list of used fields in decoded form is given in the Appendix A} & + & + \\\hline r\_SpType & Source of spectral type & ~ & ~ \\\hline \enddata \end{deluxetable*} \newpage \section{} \label{sec:appC} \startlongtable \begin{deluxetable*}{llcc} \tabletypesize{\footnotesize} \tablewidth{0pt} \tablecaption{Used parameters from the paper of Floor van Leeuwen\label{tab:fvl}} \tablehead{ \colhead{Name} & \colhead{Description from the } & \colhead{Used in the full set} & \colhead{Used in a set}\\ \colhead{} & \colhead{paper of Floor van Leeuwen} & \colhead{} & \colhead{with a restriction of}\\ \colhead{} & \colhead{} & \colhead{} & \colhead{observational selection} } \decimalcolnumbers \startdata HIP & Hipparcos identifier & & \\\hline Sn & [0,159] Solution type new reduction & & \\\hline So & [0,5] Solution type old reduction & & \\\hline Nc & Number of components & + & ~ \\\hline RArad & Right Ascension in ICRS, Ep=1991.25 & ~ & ~ \\\hline DErad & Declination in ICRS, Ep=1991.25 & ~ & ~ \\\hline Plx & Parallax & + & ~ \\\hline pmRA & Proper motion in Right Ascension & + & ~ \\\hline pmDE & Proper motion in Declination & + & ~ \\\hline e\_RArad & Formal error on RArad & + & +\\\hline e\_DErad & Formal error on DErad & + & +\\\hline e\_Plx & Formal error on Plx & + & +\\\hline e\_pmRA & Formal error on pmRA & + & +\\\hline e\_pmDE & Formal error on pmDE & + & +\\\hline Ntr & Number of field transits used & + & ~ \\\hline F2 & Goodness of fit & + & +\\\hline F1 & Percentage rejected data & + & +\\\hline var & Cosmic dispersion added & ~ & ~ \\\hline ic & Entry in one of the suppl.catalogues & ~ & ~ \\\hline Hpmag & Hipparcos magnitude & + & ~ \\\hline e\_Hpmag & Error on mean Hpmag & + & +\\\hline sHp & Scatter of Hpmag & + & +\\\hline VA & [0,2] Reference to variability annex & ~ & ~ \\\hline B-V & Colour index & + & ~ \\\hline e\_B-V & 
Formal error on colour index & + & + \\\hline V-I & V-I colour index & + & \\\hline UW & Upper-triangular weight matrix\footnote{each element of the matrix is represented as a separate parameter} & + & + \\\hline \enddata \end{deluxetable*} \bibliography{biss_hip}{} \bibliographystyle{aasjournal}
Title: Diffuse radio source candidate in CIZA J1358.9-4750
Abstract: We report results of our upgraded Giant Metrewave Radio Telescope (uGMRT) observations of an early-stage merging cluster, CIZA J1358.9-4750 (CIZA1359), in Band-3 (300--500 MHz). We achieved an image dynamic range of $\sim 17,000$ using direction-dependent calibration and found a diffuse source candidate at 4~$\sigma_{rms}$ significance. The flux density of this candidate is $24.04 \pm 2.48$~mJy at 400~MHz, well above the noise level. The radio power of the candidate is $2.40 \times 10^{24}$~W~Hz$^{-1}$, which is consistent with those of typical diffuse cluster emissions. The diffuse radio source candidate is associated with the part of an X-ray shock front where the Mach number reaches its maximum value of $\mathcal{M}\sim 1.7$. The observed spectral index ($F_\nu \propto \nu^{\alpha}$) of this source is $\alpha = -1.06 \pm 0.33$, which is consistent with the spectral index expected from the standard diffusive shock acceleration (DSA) model, but such a low Mach number with a short acceleration time would require seed cosmic rays supplied by past active galactic nucleus (AGN) activities of member galaxies, as suggested for some other clusters. We found seven possible seed radio sources in the same region as the candidate, which supports a model of radio emission with seeding. The magnetic field strength of this candidate was estimated, assuming energy equipartition between magnetic fields and cosmic rays, to be $2.1~\mu$G. We also find head-tail galaxies and radio phoenixes or fossils near CIZA1359.
https://export.arxiv.org/pdf/2208.04750
\Received{yyyy/mm/dd} \Accepted{yyyy/mm/dd} \newcommand*{\ta}[1]{\textcolor{cyan}{#1}} \newcommand*{\kk}[1]{\textcolor{magenta}{#1}} \newcommand*{\comment}[1]{\textcolor{blue}{#1}} \newcommand*{\rev}[1]{\textcolor{black}{#1}} \newcommand{\norv}[1]{{\textcolor{blue}{{\bf Nobu:}#1}}} \newcommand{\HA}[1]{{\textcolor{red}{{\bf HA:}#1}}} \newcommand{\CenterRow}[2]{ \dimen0=\ht\strutbox% \advance\dimen0\dp\strutbox% \multiply\dimen0 by#1% \divide\dimen0 by2% \advance\dimen0 by-.5\normalbaselineskip% \raisebox{-\dimen0}[0pt][0pt]{#2}} \title{{Diffuse radio source candidate in CIZA J1358.9-4750}} \author{Kohei \textsc{KURAHARA}\altaffilmark{1,*}% } \altaffiltext{1}{Mizusawa VLBI Observatory, National Astronomical Observatory of Japan, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan} \email{kohei.kurahara@nao.ac.jp} \author{Takuya \textsc{AKAHORI}\altaffilmark{1,2}} \altaffiltext{2}{Operation Division, Square Kilometre Array Observatory, Lower Withington, Macclesfield, Cheshire SK11 9FT, UK} \author{Ruta \textsc{KALE}\altaffilmark{3}} \altaffiltext{3}{National Centre for Radio Astrophysics, Tata Institute of Fundamental Research, S. P. 
Pune University Campus, Ganeshkhind, Pune 411007, India} \author{Hiroki \textsc{AKAMATSU}\altaffilmark{4}} \altaffiltext{4}{SRON Netherlands Institute for Space Research, Niels Bohrweg 4, 2333 CA Leiden, The Netherlands} \author{Yutaka \textsc{FUJITA}\altaffilmark{5}} \altaffiltext{5}{Department of Physics, Graduate School of Science, Tokyo Metropolitan University, 1-1 Minami-Osawa, Hachioji-shi, Tokyo 192-0397, Japan} \author{Liyi \textsc{GU}\altaffilmark{4}} \author{Huib \textsc{INTEMA}\altaffilmark{6}} \altaffiltext{6}{Leiden Observatory, Leiden University, Niels Bohrweg 2, 2333 CA, Leiden, The Netherlands} \author{Kazuhiro \textsc{NAKAZAWA}\altaffilmark{7}} \altaffiltext{7}{The Kobayashi-Maskawa Institute for the Origin of Particles and the Universe (or KMI), Nagoya University, Furo-cho, Chikusa-ku, Nagoya, 464-8602, Japan} \author{Nobuhiro \textsc{OKABE}\altaffilmark{8,9}} \altaffiltext{8}{Department of Physical Science, Hiroshima University, 1-3-1 Kagamiyama, Higashi-Hiroshima, Hiroshima 739-8526, Japan} \altaffiltext{9}{Hiroshima Astrophysical Science Center, Hiroshima University, 1-3-1 Kagamiyama, Higashi-Hiroshima, Hiroshima 739-8526, Japan} \author{Yuki \textsc{OMIYA}\altaffilmark{10}} \altaffiltext{10}{Department of Physics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, Aichi 464-8601, Japan} \author{Viral \textsc{PAREKH}\altaffilmark{11,12}} \altaffiltext{11}{Centre for Radio Astronomy Techniques and Technologies, Department of Physics and Electronics, Rhodes University, PO Box 94, Makhanda, 6140, South Africa} \altaffiltext{12}{South African Radio Astronomy Observatory (SARAO), 2 Fir Street, Black River Park, Observatory, Cape Town 7925, South Africa} \author{Timothy \textsc{SHIMWELL}\altaffilmark{13,14}} \altaffiltext{13}{Leiden Observatory, Leiden University, PO Box 9513, NL-2300 RA Leiden, The Netherlands} \altaffiltext{14}{ASTRON, the Netherlands Institute for Radio Astronomy, Oude Hoogeveensedijk 4, 7991 PD Dwingeloo, The Netherlands}
\author{Motokazu \textsc{TAKIZAWA}\altaffilmark{15}} \altaffiltext{15}{Department of Physics, Yamagata University, Kojirakawa-machi 1-4-12, Yamagata, Yamagata 990-8560, Japan} \author{Reinout \textsc{Van WEEREN}\altaffilmark{16}} \altaffiltext{16}{Leiden Observatory, Leiden University, Niels Bohrweg 2, 2300 RA Leiden, The Netherlands} \KeyWords{galaxies: clusters: individual (CIZA J1358.9-4750) - radio continuum: galaxies - X-rays: galaxies: clusters} % \section{Introduction} Galaxy clusters, the largest self-gravitating systems in the universe, are several Mpc in size and $10^{14-15}$ M$_{\odot}$ in mass, and are known to contain the hot ($10^{7-8}$ K) intracluster medium (ICM). This thermal energy is thought to be converted from the enormous gravitational energy of the large-scale structure through bottom-up structure formation. Pairs of sub-clusters in close proximity, called merging clusters, are thought to be colliding with each other (e.g., \cite{2007PhR...443....1M} for a review), and they are the sites of this energy conversion. A major energy-conversion mechanism is believed to be the shock waves formed during the merger. However, the detailed physical mechanisms of shock waves, such as particle acceleration, magnetic-field amplification, and turbulence generation, remain longstanding questions in astrophysics. Shock waves in the ICM are often identified from X-ray observations of density and/or temperature jumps in the ICM. Shocks are also found from radio observations of synchrotron radiation, called radio relics, emitted by shock-accelerated cosmic-ray electrons (e.g., \cite{2012A&ARv..20...54F, 2018PASJ...70R...2A, 2019SSRv..215...16V} for reviews). Fermi first-order acceleration, namely diffusive shock acceleration (DSA, \cite{1987PhR...154....1B}), is thought to be one of the most plausible theories for the particle acceleration. Meanwhile, there are other classes of diffuse radio emission in galaxy clusters: radio halos, mini-halos, and radio bridges.
Some of them are thought to be formed by turbulence through Fermi second-order acceleration \citep{2001MNRAS.320..365B, 2003ApJ...584..190F}. Radio bridges are a relatively new class of diffuse radio emission, found in the linked regions of early-stage merging clusters \citep{2019Sci...364..981G, 2020PhRvL.124e1101B, 2020MNRAS.499L..11B}. Since turbulent acceleration suffers from low acceleration efficiency, seed cosmic rays supplied by the AGN jets of member galaxies were proposed for the radio bridges \citep{2020PhRvL.124e1101B}. Another important factor for the thermal evolution of the ICM is the AGN jets launched from the supermassive black holes of member galaxies. In recent decades, AGN jets have been considered a promising solution to the so-called cooling-flow problem (\cite{1994ARA&A..32..277F} for a review), while the co-existence of cooling gas and AGN jets in the Phoenix galaxy cluster \citep{2020PASJ...72...33K, 2020PASJ...72...62A} raises new questions on AGN feedback. A bent AGN jet in Abell 3376 indicated a tight connection between the jet and the coherent magnetic field at the cold front of the cluster \citep{2021Natur.593...47C}. The spectral index distribution exhibits a plateau near the bending point, suggesting re-acceleration of cosmic rays, likely by magnetic reconnection. Recently, AGN jets have been increasingly recognized as a source of cosmic rays in the ICM. AGN jets connecting to radio relics are found in, for example, Abell~3411 \citep{2017NatAs...1E...5V} and Abell 3376 \citep{2022PASJ..tmp...40C}. In Abell~3411, the spectral index changes continuously along the radio structure, indicating spectral aging caused by cosmic-ray electron cooling. Therefore, a detailed study of radio sources in galaxy clusters can provide new knowledge, for example on particle acceleration, in addition to an understanding of the evolution of the sources themselves. CIZA J1358.9-4750 (CIZA1359) is one of only a few known early-stage merging galaxy clusters.
This object is a target of the Clusters in the Zone of Avoidance (CIZA) survey; thus CIZA1359 lies relatively close to the Galactic Plane \citep{2007ApJ...662..224K}. This basic information is summarized in table \ref{tab:1}. \citet{2015PASJ...67...71K} performed a detailed analysis of the Suzaku X-ray observation and found a discontinuous high-temperature region in the linked region between the two X-ray peaks of the subclusters. They suggested that the high-temperature region was formed by the merger shock wave passing along the merger axis. \citet{2022arXiv221002145O} further studied the X-ray properties of CIZA1359 and found another shock front at the northern edge of the hot region. These studies also suggest that the merger axis is off the plane of the sky. CIZA1359 is relatively nearby at redshift $z=0.074$, which makes it convenient for detailed study. In this paper, we report the results of upgraded Giant Metrewave Radio Telescope (uGMRT) observations of CIZA1359, with the aim of detecting any diffuse emission from CIZA1359. In Section 2 we describe the details of the uGMRT observations and the data reduction, and in Section 3 we present the obtained radio images and spectral index maps. In Section 4, we discuss the relic candidate in CIZA1359. We have used the cosmological parameters $H_0 = 70~{\rm km~s^{-1}~Mpc^{-1}}$, $\Omega_M = 0.3$ and $\Omega_\Lambda = 0.7$ in this work.
\section{Observation and Data reduction} \subsection{The Observation} \begin{table}[tp] \tbl{Basic parameters of CIZA1359}{% \begin{tabular}{llc} \hline Parameter & Value & Reference \\ \hline RA (J2000)& $13^h58^m40^s$ & [1]\\ Dec (J2000)& $-47^{\circ}46'00''$ & [1] \\ Redshift & 0.0740 & [1] \\ $f_X[0.1\sim 2.4{\rm keV}]$ & $20.89 \times 10^{-12}~{\rm erg~s^{-1}~cm^{-2}}$ & [1] \\ $L_X[0.1\sim 2.4{\rm keV}]$ & $4.88 \times 10^{44}~{\rm erg~s^{-1}}$ & [1] \\ kT (keV) &\begin{tabular}{l} $5.6 \pm 0.2 {\rm keV}$ (south-east)\\ $4.6 \pm 0.2 {\rm keV}$ (north-west) \end{tabular} & [2] \\ \hline \end{tabular}}\label{tab:1} \begin{tabnote} [1] \cite{2007ApJ...662..224K}; [2] \cite{2015PASJ...67...71K} \end{tabnote} \end{table} We conducted uGMRT Band~3 (300 -- 500~MHz) observations of CIZA1359 (project code 39\_045). Both narrow- and wide-band modes were adopted. The center frequency and the bandwidth of the narrow-band mode are 317~MHz and 33~MHz, respectively, and those of the wide-band mode are 400~MHz and 200~MHz, respectively. \rev{The field of view and the angular resolution are 75~arcmin and 8.3~arcsec, respectively, both in diameter at 400~MHz.} The observations were carried out in Cycle 39 and were split into two separate sessions on 13--14 January 2021 (day~1) and 24--25 February 2021 (day~2) in International Atomic Time (TAI). The observing time was 5 hours and 15 minutes on day~1, and 4 hours and 16 minutes on day~2. On each day, we observed a flux density, bandpass, and polarization calibrator, 3C286, for 10 minutes at the beginning and the end of the observation, and observed a phase calibrator, 1349-393, for 5 minutes every 25 minutes. Thus, the target (CIZA1359) on-source time was 337 minutes in total, 196 minutes on day 1 and 141 minutes on day 2. According to the observation log, two 45~meter-diameter antennas, C03 and C11, were not used on day~2. Therefore, only the day~1 data were used in this study, as the day~2 data tend to be somewhat noisier.
\subsection{The Data Reduction} \begin{table*}[htbp] \tbl{The radio maps in this paper.}{% \begin{tabular}{lllllll} \hline \hline Label & Frequency & BW & R.M.S. & Beam size & Beam PA & Figure \\ Unit & MHz & MHz & mJy~beam$^{-1}$ & asec $\times$ asec & degree &\\ \hline% Narrow-band & 317 & 33 & $8.2 \times 10^{-2} $ & 21.0 $\times$ 6.9 & -0.1 & - \\ Wide-band & 400 & 200 & $3.7 \times 10^{-2} $ & 14.8 $\times$ 5.2 & -6.2 & Figure \ref{fig:radio image2}\\ Sub-band 01 & 317 & 33 & $8.6 \times 10^{-2} $ & 22.8 $\times$ 5.7 & 1.6 & Figure \ref{fig:spec}\\ Sub-band 02 & 350 & 33 & $1.7 \times 10^{-1} $ & 15.1 $\times$ 5.0 & -0.5 & Figure \ref{fig:spec}\\ Sub-band 03 & 385 & 33 & $8.1 \times 10^{-2} $ & 17.8 $\times$ 5.6 & -0.3 & Figure \ref{fig:spec}\\ Sub-band 04 & 417 & 33 & $5.7 \times 10^{-2} $ & 16.2 $\times$ 5.4 & 1.6 & Figure \ref{fig:spec}\\ Sub-band 05 & 450 & 33 & $5.0 \times 10^{-2} $ & 14.7 $\times$ 4.9 & -1.1 & Figure \ref{fig:spec}\\ Sub-band 06 & 481 & 33 & $1.1 \times 10^{-1} $ & 12.7 $\times$ 4.6 & -1.4 & Figure \ref{fig:spec}\\ Smoothed Wide-band & 400 & 200 & $1.0 \times 10^{-1} $ & 25 $\times$ 25 & 0.0 & Figure \ref{fig:radio image2}, \ref{fig:radio image3}, \ref{fig:spec}, \ref{fig:source U} \\ \hline \end{tabular}} \label{tab:2} \end{table*} The data were analysed using SPAM (Source Peeling and Atmospheric Modeling; \cite{2014ASInC..13..469I}), which is based on AIPS (the Astronomical Image Processing System) produced and maintained by NRAO. SPAM employed AIPS 31DEC13 and was controlled by Python 2.7. There is a bright compact source of about $2 \times 10^{3}$~mJy in the field of view, and the beam pattern of this bright source is clearly visible in the dirty image, showing that it contaminates the image through strong sidelobes. Therefore, we applied SPAM's Direction-Dependent Calibration (DDC; \cite{2017A&A...598A..78I}) to improve the dynamic range of the final image.
Self-calibration was also applied in the Direction-Independent Calibration (DIC) before the DDC. The CLEAN algorithm is used for imaging in SPAM. In the analysis of the narrow-band data, we aimed to create a catalog of sources for use in the DDC of the wide-band data. To select peeling sources, a list of radio sources in the field of view was compiled using a source catalog from the TIFR GMRT Sky Survey (TGSS). Imaging in SPAM was performed with a Briggs robustness parameter of -1.0. The Python Blob Detector and Source Finder (PyBDSF; \cite{2015ascl.soft02007M}) was used to catalog the compact sources in the field of view. In the analysis of the wide-band data, the data were split into six subbands and the DDC was applied to each subband using the source catalog obtained from the analysis of the narrow-band data. The final output from SPAM was the outlier-removed uv data. After that, a full-band radio image was derived by combining the uv data of the subbands using WSClean (w-stacking clean; \cite{2014MNRAS.444..606O}). In the imaging with WSClean, we first employed {\it uniform} weighting to identify compact sources in the data. Modeling and subtracting the compact sources reduces their sidelobes, particularly those of bright sources. Next, a multi-scale CLEAN \citep{2008ISTSP...2..793C} was performed with the robustness closer to {\it natural} weighting to derive the diffuse emission from the data. The primary beam effect was corrected using the AIPS task {\it pbcor}. \rev{ The applied primary beam model is $f(x) = 1.0 - \frac{2.939x}{10^{3}} + \frac{33.312x^2}{10^{7}} - \frac{16.659x^3}{10^{10}} + \frac{3.066x^4}{10^{13}}$, where $x$ is the distance parameter (see the AIPS Cookbook for details). } AIPS was also used to regrid the data at each frequency to the same pixel size and spatial resolution, if necessary.
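As a sanity check, the primary-beam polynomial above can be evaluated numerically. This is an illustrative sketch only; we assume, following the usual AIPS/GMRT PBCOR convention, that the distance parameter is $x = (r\,[{\rm arcmin}] \times f\,[{\rm GHz}])^2$, with $r$ the angular distance from the pointing centre (see the AIPS Cookbook for the exact definition).

```python
# Sketch: evaluate the applied primary-beam attenuation polynomial.
# Assumption (AIPS/GMRT PBCOR convention): x = (r_arcmin * f_ghz)**2,
# where r_arcmin is the distance from the pointing centre.

COEFFS = (1.0, -2.939e-3, 33.312e-7, -16.659e-10, 3.066e-13)

def primary_beam_gain(r_arcmin, f_ghz=0.4):
    """Beam attenuation at angular distance r_arcmin for frequency f_ghz."""
    x = (r_arcmin * f_ghz) ** 2
    return sum(c * x ** n for n, c in enumerate(COEFFS))

print(primary_beam_gain(0.0))   # 1.0 at the pointing centre
print(primary_beam_gain(37.5))  # ~0.49 at a 37.5-arcmin radius
```

The gain falls to roughly half power at a radius of 37.5 arcmin, consistent with the 75-arcmin field-of-view diameter quoted in Section 2.1.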
When calculating the flux density $F_\nu$, \rev{we adopted an empirical 10\% error ($\sigma_{{\rm abs}} = 0.1$) on the absolute flux density (see also Section 3.1 regarding the DDC flux decay).} For a diffuse source, the flux density error, $\sigma_{F_{\nu}}$, was given by $\sigma_{F_{\nu}} = \sqrt{\left( \sigma_{rms} \sqrt{N_{\rm b}} \right) ^2 + \left( \sigma_{{\rm abs}} F_\nu \right) ^2 }$, where $\sigma_{{\rm rms}}$ is the rms noise in an image, and $N_{\rm b}$ is the number of beams covering the diffuse source (e.g., \cite{2022MNRAS.tmp.1605K}). \subsection{Other Data} In this study, X-ray data from Suzaku are used to confirm the spatial correlation with the ICM \citep{2015PASJ...67...71K}. In addition, the ICM temperature inferred from the XMM-Newton data was used as an indicator of the shock region \citep{2022arXiv221002145O}. The MeerKAT Galaxy Cluster Legacy Survey Data Release 1 (MGCLS DR1; \cite{2022A&A...657A..56K}) was also combined to determine the spectral index. The MGCLS data used were the {\it enhanced imaging products}, which are corrected for the primary beam effect. \section{Results} \subsection{Total Intensity Maps} \begin{table*}[htbp] \tbl{Imaging parameters used to focus on the diffuse emission.}{% \begin{tabular}{clllll} \hline \hline Weighting & R.M.S. & Beam size & Beam PA & Taper & Figure \\ Unit & mJy~beam$^{-1}$ & asec $\times$ asec & degree & arcsec & \\ \hline % uniform & $0.18$ & $10.5 \times 3.2$ & $-2.4$ & 22.5 & - \\ briggs -1 & $1.23$ & $25.2 \times 21.2$ & 24.7& 22.5 & - \\ briggs 0 & $1.56$ & $36.8 \times 22.7$ & 29.6& 22.5 & Figure \ref{fig:source U} (a) \\ briggs 1 & $6.24$ & $296.3 \times 40.6$ & 24.7& 22.5 & - \\ \hline \end{tabular}}\label{tab:4} \end{table*} We derived the total intensity map of the narrow-band data at 317~MHz. The DDC was applied in the imaging and the robustness parameter was set to $-1.0$.
The rms noise level of the map is $8.2 \times 10^{-2}$~mJy~beam$^{-1}$ with the DDC (table \ref{tab:2}), while it is $1.73$~mJy~beam$^{-1}$ with the DIC. Therefore, the DDC improved the sensitivity and dynamic range by more than one order of magnitude. We found and cataloged 423 radio sources with PyBDSF. The brightest source is PMN J1401-4733 at the north-east of the field of view, with a total flux density of $2.01 \times 10^{3}$~mJy and a peak intensity of $1.41 \times 10^{3}$~mJy~beam$^{-1}$. Thus, the achieved image dynamic range is 17,195. Next, we derived the total intensity map of the wide-band data at 400~MHz. We used the above source catalog as a prior sky model for the DDC, then obtained the rms noise level of the map, $3.7 \times 10^{-2}$~mJy~beam$^{-1}$ (table \ref{tab:2}), \rev{or an achieved image dynamic range of 38,108} for the robustness parameter of $-1.0$. The wide-band image of CIZA1359 is shown in figure \ref{fig:radio image2}. The background colour is the radio intensity distribution, where the white contour indicates the $0.4$~mJy~beam$^{-1}$ level (4~$\sigma_{\rm rms}$) for the uGMRT image smoothed at $25''$. The alphabetic labels indicate 23 distinct sources. The white labels from A to L are known sources, while the orange labels from M to W are newly-detected\footnote{After we submitted this paper, \citet{2022MNRAS.tmp.1605K} reported Sources from M to Q.} extended sources. \rev{One known issue of the DDC analysis is that many DDC solutions result in a global decay of the fluxes across the field (e.g., \cite{2016MNRAS.463.4317P}). To assess this decay, we checked the visibility amplitude with respect to the DDC and DIC results. The maximum angular scale of source U, which is the largest feature among the structures detected in this paper, is 6 arcminutes, corresponding to about 0.6 kilo-wavelengths at 300 MHz.
The medians of the visibility amplitude over a range of 0.6 kilo-wavelengths are 1.168~Jy for the DDC and 1.247~Jy for the DIC. Therefore, we measured an offset of about 0.94 in the amplitude ratio around the angular scale of source U. This error is comparable to the empirical 10\% error on the absolute flux that we adopt in this paper. The flux accuracy checks are summarized in Appendix A.} \rev{To facilitate our discussion, a compact-source-subtracted image for Source U was derived. The compact sources were subtracted from the image by fitting a point-source model with a Gaussian function using PyBDSF. We explored robustness values closer to {\it natural} weighting to image the diffuse emission. Table \ref{tab:4} summarizes the noise level and resolution from the CLEAN with different weightings; we employed the robustness of $0.0$. The results are shown in figure \ref{fig:source U} (a). The hot ICM region is shown as red contours, which would indicate the approximate location of the merger shock front. The white contours are the same as those in figure \ref{fig:radio image2}. The black contours correspond to 3$\sigma_{rms}$ in the compact-source-subtracted image, which is also shown as the background color.} \subsection{Spectral Index Map} We convolved all images to a 25-arcsec square beam using the AIPS task {\it convl}, where the pixel size and the number of pixels were fixed using the AIPS task {\it hgeom}. We then adopted a least-squares fit to derive the best-fit spectral index assuming a power law, and calculated the index pixel by pixel. We performed the fitting in linear space to account for the negative flux values caused by noise. To derive the spectral index, we added the MGCLS DR1 data (see Section 2.3) to our uGMRT data. Details on the calculation of the spectral index are summarized in Appendix B.
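The linear-space power-law fit described above can be sketched as follows. This is a minimal illustration with synthetic sub-band fluxes (the frequencies mimic table \ref{tab:2}), not the actual map pixels; the scan range and step for the trial index are arbitrary choices. The point is that fitting $F = A\,\nu^{\alpha}$ directly in linear flux space lets noise-dominated pixels with negative fluxes enter the fit, which a log-space fit cannot accommodate.

```python
import numpy as np

def fit_powerlaw_linear(freqs, fluxes, alphas=np.linspace(-3.0, 1.0, 4001)):
    """Least-squares fit of F = A * nu**alpha, performed in linear flux
    space so negative (noise-dominated) values are handled naturally.
    For each trial alpha the best amplitude A has a closed form, so we
    simply scan alpha and keep the minimum-residual solution."""
    freqs = np.asarray(freqs, dtype=float)
    fluxes = np.asarray(fluxes, dtype=float)
    best_amp, best_alpha, best_sse = None, None, np.inf
    for a in alphas:
        basis = freqs ** a
        amp = np.dot(fluxes, basis) / np.dot(basis, basis)  # closed-form LSQ amplitude
        resid = fluxes - amp * basis
        sse = np.dot(resid, resid)
        if sse < best_sse:
            best_amp, best_alpha, best_sse = amp, a, sse
    return best_amp, best_alpha

# Synthetic sub-band fluxes (mJy) with noise; faint pixels may go negative.
rng = np.random.default_rng(1)
nu = np.array([0.317, 0.350, 0.385, 0.417, 0.450, 0.481])  # GHz, as in table 2
obs = 5.0 * nu ** -1.0 + rng.normal(0.0, 0.3, nu.size)
amp, alpha = fit_powerlaw_linear(nu, obs)
```

With the injected index of $-1.0$, the scan recovers the slope to within the noise-induced scatter.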
\rev{Figure \ref{fig:source U} (b) and} figure \ref{fig:radio image3} show the spectral index maps of \rev{newly detected} extended sources in CIZA1359. The background colour indicates the spectral index, $\alpha$, such that $F_\nu \propto \nu^{\alpha}$, and the white contour is the same as that in figure \ref{fig:radio image2}. \rev{We also attempted to calculate in-band spectral indices for bright compact sources using subband images, for the GMRT and MeerKAT data separately. For bright sources such as Source A, we obtained approximately the same index regardless of which data were used for the fitting. On the other hand, because faint sources, including many diffuse-emission features, have a low S/N in each pixel, the subband spectral fitting results in a spectral index close to the slope of the noise floor. Therefore, for the faint sources we used the combined data in each band, where the center frequencies are 400 MHz and 1280 MHz, to derive the spectral index, $\alpha_{400-1280}$.} In addition to the spectral index of each pixel, we derived the mean spectral index using the total flux densities shown in columns 4 and 5 of table \ref{tab:3}. The total flux density was estimated by assuming the size of each source. The resultant mean spectral index is listed in column 6 of table \ref{tab:3}. Here, the spectral index error was derived from the error propagation equation ($\sqrt{(F_{\nu 1}~{\rm log}(\nu_1 / \nu_2 ) )^{-2} \sigma_{F_{\nu 1}}^2+(F_{\nu 2}~{\rm log}(\nu_1 / \nu_2 ))^{-2} \sigma_{F_{\nu 2}}^2}$). \subsection{Source Catalog} \begin{table*}[tbp] \tbl{Radio source catalog of the CIZA1359 field. Columns are: (1) Source name. (2) Average intensity in mJy~beam$^{-1}$, adopting the white contours of figure \ref{fig:radio image2} as the size of the source. The error is $3.7 \times 10^{-2}$~mJy~beam$^{-1}$. (3) Peak intensity in mJy~beam$^{-1}$, with the same source size as column 2.
The error is the rms noise around the source in the point-source-subtracted image. (4) Integrated flux density in mJy at 400~MHz, without point-source subtraction; the source size is the same as column 2. (5) Same as column 4 but at 1280~MHz. (6) Spectral index, calculated using the fluxes in columns 4 and 5. (7) Corresponding nearby galaxies in the sky plane. }{% \begin{tabular}{ccccccc} \hline \hline Label & Mean intensity & Peak Flux & \multicolumn{2}{c}{Flux} & $\alpha_{400-1280}$ & Identification \\ & 400~MHz$^{*1}$ & 400~MHz$^{*1}$ & 400~MHz$^{*1}$ & 1280~MHz$^{*2}$ & & \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) \\ \hline A & 4.74 & $ 17.85 \pm 0.21 $ & $ 25.90 \pm 2.59 $ & $ 13.10 \pm 1.32 $ & $ - 0.59 \pm 0.28$ & - \\ B & 4.86 & $ 17.41 \pm 0.11 $ & $ 29.62 \pm 2.97 $ & $ 13.50 \pm 1.37 $ & $ - 0.68 \pm 0.28 $ & 2MASX J13593870-4753472 \\ C & 3.72 & $ 17.54 \pm 0.12 $ & $ 43.54 \pm 4.36 $ & $ 17.49 \pm 1.77 $ & $ - 0.78 \pm 0.28 $ & 2MASX J13593065-4754053 \\ D & 2.47 & $ 10.47 \pm 0.16 $ & $ 14.04 \pm 1.42 $ & $ 6.35 \pm 0.67 $ & $ - 0.68 \pm 0.29 $ & 2MASX J13592419-4750253 \\ E & 3.26 & $ 11.29 \pm 0.23 $ & $ 21.98 \pm 2.21 $ & $ 10.84 \pm 1.11 $ & $ - 0.61 \pm 0.28 $ & 2MASX J13592518-4749333 \\ F & 2.56 & $ 9.28 \pm 0.08 $ & $ 20.30 \pm 2.04 $ & $ 9.58 \pm 0.97 $ & $ - 0.65 \pm 0.28 $ & - \\ G & 1.18 & $ 4.63 \pm 0.10 $ & $ 10.93 \pm 1.11 $ & $ 4.88 \pm 0.52 $ & $ - 0.69 \pm 0.29 $ & 2MASX J13590381-4751311 \\ H & 2.48 & $ 9.63 \pm 0.09 $ & $ 15.31 \pm 1.54 $ & $ 9.37 \pm 0.95 $ & $ - 0.42 \pm 0.28 $ & - \\ I & 2.91 & $ 13.71 \pm 0.14 $ & $ 31.22 \pm 3.13 $ & $ 15.12 \pm 1.52 $ & $ - 0.62 \pm 0.28 $ & 2MASX J13575383-4737543 \\ J & 5.79 & $ 28.21 \pm 0.26 $ & $ 38.69 \pm 3.88 $ & $ 17.56 \pm 1.77 $ & $ - 0.68 \pm 0.28 $ & -\\ K & 3.12 & $ 11.54 \pm 0.18 $ & $ 15.18 \pm 1.53 $ & $ 6.16 \pm 0.64 $ & $ - 0.78 \pm 0.29 $ & - \\ L & 5.47 & $ 30.51 \pm 0.07 $ & $ 32.66 \pm 3.27 $ & $ 13.51 \pm 1.37 $ & $ - 0.76 \pm 0.28 $ & -\\ M & 2.20 & $
8.77 \pm 0.13 $ & $ 9.11 \pm 0.93 $ & $ 2.73 \pm 0.32 $ & $ - 1.04 \pm 0.31 $ & - \\ N & 1.81 & $ 8.20 \pm 0.16 $ & $ 11.13 \pm 1.13 $ & $ 3.63 \pm 0.42 $ & $ - 0.96 \pm 0.30 $ & 1RXS J135821.7-474126 \\ O & 4.70 & $ 19.57 \pm 0.63 $ & $ 50.57 \pm 5.06 $ & $ 4.87 \pm 0.56 $ & $ - 2.01 \pm 0.30 $ & - \\ P & 1.30 & $ 4.12 \pm 0.14 $ & $ 5.55 \pm 0.58 $ & $ 1.83 \pm 0.25 $ & $ - 0.95 \pm 0.34 $ & - \\ Q & 0.72 & $ 1.99 \pm 0.19 $ & $ 5.52 \pm 0.60 $ & $ 2.59 \pm 0.35 $ & $ - 0.65 \pm 0.34 $ & 2MASX J13581085-4741243 \\ R & 1.39 & $ 5.60 \pm 0.42 $ & $ 31.06 \pm 3.13 $ & $ 27.22 \pm 2.75 $ & $ - 0.11 \pm 0.28 $ & 2MASX J14004272-4804474 \\ S & 2.41 & $ 9.79 \pm 0.29 $ & $ 31.32 \pm 3.15 $ & $ 13.29 \pm 1.36 $ & $ - 0.74 \pm 0.28 $ & - \\ T & 0.65 & $ 2.07 \pm 0.32 $ & $ 10.06 \pm 1.06 $ & $ 3.43 \pm 0.48 $ & $ - 0.93 \pm 0.35 $ & 2MASX J13592976-4757043 \\ U & 0.54 & $ 1.29 \pm 0.15 $ & $ 28.93 \pm 2.96 $ & $ 7.03 \pm 0.94 $ & $ - 1.22 \pm 0.33 $ & 2MASX J13580947-4745213\\ &&&&&&\begin{tabular}{c} 2MASX J13581294-4748183\\ 6dFGS gJ135839.1-474723\\ \end{tabular} \\ V & 1.57 & $ 7.34 \pm 0.34 $ & $ 12.75 \pm 1.30 $ & $ 14.39 \pm 1.46 $ & $ 0.10 \pm 0.28 $ & 2MASX J13565832-4729231 \\ W & 0.74 & $ 1.24 \pm 0.20 $ & $ 21.34 \pm 2.18 $ & $ 11.60 \pm 1.25 $ & $ - 0.52 \pm 0.29 $ & - \\ \hline \end{tabular}}\label{tab:3} \begin{tabnote} ${*1}$: This work; ${*2}$: MGCLS \citep{2022A&A...657A..56K} \end{tabnote} \end{table*} We found 23 distinct radio features in the image. We labeled them Sources A to W (figure \ref{fig:radio image2}). The parameters of each source are summarised in table \ref{tab:3}. Sources A to L were reported in the previous ATCA observation \citep{2018PASJ...70...53A}, where several sources are closely concentrated on the south-western rim of the southern subcluster of CIZA1359. The spectral indices are comparable to those estimated with ATCA and to those of typical AGN.
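The tabulated $\alpha_{400-1280}$ values can be reproduced from the fluxes in columns 4 and 5 together with the error-propagation expression of Section 3.2 (taking the logarithm there as base-10, which reproduces the quoted uncertainties); a check for Source M:

```python
import math

def two_point_alpha(f1, s1, nu1, f2, s2, nu2):
    """Two-point spectral index alpha (F_nu ~ nu**alpha) and its error,
    following the error-propagation expression of Section 3.2 with a
    base-10 logarithm, which reproduces the tabulated uncertainties."""
    alpha = math.log(f1 / f2) / math.log(nu1 / nu2)
    lg = math.log10(nu1 / nu2)
    err = math.sqrt((s1 / (f1 * lg)) ** 2 + (s2 / (f2 * lg)) ** 2)
    return alpha, err

# Source M: 9.11 +/- 0.93 mJy at 400 MHz, 2.73 +/- 0.32 mJy at 1280 MHz
a, e = two_point_alpha(9.11, 0.93, 400.0, 2.73, 0.32, 1280.0)
print(round(a, 2), round(e, 2))  # -1.04 0.31, matching table 3
```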
We found that some of the sources are clearly larger in spatial scale than the synthesized beam and are resolved into multiple components. Source G is located near the center of the southern subcluster and has an apparent size of about $25'' \times 22''$ (=38~kpc $\times$ 33~kpc) at the 3$\sigma_{rms}$ signal-to-noise level. It is one order of magnitude smaller than the typical size of mini-halos ($\sim 500$~kpc), so it is more likely to be an AGN radio lobe. Source G is cataloged in SIMBAD as the galaxy 2MASX J13590381-4751311. It has a redshift of z=0.074, which is consistent with the redshift of CIZA1359. Sources M to Q were reported in the previous GMRT observation \citep{2022MNRAS.tmp.1605K}. Similarly to Source G, Source Q is more likely to be an AGN, although it is located near the center of the northern subcluster with a size of $60'' \times 30''$ (=90~kpc $\times $ 45~kpc). Source Q is cataloged in SIMBAD as the galaxy 2MASX J13581085-4741243. Its redshift, z=0.074, is consistent with the redshift of CIZA1359. Source O, which is unresolved in the TGSS and SUMSS images, consists of a round structure and a narrow east-west linear structure. A high-resolution image such as MGCLS \rev{at 1.28 GHz} shows a head-tail galaxy-like structure, while there is no corresponding source in the ATCA image at \rev{2.1 GHz} \citep{2018PASJ...70...53A}. Indeed, a steep spectral index of $-2.01$ and the 400 MHz total flux density, $48.73 \pm 4.88$~mJy, predict a 2~GHz flux density of 1.9~mJy (with a size of $90'' \times 90''$), which is almost the same as the sensitivity limit of the ATCA observation. Source O can be classified as an ultra-steep spectrum (USS) source, as found in galaxy clusters \citep{2019A&A...622A..22M}, so Source O may be a fossil plasma source. We could not find any corresponding source to Source O in SIMBAD.
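The 2~GHz prediction for Source O quoted above is a simple power-law extrapolation of the 400~MHz flux density; a minimal sketch:

```python
def extrapolate_flux(f_ref_mjy, nu_ref_mhz, nu_mhz, alpha):
    """Extrapolate a flux density along F_nu ~ nu**alpha."""
    return f_ref_mjy * (nu_mhz / nu_ref_mhz) ** alpha

# Source O: 48.73 mJy at 400 MHz with alpha = -2.01
f2ghz = extrapolate_flux(48.73, 400.0, 2000.0, -2.01)
print(round(f2ghz, 1))  # 1.9 mJy, near the ATCA sensitivity limit
```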
Source R is about 2~Mpc to the south-east of the southern subcluster center and is cataloged in SIMBAD as the galaxy 2MASX J14004272-4804474. We found that it has a radio structure like a head-tail galaxy. There is also a spectral index gradient, with aging from the head to the tail. It has a redshift of z=0.075, which is comparable to that of CIZA1359. Source S has an FRII-like radio structure about 1.5~Mpc away from the southern subcluster center. SIMBAD catalogs a galaxy at z=0.054, 2MASX J13595922-4805486, in the neighbourhood. However, 2MASX J13595922-4805486 may be associated with a faint radio structure seen 11~arcsec to the northeast of Source S. Source T is located to the southeast of CIZA1359 and has a head-tail galaxy-like radio structure. It has an elongated structure to the south and a faint structure to the east. There is no associated source within 30~arcsec in SIMBAD. At the peak flux position of the southern component of Source T, there is a galaxy, 2MASX J13592976-4757043, in SIMBAD. The redshift of 2MASX J13592976-4757043 is z=0.081, which places it on the far side of CIZA1359. Source U is a candidate of diffuse cluster emission. It is located to the south of the northern subcluster, in between the two subclusters. Four galaxies were found by SIMBAD within a 4~arcmin radius centred on Source U: 2MASX J13580947-4745213 to the north-west of Source U with a redshift of z=0.078, 2MASX J13581294-4748183 to the south of Source U with a redshift of z=0.069, 6dFGS gJ135839.1-474723 to the east of Source U with a redshift of z=0.067, and LEDA 184317 to the north-east of Source U with an unknown redshift. Source U is the largest diffuse source in the uGMRT image (see figure \ref{fig:radio image2}). The signal-to-noise ratio of Source U is around 4~$\sigma_{{\rm rms}}$, where $\sigma_{{\rm rms}} = 0.10$~mJy~beam$^{-1}$.
With $N_{{\rm b}} = 23.0$, the point-source-subtracted flux density is $24.04 \pm 2.48$~mJy, which is significantly above the noise level. We explore Source U in detail in Section 4. Source V is cataloged in SIMBAD as the galaxy 2MASX J13565832-4729231, with a redshift of z=0.078, similar to that of CIZA1359. It has an elongated structure extending north-south. The extended structure appears to be connected to Source W; Source V resembles bipolar radio jets. Source W is quadrangular in shape and is the second largest radio structure in the image. No corresponding sources were found in SIMBAD in the vicinity of this source. Sources V and W are similar to the structures known as radio phoenixes. \section{Discussion} We explore Source U in detail in Section 4.1, and then discuss its origin (likely a radio relic), supposing its detection, in Section 4.2. \subsection{Source U: Diffuse Radio Structure Candidate} \subsubsection{Location} First, we focus on the spatial location of Source U. As described in the Introduction, a recent X-ray observation found a pair of shock fronts in the linked region \citep{2022arXiv221002145O}: the north shock at the northern edge of the hot region, and the south shock at the southern edge of the hot region, where the hot region is shown by the red solid line in figure \ref{fig:source U} (a). The pair of shocks seems to have emerged from the interface of the subclusters and to be propagating toward each subcluster core. Such a merger shock has been considered a site of cluster diffuse radio emission, based on the expectation that the shock accelerates the cosmic-ray electrons emitting synchrotron radiation. Indeed, the location of Source U is broadly consistent with that of the western part of the north shock indicated by the red contours in figure \ref{fig:source U} (a). However, the shape of Source U and the shock plane do not exactly coincide with each other.
If Source U were excited by a shock wave, the emitting plasma would flow downstream and age behind the shock front, but Source U is located slightly on the upstream side. This can be interpreted as a result of the misalignment of the merger axis with the sky plane, i.e., a projection effect of the viewing angle. \subsubsection{Structure} Shock-associated diffuse radio emission is often seen in late-stage merging clusters, where it is called a ``radio relic''. Although CIZA1359 is known as an early-stage merging cluster, Source U extends about $5' \times 6'$ ($=450~{\rm kpc} \times 540~{\rm kpc}$), which is comparable in size to radio relics \citep{2012A&ARv..20...54F}. In the above shock scenario, one may also expect radio emission from the south shock, although our observation did not find any candidate. Interestingly, \citet{2022arXiv221002145O} estimated the Mach numbers of the shocks and found that the north shock has a higher Mach number, $\mathcal{M} = 1.7$, while the other parts have a lower value of $\mathcal{M} = 1.4$. Therefore, Source U is consistent with the theoretical expectation that a shock wave with a higher Mach number forms brighter radio emission because it accelerates cosmic rays more efficiently. However, DSA does not work well at such a low Mach number; we discuss the need for re-acceleration in Section 4.2. \subsubsection{Radio Power} The radio power of diffuse radio emission in galaxy clusters has been studied in the literature, and thus the radio power is also useful for examining whether Source U is real emission or not. We calculate the monochromatic radio power using the following equation, \begin{equation} P_\nu = 4\pi D_{\rm L}^2 \int I_\nu d\Omega, \label{equ 1} \end{equation} where $D_{\rm L}(z=0.07)\sim 9.74 \times 10^{24}$~m is the luminosity distance \citep{2006PASP..118.1711W}, $I_\nu$ is the radio intensity, and $\Omega$ is the area of the diffuse emission.
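The luminosity distance used in equation (1) follows from the adopted flat cosmology ($H_0 = 70~{\rm km~s^{-1}~Mpc^{-1}}$, $\Omega_M = 0.3$, $\Omega_\Lambda = 0.7$); a minimal numerical sketch:

```python
import math

C_KM_S = 299792.458          # speed of light [km/s]
H0, OM, OL = 70.0, 0.3, 0.7  # adopted flat cosmology
MPC_M = 3.0857e22            # metres per Mpc

def luminosity_distance_m(z, n=10000):
    """D_L = (1+z) * (c/H0) * integral_0^z dz'/E(z') for a flat universe,
    with E(z) = sqrt(OM*(1+z)**3 + OL); trapezoidal integration."""
    e = lambda zp: math.sqrt(OM * (1.0 + zp) ** 3 + OL)
    h = z / n
    s = 0.5 * (1.0 / e(0.0) + 1.0 / e(z)) + sum(1.0 / e(i * h) for i in range(1, n))
    dc_mpc = (C_KM_S / H0) * h * s
    return (1.0 + z) * dc_mpc * MPC_M

# Reproduces D_L(z=0.07) ~ 9.74e24 m quoted below equation (1).
print(luminosity_distance_m(0.07))
```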
We integrate $I_\nu$, the point source subtracted intensity, within the area of the white contours in figure \ref{fig:source U} (a). To compare the monochromatic radio power in this work with the 1.4~GHz values in the literature, the 400~MHz flux was converted to 1.4~GHz using the spectral index of $-1.22$. The derived radio power is $P_{\rm 1.4~GHz} = 2.40 \times 10^{24}$~W Hz$^{-1}$. This is consistent with the known radio relics and halos shown in figure \ref{fig:radio vs xray}. Therefore, there is no immediate problem in considering Source U as a diffuse radio source of a galaxy cluster. Note that if we adopt the formula of equation (2) in \citet{2018PASJ...70...53A} to derive the upper limit of the radio power, we obtain a value broadly consistent with that derived in their work. The non-detection with ATCA may be due to the steep spectrum of Source U. \subsubsection{Magnetic Field} Finally, although there are uncertainties caused by theoretical assumptions, it is possible to derive the magnetic-field strength and to assess the reality of the candidate by comparison with previous estimates. Assuming energy equipartition between magnetic fields and cosmic rays, the magnetic field strength can be estimated from the synchrotron radiation as follows \citep{2005AN....326..414B}, \begin{equation} B_{\rm eq} = \left\{ \frac{4 \pi (-2\alpha + 1)( K_0 + 1)I_\nu E_{\rm p}^{1+2\alpha}(\nu/2c_1)^{-\alpha}}{(-2\alpha -1)c_2({-\alpha})L\ c_4(i)} \right\} ^{1/(-\alpha+3)} , \label{equ:strength} \end{equation} where $\alpha$ denotes the spectral index of the synchrotron radiation, $K_0$ is the ratio of the number density of cosmic-ray nuclei to that of the electrons, $L$ is the path length through the synchrotron emitting medium, $I_\nu $ is the intensity at frequency $ \nu $, and $E_{\rm p} $ is the proton rest energy.
The coefficients are $c_1 = 3e/(4\pi m_{\rm e}^3c^5) = 6.3 \times 10^{18}\ {\rm erg^{-2} s^{-1} G^{-1}}$, $c_2 = 4.56 \times 10^{-24}\ {\rm erg~G^{-1} sterad^{-1}}$, $c_4 = 1$, and $E_{\rm p} = 1.5 \times 10^{-3}\ {\rm erg}$. We adopted our best-fit value of $\alpha = - 1.22$ and $\nu = 400$~MHz, and we assumed the typical values of $L=500~{\rm kpc} \sim 1.5\times 10^{24}\ {\rm cm}$ and $K_0=100$ \citep{2017A&A...600A..18K}. We then obtained a field strength of 2.1~$\mu$G from the intensity of Source U, $0.4$~mJy~beam$^{-1}$. Such a $\mu$G-level magnetic field is commonly found in galaxy clusters (e.g., \cite{2017A&A...600A..18K}). We note that this strength is insensitive to the parameters of equation (\ref{equ:strength}): a field strength of order 1~$\mu$G is obtained even when the parameters are varied. However, figure \ref{fig:source U} suggests that a patchy structure exists within Source U. This structure can introduce errors into the estimate of the average strength from equation (\ref{equ:strength}). We can cross-check the field strength using an empirical radial dependence of the magnetic field. Using the following equation \citep{2010A&A...513A..30B}, \begin{equation} B(r) = B_0 \times \left( \frac{n_e(r)}{n_0} \right) ^{\eta}, \quad n_e = n_0 \left( 1+\frac{r^2}{r_c ^2} \right) ^{-\frac{3}{2} \beta}, \label{equ:mag strength} \end{equation} the field strength is estimated to be $2.7~\mu$G at the relic position, $2.'5$~(225~kpc) away from the cluster centre in the north, assuming a central magnetic field strength of $B_0 = 4.7~\mu {\rm G}$ and a radial power-law slope of $\eta = 0.5$ \citep{2010A&A...513A..30B}. We also adopt the central gas density, $n_0 = 2.54 \times 10^{-3}~{\rm cm^{-3}}$, the core radius, $r_c = 165~{\rm kpc}$, and the $\beta$-model parameter, $\beta = 0.67$, from \citet{2022arXiv221002145O}. Note that $B_0$ and $\eta$ are taken from the Coma cluster and are expected to vary from cluster to cluster.
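The radial scaling above, with the adopted parameter values, can be evaluated directly; a minimal numerical sketch (function and variable names are ours), which reproduces the $\sim$2.7~$\mu$G quoted above to within rounding:

```python
# Beta-model electron density and the empirical B ~ n_e^eta scaling,
# with B0 and eta from the Coma cluster and n0, r_c, beta for CIZA1359.
def n_e(r_kpc, n0=2.54e-3, r_c=165.0, beta=0.67):
    """Electron density [cm^-3] of a beta model at radius r [kpc]."""
    return n0 * (1.0 + (r_kpc / r_c) ** 2) ** (-1.5 * beta)

def B(r_kpc, B0=4.7, eta=0.5, n0=2.54e-3):
    """Magnetic field strength [uG] assuming B proportional to n_e^eta."""
    return B0 * (n_e(r_kpc, n0=n0) / n0) ** eta

print(round(B(225.0), 1))  # field at the relic position (225 kpc): 2.8
```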
Indeed, $B_0 = 3.5~\mu {\rm G}$ and $\eta = 0.5$ were obtained in Abell 2382 \citep{2008A&A...483..699G}, and $B_0 = 2~\mu {\rm G}$ and $\eta = 0.5$ in Abell 2255 \citep{2006A&A...460..425G}. Since these values are unknown for CIZA1359, we adopted those of the Coma cluster as representative values. The magnetic field strengths of CIZA1359 estimated by these two independent methods are in good agreement, although they rely on strong \rev{theoretical assumptions}. This means that the magnetic field strength of CIZA1359 is consistent with those of other galaxy clusters. \subsection{Origin of Source U} Our assessment of Source U based on its location, structure, spectrum, power, and magnetic field does not suggest that Source U is noise. In this subsection, we discuss the origin of Source U, supposing that Source U is real. \rev{\subsubsection{Comparison with other early-stage merging clusters}} \rev{To understand the origin of Source U, it is useful to compare it with diffuse radio sources in other early-stage merging clusters, because the cluster physical properties that would be related to the origin are very different between early- and late-stage merging clusters. Since it is rather difficult to identify early-stage merging clusters, and there is no catalog of them yet, we look for them using the following two methods: the first is to find radio-associated cluster pairs, and the second is to compile a list of well-known early-stage merging clusters.} \rev{Radio-associated cluster pairs were searched for by catalog matching. We (1) checked the coordinates of the radio-associated clusters listed in tables 1 and 3 of \citet{2012A&ARv..20...54F} using SIMBAD, and (2) cataloged them as pairs if they lie within Planck's beam FWHM (7.18 arcmin) of the coordinates listed in table 1 of \citet{2013A&A...550A.134P}, using TOPCAT \citep{2005ASPC..347...29T}. As a result, we found 7 radio-associated cluster pairs (table \ref{tab:6}).
It should be noted that this catalog may contain not only early-stage merging clusters but also late-stage merging clusters whose subclusters have small separations. Moreover, some may be chance pairs that are close only in projection on the plane of the sky and have significantly different redshifts, although such cases are rare because galaxy clusters are sparsely distributed in the Universe.} \begin{table*}[tbp] \tbl{Radio parameters of the cluster pairs. Columns are: (1) Name of the radio-associated cluster. (2) Right ascension of the cluster. (3) Declination of the cluster. (4) Structure with radio information in columns 5 and 6. (5) Logarithm of the radio power at 1.4~GHz from \citet{2012A&ARv..20...54F}. (6) X-ray luminosity in the 0.1--2.4~keV band in units of $10^{44}$~erg~s$^{-1}$ from \citet{2012A&ARv..20...54F}. (7) Name of the cluster that pairs with the one in column 1. (8) Expected merging phase, where ``early'' and ``complex'' denote an early-stage merger that has not yet completed a core crossing and a multiple merger, respectively. }{% \begin{tabular}{cccccccc} \hline \hline Name & R.A.
& Dec & Structure & Log~$P$(1.4) & $L_X (10^{44})$ & pair & merger phase \\ & deg & deg & & W/Hz & erg/sec & & \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) \\ \hline A209 & 22.990 & -13.576 & halo & 24.31 & 6.17 & A222 & - \\ A399 & 44.485 & 13.016 & halo & 23.30 & 3.80 & A401 & early$^{(a)}$ \\ A401 & 44.737 & 13.582 & halo & 23.34 & 6.52 & A399 & early$^{(a)}$ \\ A2061 & 230.336 & 30.671 & relic & 23.65 & 3.95 & A2067 & - \\ A2063 & 230.758 & 8.639 & relic & 23.26 & 0.98 & MKW3s & - \\ A2256 & 255.931 & 78.718 & halo & 23.91 & 3.75 & A2271 & complex$^{(b)}$ \\ A2256 & 255.931 & 78.718 & relic & 24.56 & 3.75 & A2271 & complex$^{(b)}$ \\ A3562 & 203.383 & -31.673 & halo & 23.04 & 1.57 & A3558 & complex$^{(c)}$ \\ \hline \end{tabular}}\label{tab:6} \begin{tabnote} $^{(a)}$\citet{2019Sci...364..981G}; $^{(b)}$\citet{2020MNRAS.495.5014B}; $^{(c)}$\citet{2018MNRAS.481.1055H}; \end{tabnote} \end{table*} \rev{Several clusters, including new discoveries, are well known as early-stage merging clusters (e.g., 1E 2216.0-0401 and 1E 2215.7-0404; \cite{2019NatAs...3..838G}). Table \ref{tab:7} summarizes the information on the clusters that are believed to be early-stage merging clusters. 1E2216.0-0401 shows a temperature jump of the ICM between the cluster pair \citep{2019NatAs...3..838G}, and the authors suggested that the system is an early-stage merging cluster. They also reported diffuse radio sources between the cluster pair and concluded that these are bright AGNs affected in part by the merger shock. Abell~141 \citep{2021PASA...38...31D} also has a temperature jump and was reported to be an early-stage merging cluster. Radio structures were also found between the subclusters, but it is not possible to determine whether they are a radio bridge, relic, or halo due to the lack of spatial resolution. Abell~1775 has an X-ray morphology similar to that of early-stage merging clusters \citep{2021A&A...649A..37B}; sloshing or slingshot effects have been reported.
Diffuse radio sources were detected there, and their structures were reported to resemble slingshot radio halos associated with the X-ray structure. Abell~115 clearly shows two subclusters in its X-ray morphology \citep{2020ApJ...894...60L}. No radio structure was detected between them, while a radio relic is present, implying a rather late-stage merging cluster. A3391-A3395 \citep{2021A&A...647A...3B} and A98 have radio structures that seem to be associated with head-tail galaxies \citep{2014ApJ...791..104P}. } \begin{table*}[tbp] \tbl{List of well-known early-stage merging clusters. Columns (1) to (6) are the same as in table \ref{tab:6}. Column (7) is the cluster pair number. }{% \begin{tabular}{ccccccl} \hline \hline Name & R.A. & Dec & Radio source & Log~$P$(1.4)& $L_X (10^{44})$ & Pair \\ & deg & deg & & W/Hz & erg/sec & \\ (1) & (2) & (3) & (4) & (5) & (6) & (7)\\ \hline A98 & 11.608 & 20.490 & galaxy & - & - & 1 \\ \hdashline A115 & 13.998 & 26.321 & relic & 25.18 & 8.9 & 2 \\ \hdashline A141 & 16.376 & -24.655 & unknown & 24.16 & 12.6$^{(d)}$ & 3 \\ \hdashline A399 & 44.485 & 13.016 & halo & 23.36 & 3.8 & \CenterRow{4}{4} \\ A399/401 d & 44.737 & 13.582 & relic & 23.39 & 5.2$^{(e)}$ & \\ A399/401 f & 44.737 & 13.582 & relic & 23.36 & 5.2$^{(e)}$ & \\ A401 & 44.737 & 13.582 & halo & 23.38 & 6.5 & \\ \hdashline A1758 & 203.134 & 50.510 & relic & 23.57 & 9.4$^{(f)}$ & \CenterRow{3}{5} \\ A1758N & 203.134 & 50.510 & halo & 24.79 & 7.1 & \\ A1758S & 203.134 & 50.510 & halo & 23.89 & 7.1 & \\ \hdashline A1775 & 205.482 & 26.365 & halo & - & 1.5$^{(f)}$ & 6 \\ \hdashline A3391 & 96.564 & -53.681 & galaxy & - & 1.3$^{(g)}$ & \CenterRow{2}{7}\\ A3395 & 96.880 & -54.399 & galaxy & - & 1.3$^{(g)}$ & \\ \hdashline 1E 2215.7-0404 & 334.585 & -3.828 & galaxy & - & 0.8$^{(h)}$ & \CenterRow{2}{8}\\ 1E 2216.0-0401 & 334.673 & -3.766 & galaxy & - & 0.8$^{(h)}$ & \\ \hdashline CIZA1359 & 209.667 & -47.767 & relic & 24.38 & 3.1$^{(i)}$ & 9 \\ \hline \end{tabular}}\label{tab:7}
\begin{tabnote} $^{(d)}$\citet{1996MNRAS.281..799E}; $^{(e)}$Average value of A399 and A401; $^{(f)}$\citet{1998MNRAS.301..881E}; $^{(g)}$\citet{2009ApJ...692.1033V}; $^{(h)}$\citet{1990ApJS...72..567G}; $^{(i)}$\citet{2015PASJ...67...71K}; \end{tabnote} \end{table*} \rev{ Figure \ref{fig:radio vs xray} plots the radio-associated cluster pairs (table \ref{tab:6}) and the early-stage merging clusters (table \ref{tab:7}) as black circles and black crosses, respectively. Their distributions are widely scattered in the relation between radio power and X-ray luminosity. CIZA1359, shown by the green cross, appears to lie within this dispersed distribution, supporting the view that there is no contradiction in considering CIZA1359 a member of these families. } \rev{\subsubsection{Is Source U a radio relic?}} \rev{We next discuss which class Source U can be assigned to. As described in the Introduction, there are several known classifications of diffuse radio emission in galaxy clusters (see e.g., \cite{2012A&ARv..20...54F, 2019SSRv..215...16V} for reviews). We look into the acceleration mechanism of the cosmic-ray electrons that produce the observed radio emission of Source U, since discussion of the acceleration mechanism is helpful for the source classification.} As discussed above, Source U seems to be associated with the northern shock with a Mach number of 1.7. In the standard DSA test-particle regime, the power-law index $p$ of the relativistic electron energy spectrum, $n(E)dE\propto E^{-p}dE$, depends on the shock compression $C$ as $p = (C+2)/(C-1)$. We obtain $C \sim 1.96$ ($=(\frac{3}{4\mathcal{M}^2}+0.25)^{-1}$) for a Mach number of 1.7, which gives $p \approx 4.1$. If the magnetic field is roughly constant over the radio source, such a power-law electron distribution leads to synchrotron emission with a spectral index $\alpha_{\mathcal{M}} = - (p-1) / 2 \approx -1.56$ for $F_{\nu} \propto \nu^{\alpha_{\mathcal{M}}}$.
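The chain from Mach number to compression, electron index, and injection spectral index quoted above can be checked numerically; a minimal sketch (the function name is ours), using the $\gamma = 5/3$ compression $C = (\gamma+1)\mathcal{M}^2/[(\gamma-1)\mathcal{M}^2+2]$, which is equivalent to the expression in the text:

```python
# Test-particle DSA: Mach number -> shock compression C -> electron
# energy index p -> synchrotron spectral index alpha (F_nu ~ nu^alpha).
def dsa_alpha(mach, gamma=5.0 / 3.0):
    C = (gamma + 1) * mach**2 / ((gamma - 1) * mach**2 + 2)  # compression
    p = (C + 2) / (C - 1)                                    # electron index
    return C, p, -(p - 1) / 2                                # injection index

C, p, alpha = dsa_alpha(1.7)
print(round(C, 2), round(p, 1), round(alpha, 2))  # 1.96 4.1 -1.56
```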
The observed spectral index, $\alpha \pm \sigma_{\rm ind} = - 1.22 \pm 0.33$, is consistent with the estimated spectral index within $\sim 1$ $\sigma_{\rm ind}$. This would be the case if DSA is at work for particle acceleration at cluster shocks. On the other hand, it is already known that the acceleration efficiency at a weak shock is far too low to reproduce the observed radio luminosities (e.g., \cite{2012ApJ...756...97K,2013MNRAS.435.1061P, 2014ApJ...780..125V}). To resolve this issue, several possibilities have been proposed, including (1) re-acceleration of pre-accelerated electrons (e.g., \cite{2005ApJ...627..733M, 2012ApJ...756...97K, 2016ApJ...818..204V}), (2) shock drift acceleration (e.g., \cite{2011ApJ...742...47M, 2014ApJ...797...47G, 2014ApJ...794..153G}), (3) other mechanisms, for instance turbulence acceleration (e.g., \cite{2015ApJ...815..116F, 2016PASJ...68...34F, 2017ApJ...840...42K}), and of course observational systematics on both the X-ray and radio sides (e.g., \cite{2017A&A...600A.100A, 2017MNRAS.465.2916S, 2018MNRAS.478.2218H}). Indeed, there are seven compact sources in the region of Source U, with spectral indices of $-1.5$ to $-0.5$, comparable to the typical values of radio jets \citep{2019A&A...622A..17S}. Moreover, Source U is well aligned with the temperature jump (figure \ref{fig:source U}). Therefore, a possible origin of Source U would be merger-shock re-acceleration of cosmic-ray electrons pre-seeded by AGN. Note that the spectral index map (figure \ref{fig:source U} (b)) shows a patchy distribution and no clear global gradient due to aging along the shock propagation direction, implying multiple seeding from these AGN. Moreover, our non-detection of diffuse radio emission associated with the southern shock, with a Mach number of $1.4$, may indicate that there is a threshold for efficient (re-)acceleration at the shock age of $\sim 50$~Myr \citep{2016HEAD...1511103K}.
It is thus important to examine whether theoretical models of particle acceleration can explain this marginal detection and the non-detection simultaneously. There are also compact radio sources in the other shock regions; for example, the spectral indices range from $-1.5$ to $-0.5$ for six compact sources at the eastern part of the south shock. Thus, there is also a possibility of seeding cosmic-ray electrons from AGN there. It is necessary to clarify whether they are member galaxies associated with CIZA1359; this is left for future work. Finally, we discuss which of the traditional classifications of cluster diffuse emission Source U fits into. Source U shows that (I) it is found between the two subclusters of an early-stage merging cluster, (II) it has a structure along the shock, (III) it possesses a relatively flat spectral index of $-1.22$, \rev{ and (IV) multiple non-bright radio point sources are located within the diffuse radio emission and there is no bright AGN. } These facts imply that Source U is a radio relic. Feature (I) is also seen in the radio bridges of early-stage merging galaxy clusters, such as Abell~399 and 401 \citep{2019Sci...364..981G} and Abell~1758 \citep{2020MNRAS.499L..11B}. However, these clusters also have radio halos and relics \citep{2018MNRAS.478..885B}, while CIZA1359 has no other diffuse sources. Radio relics tend to be brighter than radio bridges, so we expect this to apply to CIZA1359 as well. As no other diffuse emission has been detected in CIZA1359, it is more reasonable to consider Source U a relic than a bridge. \citet{2019Sci...364..981G} proposed a formation scenario for the radio bridge between Abell 399 and 401: the contact of the two clusters generates a shock, and turbulence is excited in the post-shock region. Seed cosmic-ray electrons are then re-accelerated by the turbulence through the second-order Fermi acceleration mechanism.
Such turbulent (re-)acceleration could be realized in CIZA1359 as well, though the short shock age of $\sim 50$~Myr favors direct shock acceleration by the first-order Fermi mechanism. In other words, the radio relic candidate of CIZA1359 may be a precursor of a radio bridge. Even in that case, a low Mach number such as $\mathcal{M} = 1.7$ would require seeding of cosmic rays to achieve efficient acceleration and radio emission (e.g., \cite{2019NatAs...3..838G}). \section{Summary} We reported the results of a SPAM-based analysis of uGMRT observations at 300--500~MHz of the early-stage merging galaxy cluster CIZA J1358.9-4750 (CIZA1359). We found many radio sources, such as head-tail galaxies, FR II radio lobes, AGNs, and so-called radio phoenixes or fossils. We found a diffuse radio source candidate, named Source U, with a flux density of $24.04 \pm 2.48$~mJy, lying roughly along part of the shock front found in the previous X-ray observations. We discussed whether Source U is real or noise from several aspects of its properties. First, the location of Source U is consistent with that of the shock front; such an association is often seen in radio relics. Second, its size is comparable to that of known radio relics. Interestingly, the structure of Source U coincides with the part of the shock where the Mach number reaches its maximum value of $\mathcal{M} \sim 1.7$. Third, the relation between the radio power and the X-ray luminosity is in good agreement with that of other radio relics. And finally, the energy-equipartition magnetic-field strength, 2.1~$\mu$G, is a typical value seen in galaxy clusters and relics. These facts favor the interpretation that Source U is a real radio relic. If Source U is a real diffuse radio source, this study confirms that even a very weak ($\mathcal{M} \sim 1.7$) shock can accelerate cosmic rays and produce observable radio emission.
Moreover, we did not find any radio candidate at the shock with $\mathcal{M} \sim 1.4$, suggesting the existence of an acceleration-efficiency threshold around this Mach number. We suspect that seed cosmic rays were supplied by some of the compact radio sources (AGN) associated with Source U and that re-acceleration took place at the shock. It is important to identify the redshifts of the radio sources in order to elucidate the origin of the relic candidate. The identification is also necessary to examine whether the head-tail galaxies seen in this observation interact with CIZA1359 or not. In addition, the relic candidate has a relatively steep spectrum; therefore, future observations at lower frequencies, such as 144~MHz, would be promising for detecting the candidate. Finally, ultimately sensitive observations with the Square Kilometre Array (SKA) will enable us to understand the details of the radio sources as well as their polarization. \begin{ack} This work was supported in part by JSPS KAKENHI Grant Numbers 17H01110 (TA) and 21H01135 (TA). AIPS is produced and maintained by the National Radio Astronomy Observatory, a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. Data analysis was carried out on the Multi-wavelength Data Analysis System operated by the Astronomy Data Center (ADC), National Astronomical Observatory of Japan. RJvW acknowledges support from the ERC Starting Grant ClusterWeb 804208. \end{ack} \appendix \section*{A. Flux accuracy} \rev{ We examined the visibility amplitudes of the DDC and DIC results described in section 3.1. Figure \ref{fig:A03} shows the visibility amplitude at each baseline length. } \rev{ The DDC calibration employs data over 1k$\lambda$. The amplitudes of DDC and DIC differ in each region of figure \ref{fig:A03}. At scales below that of the most diffuse component (Source U), the DDC amplitude is about 6\% lower than that of DIC.
} \rev{ We further compared our results with the fluxes of the TIFR GMRT Sky Survey (TGSS; \cite{2017A&A...598A..78I}) to test the validity of the 10\% error in absolute flux. First, we convolved the TGSS image and our results to the same resolution (65~asec). Second, we performed Gaussian fitting on each component with SN$>20$ using pybdsf. Finally, we matched the components by position using TOPCAT \citep{2005ASPC..347...29T}. The separation used for the matching was the same value as the resolution, and since TGSS is a 150~MHz survey, its fluxes were re-scaled using $\alpha = -0.8$. The resulting flux relationship is shown in figure \ref{fig:A04}. These results mean that the flux errors in this paper are usually well within 10\%. The minimum flux in figure \ref{fig:A04} depends on the noise level of TGSS. } \section*{B. Spectral index} An example of a spectral fit is shown in figure \ref{fig:spec}. In this section we examine whether the spectral index of Source U is due to noise behavior. Figure \ref{fig:hist}(a) shows the histogram of the spectral indices for the pixels within Source U. We obtained an average spectral index of $-1.23 \pm 0.27$, where the error is the standard deviation of the histogram. However, we need a careful assessment of the spectral index for marginal sources like Source U, because of the significant contribution of noise to the spectral index fitting. We checked the spectral indices of all pixels in the image and found that they are distributed with three peaks, at 0, $-0.7$, and $-6.8$, as shown in figure \ref{fig:hist}(b). The 0-centered peak is thought to be produced by spectral fitting of bad pixels, for which the initial value of 0 remains because the fitting does not converge; indeed, the 0-centered peak disappears in the histogram of the pixels brighter than $0.5$~mJy~beam$^{-1}$, i.e. the pixels possessing a high signal-to-noise ratio (SN $\gtrsim 5$), which peaks at $-0.71 \pm 0.47$ (figure \ref{fig:hist}(c)).
The other, $-0.7$- and $-6.8$-centered peaks are thought to be produced by spectral fitting between the noise floors of the uGMRT and MeerKAT data\footnote{The image rms noise of uGMRT is larger than that of MeerKAT. When the noise values of uGMRT and MeerKAT have the same sign, they produce an artificial negative spectral index. In our data, this artificial index is $-0.7$.}. The $-0.7$-centered peak is consistent with the values estimated from the noise distribution, such as that shown in figure \ref{fig:spec}. The $-6.8$-centered peak occurs frequently when fitting negative and positive noise (see figure \ref{fig:ape2} (b)). Indeed, the index histogram calculated from the uGMRT data only (figure \ref{fig:hist}(d)) shows neither the $-0.7$- nor the $-6.8$-centered peak, with no clear peak other than the 0-centered one. We also show the spectra in linear space in figure \ref{fig:ape2}. The left panel (a) of figure \ref{fig:ape2} shows the spectrum of the same pixel as in figure \ref{fig:spec}, and the right panel (b) shows the spectrum of a noise pixel at the outer edge of the field of view. Figure \ref{fig:ape2} (a) tends to have predominantly positive data compared to the noise pixel (figure \ref{fig:ape2} (b)), which suggests that Source U is different from the noise trend. Also, the green line in figure \ref{fig:ape2} (b) shows that the spectral index is very steep because of fitting negative 400~MHz data against positive 1280~MHz data. The above results favor the interpretation that Source U is a real radio source. The value, $- 1.23 \pm 0.27$, is consistent with the spectral index of Source U inferred from the flux densities.
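The artificial $-0.7$ index in the footnote above follows directly from the two-point spectral index $\alpha = \ln(S_2/S_1)/\ln(\nu_2/\nu_1)$; a minimal sketch, where the adopted uGMRT-to-MeerKAT rms ratio of $\sim$2.3 is illustrative rather than a measured value:

```python
import math

def two_point_alpha(S1, nu1, S2, nu2):
    """Spectral index alpha for F_nu ~ nu^alpha from two flux points."""
    return math.log(S2 / S1) / math.log(nu2 / nu1)

# A genuinely flat source gives alpha = 0 ...
print(two_point_alpha(1.0, 400e6, 1.0, 1280e6))  # 0.0
# ... but fitting two same-sign noise excursions at the respective noise
# floors biases the index negative when the 400 MHz floor is higher
# (rms ratio of ~2.3 assumed here for illustration only).
print(round(two_point_alpha(2.26, 400e6, 1.0, 1280e6), 2))  # -0.7
```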
Title: MAORY/MORFEO and rolling shutter induced aberrations in laser guide star wavefront sensing
Abstract: Laser Guide Star (LGS) Shack-Hartmann (SH) wavefront sensors for next generation Extremely Large Telescopes (ELTs) require low-noise, large format (about 1Mpx), fast detectors to match the need for a large number of subapertures and a good sampling of the very elongated spots. One path envisaged to fulfill this need has been the adoption of CMOS detectors with a rolling shutter read-out scheme, that allows low read-out noise and fast readout time at the cost of image distortion due to the detector rows exposed in different moments. In this work we analyze the impact of the rolling shutter read-out scheme when used for LGS SH wavefront sensing of the Multiconjugate adaptive Optic Relay For ELT Observations (MORFEO, formerly known as MAORY) for ESO ELT; in particular, we focus on the impact on the adaptive optics correction of the distortion-induced aberrations created by the rolling exposure in the case of fast varying aberrations, like the ones coming from the LGS tilt jitter due to the up-link propagation of laser beams. We show that the LGS jitter-induced aberration for MORFEO can be as large as 100nm rms and we discuss possible mitigation strategies.
https://export.arxiv.org/pdf/2208.02661
\keywords{laser guide stars, wavefront sensor, detector, rolling shutter, extremely large telescope, adaptive optics, multi conjugate adaptive optics, simulations} \section{Introduction} \label{sect:intro} Next-generation Extremely Large Telescopes (ELTs)\cite{2020SPIE11445E..08B,2012SPIE.8447E..1JE,2020SPIE11445E..1ET} foresee the use of Laser Guide Stars (LGSs) in their adaptive optics (AO) systems \cite{2020SPIE11448E..0YC,10.1117/12.2231681,2018SPIE10703E..3VC,10.1117/12.2232411,10.1117/12.923486,2013aoel.confE...4C,10.1117/12.2314255}. They all use Shack-Hartmann (SH) LGS wavefront sensors (WFSs), which are highly demanding in terms of detector specifications\cite{Hippler2019,2016SPIE.9909E..5ZG,oberti:hal-02614170,2021A&A...649A.158B}. In this work we focus on the Multiconjugate adaptive Optic Relay For ELT Observations\cite{2020SPIE11448E..0YC,2021Msngr.182...13C,ciliegi2022} (MORFEO, formerly known as MAORY) and on the impact of rolling shutter detectors in the LGS WFS. During phase B of the MORFEO development, the two options for the LGS WFS detector that satisfy the requirements are the ESO/e2v LISA camera\cite{10.1117/12.2314489,BeleticRadialCCD, DowningNGSD2014, MorenoNGSD2016, FeautrierSPIE2012} with rolling shutter read-out and cameras based on the SONY IMX 425 chip with global shutter read-out. ESO and the MORFEO consortium decided to carry out a comparison study to evaluate which is the best solution for MORFEO. As we reported in Agapito, Busoni \textit{et al.} 2022\cite{2022JATIS...8b1505A}, complementary metal oxide semiconductor (CMOS) sensors with a rolling shutter read-out are an alternative to charge coupled devices (CCDs) with global shutter, but WFSs equipped with such devices are affected by distortion induced aberration (DIA).
In this paper we present the performance estimation for MORFEO with rolling shutter and global shutter read-out detectors, in terms of the expected SR in the best atmospheric conditions and the sky coverage at the south galactic pole. In Sec.~\ref{sect:jitter} we quantify the residual LGS tilt in both amplitude and temporal evolution for a median atmospheric condition\cite{2013aoel.confE..89S}, taking into account the MORFEO jitter compensation; in Sec.~\ref{sec:mitigation} we present the level of distortion induced aberrations and discuss their possible mitigation; in Sec.~\ref{sec:prop} we study the propagation of distortion induced aberrations in the MORFEO tomographic reconstructor; in Sec.~\ref{sec:performance} we present the performance estimation; in Sec.~\ref{sec:conclusion} we report our conclusions. \section{LGS jitter} \label{sect:jitter} LGS tilt jitter is determined by turbulence-induced fluctuations introduced during both the upward and downward propagation of the laser beam. We consider the downward propagation negligible for a 39m telescope, because the upward-propagating beam is significantly smaller. We ran a set of end-to-end simulations with PASSATA\cite{doi:10.1117/12.2233963} to characterize this jitter and its residual after closed loop correction. The uplink projection is considered to be a Gaussian beam with a waist radius of 108mm and a full aperture of 302mm diameter\cite{2011aoel.confE..56H,2015aoel.confE..45H,2021A&A...649A.158B} (see Fig.\ref{fig:laser_beam}). We compute the laser beacon image at 90km and extract the open loop jitter with a centroid: we obtained 340mas RMS (square sum of the two axes) for the median atmospheric profile\cite{2013aoel.confE..89S}. This value is in good agreement with ESO VLT GALACSI data, as can be seen in Fig. \ref{fig:GALACSI_Jitter}.
Then, we compute the closed loop residual, considering the centroid signal as input, a pure integrator as controller, and a pure delay of 3.35 frames as plant (1 frame due to read-out, 1 due to sample$\&$hold, 0.5 due to the real time computer, 0.5 due to the deformable mirror response time, and 0.35 due to the round-trip propagation of the laser beam to the sodium layer\footnote{In MORFEO the correction is done using a fast tip/tilt mirror in the launch telescopes.}), with a frame period of 2ms. We found a residual jitter of 190mas RMS. The Power Spectral Densities of these signals are reported in Fig.\ref{fig:laser_jitter_PSD}. In the following sections we will use these results, though it is worth noting that the amount of residual jitter varies according to the atmospheric conditions. \section{Mitigation of distortion induced aberrations}\label{sec:mitigation} We computed the DIA from the residual jitter, as described in Ref. \citeonline{2022JATIS...8b1505A}, for a detector with rolling shutter read-out on two sectors and we obtained, for the median profile\cite{2013aoel.confE..89S}, a wavefront error due to DIA of 370 nm rms. This is an extremely large value that would compromise the functionality of MORFEO; for this reason, we study a mitigation strategy. A possible mitigation could be based on forecasting the tilt temporal evolution using the LGS WFS measurements from the current and previous steps. In fact, knowing accurately the tilt temporal evolution during the current (\emph{i.e.} last) exposure of the frame allows correction of the DIA. We analyze this approach and its efficiency in this section.
We considered a linear extrapolation of the speed based on the estimation of the past values of speed and acceleration\cite{LINEST_GAGO}: \begin{equation}\label{eq:v0} v_0[k] = p[k] - p[k-1] \end{equation} \begin{equation}\label{eq:a0} a_0[k] = v_0[k] - v_0[k-1] \end{equation} \begin{equation}\label{eq:v} v[k] = v_0[k] + 0.5 a_0[k] \end{equation} \begin{equation}\label{eq:a} a[k] = v[k] - v[k-1] \end{equation} where $p[k]$ is the average tilt value, $v_0[k]$ is the initial estimate of the average tilt speed in nm/frame, $a_0[k]$ is the initial estimate of the average tilt acceleration in nm/frame$^2$, $v[k]$ is the final estimate of the average tilt speed in nm/frame, and $a[k]$ is the final estimate of the average tilt acceleration in nm/frame$^2$. All these average values refer to the detector integration that ends at time step $k$.\\ This estimate is used to compute the tilt speed during the detector integration\footnote{Note that this amounts to a forecast of half a frame, because the speed is estimated from the average measurements of tilt during the frame integration.} and to remove the DIA from the slopes. We can then estimate the error of this mitigation by computing the error on the tilt speed, since the error of this mitigation strategy is proportional to the error of the tilt speed estimate. Note that the tilt speed is the main contributor to the DIA, as shown in Ref. \citeonline{2022JATIS...8b1505A}, and we can neglect the DIA due to higher orders. We considered a baseline-like case with 60$\times$60 sub-apertures, 15 arcsec FoV, elongated spots (``multi-peak'' sodium profile \cite{2014A&A...565A.102P} at z=30deg), windowed CoG, photon noise and read-out noise (considering $\sim$1000 ph/frame/sub-aperture flux and 3e- RON) to have a more realistic condition, even if some details, like turbulence and the tomographic closed loop, are missing. We combined this with the temporal filtering given by the closed loop transfer function. In Fig.
\ref{fig:cumulative_RMS_jitter_speed} we report the cumulative RMS of the jitter speed and of its forecast error. We can see in this figure that the error after the forecast is about half of the input, but it is concentrated at the high frequencies, where the closed loop is able to reduce it effectively. Note that the forecast is accurate at low frequencies and introduces errors above 10\,Hz. In particular, it magnifies disturbances at high frequencies, like measurement noise, non-linearity due to elongated/truncated spots and possible vibrations in the launcher, because it comprises two derivatives in cascade, one to compute speed from position and one to compute acceleration from speed (see Eqs.~\ref{eq:v0} and \ref{eq:a0}). It is interesting to note that we could reduce the closed loop transfer function bandwidth, acting on the gain of the loop, to mitigate the propagation of the distortion induced aberrations, but unfortunately this would impact the ability of MORFEO to correct turbulence aberrations. For the median atmospheric profile\cite{2013aoel.confE..89S} we expect a reduction by a factor of 0.21, meaning that we should be able to mitigate the distortion induced aberration from the original 370\,nm down to 78\,nm. Note that the tomographic reconstructor is actually applied before the temporal filtering, but since they are linear operators we reverse their order in this analytical estimation (the propagation in the tomographic reconstructor is presented in Sec.~\ref{sec:prop}). It is worth noticing that the mitigation is mostly effective after the tomographic propagation, when the temporal filter is applied; this means that the pseudo-open loop data will be characterized by a strong disturbance. Hence all the tasks that use these data, such as PSF reconstruction, atmospheric profile estimation and vibration peak estimation, will require additional data processing to deal with the residual distortion induced aberrations. 
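The forecast of Eqs.~(\ref{eq:v0})--(\ref{eq:a}) and the closed-loop temporal filtering discussed above can be sketched as follows (Python; the integrator gain is an illustrative value, not a MORFEO setting, and the speed update is taken as $v[k] = v_0[k] + 0.5\,a_0[k]$):

```python
import cmath
import math

T = 2e-3   # frame period [s] (500 Hz), as quoted above
d = 3.35   # total loop delay in frames, as quoted above
g = 0.4    # integrator gain -- illustrative value only

def forecast_speed(p):
    """Half-frame forecast of the average tilt speed from per-frame
    average tilt values p[k]; returns v[k] for k >= 2 (None before)."""
    v = [None, None]
    for k in range(2, len(p)):
        v0_k = p[k] - p[k - 1]        # initial speed estimate
        v0_km1 = p[k - 1] - p[k - 2]
        a0_k = v0_k - v0_km1          # initial acceleration estimate
        v.append(v0_k + 0.5 * a0_k)   # half-frame extrapolation
    return v

def noise_transfer(f):
    """|G/(1+G)| of a pure-integrator controller with a pure-delay plant
    at frequency f [Hz]: the filter seen by forecast errors entering the
    loop like measurement noise (attenuated at high frequency)."""
    z = cmath.exp(2j * math.pi * f * T)
    G = g * z ** (-d) / (1.0 - z ** (-1))
    return abs(G / (1.0 + G))

# For a constant-acceleration tilt p[k] = 0.5*c*k**2 the forecast speed
# equals the true end-of-frame speed c*k:
c = 2.0
v = forecast_speed([0.5 * c * k * k for k in range(6)])
```

The two cascaded differences amplify high-frequency disturbances, but `noise_transfer(f)` decreases with frequency, which is why part of the forecast error is absorbed by the loop.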
\section{Propagation of distortion induced aberrations in MORFEO} \label{sec:prop} The residual presented in the previous section is the one seen by a single WFS and, in MORFEO, it will propagate through the tomographic reconstruction and projection matrices. Please note that the propagation through the noise transfer function has already been considered in the previous section. We compute the propagation coefficient (from WFS error to on-axis error) in the MORFEO reconstruction and projection matrices for third radial order modes only, which are the dominant term of the distortion induced aberrations, and we found 0.805 for M4 and 0.795 for the post focal DM (we are considering the single post focal DM case). These coefficients were determined by computing the modal reconstruction and projection of 1000 random realizations of coma and trefoils with unitary RMS value. Then, combining the DIA presented in Sec.~\ref{sec:mitigation} and the propagation coefficients, we obtain 62\,nm per mirror, that is a total of approximately 90\,nm when summed in quadrature. \section{Performance estimation} \label{sec:performance} As reported in the previous section, the impact on the LGS loop due to the rolling shutter read-out is about 90\,nm, but it could be as high as 350\,nm in the worst case scenario (that is, without mitigation). An additional error of 90\,nm means that the K band SR scales by 0.94, the H band SR by 0.89 and the J band SR by 0.81. Then, we also consider 40\,nm of additional error due to other differences between the LISA and SONY detectors: number of pixels (800 vs 1100) and quantum efficiency (0.95 vs 0.7). The larger number of pixels allows for a larger number of sub-apertures with the same sub-aperture FoV, but this gives an improvement lower than the one expected for a single conjugate system due to super-resolution\cite{2022JATIS...8b1514F}, while the quantum efficiency is better for the LISA camera. So the final difference becomes about 100\,nm. The effect of this 100\,nm can be seen in Fig. 
\ref{fig:Q1_perf}, where we report the expected SR in the best atmospheric conditions (seeing of 0.43\,arcsec at zenith). We evaluate the impact of such an additional error on the sky coverage, considering also the reduced Signal-to-Noise Ratio (SNR) on the LO WFS given by the lower H band SR in the technical FoV\footnote{The MORFEO NGS WFSs work in H band, see Ref.~\citeonline{2018SPIE10703E..4DB}.}. Note that this is a simplified (see Ref.~\citeonline{2022JATIS...8b1509P} for a description of the sky coverage estimation method) and optimistic approach, because the correlation between the correction in the technical FoV and the LO correction is not limited to the SNR only. We decided to focus on comparing the impact of a rolling shutter detector with respect to the impact of the 2nd post focal DM. We can see in Fig.~\ref{fig:median_sky_cov} that for half of the pointings (40\% for the large MICADO FoV, 70\% for the small MICADO FoV), where we expect the best performance, MORFEO with a single DM performs better than MORFEO with the 2nd post focal DM and a rolling shutter detector with the mitigation. This means that we lose the advantage of the second post focal DM, in particular for Galactic Astronomy and Resolved Stellar Populations. The 2nd post focal DM retains its advantage at high sky coverages, so in particular for Extragalactic Astronomy. Moreover, we verified that the 2nd post focal DM is not able to compensate for the impact of a rolling shutter detector in the best atmospheric conditions. Hence, while it is possible to fulfill the requirements, because we have a K band SR$>50\%$ for the best atmospheric conditions and $>30\%$ for a sky coverage of 50\%, the impact of a rolling shutter detector is greater than that of the 2nd post focal DM in about half of the observations, in particular the ones where we expect the best performance from MORFEO. 
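The SR scaling factors quoted above follow from the Mar\'echal approximation for an additional uncorrelated wavefront error; a minimal sketch in Python (the band centres of 2.20, 1.65 and 1.25\,$\mu$m are assumed values):

```python
import math

def strehl_scale(sigma_nm, wavelength_nm):
    """Marechal approximation: multiplicative Strehl-ratio penalty of an
    additional (uncorrelated) wavefront error of sigma_nm RMS."""
    return math.exp(-(2.0 * math.pi * sigma_nm / wavelength_nm) ** 2)

# 90 nm of extra error in K, H and J band (assumed band centres):
scales = {band: strehl_scale(90.0, wl)
          for band, wl in [("K", 2200.0), ("H", 1650.0), ("J", 1250.0)]}
```

This reproduces the quoted scalings of about 0.94, 0.89 and 0.81 in K, H and J band respectively.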
\section{Conclusion} \label{sec:conclusion} We have analyzed the effect of the image distortion in the LGS WFS of MORFEO equipped with a detector using a rolling shutter read-out scheme. These rolling shutter CMOS detectors are particularly attractive for MORFEO because of their large format, low noise and low latency. A typical residual LGS jitter of 100--200\,mas, although negligible in terms of tilt signal for an ELT LGS WFS, corresponds to a large residual aberration of a few $\mu$m RMS and propagates as rolling shutter induced aberrations of more than 300\,nm RMS on modes of the third radial order and above. Mitigation combined with the closed loop filtering is able to reduce this distortion induced aberration by a factor of 5, but after the tomographic reconstruction and projection we are still left with 90\,nm of additional error. We evaluated the performance of MORFEO with such an additional error and we verified that, while the requirements are still met, the advantage given by a second post focal DM is almost cancelled. Therefore the MORFEO consortium asked ESO to adopt as LGS WFS detector a camera based on the SONY IMX 425 chip. \subsection* {Acknowledgments} We acknowledge Fernando Gago for sharing with us the mitigation strategy presented in this work and Sylvain Oberti for sharing with us GALACSI NFM jitter data.\\ This work has been partially funded by ADONI – the ADaptive Optics National laboratory of Italy. \bibliography{report} % \bibliographystyle{spiejour} %
Title: A joint measurement of galaxy luminosity functions and large-scale field densities during the Epoch of Reionization
Abstract: One of the most exciting advances of the current generation of telescopes has been the detection of galaxies during the epoch of reionization, using deep fields that have pushed these instruments to their limits. It is essential to optimize our analyses of these fields in order to extract as much information as possible from them. In particular, standard methods of measuring the galaxy luminosity function discard information on large-scale dark matter density fluctuations, even though this large-scale structure drives galaxy formation and reionization during the Cosmic Dawn. Measuring these densities would provide a bedrock observable, connecting galaxy surveys to theoretical models of the reionization process and structure formation. Here, we use existing Hubble deep field data to simultaneously fit the universal luminosity function and measure large-scale densities for each Hubble deep field at $z =$ 6--8 by directly incorporating priors on the large-scale density field and galaxy bias. Our fit of the universal luminosity function is consistent with previous methods but differs in the details. For the first time, we measure the underlying densities of the survey fields, including the most over/under-dense Hubble fields. We show that the distribution of densities is consistent with current predictions for cosmic variance. This analysis on just 17 fields is a small sample of what will be possible with the James Webb Space Telescope, which will measure hundreds of fields at comparable (or better) depths and at higher redshifts.
https://export.arxiv.org/pdf/2208.06059
\label{firstpage} \pagerange{\pageref{firstpage}--\pageref{lastpage}} \begin{keywords} galaxies: high-redshift -- methods: data analysis \end{keywords} \section{Introduction} Over the past three decades, astronomers have put enormous effort -- and time with facilities like the Hubble Space Telescope (HST) -- into observing the most distant galaxies. As we approach the era of the James Webb Space Telescope (JWST), we expect a revolution in our understanding of the early Universe. JWST's first observational campaigns will uncover many interesting and complex phenomena in the Cosmic Dawn, but interpreting these new observations will require a solid bedrock of survey analysis. A key observable of the Cosmic Dawn is the galaxy luminosity function, which describes the galaxy population and its growth as a whole. Much effort has been put into its study, as its evolution in shape and normalization have important implications for the ways galaxies form and evolve \citep[see e.g.,][]{Schenker2013,McLure2013,Bouwens2015, Finkelstein2015, Bowler2015, Livermore2017, Atek2018, Oesch2018, Behroozi2019, Bouwens2021, Finkelstein2022}. These studies have pinned down the abundance of relatively bright galaxies at $z \lesssim 8$, with results largely consistent with models extrapolated from lower redshift \citep[see e.g.,][]{Tacchella2013, Mason2015, Furlanetto2017, Mirocha2017}. However, above $z \gtrsim 9$, galaxies are currently too rare to decisively measure their abundances, although the observations still provide important insights into early galaxies \citep{Oesch2013,Oesch2015,Bouwens2015, Ishigaki2015,Mcleod2015,Mcleod2016,Bouwens2019,RobertsBorsani2022}. These measurements have been possible thanks to several large observing campaigns across a few distinct fields: only by combining many such efforts have astronomers managed to obtain the current constraints. 
One of the (many) challenges in measuring the luminosity function is the uncertainty due to cosmic variance\footnote{In this paper, we use the term ``cosmic variance'' to describe dark matter density fluctuations between volumes in our Universe and the subsequent consequences for the galaxy population. To be precise, this is a case of sample variance. The term cosmic variance is sometimes reserved for the errors stemming from having only one Universe to observe.}: the normalization and shape of the luminosity function differ between distinct volumes due to fluctuations in the large-scale dark matter density field \citep[see Figure~\ref{hst_fig:shapechange}, and][]{Trapp2020}. However, to the extent that it reflects real large-scale structure in the Universe, cosmic variance is not just a nuisance; it is itself a key driver of both galaxy formation and reionization during the Cosmic Dawn. If large-scale densities can be measured, they can complement the luminosity function as another bedrock observable. The insights to be gained from such measurements include: \textit{(i)} Reionization likely began in the densest parts of the Universe and ended in the largest voids. Identifying such over/under-densities is an area of great interest \citep[see e.g.][]{Zitrin2015,Jung2020,Tilvi2020,Hu2021,Endsley2021,Becker2018, Davies2018, Christenson2021}. \textit{(ii)} Large-scale feedback mechanisms, driven by large-scale structure, are likely to strongly affect the galaxy population before and during reionization \citep{Thoul1996, Iliev2007, Noh2014}. \textit{(iii)} Measuring large-scale densities at early times facilitates the understanding of the assembly history of rare objects like galaxy clusters (e.g., \citealt{Chiang2017}), which form from the densest environments. \textit{(iv)} Finally, comparing large-scale density measurements from surveys with theoretical predictions of cosmic variance can help test models of the galaxy--halo connection \citep{Trapp2020}. 
In \citet{Trapp2022}, we developed a framework that simultaneously measures field densities \textit{and} the high-$z$ luminosity function given a set of galaxy surveys. Unlike the standard approach to estimating luminosity functions, which acknowledges the existence of cosmic variance but does not attempt to model it, our new framework uses Bayesian statistics to fold in a comprehensive model of cosmic variance \citep{Trapp2020} and its effect on the galaxy population (which changes the shape of the luminosity function, see Figure~\ref{hst_fig:shapechange}). As a result, our method also measures the large-scale densities of the survey fields. We then predicted the precision of various JWST cycle-1 surveys, finding that these surveys can measure field densities to the maximum precision allowed by Poisson noise. We also found these surveys can measure the luminosity function at $z =$ 12 with comparable precision to HST's existing constraints at $z =$ 8, but only if the data sets can be combined effectively. In this paper, we apply that same framework to existing HST galaxy data from \citet{Bouwens2021}\footnote{For $z =$ 6--8, the data-set from \citet{Bouwens2021} is the same as \citet{Bouwens2015} with the addition of the COSMOS, UDS, and EGS fields.} and \citet{Finkelstein2015} (see Table~\ref{hst_tab:fields}). We obtain a new measurement of the galaxy luminosity function for $z =$ 6--8 and, for the first time, measure the underlying large-scale density of every HST survey field in this data set. This work also demonstrates the power of measuring field densities, with an eye forward to the much larger, deeper, and more comprehensive data set that will be arriving in cycle-1 of JWST, allowing for many more densities to be measured at higher precision. After analyzing the existing data with our framework, we compare to earlier luminosity function estimates and present the environment measurements. 
We also develop a new method of calculating individual field densities \textit{after} a global luminosity function fit. This method drastically reduces the computation time required to obtain field densities with a minimal loss in precision. In section~\ref{hst_sec:methods}, we describe the data sets we use, briefly summarize the framework from \citet{Trapp2022}, and describe the new method of measuring environments. In sections~\ref{hst_sec:LFresults} and~\ref{hst_sec:Envresults}, we present our new measurements of the $z =$ 6--8 luminosity function and field densities. In section~\ref{hst_sec:conclusions} we discuss our results. We use the following cosmological parameters: $\Omega_m = 0.308$, $\Omega_\Lambda=0.692$, $\Omega_b=0.0484$, $h=0.678$, $\sigma_8=0.815$, and $n_s=0.968$, consistent with recent Planck Collaboration XIII results \citep{PlanckCollaboration2016}. We provide all distances in comoving units. We present all luminosities as rest-frame ultra-violet ($1500-2800$ \AA)\footnote{This wavelength range corresponds to $H$-band in the redshift range of $z\approx5$--$9$ and $K$-band for $z\approx8$--$12$.} luminosities, and all magnitudes are AB magnitudes. \begin{table*} \centering \caption{The area, magnitude limit, and source counts of each field. For fields with both \citet{Bouwens2021} and \citet{Finkelstein2015} data, the latter number count is in parentheses. \citet{Finkelstein2015} uses redshift bins of size $\Delta z = 1$ centered at $z =$ 6, 7, 8. \citet{Bouwens2021} use redshift intervals of 5.5 $<z<$ 6.3 for their $z\sim$ 6 sample, 6.3 $<z<$ 7.3 for their $z\sim$ 7 sample, and 7.3 $<z<$ 8.4 for their $z\sim$ 8 sample. 
} \label{hst_tab:fields} \begin{tabular}{cccccc} \hline \hline Field & Area & approx depth & & Number & \\ & [arcmin$^2$] & [rest-UV] & z = 6 & z = 7 & z = 8 \\ \hline Bouwens (Finkelstein) & & & & &\\ & & & & &\\ CANDELS-GS-DEEP & 64.5 & 27.9 & 198 (142) & 77 (48) & 26 (16)\\ ERS & 40.5 & 27.8 & 61 (80) & 46 (48) & 5 (6)\\ CANDELS-GS-WIDE & 34.2 & 27.4 & 43 (40) & 5 (4) & 3 (1)\\ CANDELS-GN-DEEP & 68.3 & 28.1 & 188 (180) & 134 (92) & 51 (18)\\ CANDELS-GN-WIDE & 65.4 & 27.4 & 69 (63) & 39 (24) & 18 (14)\\ HUDF/XDF & 4.7 & 29.7 & 97 (94) & 57 (40) & 29 (15)\\ HUDF09-1 & 4.7 & 28.8 & 38 (35) & 22 (16) & 18 (4)\\ HUDF09-2 & 4.7 & 28.9 & 32 (31) & 23 (10) & 15 (3)\\ MACS0416-Par & 4.9 & 29.0 & 25 (24) & 19 (8) & 4 (1)\\ Abell 2744-Par & 4.9 & 28.9 & 20 (13) & 11 (8) & 4 (1)\\ \hline Bouwens only & & & & &\\ & & & & &\\ CANDELS-UDS & 151.2 & 26.8 & 33 & 18 & 6\\ CANDELS-COSMOS & 151.9 & 26.8 & 48 & 15 & 9\\ CANDELS-EGS & 150.7 & 26.9 & 50 & 43 & 9\\ MACS0717-Par & 4.9 & 28.8 & 41 & 21 & 10\\ MACS1149-Par & 4.9 & 28.8 & 36 & 31 & 6\\ Abell S1063-Par & 4.9 & 28.8 & 40 & 20 & 7\\ Abell 370-Par & 4.9 & 28.8 & 47 & 20 & 3\\ \hline \end{tabular} \end{table*} \section{Methods and Data}\label{hst_sec:methods} \subsection{Fitting the luminosity function}\label{hst_sec:LFmethods} We fit a Schechter luminosity function to galaxy catalogs using a Bayesian fitting framework. This framework is described in detail in section 2.1 and 2.2 of \citet{Trapp2022}, but we summarize it here. 
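For reference, the comoving volume sampled by each field in Table~\ref{hst_tab:fields} follows from the cosmological parameters quoted above and the listed redshift intervals; a minimal Python sketch (flat cosmology, treating the full area as effective, with no completeness weighting):

```python
import math

# Cosmological parameters quoted in the text
Om, OL, h = 0.308, 0.692, 0.678
H0 = 100.0 * h            # km/s/Mpc
C_KM_S = 299792.458

def E(z):
    return math.sqrt(Om * (1.0 + z) ** 3 + OL)

def comoving_distance(z, n=2000):
    """Line-of-sight comoving distance [Mpc], composite Simpson rule."""
    if z == 0.0:
        return 0.0
    h_step = z / n
    s = 1.0 / E(0.0) + 1.0 / E(z)
    for i in range(1, n):
        s += (4.0 if i % 2 else 2.0) / E(i * h_step)
    return (C_KM_S / H0) * s * h_step / 3.0

def pencil_beam_volume(area_arcmin2, z_lo, z_hi):
    """Comoving volume [Mpc^3] of a field of the given area between
    z_lo and z_hi: solid angle times the integral of D_C^2 dD_C."""
    sr = area_arcmin2 * (math.pi / (180.0 * 60.0)) ** 2
    d_lo, d_hi = comoving_distance(z_lo), comoving_distance(z_hi)
    return sr * (d_hi ** 3 - d_lo ** 3) / 3.0

# e.g. the HUDF/XDF area (4.7 arcmin^2) over the Bouwens z ~ 6 interval:
v_xdf = pencil_beam_volume(4.7, 5.5, 6.3)
```

The resulting volumes (of order $10^4$ Mpc$^3$ for the pencil-beam fields) set the scale $\vol$ on which the density fluctuations discussed below are defined.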
Let us assume that the average number density of galaxies with absolute magnitudes between $(\mabs, \mabs+d\mabs)$ is described by $\Phi_{\textrm{avg}}(\mabs,z) d\mabs$, which is a Schechter function with the following redshift-dependent parameters $\vec{\phi}(z)$: \textit{(i)} the normalization $\phi^*$, \textit{(ii)} the characteristic magnitude $M^*$, and \textit{(iii)} the faint-end power-law slope $\alpha$: \begin{equation} \begin{aligned} &\Phi_{\textrm{avg}}(\mabs,z) d\mabs = \\ &(0.4~{\textrm{ln}}10)\phi^*[10^{0.4(\mabs^*-\mabs)}]^{\alpha+1}{\textrm{exp}}[-10^{0.4(\mabs^*-\mabs)}]d\mabs. \end{aligned} \end{equation} The luminosity function that can actually be observed also depends on: \textit{(i)} the effects of cosmic variance and \textit{(ii)} observational features like the completeness and contamination functions (which we combine into a single function $f(\mabs, z)$) that are unique to each survey volume. The luminosity function in each survey volume becomes: \begin{equation}\label{hst_eq:phiobs} \begin{aligned} &\Phi_{\textrm{obs}}(\mabs,\vol,z,\hstenvreal)= \\ &f(\mabs, z) \cdot \Phi_{\textrm{avg}}(\mabs,z) \left( 1 + \frac{\hstenvreal}{\sigmapb}~\epcv(\mabs,\vol,z)\right). \end{aligned} \end{equation} where $\hstenvreal = (\rho - \bar{\rho})/\bar{\rho}$ is the relative linear dark matter density in the volume $\vol$, $\sigmapb$ is the rms fluctuation of the linear dark matter density field on the scale $\vol$ (and `PB' refers to the pencil-beam shape typical of real surveys), and $\epcv(\mabs,\vol,z)$ parameterizes the luminosity-dependent cosmic variance using the model from \cite{Trapp2020}. The cosmic variance function $\epcv(\mabs,V,z)$ combines non-linear halo clustering with a self-consistent analytical galaxy model (see Figure~\ref{hst_fig:shapechange} for example values of $\epcv$). 
It also corrects for the `pencil-beam' shape of survey volumes and is comparable to simulation-based estimates of cosmic variance from \citet{Bhowmick2020} and \citet{Ucci2021}. The largest uncertainty in $\epcv$ comes from the models of non-linear halo clustering; $\epcv$ varies $\sim$25$\%$ between models. The galaxy model can also affect $\epcv$, although to a much lesser extent because $\epcv$ is not a strong function of magnitude (at increasing redshift, however, $\epcv$'s dependence on magnitude increases). We test the impact of uncertainty in $\epcv$ on our results in section~\ref{hst_sec:measuringcosmicvariance}. For more information on $\epcv$, we refer the reader to \cite{Trapp2020}, where we provide a full description of the construction of $\epcv$, explore its uncertainties and model dependence, and quantitatively compare to simulation-based estimations of cosmic variance from \citet{Bhowmick2020} and \citet{Ucci2021}. We make $\epcv$ available to the public via the Python package: \pakidge. Given data $\vec{D}$ from a large suite of galaxy surveys composed of $\snum$ fields each with their own volume, $f(\mabs, z)$, and density $\hstenvreal$, we would like to determine the probability density of the luminosity function parameters given the data: $p(\vec{\phi} | \vec{D})$, where $\vec{D}$ contains many galaxies with measured magnitudes and redshifts. We are also interested in the probability density of the dark matter densities of the $\snum$ fields given the data: $p(\vec{\hstenvreal} | \vec{D})$, where $\vec{\hstenvreal}$ is a vector containing $\hstenvreal$ for each survey. 
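A minimal numerical sketch of the Schechter function and of Eq.~(\ref{hst_eq:phiobs}) (Python; the completeness/contamination function is set to unity and $\epcv$ is taken as a constant, whereas in our framework it depends on magnitude and volume; the parameter values are illustrative):

```python
import math

def schechter(M, phi_star, M_star, alpha):
    """Average luminosity function Phi_avg(M) in magnitude units."""
    x = 10.0 ** (0.4 * (M_star - M))
    return 0.4 * math.log(10.0) * phi_star * x ** (alpha + 1.0) * math.exp(-x)

def phi_obs(M, phi_star, M_star, alpha, delta, sigma_pb, eps_cv, f_comp=1.0):
    """Observed luminosity function: Phi_avg modulated by the field's
    large-scale density delta (relative to the rms sigma_pb, through the
    cosmic-variance factor eps_cv) and by completeness f_comp."""
    bias = 1.0 + (delta / sigma_pb) * eps_cv
    return f_comp * schechter(M, phi_star, M_star, alpha) * bias

# An overdense field (delta > 0) hosts more galaxies at every magnitude:
p_mean = phi_obs(-20.0, 5.1e-4, -20.93, -1.93, 0.0, 0.05, 0.1)
p_over = phi_obs(-20.0, 5.1e-4, -20.93, -1.93, 0.1, 0.05, 0.1)
```

At $M = M^*$ the Schechter form reduces to $(0.4\,\textrm{ln}10)\,\phi^* e^{-1}$, a convenient sanity check.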
Starting with the joint posterior $p(\vec{\phi}, \vec{\hstenvreal} | \vec{D})$ and applying Bayes' theorem, we have: \begin{equation} p(\vec{\phi},\vec{\hstenvreal} | \vec{D}) \propto p(\vec{D} | \vec{\phi},\vec{\hstenvreal}) \times p(\vec{\phi}) \times p(\vec{\hstenvreal}) \end{equation} where $p(\vec{D} | \vec{\phi},\vec{\hstenvreal})$ is the likelihood $\mathcal{L}$ given the average luminosity function parameters and densities, and $p(\vec{\phi})$ and $p(\vec{\hstenvreal})$ are their priors. We assume flat priors for each luminosity function parameter. The prior for each density $p(\hstenvreal_i)$ is simply a normal distribution centered at zero with standard deviation equal to $\sigmapbi$. From \citet{Trapp2022}, the log likelihood is \begin{equation}\label{hst_eq:likelihood} \begin{aligned} &\textrm{ln}\mathcal{L} \propto \sum_i^{\snum} \Bigg \{- n_{i,\textrm{exp}} +\\ &\sum_{j}^{n_i} \left[\textrm{ln}\Phi_{\textrm{avg}}(M_j,\vec{\phi}) + \textrm{ln}\left(1+\frac{\hstenvreal_{i}}{\sigmapbi}\epcvi(M_j,V_i)\right)\right] \Bigg \}, \end{aligned} \end{equation} where the first sum is over each field, and the second sum is over each source in the $i^{\textrm{th}}$ field. Also, $n_{i,\textrm{exp}}$ is the number of sources \textit{expected} in the $i^{\textrm{th}}$ field given the average luminosity function parameters $\vec{\phi}_{\textrm{avg}}$ and the local density $\hstenvreal_{i}$. We can then write the posterior as \begin{equation}\label{hst_eq:posterior} p(\vec{\phi},\vec{\hstenvreal} | \vec{D}) \propto \mathcal{L} \times p(\vec{\hstenvreal}) \times p(\vec{\phi}). \end{equation} Finally, we can marginalize over $\vec{\phi}$ or $\vec{\hstenvreal}$ to get $p(\vec{\hstenvreal} | \vec{D})$ or $p(\vec{\phi} | \vec{D})$, respectively. At these high redshifts, the exponential cutoff can be poorly sampled by the data. If $M^*$ is brighter than the brightest galaxy in the sample, the data would be better fit by a single power-law. 
This results in an extreme degeneracy between the normalization $\phi^*$ and cutoff location $M^*$. To address this, we restrict the value of $M^*$ in our fits to be fainter than the brightest galaxy in our sample. When $\epcv$ becomes large, the factor $\left( 1 + \hstenvreal / \sigmapb \cdot \epcv \right)$ can become negative for moderate under-densities, implying a negative expectation value for the number of galaxies. This is a limitation of the Gaussian approximation to cosmic variance: in reality, we must have $\delta \ge -1$. The dark matter fluctuations on the relevant scales (of the survey fields) are much smaller than this, but the relative density of highly-biased, luminous galaxies can reach this limit. Cosmic variance's effects on a local luminosity function are better described by a distribution function in which the probability density vanishes at negative number counts but reduces to a Gaussian when $\epcv$ is small, like a log-normal or gamma distribution. In this paper, we use the latter, and the observed luminosity function becomes \begin{equation}\label{hst_eq:phiobsgamma} \Phi_{\textrm{obs}}(\mabs,\vol,z,\hstenvreal)= f(\mabs, z) \cdot f_{\Gamma}(x,k,\theta), \end{equation} where the gamma distribution $f_{\Gamma}$ is defined as \begin{equation} f_{\Gamma}(x) = \frac{1}{\Gamma(k)\theta^k}x^{k-1}e^{-x/\theta}. \end{equation} The mean of this distribution is $k\theta$ and the variance is $k\theta^2$. We choose $k$ and $\theta$ such that those values match the Gaussian case: $k\theta = \Phi_{\textrm{avg}}(M,z)$ and $k\theta^2 = \Phi_{\textrm{avg}}^2(\mabs,z)\epcv^2(\mabs,\vol,z)$. $\Gamma(k)$ is the gamma \textit{function}. The variable $x$ is chosen such that the gamma \textit{cumulative} distribution function at $x$ is equal to the normal cumulative distribution function at $\hstenvreal/\sigmapb$. 
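Matching the two moments gives $k = \epcv^{-2}$ and $\theta = \Phi_{\textrm{avg}}\,\epcv^{2}$, and the mapping to $x$ can be sketched as follows (Python; in practice library incomplete-gamma routines, e.g. from scipy, would be used, while here a simple numerical CDF and bisection stand in, assuming $k > 1$):

```python
import math

def gamma_match(phi_avg, eps_cv):
    """k and theta such that k*theta = Phi_avg and
    k*theta**2 = Phi_avg**2 * eps_cv**2."""
    return 1.0 / eps_cv ** 2, phi_avg * eps_cv ** 2

def gamma_cdf(x, k, theta, n=4000):
    """Gamma CDF by trapezoid integration of the pdf (assumes k > 1,
    so the pdf vanishes at the origin)."""
    if x <= 0.0:
        return 0.0
    lognorm = -math.lgamma(k) - k * math.log(theta)
    def pdf(t):
        return math.exp(lognorm + (k - 1.0) * math.log(t) - t / theta)
    h = x / n
    s = 0.5 * pdf(x)
    for i in range(1, n):
        s += pdf(i * h)
    return s * h

def matched_x(delta, sigma, k, theta):
    """x such that the gamma CDF at x equals the normal CDF of delta/sigma."""
    target = 0.5 * (1.0 + math.erf(delta / sigma / math.sqrt(2.0)))
    lo, hi = 0.0, 50.0 * k * theta
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if gamma_cdf(mid, k, theta) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For example, for $\Phi_{\textrm{avg}} = 10^{-4}$ and $\epcv = 0.3$ the mean-density point $\hstenvreal = 0$ maps to the gamma median, which sits slightly below the mean $k\theta$, reflecting the skewness of the distribution.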
This switch also carries over to the likelihood function in the following way: \begin{equation}\label{hst_eq:likelihoodgamma} \textrm{ln}\mathcal{L} \propto \sum_i^{\snum} \left[- n_{i,\textrm{exp}} + \sum_{j}^{n_i} \textrm{ln}f_{\Gamma}(x,k,\theta) \right]. \end{equation} \subsection{Data Sets}\label{hst_sec:data} Our analysis makes use of existing data in the redshift range $z =$ 6--8. We use the public galaxy catalogs from \citet{Bouwens2021} and \citet{Finkelstein2015}. Table~\ref{hst_tab:fields} lists the fields used and the number counts of galaxies in those fields from each group. These catalogs contain photometric redshifts and rest-frame UV magnitudes for each galaxy in their samples. We also use the completeness and contamination functions calculated for each field by those groups. These functions are obtained by simulations and become uncertain for very faint galaxies, which can affect the results. For \citet{Finkelstein2015}/\citet{Bouwens2021}, we discard all sources fainter than the magnitude at which the effective volume curve drops below 50\%/33\%, respectively. The \citet{Bouwens2021} data set contains all the fields covered by \citet{Finkelstein2015}, plus four additional HST parallel fields and three shallow wide field surveys. Where they overlap, these two groups start with similar raw data, but use different selection criteria and reduction pipelines. For example, their filter thresholds select galaxies over slightly different redshift intervals, as specified in Table~\ref{hst_tab:fields}. We consider both final data sets in parallel to compare their results and to test the robustness of our method for calculating large-scale environments with regard to these systematic choices. A challenge of considering the density of each field is the expanded dimensionality of the parameter space; each field introduces a new density parameter to the fit. 
That new parameter has a tight prior, however: a normal distribution centered at zero with standard deviation equal to $\sigmapbi$, as all of these fields are large enough to be in the linear regime. Unfortunately, sampling the posterior with many sub-fields can still be costly. To alleviate this limitation, in \citet{Trapp2022} we developed a method to combine many different fields into ``composite'' fields with a single density parameter and an ``effective'' cosmic variance value, with the treatment of the combination depending on whether the fields are contiguous or treated as independent. In this work, we combine the following groups of fields into ``composite'' fields: \begin{enumerate} \item \textbf{GS}: CANDELS-GS-DEEP, ERS, and CANDELS-GS-WIDE are combined \textit{contiguously}. \item \textbf{GN}: CANDELS-GN-DEEP and CANDELS-GN-WIDE are combined \textit{contiguously}. \item \textbf{PAR}: HUDF09-1, HUDF09-2, MACS0416-Par, and Abell 2744-Par are combined \textit{independently}. For fits with \citet{Bouwens2021} data, this grouping also includes MACS0717-Par, MACS1149-Par, Abell S1063-Par, and Abell 370-Par. \item \textbf{XDF}: The HUDF/XDF field is not combined with any others. \item \textbf{UCE}: For fits with \citet{Bouwens2021} data, the CANDELS-UDS, CANDELS-COSMOS, and CANDELS-EGS fields are combined \textit{independently}. \end{enumerate} We test the effects of these combinations in section~\ref{hst_sec:validation}. In short, we find that combining fields can have a non-negligible effect, but the changes are well within the current uncertainties, so the existing HST data do not demand a more intensive treatment. \subsection{Measuring environments}\label{hst_sec:postprocess} In the previous section, we combined survey fields in order to speed up the calculation of the luminosity function posterior. Unfortunately, the densities of the individual fields are lost when combining them in this way (except for HUDF/XDF, which is not combined with other fields). 
In this section, we introduce a new ``post-processing'' method to measure their densities efficiently. We want $p(\hstenvreal_i | D_i)$ for the $i^{\textrm{th}}$ field. The likelihood $\mathcal{L}_i = p(D_i | \hstenvreal_i, \vec{\phi})$ of the data is similar to equation~(\ref{hst_eq:likelihood}), but applied only to one field: \begin{equation}\label{hst_eq:likelihoodi} \textrm{ln}\mathcal{L}_i \propto - n_{i,\textrm{exp}} + \sum_{j}^{n_i} \textrm{ln}f_{\Gamma}(x,k,\theta). \end{equation} The posterior for the density of the $i$th field then becomes \begin{equation}\label{hst_eq:posteriori} p(\hstenvreal_i | \vec{D}) \propto p(\hstenvreal_i)\int\mathcal{L}_i \times p(\vec{\phi}) d\vec{\phi}, \end{equation} where the prior on the Schechter parameters $p(\vec{\phi})$ is the \textit{posterior} on those parameters that were found using section~\ref{hst_sec:LFmethods}. Technically, the Schechter parameter prior $p(\vec{\phi})$ and the likelihood $\mathcal{L}_i$ are not completely independent, as the data from the $i^{\textrm{th}}$ field was used to create $p(\vec{\phi})$. The correct thing to do is recalculate the $p(\vec{\phi})$ using all fields \textit{except} the field for which we wish to measure the density. However, this procedure would require re-calculating $p(\vec{\phi})$ $\snum$ times, which would be very computationally expensive. Further, each individual field is only a small part of the data set, having a small effect on the calculation of $p(\vec{\phi})$, making them only weakly correlated. We verify this claim in section~\ref{hst_sec:validation}. \section{Measurements of the Luminosity Function} \label{hst_sec:LFresults} We plot the \citet{Bouwens2021} and \citet{Finkelstein2015} data with our best-fit average luminosity functions and each composite field's luminosity function in Figure~\ref{hst_fig:LFs}. Figure~\ref{hst_fig:posterior} is an example of the Schechter function parameter posteriors produced by our framework. 
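This post-processing step can be sketched as follows (Python; a toy simplification of Eqs.~(\ref{hst_eq:likelihoodi}) and (\ref{hst_eq:posteriori}) that uses the Gaussian form of the likelihood, treats $\epcv$ as magnitude-independent, and reduces the Schechter-parameter posterior draws to precomputed per-draw summaries):

```python
import math

def density_posterior(delta_grid, sigma_pb, eps_cv, n_obs, samples):
    """Unnormalized p(delta_i | D): Gaussian prior on delta_i times the
    likelihood averaged over Schechter-parameter posterior draws.  Each
    draw is summarized by (n_exp_mean, sum_lnPhi): the expected count at
    mean density and the summed ln Phi_avg over the n_obs sources."""
    post = []
    for delta in delta_grid:
        bias = 1.0 + delta / sigma_pb * eps_cv
        if bias <= 0.0:          # outside the Gaussian approximation
            post.append(0.0)
            continue
        prior = math.exp(-0.5 * (delta / sigma_pb) ** 2)
        like = 0.0
        for n_exp_mean, sum_lnPhi in samples:
            lnL = -bias * n_exp_mean + sum_lnPhi + n_obs * math.log(bias)
            like += math.exp(lnL)
        post.append(prior * like / len(samples))
    return post

# Toy example: 70 sources observed where ~50 are expected at mean density
grid = [0.005 * i - 0.2 for i in range(81)]   # delta in [-0.2, 0.2]
draws = [(50.0, 0.0), (55.0, 0.0)]            # (n_exp_mean, sum_lnPhi) per draw
post = density_posterior(grid, 0.05, 0.15, 70, draws)
```

As expected for a field with an excess of sources, the resulting posterior peaks at $\hstenvreal > 0$.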
We provide full Schechter function fit posteriors along with this work as supplementary data (see Data Availability Section). \begin{table*} \centering \caption{Constraints on luminosity functions from various survey combinations. All error bars are 68.27\% credible intervals} \label{hst_tab:params} \begin{tabular}{clccc} \hline \hline Redshift & Data Set & & Parameter Posteriors & \\ & & $\phi^* \times 10^4$ & $\alpha$ & $M^*$\\ \hline 6 & & & & \vspace{4pt}\\ & \citet{Bouwens2021} & 5.1$_{-1.0}^{+1.2}$ & -1.93$_{-0.08}^{+0.08}$ & -20.93$_{-0.09}^{+0.09}$ \vspace{4pt}\\ & Bouwens & 3.5$^{+1.3}_{-1.0}$ & -1.96$^{+0.08}_{-0.07}$ & -21.23$^{+0.18}_{-0.20}$\vspace{4pt}\\ & \citet{Finkelstein2015} & 1.9$_{-0.8}^{+0.9}$ & -2.02$_{-0.1}^{+0.1}$ & -21.13$_{-0.31}^{+0.25}$\vspace{4pt}\\ & Finkelstein & 2.4$^{+1.2}_{-0.9}$ & -1.96$^{+0.10}_{-0.10}$ & -21.11$^{+0.25}_{-0.29}$ \vspace{4pt}\\ \hline 7 & & & &\vspace{4pt}\\ & \citet{Bouwens2021} & 1.9$_{-0.6}^{+0.8}$ & -2.06$_{-0.11}^{+0.11}$ & -21.15$_{-0.13}^{+0.13}$ \vspace{4pt}\\ & Bouwens & 2.2$^{+1.1}_{-0.8}$ & -1.95$^{+0.10}_{-0.09}$ & -21.26$^{+0.23}_{-0.28}$\vspace{4pt}\\ & \citet{Finkelstein2015} & 1.6$_{-1.0}^{+1.5}$ & -2.03$_{-0.20}^{+0.21}$ & -21.03$_{-0.50}^{+0.37}$ \vspace{4pt}\\ & Finkelstein & 2.2$^{+1.7}_{-1.1}$ & -2.00$^{+0.19}_{-0.18}$ & -20.91$^{+0.32}_{-0.39}$ \vspace{4pt}\\ \hline 8 & & & &\vspace{4pt}\\ & \citet{Bouwens2021} & 0.9$_{-0.5}^{+0.9}$ & -2.23$_{-0.20}^{+0.20}$ & -20.93$_{-0.28}^{+0.28}$ \vspace{4pt}\\ & Bouwens & 1.9$^{+1.3}_{-0.9}$ & -1.93$^{+0.17}_{-0.16}$ & -20.72$^{+0.28}_{-0.33}$ \vspace{4pt}\\ & \citet{Finkelstein2015} & 0.7$_{-0.7}^{+2.5}$ & -2.36$_{-0.40}^{+0.54}$ & -20.89$_{-1.08}^{+0.74}$ \vspace{4pt}\\ & Finkelstein & 2.8$^{+4.2}_{-2.0}$ & -2.20$^{+0.44}_{-0.38}$ & -20.32$^{+0.45}_{-0.56}$ \vspace{4pt}\\ \hline \end{tabular} \end{table*} \subsection{Comparison of luminosity function parameters}\label{hst_sec:LFresultscomparison} Figure~\ref{hst_fig:LFMeta} compares the Schechter 
function parameter measurements using our method with the results of \cite{Finkelstein2015} and \citet{Bouwens2021}. While our results agree broadly with these works, we do differ in the details. This is not surprising, as our method is more constrained when it comes to the normalization of each individual field and allows for slightly different luminosity function shapes through cosmic variance \citep{Trapp2020,Trapp2022}. In particular, compared to \citet{Bouwens2021}, our framework prefers a 1-$\sigma$ lower $\phi^*$ and 1.5-$\sigma$ lower $\mabs^*$ at $z =$ 6. At $z =$ 7, our framework prefers a 1-$\sigma$ higher $\phi^*$ and 1.5-$\sigma$ shallower $\alpha$. At $z = 8$, our framework prefers a 1-$\sigma$ higher $\phi^*$ and 1-$\sigma$ shallower $\alpha$. In their work, \citet{Bouwens2021} include a treatment of the uncertainty in measured source luminosity, an effect not considered in this work. This would have the strongest effect at the bright end of the luminosity function, and could contribute to the different findings for the highly-correlated $\mabs^*$ and $\phi^*$. Despite these differences, our best-fit luminosity function matches the total number density of sources measured by \citet{Bouwens2021} between $m_{\textrm{app}} = 26-29$ within 10\% at all redshifts. Our framework recovers the \citet{Finkelstein2015} results within 1-$\sigma$ across the board. However, our best-fit luminosity function predicts 10\%, 20\%, and 35\% more sources than \citet{Finkelstein2015}'s at $z =$ 6, 7, and 8, respectively (between $m_{\textrm{app}} = 26-29$). \subsection{Exploring systematics}\label{hst_sec:systematics} There are many subtleties that affect the measurement of the luminosity function. For example, in our framework, the choice of where to cut off the faintest sources when fitting the luminosity function can have a $\sim$1-$\sigma$ effect on the resulting parameters, especially for the faint-end slope. 
This systematic will be immediately improved in the first cycle of JWST data, with multiple large and deep galaxy surveys like PRIMER, CEERS, JADES, PANORAMIC, WDEEP, and COSMOS-Web. At present, differing treatments of the faint end contribute to the differences between methods shown here. Different groups also have different selection criteria and analysis pipelines. They may therefore be probing slightly different populations of galaxies or physical locations. For example, the results reported in \citet{Bouwens2021} and \citet{Finkelstein2015} agree with one another at the level of 1-2-$\sigma$, with \citet{Finkelstein2015} preferring a lower normalization parameter and steeper faint-end slope. At $z=6$, 7, and 8, the best fit luminosity functions from \citet{Finkelstein2015} predict 45\%, 32\%, and 3\% fewer sources than the best fit luminosity functions from \citet{Bouwens2021} in the range $m_{\textrm{app}} = 26 - 29$. Similarly, at $z=6$, 7, and 8, the best fit luminosity function from \citet{Finkelstein2015} predicts 37\% less, 36\% less, and 45\% \textit{more} total star formation rate density than the best fit luminosity function from \citet{Bouwens2021} (integrating down to $M_{\textrm{abs}} = -13$). The exact reasons for these differences are not clear, and may be a combination of one or more of the following: \textit{(i)} differences in methodology for generating effective volume curves \citep[which is most important at the faint-end, see][for details about these methodologies]{Finkelstein2015,Bouwens2021}; \textit{(ii)} differences in redshift intervals being probed, leading to sampling different physical volumes (see Table~\ref{hst_tab:fields}); \textit{(iii)} differences in the intrinsic galaxy population arising from the detailed selection criteria; and \textit{(iv)} other systematics. In the next section, we analyze the individual environments of each field using both groups' data sets. 
Discrepancies in these densities could help illuminate differences between the groups. In Figure~\ref{hst_fig:LFs}, the \citet{Bowler2014,Bowler2015,Bowler2020} best-fit binned luminosity function points (\textit{open squares}) lie somewhat below the best-fit lines at $z=$ 6 and especially at $z=$ 7 when compared to the \citet{Bouwens2021} data set (\textit{left column}). However, the points agree more closely with the \citet{Finkelstein2015} data set (\textit{right column}). This highlights the potential effects of systematics inherent in different reduction techniques as well as space- vs.\ ground-based measurements. However, there is considerable Poisson variance in the space-based data at these bright magnitudes due to low numbers of sources, which could account for these differences. Future wide-field space-based surveys such as those conducted by the Roman Space Telescope will be very important in measuring this bright-end cutoff. Some studies of the high-$z$ galaxy distribution have found evidence that a Schechter function does not provide a good fit to the bright end of the luminosity function \citep[e.g.,][]{Bowler2014,Bowler2015,Bowler2020}. To test this, we fit the data again using a modified Schechter function where we change the exponential factor to $e^{-(L/L^*)^\varGamma}$, with $\varGamma$ a constant parameter. When $\varGamma$ is less than one, this has the effect of flattening out the exponential cutoff, resulting in more bright galaxies. We find adding this extra parameter is disfavored given the data set we use, increasing the Bayesian Information Criterion (BIC) by $\sim$5. JWST alone will provide strong constraints on the shape of the luminosity function at the bright end, especially at $z \sim 6$. At higher redshifts, the Roman Telescope will be crucial in measuring the bright end of the luminosity function, especially because its data will be easily comparable to JWST due to the considerable overlap in their observable magnitudes. 
Wide-field ground-based surveys will also help in measuring the brightest galaxies. However, these ground-based surveys are limited in the redshifts they can reach, and when combining deep but narrow space-based images with shallow but wide ground-based images, one must consider carefully potential systematic normalization offsets between space- and ground-based measurements. \section{Measurements of Large-Scale Structure}\label{hst_sec:Envresults} \subsection{Survey Field Environments}\label{hst_sec:fieldenvs} Figure~\ref{hst_fig:realEnvs} shows the physical dark matter densities of each Hubble field $\rho$ relative to the average dark matter density of the Universe $\bar{\rho}$. We list the numerical results in Table~\ref{hst_tab:densities}. We display the results when using the \citet{Bouwens2021} data-set as well as when using the \citet{Finkelstein2015} data-set. In general, the results are consistent between groups, with a few exceptions that will be discussed below. The \textit{normalized} relative densities (in units of standard deviations $\sigmapb$ from average) are also given in Table~\ref{hst_tab:densities}. We convert from a dark matter over/under-density to a theoretical $M^*$ galaxy number over/under-density using the bias value calculated with our public Python package \pakidge~\citep{Trapp2020}. Typical bias values for an $M^*$ galaxy in these surveys are 6--9. At $z =$ 6, the densest \citet{Bouwens2021} field is Abell 370, with 9\% more dark matter than the Universe average for that volume, corresponding to a 60\% overdensity in $M^*$ galaxies. The least dense fields are Abell 2744 and GOODS-North Wide, with 7\%/7\% less matter than average and 43\%/48\% fewer $M^*$ galaxies. At $z =$ 7, the densest field is MACS1149 with 10\% more matter and 80\% more $M^*$ galaxies than average, and the least dense field is GOODS-South Wide with 11\% less matter and 90\% fewer $M^*$ galaxies than average. 
Finally, at $z =$ 8, the densest field is HUDF091 with 6\% more matter and 45\% more $M^*$ galaxies than average, and the least dense field is Abell 370 with 6\% less matter and 47\% fewer $M^*$ galaxies than average. At $z =$ 8 however, the uncertainty is much higher in these measurements and there is more disagreement between the data-sets. \begin{table*} \centering \caption{The \textit{real} field densities, $\delta = (\rho - \bar{\rho})/\bar{\rho}$, and the \textit{normalized} field densities, $\delta/\sigma_{\textrm{PB}}$, with their uncertainties.} \label{hst_tab:densities} \begin{tabular}{rrrrrrr} \hline \hline \multicolumn{1}{c}{Field \& Data-set} & \multicolumn{1}{c}{z = 6} & & \multicolumn{1}{c}{z = 7} & & \multicolumn{1}{c}{z = 8} \\ \multicolumn{1}{c}{$[B]$ouwens or $[F]$inkelstein} & \multicolumn{1}{c}{[norm]} & \multicolumn{1}{c}{[real]} & \multicolumn{1}{c}{[norm]} & \multicolumn{1}{c}{[real]} & \multicolumn{1}{c}{[norm]} & \multicolumn{1}{c}{[real]}\\ \hline & & & & & &\\ CANDELS-GS-DEEP $[B]$ & 0.8$\pm$0.5 & 0.03$\pm$0.02 & -1.2$\pm$0.6 & -0.05$\pm$0.02 & -1.5$\pm$0.7 & -0.06$\pm$0.03\\ CANDELS-GS-DEEP $[F]$ & -0.8$\pm$0.7 & -0.04$\pm$0.03 & -1.0$\pm$0.7 & -0.04$\pm$0.03 & 0.4$\pm$0.8 & 0.01$\pm$0.03\vspace{4pt}\\ ERS $[B]$ & -0.4$\pm$0.6 & -0.02$\pm$0.03 & -0.9$\pm$0.6 & -0.04$\pm$0.02 & -0.6$\pm$0.8 & -0.02$\pm$0.03\\ ERS $[F]$ &-0.2$\pm$0.7 & -0.01$\pm$0.03 & 1.4$\pm$0.8 & 0.06$\pm$0.03 & -1.0$\pm$0.8 & -0.04$\pm$0.03\vspace{4pt}\\ CANDELS-GS-WIDE $[B]$ & -0.9$\pm$0.6 & -0.04$\pm$0.03 & -2.5$\pm$0.7 & -0.11$\pm$0.03 & -0.7$\pm$0.8 & -0.03$\pm$0.03\\ CANDELS-GS-WIDE $[F]$ & -1.1$\pm$0.6 & -0.05$\pm$0.03 & -1.8$\pm$0.8 & -0.08$\pm$0.04 & -0.5$\pm$0.9 & -0.02$\pm$0.04\vspace{4pt}\\ CANDELS-GN-DEEP $[B]$ & 0.6$\pm$0.5 & 0.03$\pm$0.02 & 0.9$\pm$0.6 & 0.03$\pm$0.02 & 0.3$\pm$0.7 & 0.01$\pm$0.02\\ CANDELS-GN-DEEP $[F]$ & -0.4$\pm$0.7 & -0.02$\pm$0.03 & 0.8$\pm$0.8 & 0.03$\pm$0.03 & 0.2$\pm$0.8 & 0.01$\pm$0.03\vspace{4pt}\\ CANDELS-GN-WIDE $[B]$ & 
-1.8$\pm$0.5 & -0.07$\pm$0.02 & -0.6$\pm$0.6 & -0.02$\pm$0.02 & 0.5$\pm$0.8 & 0.02$\pm$0.03\\ CANDELS-GN-WIDE $[F]$ & -1.5$\pm$0.7 & -0.06$\pm$0.03 & -0.8$\pm$0.7 & -0.03$\pm$0.03 & 1.0$\pm$0.9 & 0.04$\pm$0.03\vspace{4pt}\\ HUDF/XDF $[B]$ & 0.8$\pm$0.6 & 0.05$\pm$0.03 & 0.1$\pm$0.7 & 0.01$\pm$0.04 & 0.5$\pm$0.7 & 0.02$\pm$0.04\\ HUDF/XDF $[F]$ & 1.6$\pm$0.7 & 0.10$\pm$0.04 & 0.7$\pm$0.7 & 0.04$\pm$0.04 & -0.1$\pm$0.9 & -0.00$\pm$0.05\vspace{4pt}\\ HUDF09-1 $[B]$ & 0.2$\pm$0.6 & 0.01$\pm$0.04 & 0.7$\pm$0.7 & 0.04$\pm$0.04 & 1.1$\pm$0.7 & 0.06$\pm$0.04\\ HUDF09-1 $[F]$ & 1.2$\pm$0.7 & 0.07$\pm$0.04 & 1.1$\pm$0.8 & 0.06$\pm$0.05 & 0.4$\pm$0.9 & 0.02$\pm$0.05\vspace{4pt}\\ HUDF09-2 $[B]$ & -0.2$\pm$0.6 & -0.01$\pm$0.04 & 0.4$\pm$0.7 & 0.02$\pm$0.04 & 0.8$\pm$0.7 & 0.04$\pm$0.04\\ HUDF09-2 $[F]$ & 0.2$\pm$0.7 & 0.01$\pm$0.04 & 0.7$\pm$0.7 & 0.04$\pm$0.04 & -0.3$\pm$0.9 & -0.01$\pm$0.05\vspace{4pt}\\ MACS0416-Par $[B]$ & -0.6$\pm$0.7 & -0.03$\pm$0.04 & 0.0$\pm$0.7 & 0.00$\pm$0.04 & -0.8$\pm$0.8 & -0.05$\pm$0.04\\ MACS0416-Par $[F]$ & 0.2$\pm$0.7 & 0.01$\pm$0.04 & -0.7$\pm$0.8 & -0.04$\pm$0.04 & -0.5$\pm$0.9 & -0.02$\pm$0.05\vspace{4pt}\\ Abell 2744-Par $[B]$ & -1.1$\pm$0.7 & -0.07$\pm$0.04 & -0.6$\pm$0.7 & -0.03$\pm$0.04 & -0.8$\pm$0.8 & -0.04$\pm$0.04\\ Abell 2744-Par $[F]$ & -0.7$\pm$0.7 & -0.04$\pm$0.04 & -0.2$\pm$0.8 & -0.01$\pm$0.04 & -0.5$\pm$0.9 & -0.03$\pm$0.05\vspace{4pt}\\ \hline CANDELS-UDS $[B]$ & -0.9$\pm$0.7 & -0.03$\pm$0.02 & -1.3$\pm$0.7 & -0.04$\pm$0.03 & 0.0$\pm$0.9 & 0.00$\pm$0.03\vspace{4pt}\\ CANDELS-COSMOS $[B]$ & 0.1$\pm$0.7 & 0.01$\pm$0.02 & -1.5$\pm$0.7 & -0.05$\pm$0.02 & 0.2$\pm$0.9 & 0.01$\pm$0.03\vspace{4pt}\\ CANDELS-EGS $[B]$ & -0.1$\pm$0.7 & -0.01$\pm$0.02 & 0.4$\pm$0.7 & 0.01$\pm$0.02 & 0.8$\pm$0.8 & 0.03$\pm$0.03\vspace{4pt}\\ MACS0717-Par $[B]$ & 0.8$\pm$0.6 & 0.05$\pm$0.04 & 1.0$\pm$0.7 & 0.06$\pm$0.04 & 0.3$\pm$0.7 & 0.02$\pm$0.04\vspace{4pt}\\ MACS1149-Par $[B]$ & 0.9$\pm$0.6 & 0.05$\pm$0.04 & 1.9$\pm$0.7 & 0.10$\pm$0.04 & 
-0.4$\pm$0.8 & -0.02$\pm$0.04\vspace{4pt}\\ Abell S1063-Par $[B]$ & 0.6$\pm$0.6 & 0.03$\pm$0.04 & 0.6$\pm$0.7 & 0.03$\pm$0.04 & -0.2$\pm$0.8 & -0.01$\pm$0.04\vspace{4pt}\\ Abell 370-Par $[B]$ & 1.6$\pm$0.6 & 0.09$\pm$0.04 & 0.6$\pm$0.7 & 0.03$\pm$0.04 & -1.1$\pm$0.8 & -0.06$\pm$0.04\vspace{4pt}\\ \hline \end{tabular} \end{table*} We find, in general, our measurements of the dark matter density are consistent when using data from \citet{Finkelstein2015} or \citet{Bouwens2021}. However, GSD at $z =$ 6 and 8, and ERS at $z =$ 7, have very discrepant density measurements between the two analyses. The exact reasons for these differences are unknown; the potential culprits are likely the same as those for the differences in the determination of the luminosity function discussed in section~\ref{hst_sec:systematics}. The XDF field lies inside the larger GOODS-S field, yet at $z =$ 6/7, their densities differ by more than one standard deviation (see Figure~\ref{hst_fig:realEnvs}). This is to be expected, as they are probing different scales. Using the excursion-set formalism \citep{Bond1991,Lacey1993}, we can calculate the expected root variance in the density field around a single point between two scales. Between the scales defined by a 5 arcmin$^2$ field (XDF) and a 100 arcmin$^2$ field (GOODS-S) with $z =$ 5.5--6.5, the root variance in the density field is 0.044. This variance, combined with the uncertainties in the density measurements themselves, makes it unsurprising to find, e.g., an over-dense XDF field inside an under-dense GOODS-S field. Finally, we find the most and least-dense fields have luminosity functions that are distinguishable by their normalization, but not by their shape (see Fig.~\ref{hst_fig:LFextreme}). JWST will be able to measure individual fields to much higher precision, potentially being able to distinguish over/under-densities by the shapes of their luminosity functions. 
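The matter-to-galaxy conversions quoted in this subsection are linear in the bias. A minimal sketch, with the function name and the bias value chosen here purely for illustration (the actual bias values come from \pakidge):

```python
# Convert a dark matter over-density into the expected M* galaxy number
# over-density via linear bias: delta_gal = b * delta_matter.
# The bias value is an assumed input (the text quotes b ~ 6-9).
def galaxy_overdensity(delta_matter, bias):
    return bias * delta_matter

# e.g. a field with 9% more matter than average and b ~ 6.7 corresponds
# to roughly a 60% overdensity in M* galaxies, as quoted above
print(f"{galaxy_overdensity(0.09, 6.7):.2f}")
```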
\subsection{Validation} \label{hst_sec:validation} \subsubsection{Combining surveys} In section~\ref{hst_sec:data}, we described creating composite surveys that cut down on the time it takes to fit the luminosity function. We tested the effect of combining surveys in this way by using an alternate grouping: folding the HUDF/XDF field into the \textbf{PAR} composite group. At all redshifts, this re-grouping does not affect the width of the resulting posterior, but it does shift its position in the following way. At $z =$ 6, the parameters $\phi^*$, $\alpha$, and $M^*$ decrease by 0.10$\sigma$, 0.18$\sigma$, and 0.15$\sigma$, respectively. At $z =$ 7, the same parameters decrease by 0.51$\sigma$, 0.76$\sigma$, and 0.48$\sigma$, respectively. Finally, at $z =$ 8, the same parameters \textit{increase} by 0.25$\sigma$, 0.47$\sigma$, and 0.30$\sigma$, respectively. This difference most likely occurs because the XDF is significantly deeper than the other \textbf{PAR} fields, so its completeness function is reasonably different from the other fields as well. Fields should only be combined into composite fields if they have similarly-shaped effective volume curves and cover similar magnitude ranges, as is the case for the rest of our composite fields. Even so, the modest changes resulting from the different treatment of the XDF demonstrate that our results are robust to the details of the composite fields. \subsubsection{Post-processed environments} In section~\ref{hst_sec:postprocess}, we described a process to measure the densities of each individual field, marginalizing over the posterior of the luminosity function $p(\vec{\phi} | \vec{D})$. Technically, this ``double-counts'' the fields, as each field was used to create that posterior in the first place. We test this by comparing the density of XDF generated by the full fitting framework (not double-counted) with the density of XDF using the ``post-processing'' described in section~\ref{hst_sec:postprocess} (double-counting). 
The densities are the same within 0.02$\sigma$ at all redshifts, showing the process is robust despite the double-counting. As described previously, the XDF has a relatively strong effect on the determination of $p(\vec{\phi} | \vec{D})$, so the double-counting effect is even smaller for less-influential fields. \subsubsection{Measuring cosmic variance}\label{hst_sec:measuringcosmicvariance} With the density measurements from Section~\ref{hst_sec:fieldenvs} in hand, one can ask: does the variance in measured densities match the expectation from theory? Given the range of physical scales probed by the fields we consider, this can only be explored in a global sense. That is, we can evaluate our model of cosmic variance with our results by calculating the standard deviation of all of the \textit{normalized} density measurements $(\delta/\sigma_{\textrm{PB}})$, weighted by the sizes of the error bars. By definition, the standard deviation of the normalized densities should equal unity. If our model for cosmic variance gave values that were globally too large, we would expect the standard deviation of our measured normalized densities to be less than one; if it gave values that were too small, we would expect it to be greater than one. This provides a way to test whether our theoretical inputs (including the conditional mass function and galaxy bias model) are reasonable. The standard deviation of the measured normalized densities is 0.92$\pm$0.09. Therefore, the data do not prefer a globally larger or smaller cosmic variance model. Doing the same calculation but splitting out into the different redshifts gives standard deviation values of 0.90$\pm$0.16, 1.06$\pm$0.19, and 0.74$\pm$0.13 for redshifts 6, 7, and 8, respectively. Redshifts 6 and 7 do not prefer larger or smaller values of cosmic variance, but redshift 8 prefers a 2-$\sigma$ smaller value for cosmic variance. 
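This consistency check can be sketched in a few lines. The inverse-variance weighting below is one simple reading of ``weighted by the sizes of the error bars''; the input values are the ten $z=6$ Bouwens-data entries from the [norm] column of Table~\ref{hst_tab:densities}:

```python
import numpy as np

# Normalized densities delta/sigma_PB and their 1-sigma errors, copied
# from the z = 6 [B] rows of the densities table.
norm = np.array([0.8, -0.4, -0.9, 0.6, -1.8, 0.8, 0.2, -0.2, -0.6, -1.1])
err = np.array([0.5, 0.6, 0.6, 0.5, 0.5, 0.6, 0.6, 0.6, 0.7, 0.7])

# Inverse-variance weighted standard deviation about zero; if the cosmic
# variance model is right, this should come out consistent with unity.
w = 1.0 / err**2
weighted_std = np.sqrt(np.sum(w * norm**2) / np.sum(w))
print(round(weighted_std, 2))
```

Values near one indicate the error bars (and hence the cosmic variance model behind them) are neither over- nor under-estimated.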
The largest uncertainty in the \citet{Trapp2020} model of cosmic variance is the uncertainty in the conditional halo mass function -- the cosmic variance of dark matter haloes themselves. In \citet{Trapp2020}, we estimate this uncertainty to be $\sim$25\% globally. To test the effects of this uncertainty on our results, we allow the overall normalization of $\epcv$ to vary as a free parameter in one of our medium-resolution fits, constrained by a Gaussian prior with relative width 0.25. We find that a globally larger/smaller value of $\epcv$ is not preferred, and marginalizing over this free parameter does not change the luminosity function posterior significantly. Therefore, to save computation time in the final, higher-resolution fits, we do not consider the uncertainty in $\epcv$. Incorporating such uncertainties will be important for deeper surveys with smaller errors on the density field. \subsection{Ionization Environment} \label{hst_sec:ionization} In this section, we convert the density measurements from section~\ref{hst_sec:fieldenvs} to ionization states using a very simple mapping. This section serves as an example of how to make use of information on the large-scale density of a region. To do this, we construct a simple toy model of reionization. Our prescription can be made more rigorous by comparing to more detailed reionization models, such as those generated by \texttt{21cmFAST} \citep{Mesinger2011, Park2019}, but we focus here on a very simple prescription to make the inference as transparent as possible. In this model, we assume the ionized fraction of hydrogen $Q$ in a region is linearly dependent on the fraction of baryons that have collapsed into haloes $f_{\textrm{coll}}$ through an efficiency parameter $\zeta$: \begin{equation}\label{lae_eq:Qzetafcoll} Q = \zeta \cdot f_{\textrm{coll}}. 
\end{equation} Within the Press-Schechter model \citep{Press1974, Lacey1993}, \begin{equation}\label{hst_eq:fcoll} f_{\textrm{coll}}(\delta, R_\alpha, z) = {\textrm{erfc}}\left( \frac{\delta_{\textrm{crit}}(z) - \delta_0}{\sqrt{2(\sigma^2_{\textrm{min}} - \sigma^2_{R})}} \right), \end{equation} where $\delta_{\textrm{crit}}(z)$ is the linearized density required for spherical collapse \citep[approximately 1.69 divided by the growth factor,][]{Eisenstein1998}, $\delta_0$ is the density of the region $\delta$ scaled to $z = 0$ (via the growth factor), $\sigma_{R}$ is the linear r.m.s.\ fluctuation of the dark matter density field on a scale of $R$, and $\sigma_{\textrm{min}}$ is the same but on the scale of the smallest virialized halo that can form a galaxy, both evaluated at $z=$ 0. We take that smallest halo to have a virial temperature $T_{\textrm{vir}}=10^4\,$K, when atomic line cooling becomes efficient enough to collapse and fragment gas clouds for star formation \citep{Loeb2013}. For concreteness, we then assert that reionization is complete for an \emph{average} region at $z=$ 6 (i.e., $Q = 1$ for a large region with $\delta_0=0$). This allows us to define $\zeta = 1 / f_\textrm{coll,6}$, where $f_\textrm{coll,6} = f_\textrm{coll}(z=6, \sigma_R = 0, \delta_0 = 0)$, the average collapse fraction of the Universe at $z=$ 6. We can then calculate the ionization fraction of each Hubble field by multiplying its collapse fraction (eq.~\ref{hst_eq:fcoll}) by $\zeta$, with the density of the region taken from Table~\ref{hst_tab:densities}. This simple model makes three major assumptions about reionization. First, we assume that it completes (for an average region) at $z=6$. This is simply to fix numbers; if reionization ended at a slightly different time, our general conclusions would not change. Second, it assumes that galaxies' ionizing efficiencies are independent of their masses. 
This is very unlikely to be true of actual galaxies (e.g., \citealt{Trenti2010, Tacchella2013, Mason2015, Behroozi2015, Furlanetto2017}), but for the relative ionized fractions across survey fields (the only purpose we require), it should be reasonably accurate. Third, it assumes that all ionizing photons produced in a given volume contribute to the ionization state of that volume, rather than being absorbed by Lyman-limit systems or escaping to help ionize other regions. The latter is likely a reasonable assumption, because recent measurements show that the mean free path is quite small at $z \sim 6$ \citep{Becker2021}. However, ignoring absorption by dense regions of the IGM will overestimate the actual ionized fraction, especially at the tail end of reionization. For that reason, values $Q > 1$ should be interpreted as the production of an excess of ionizing photons. We plot the results in Figure~\ref{hst_fig:Qs}. The \textit{vertical dashed} lines in each panel correspond to the overall ionization of the Universe according to the model at each redshift. The fields with $Q > 1$ can be interpreted as having reionized before the rest of the Universe. For example, the densest field at $z=$ 6, Abell370, reionized at $z=6.4 \pm 0.2$ according to this model. While these estimates of the ionization environments are obviously extremely crude, they demonstrate that even on the large scales of some HST surveys, the density fluctuations are large enough to cause substantial differences in the progress of reionization between survey volumes. For example, our results suggest that the $z=6$ GOODS North Wide field is delayed in its reionization compared to the GOODS South Wide field. Such inferences provide targets for investigations of the interplay between reionization and galaxy formation. \section{Conclusions}\label{hst_sec:conclusions} We measure the universal luminosity function of galaxies at $z =$ 6--8 using existing data \citep{Finkelstein2015,Bouwens2021}. 
We use a new fitting method that is more constrained than existing methods and also incorporates the density of individual fields or composite groups of fields \citep{Trapp2022}. Our results are consistent with existing studies \citep{Finkelstein2015,Bouwens2021}, but differ in the details (see Fig.~\ref{hst_fig:LFMeta}). Our method has the benefit of considering the shape change of the luminosity function for different densities, an effect that will become more pronounced at higher redshift \citep{Trapp2020} and in deeper observations. We measure the dark matter density of most deep Hubble galaxy fields from $z =$ 6--8. We find the least/most-dense Hubble deep fields at $z =$ 6, 7, 8 are GNW/Abell370, GSW/MACS1149, and Abell370/HUDF091, respectively. These fields have expected dearths/excesses of $M^*$ galaxies of -48\%/60\%, -90\%/80\%, and -47\%/45\%, respectively. We find dark matter densities are distributed in a way that is consistent with current estimates of cosmic variance \citep{Trapp2020,Bhowmick2020,Ucci2021}. JWST will obtain many more dark-matter measurements of survey fields and at a higher precision than currently possible. These densities can be sorted and used to compare many statistical aspects of galaxies in under/over-dense environments, from the shape of the luminosity function, to the star-formation histories of galaxies, to the number of LAEs or QSOs in a region. For example, in Figure~\ref{hst_fig:Qs}, we used a simple reionization model to associate the underlying density of the field with its ionization state, showing that even large-scale surveys (such as the GOODS fields) can have substantially different ionization states. The pencil-beam shape of these volumes makes interpreting a \textit{high} density complicated, as galaxies are likely clustered radially within the pencil-beam. 
If galaxies can be sorted more precisely in redshift space (through accurate photometric redshifts, for example), it will be possible to make field density estimates on smaller, more spherical volumes, which will allow for closer comparison to observations. In particular, our method could be improved to incorporate the probability distributions of photometric redshifts (and luminosities) as measurements improve. We do not analyze the density sub-structure of fields, something that would be especially useful for large contiguous fields like COSMOS-Web. This is doable in principle, as we will have 3-D positions of each source in each field. We do not attempt this here because of the substantial radial widths of the HST fields, but it should be possible with JWST. \section*{Acknowledgements} We would like to thank our reviewer for their comments and suggestions. We thank R. Bouwens and S.~L. Finkelstein for sharing their source catalogs and completeness functions for their surveys. We also thank R.~Bowler for helpful discussions and R.~Bouwens for helpful comments on the manuscript. This work was supported by the National Science Foundation through award AST-1812458. In addition, this work was directly supported by the NASA Solar System Exploration Research Virtual Institute cooperative agreement number 80ARC017M0006. We also acknowledge a NASA contract supporting the ``WFIRST Extragalactic Potential Observations (EXPO) Science Investigation Team'' (15-WFIRST15-0004), administered by GSFC. \textit{Software used:} This work uses iPython \citep{Perez2007} and the following Python packages: Matplotlib \citep{Hunter2007}, pandas \citep{McKinney2010}, NumPy \citep{Walt2011}, and SciPy \citep{Virtanen2020}. \section*{Data Availability} We include all luminosity function fits and density results in Tables~\ref{hst_tab:params} and \ref{hst_tab:densities}. 
We provide the Schechter function posterior fits for both the \citet{Bouwens2021} and \citet{Finkelstein2015} data sets at $z =$6, 7, and 8 as supplementary data. \bibliographystyle{mnras} \bibliography{me} % \bsp % \label{lastpage}
Title: Quark Matter in the NJL Model with a Vector Interaction and the Structure of Hybrid Stars
Abstract: The properties of hadron-quark hybrid stars are studied when the quark phase is described in terms of a local SU(3) Nambu--Jona-Lasinio (NJL) model taking into account the contribution of the vector and axial-vector interaction between the quarks, and the hadronic phase, in the relativistic mean field (RMF) model. For different values of the vector coupling constant $G_V$, the equations of state of the quark matter are calculated and the parameters of the hadron-quark phase transition are determined under the assumption that the phase transition takes place in accordance with Maxwell's construction. It is shown that for a larger vector coupling constant, the equation of state of the quark matter will be "stiffer" and the coexistence pressure $P_0$ of the phases will be greater. Using the resulting hybrid equations of state, the TOV equations are integrated numerically and the mass and radius of the compact star are determined for different values of the central pressure $P_c$. It is shown that when $G_V$ is larger, the maximum mass of the compact star will be larger and thereby, the radius of the configuration with maximum mass will be smaller. Questions of the stability of hybrid stars are also discussed. It is shown that in terms of the model examined here, for all values of the vector coupling constant, a hybrid star with an infinitely small quark core is stable. These results are compared with recent measurements of the mass and radius of the pulsars PSR J0030+0451 and PSR J0740+6620, carried out at the International Space Station with the NASA's Neutron star Interior Composition Explorer (NICER) X-ray telescope. A comparison of the theoretical results with observational data does not exclude the possibility of quark deconfinement in the interiors of compact stars.
https://export.arxiv.org/pdf/2208.00466
\keywords{hadronic matter\and relativistic mean field (RMF) theory\and delta meson\and quark matter\and NJL model\and vector interaction\and quark phase transition\and hybrid stars} \section{Introduction} Studies of the properties and composition of compact stars are an important area of modern physics. It is known that matter in this kind of celestial object has a low temperature and an extremely high density in the central part of the object. To describe the structure of a compact star it is necessary to know the equation of state of strongly interacting matter at baryon number densities $n_B\in[0,\,10\,n_0]$, where $n_0=0.16$ fm$^{-3}$ is the saturation density of nuclear matter. The matter in the central part of a compact star is essentially a system of particles described by quantum chromodynamics (QCD). At the extremely high densities inside a compact star various exotic structures may appear, such as hyperonic matter, a pion condensate, a kaon condensate, a quark phase, or a color superconducting phase. Over the last few decades, many papers have dealt with the study of the thermodynamic properties of quark matter, obtaining the corresponding equation of state, and the application of these results to the study of the properties of compact stars. Work in this area has become more intense, especially after the discovery of the two pulsars PSR J1614-2230 \cite{1} and PSR J0348+0432 \cite{2} with masses on the order of twice that of the Sun. Exact measurements of the masses of compact stars are a good way of obtaining limits on the equation of state of superdense matter inside a star. Simultaneous measurement of the mass and radius of a neutron star undoubtedly opens up new possibilities for obtaining stricter limitations on the equation of state of superdense matter. Recent measurements with the NICER X-ray telescope installed on the International Space Station have made it possible to measure simultaneously both the mass and radius of PSR J0030+0451. 
The following values have been obtained for this pulsar: $M=1.44^{+0.15}_{-0.14}~ M_\odot$, $R=13.02^{+1.24}_{-1.06}$ km \cite{3}. Observations of the pulsar PSR J0740+6620 and an analysis of the data in the framework of this same program have made it possible to determine the mass and radius: $M=2.08^{+0.07}_{-0.07}~ M_\odot$, $R=13.7^{+2.6}_{-1.5}$ km \cite{4,5}. We note that this pulsar is the most massive neutron star known at this time. In many studies of the quark phase inside hybrid stars (e.g., Refs.\cite{{6},{7},{8},{9},{10},{11},{12},{13},{14},{15},{16},{17}}), the MIT quark bag phenomenological model \cite{18} is used. Recently, the Nambu--Jona-Lasinio (NJL) model \cite{19,20} has very often been used for describing quark matter. This model successfully reproduces many features of quantum chromodynamics \cite{{21},{22},{23},{24},{25}}. By combining different modifications of the quark NJL model with various models describing hadronic matter, a number of authors have constructed quark-hadron hybrid equations of state (e.g., Refs.\cite{{26},{27},{28},{29},{30}}). In our paper \cite{31} we studied the structure of hybrid stars using an equation of state with a quark phase transition at a constant pressure, where the hadronic phase is described in a relativistic mean field (RMF) model \cite{32,33}, while the quark phase is described by a local SU(3) NJL model in which the terms of the vector and axial-vector channels of the quark interaction were neglected. Recently, we have studied the thermodynamic characteristics of quark matter in terms of a local NJL model in which the terms of the vector and axial-vector interactions were also taken into account \cite{34}. 
The purpose of this paper is to obtain a hybrid equation of state within the local NJL model, with the vector and axial-vector interaction channels of the quarks taken into account, for different values of the vector interaction coupling $G_V$, and to study the influence of this type of interaction on the structure of a compact star. The hadronic phase is described using an extended RMF model where, besides the fields of the exchanged $\sigma$-, $\omega$-, and $\rho$-mesons, the scalar-isovector $\delta$-meson \cite{11} is also taken into account. In constructing the hybrid equation of state it was assumed that the surface tension between the quark and hadronic phases is so large that the formation of a mixed phase is energetically unfavorable and the phase transition takes place at constant pressure in accordance with a Maxwell construction. This assumption is also supported by the conclusion of a recent paper \cite{35} that repulsive vector interactions strongly enhance the surface tension of quark matter. In this paper we shall use the ``natural'' system of units in which $\hbar=c=1$. \section{The hadronic phase in the RMF model with the contribution of the scalar-isovector $\delta$-meson field taken into account} In this paper we have assumed that in the supernuclear density region electrically neutral hadronic matter in $\beta$-equilibrium consists of protons $p$, neutrons $n$, and electrons $e$. For the thermodynamic description of hadronic matter, the relativistic mean field (RMF) theory based on a quantum-field model of meson exchange was used. To describe the strong interaction between nucleons, the channels corresponding to the exchange of mesons with different transformation properties in isotopic and ordinary space are taken into account: the isoscalar-scalar $\sigma$-meson, the isoscalar-vector $\omega$-meson, the isovector-scalar $\delta$-meson, and the isovector-vector $\rho$-meson.
The relativistic Lagrangian density of hadronic matter consisting of protons, neutrons, and electrons within the RMF model is given by \cite{11} \begin{eqnarray} \label{eq1} { \cal L}_{RMF}=\bar {\psi} _{N}\left[ \gamma ^{\mu }\left(i\partial _{\mu }-g_{\omega }\omega _{\mu }(x)-\frac{1}{2}g_{\rho }\overrightarrow{\tau }_N\overrightarrow{\rho }_{\mu }(x)\right) -\left( m_{N}-g_{\sigma }\sigma (x)-g_{\delta }\overrightarrow{\tau }_{N}\overrightarrow{\delta }(x)\right) \right] \psi _{N} \nonumber \\ +\frac{1}{2}\left( \partial _{\mu }\sigma (x)\partial ^{\mu }\sigma (x)-m_{\sigma }^{2}\sigma (x)^{2}\right) -\frac{b}{3}m_N \left(g_\sigma \sigma(x)\right)^3-\frac{c}{4} \left(g_\sigma \sigma(x)\right)^4 \nonumber \\ +\frac{1}{2}m_{\omega }^{2}\omega ^{\mu }(x)\omega _{\mu }(x)-\frac{1}{4}\Omega _{\mu \nu}(x)\Omega ^{\mu \nu }(x) +\frac{1}{2}\left( \partial _{\mu }\overrightarrow{\delta }(x)\partial ^{\mu }\overrightarrow{\delta }(x)-m_{\delta }^{2}\overrightarrow{\delta }(x)^{2}\right) \\ +\frac{1}{2}m_{\rho }^{2}\overrightarrow{\rho }^{\mu }(x)\overrightarrow{\rho }_{\mu }(x)-\frac{1}{4}\Re_{\mu \nu }(x)\Re^{\mu \nu}\left( x\right)+\bar {\psi} _{e}\left( i\gamma ^{\mu }\partial _{\mu }-m_e \right)\psi _{e} \nonumber \end{eqnarray} Here $\psi _{N} = \left({{\begin{array}{*{20}c} {\psi _{p}} \hfill \\ {\psi _{n}} \hfill \\\end{array}} } \right)$ is the isospin doublet of the nucleon bispinors, $\psi_e$ is the electron wave function, $\vec {\tau}_N$ are the isospin $2\times2$ Pauli matrices, $\sigma(x)$, $\omega _{\mu }(x)$, $\overrightarrow{\delta }(x)$, and $\overrightarrow{\rho }_{\mu }(x)$ are the fields of the exchange mesons at the space-time point $x=x_{\mu}=(t,x,y,z)$, $m_N$, $m_e$, $m_\sigma$, $m_\omega$, $m_\delta$, and $m_\rho$ are the masses of the free particles, and $\Omega_{\mu \nu }(x)$ and $\Re_{\mu \nu }(x)$ are the antisymmetric tensors of the vector fields $\omega _{\mu }(x)$ and $\overrightarrow{\rho }_{\mu }(x)$, respectively:
\begin{equation} \label{eq2} \Omega _{\mu \nu} \left( {x} \right) = \partial _{\mu} \omega _{\nu} \left( {x} \right) - \partial _{\nu} \omega _{\mu} \left( {x} \right),\quad \;\Re _{\mu \nu} \left( {x} \right) = \partial _{\mu} \rho _{\nu} \left( {x} \right) - \partial _{\nu} \rho _{\mu} \left( {x} \right). \end{equation} In Eq. (1), $g_\sigma$, $g_\omega$, $g_\delta$, and $g_\rho$ denote the coupling constants of a nucleon with the corresponding exchange meson, $b$ and $c$ the constants characterizing the contribution of the nonlinear part of the potential of the $\sigma$-field owing to the self-interaction of the $\sigma$-mesons \cite{36}. In the RMF approximation the meson fields are replaced by effective average fields. The energy density and pressure of the hadronic matter in this approximation take the form (for details, see Ref. 11) \begin{eqnarray}{}\label{eq3} \varepsilon_{HP} = \frac{{1}}{{\pi^{2}}}\int\limits_{0}^{k_{F} (n_B) \left( {1 - \alpha} \right)^{1/3}} {dk~k^{2}\sqrt {k^{2} + \left( {m_{N} - \sigma - \delta} \right)^{\,2}}}+\frac{b}{3}m_N \sigma^3+\frac{c}{4} \sigma^4 \nonumber\\ +\frac{{1}}{{\pi ^{2}}}\int\limits_{0}^{k_{F\,} \left( {n_B} \right)\left( {1 + \alpha} \right)^{1/3}} {dk~k^{2}\sqrt {k^{2} + \left( {m_{N} - \sigma + \delta} \right)^{\,2}}} + \frac{{1}}{{2}}\left( {\,\frac{{\sigma ^{\,2}}}{{a_{\sigma} } } + \,\frac{{\omega ^{2}}}{{a_{\omega} } } + \frac{{\delta ^{\,2}}}{{a_{\delta} } } + \frac{{\rho ^{\,2}}}{{a_{\rho} } }}\right)\\ +\frac{{1}}{{\pi^{2}}}\int\limits_{0}^{\sqrt{\mu_e^2-m_e^2}} {dk~k^{2}\sqrt {k^{2} + m_e^2}}\nonumber~~~ , \end{eqnarray} \begin{eqnarray}{}\label{eq4} P_{HP} = \frac{{1}}{{\pi^{2}}}\int\limits_{0}^{k_{F} (n_B) \left( {1 - \alpha} \right)^{1/3}} {dk~k^{2}\left(\sqrt {k_F(n_B)^{2}(1-\alpha )^{2/3} + \left({m_{N} - \sigma - \delta} \right)^{2}}-\sqrt {k^{2} + \left({m_{N} - \sigma - \delta} \right)^{\,2}}\right)}\nonumber\\ +\frac{{1}}{{\pi^{2}}}\int\limits_{0}^{k_{F} (n_B) \left( {1 + \alpha} \right)^{1/3}} 
{dk~k^{2}\left(\sqrt {k_F(n_B)^{2}(1+\alpha )^{2/3} + \left({m_{N} - \sigma + \delta} \right)^{2}}-\sqrt {k^{2} + \left({m_{N} - \sigma + \delta} \right)^{\,2}}\right)}\nonumber\\ -\frac{b}{3}m_N \sigma^3-\frac{c}{4} \sigma^4 + \frac{{1}}{{2}}\left( {-\frac{{\sigma ^{\,2}}}{{a_{\sigma} } } + \frac{{\omega ^{2}}}{{a_{\omega} } } - \frac{{\delta ^{\,2}}}{{a_{\delta} } } + \frac{{\rho ^{\,2}}}{{a_{\rho} } }}\right)\\ -\frac{{1}}{{\pi^{2}}}\int\limits_{0}^{\sqrt{\mu_e^2-m_e^2}} {dk~k^{2}\sqrt {k^{2} + m_e^2}}+\frac{1}{3\pi^2}\mu_e\left( \mu_e^2-m_e^2 \right)^{3/2}\nonumber~~~ , \end{eqnarray} where $n_B$ is the density of the baryonic charge of the hadronic matter, $\alpha=(n_n-n_p)/n_B$ is the asymmetry parameter, $\mu_e$ is the chemical potential of an electron, and $k_F(n_B)=\left(3\pi^2n_B/2\right)^{1/3}$. The redefined mean meson-fields $\sigma$, $\omega$, $\delta$, and $\rho$ in Eqs. (3) and (4) are expressed in terms of the average fields of the corresponding mesons in the following way: \begin{equation} \label{eq5} \sigma \equiv g_{\sigma} \langle {\sigma}(x) \rangle \,,\quad \omega \equiv g_{\omega} \langle {\omega}(x) \rangle \,,\quad \delta \equiv g_{\delta} \langle {\delta}(x) \rangle \,,\quad \rho \equiv g_{\rho} \langle {\rho}(x) \rangle \,. 
\end{equation} The model parameters $a_\sigma$, $a_\omega$, $a_\delta$, and $a_\rho$ are expressed in terms of the coupling constants and masses of the mesons as: \begin{equation} \label{eq6} a_{\sigma}\equiv (g_{\sigma}/m_{\sigma})^{2}, \quad a_{\omega}\equiv (g_{\omega}/m_{\omega})^{2}, \quad a_{\delta}\equiv (g_{\delta}/m_{\delta})^{2}, \quad a_{\rho}\equiv (g_{\rho}/m_{\rho})^{2}\, \end{equation} The average meson-fields satisfy the equations \begin{equation} \label{eq7} \sigma = a_{\sigma} \left( {n_{sp} \left( {n_B,\alpha} \right) + n_{sn} \left( {n_B,\alpha} \right)\, - b\,m_{N} \sigma ^{2} - c\,\sigma ^{3}} \right), \end{equation} \begin{equation} \label{eq8} \omega = a_{\omega} n_B, \end{equation} \begin{equation} \label{eq9} \delta = a_{\delta} \left( {n_{sp} \left({n_B,\alpha} \right) - n_{sn} \left( {n_B,\alpha} \right)\,} \right), \end{equation} \begin{equation} \label{eq10} \rho = - \frac {1}{2} a_{\rho}n_B\,\alpha \,. \end{equation} The scalar densities of the protons and neutrons $n_{sp} ( {n_B,\alpha})$ and $n_{sn}( {n_B,\alpha})$ are defined by \begin{equation} \label{eq11} n_{s\,p} \left( {n_B,\alpha} \right) = \frac{{1}}{{\pi ^{2}}}\int\limits_{0}^{k_{F\,} \left( {n_B} \right)\left( {1 - \alpha} \right)^{1/3}} dk\, k^{2} {\frac{{m_N-\sigma-\delta}}{{\sqrt {k^{2} + \left(m_N-\sigma-\delta\right)^2}} }} \;\; \quad , \end{equation} \begin{equation} \label{eq12} n_{s\,n} \left( {n_B,\alpha} \right) = \frac{{1}}{{\pi ^{2}}}\int\limits_{0}^{k_{F\,} \left( {n_B} \right)\left( {1 + \alpha} \right)^{1/3}} dk\, k^{2} {\frac{{m_N-\sigma+\delta}}{{\sqrt {k^{2} + \left(m_N - \sigma+\delta\right)^2}} }} \;\; \quad , \end{equation} The chemical potentials of a proton, neutron, and electron are expressed in terms of the baryonic charge density $n_B$, the asymmetry parameter $\alpha$, and the meson mean-fields \begin{eqnarray} \label{eq13} \mu_p(n_B,\alpha)=\sqrt {k_{F} \left( {n_B} \right)^{2}\left( {1 - \alpha} \right)^{2/3} + \left( {m_{N} - \sigma - \delta} 
\right)^{\,2}} + \omega + \frac{1}{2}\rho,\nonumber\\ \mu_{n}(n_B,\alpha)=\sqrt {k_{F} \left( {n_B} \right)^{2}\left( {1 + \alpha} \right)^{2/3} + \left( {m_{N} - \sigma + \delta} \right)^{\,2}}+ \omega - \frac{1}{2}\rho,\\ \mu_{e}(n_B,\alpha)=\sqrt {\left( \frac{3}{2}\pi^2n_B(1-\alpha)\right)^{2/3}+{m_e}^2}.\nonumber \end{eqnarray} For the matter in a neutron star consisting of protons, neutrons, and electrons, the condition for $\beta$-equilibrium after the escape of all neutrinos from the system has the form \begin{equation} \label{eq14} \mu_n(n_B,\alpha)=\mu_p(n_B,\alpha)+\mu_e(n_B,\alpha) . \end{equation} The system of Eqs. (7)-(10) and (14) for a specified value of the baryonic density $n_B$ can be solved to find the asymmetry parameter $\alpha$ and the mean meson-fields $\sigma$, $\omega$, $\delta$, and $\rho$. Knowledge of these quantities makes it possible to calculate the energy density and pressure for a specified value of the baryonic density $n_B$. Thus, carrying out the above numerical algorithm yields the equation of state of cold hadronic matter in the parametric form $\{ \varepsilon_{HP}(n_B); P_{HP}(n_B) \}$. In carrying out these numerical calculations we have used the values for the parameters of the RMF model given in Ref. 11: $a_\sigma=9.154$ fm$^2$, $a_\omega=4.828$ fm$^2$, $a_\delta=2.5$ fm$^2$, $a_\rho=13.621$ fm$^2$, $b=1.654\times10^{-2}$ fm$^{-1}$, and $c=1.319\times10^{-2}$ fm$^{-2}$. \section{Quark phase in the NJL model with the vector interaction taken into account} For describing the properties of the quark phase in a neutron star we used a local SU(3) NJL model in which, besides the scalar channel for the interaction between quarks, a vector interaction channel is also taken into account.
The Lagrangian density for the system consisting of the $u$, $d$, and $s$ quarks and electrons in terms of this model has the form: \begin{equation} \label{eq15} {\cal L}_{NJL}= {\cal L}_{F}+{{\cal L}_F}^{(e)}+{\cal L}_{S}+{\cal L}_{D}+{\cal L}_{V}. \end{equation} Here the first term ${\cal L}_{F}$ is the density of the Dirac Lagrangian for the fields of the free quarks: \begin{equation} \label{eq16} {\cal L}_{F}=\overline{\psi}\left(i\gamma ^{\mu } \partial _{\mu }-\hat{m}_0\right)\psi, \end{equation} where $\psi$ is the quark spinor field with components ${\psi_f}^{c}$ for three flavors $f = u,~ d,~ s$ and three colors $c = r,~ g,~ b$, while $\hat{m}_0$ is the diagonal matrix of the current masses of the quarks ${\hat{m}_0}=$ diag $(m_{0u},~m_{0d},~m_{0s})$. The second term ${{\cal L}_F}^{(e)}$ is the density of the Dirac Lagrangian for the system of free electrons: \begin{equation} \label{eq17} {{\cal L}_F}^{(e)}=\overline{\psi}_e\left(i\gamma ^{\mu } \partial _{\mu }-m_e\right)\psi_e, \end{equation} ${{\cal L}_S}$ corresponds to the four-quark chirally-symmetric interaction of a scalar and pseudoscalar type with a coupling constant $G_S$: \begin{equation} \label{eq18} {\cal L}_S=G_S \sum_{a=0}^{8}\left[(\overline{\psi}\lambda_a\psi)^2+(\overline{\psi}i\gamma_5\lambda_a\psi)^2\right], \end{equation} where $\lambda_a$ ($a = 1, 2, ..., 8$) are the Gell-Mann matrices, which are also the generators of the SU(3) group in flavor space, while $\lambda_0=\sqrt{2/3}\hat I$ ($\hat I$ is the $3\times3$ unit matrix). The term ${\cal L}_D$ in the Lagrangian (15) corresponds to the Kobayashi-Maskawa-'t Hooft six-quark interaction \cite{37} and has the form \begin{equation} \label{eq19} {\cal L}_D=-K \left\{ \det{}_f \left(\overline{\psi}(1+\gamma_5)\psi\right)+\det{}_f \left(\overline{\psi}(1-\gamma_5)\psi\right) \right\}.
\end{equation} Introducing this kind of interaction term in the Lagrangian has made it possible to reproduce the masses of the pseudoscalar isosinglet mesons $\eta'(958)$ and $\eta(547)$ in the NJL model. The last term ${\cal L}_V$ in the Lagrangian density (15) is the vector and axial-vector interaction among the quarks with a vector coupling constant $G_V$: \begin{equation} \label{eq20} {\cal L}_V=-G_V \sum_{a=0}^{8}\left[(\overline{\psi}\gamma_{\mu}\lambda_a\psi)^2+(\overline{\psi}i\gamma_{\mu}\gamma_5\lambda_a\psi)^2\right]. \end{equation} Using the mean-field approximation, one obtains from the Lagrangian (15) the energy density and pressure of the quark phase \cite{34} \begin{eqnarray} \label{eq21} \varepsilon_{QP}=\frac{3}{\pi^2}\sum_{f=u,d,s}\left[~{\int_0^\Lambda{dk~k^2\sqrt{k^2+{M_{f0}}^2}} -\int_{(\pi^2n_f)^{1/3}}^\Lambda{dk~k^2\sqrt{k^2+{M_{f}}^2}}}~\right] \nonumber \\ +2G_S({\sigma_u}^2+{\sigma_d}^2+{\sigma_s}^2 -{\sigma_{u0}}^2-{\sigma_{d0}}^2-{\sigma_{s0}}^2)-4K(\sigma_u~\sigma_d~\sigma_s-\sigma_{u0}~\sigma_{d0}~\sigma_{s0})\\ +2G_V({n_u}^2+{n_d}^2+{n_s}^2)+\frac{1}{\pi^2}\int_0^{(3\pi^2n_e)^{1/3}}{dk~k^2\sqrt{k^2+{m_e}^2}}~~,\nonumber \end{eqnarray} \begin{eqnarray} \label{eq22} P_{QP}=-\frac{3}{\pi^2}\sum_{f=u,d,s}\left[~{\int_0^\Lambda{dk~k^2\sqrt{k^2+{M_{f0}}^2}} -\int_{(\pi^2n_f)^{1/3}}^\Lambda{dk~k^2\sqrt{k^2+{M_{f}}^2}}}~\right] \nonumber \\ +\sum_{f=u,d,s}n_f\sqrt{(\pi^2n_f)^{2/3}+{M_f}^2}-2G_S({\sigma_u}^2+{\sigma_d}^2+{\sigma_s}^2 -{\sigma_{u0}}^2-{\sigma_{d0}}^2-{\sigma_{s0}}^2)\\ +4K(\sigma_u~\sigma_d~\sigma_s-\sigma_{u0}~\sigma_{d0}~\sigma_{s0})+2G_V({n_u}^2+{n_d}^2+{n_s}^2)\nonumber \\ -\frac{1}{\pi^2}\int_0^{(3\pi^2n_e)^{1/3}}{dk~k^2\sqrt{k^2+{m_e}^2}}+n_e\sqrt{(3\pi^2n_e)^{2/3}+{m_e}^2}~~.\nonumber \end{eqnarray} Here $M_f$ is the constituent or dynamic mass of a quark with flavor $f$ in the given state, and $M_{f0}$ is the constituent mass for $n_u=n_d=n_s=0$.
$\sigma_f$ and $\sigma_{f0}$ ($f = u,~ d, ~s$) denote the quark condensates $\langle \bar{\psi}_f \psi_f\rangle$ in the given state and in the vacuum ($n_u = n_d = n_s = 0$), respectively, which are defined as \begin{equation} \label{eq23} \sigma_f=-\frac{3M_f}{\pi^2}\int_{(\pi^2n_f)^{1/3}}^\Lambda dk~\frac{k^2}{\sqrt{k^2+{M_f}^2}}~,~~~\sigma_{f0}=-\frac{3M_{f0}}{\pi^2}\int_{0}^\Lambda dk~\frac{k^2}{\sqrt{k^2+{M_{f0}}^2}}~. \end{equation} Since the model examined here is nonrenormalizable, the diverging integrals require an ultraviolet cutoff; $\Lambda$ is the cutoff parameter in momentum space. The chemical potentials of the particles in the quark phase are determined by the expressions \begin{eqnarray} \label{eq24} \mu_f=\sqrt{(\pi^2n_f)^{2/3}+{M_f}^2}+4\,G_V\,n_f,~~(f=u,d,s), ~~~~~\mu_e=\sqrt{(3\pi^2n_e)^{2/3}+{m_e}^2}. \end{eqnarray} In the NJL model the constituent masses of the quarks satisfy the ``gap'' equations \begin{eqnarray} \label{eq25} M_u=m_{0u}-4\,G_S\,\sigma_u+2\,K\sigma_d~\sigma_s, \nonumber \\ M_d=m_{0d}-4\,G_S\,\sigma_d+2\,K\sigma_s~\sigma_u, \\ M_s=m_{0s}-4\,G_S\,\sigma_s+2\,K\sigma_u~\sigma_d. \nonumber \end{eqnarray} Assuming that all the neutrinos have been able to leave the system, the condition for $\beta$-equilibrium can be written in the form \begin{equation} \label{eq26} \mu_d=\mu_u+\mu_e\,;~~~\mu_s=\mu_d\,. \end{equation} For electrically neutral quark matter we shall have \begin{equation} \label{eq27} \frac{2}{3}n_u-\frac{1}{3}n_d-\frac{1}{3}n_s-n_e=0\,. \end{equation} The baryonic charge density is defined by \begin{equation} \label{eq28} n_B=\frac{1}{3}(n_u+n_d+n_s)\,. \end{equation} Numerically solving the system of Eqs. (25)-(28) for a specified value of the baryonic density $n_B$, one can find the constituent masses of the quarks $M_u$, $M_d$, and $M_s$, as well as the particle concentrations $n_u$, $n_d$, $n_s$, and $n_e$.
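As a numerical illustration, the gap equations (25) can be solved in the vacuum limit ($n_u=n_d=n_s=0$), where the momentum integral in Eq. (23) has a closed form. The sketch below is not our production code but a minimal Python example (assuming `numpy` and `scipy` are available) using the parameter set of Ref. 23 quoted in the text; it recovers the well-known vacuum constituent masses $M_{u0,d0}\approx 368$ MeV and $M_{s0}\approx 550$ MeV.

```python
import numpy as np
from scipy.optimize import fsolve

# NJL parameters quoted in the text (Ref. 23)
LAM = 602.3                          # ultraviolet cutoff Lambda, MeV
G_S = 1.835 / LAM**2                 # scalar coupling, MeV^-2
K   = 12.36 / LAM**5                 # 't Hooft coupling, MeV^-5
M0  = np.array([5.5, 5.5, 140.7])    # current masses m_0u, m_0d, m_0s, MeV

def integral(M):
    """Closed form of int_0^Lambda k^2 / sqrt(k^2 + M^2) dk."""
    E = np.sqrt(LAM**2 + M**2)
    return 0.5 * (LAM * E - M**2 * np.log((LAM + E) / M))

def condensate(M):
    """Vacuum quark condensate sigma_f0 of Eq. (23)."""
    return -3.0 * M / np.pi**2 * integral(M)

def gap(M):
    """Residuals of the vacuum gap equations (25)."""
    s = condensate(np.asarray(M))
    return [
        M[0] - M0[0] + 4*G_S*s[0] - 2*K*s[1]*s[2],
        M[1] - M0[1] + 4*G_S*s[1] - 2*K*s[2]*s[0],
        M[2] - M0[2] + 4*G_S*s[2] - 2*K*s[0]*s[1],
    ]

M_vac = fsolve(gap, x0=[400.0, 400.0, 600.0])
print(M_vac)   # constituent masses M_u0, M_d0, M_s0 in MeV
```

At finite density the same solver can be extended with Eqs. (24) and (26)-(28) as additional unknowns and residuals.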
Equations (21) and (22) will then represent the equation of state of the quark phase in the parametric form $\{ \varepsilon_{QP}(n_B); P_{QP}(n_B) \}$. In carrying out the numerical calculations we have used the values for the parameters of the NJL model given in Ref. 23: $m_{0u} = m_{0d} = 5.5$ MeV, $m_{0s} = 140.7$ MeV, $\Lambda=602.3$ MeV, $G_S=1.835/\Lambda^2$, and $K=12.36/\Lambda^5$. In Ref. 23 these parameters of the NJL model were obtained by fitting to the masses of the pseudoscalar $\pi$, $K$, and $\eta'$ mesons and to the pion decay constant $f_\pi$. The vector coupling constant $G_V$ is a free parameter in this paper. In order to study the effect of the vector interaction both on the parameters of the hadron-quark phase transition and on the properties and structure of a hybrid star, we have used several values of this constant: $G_V=0$, $G_V=0.2\,G_S$, $G_V=0.4\,G_S$, and $G_V=0.6\,G_S$. \section{The hadron-quark phase transition and the hybrid equation of state for cold $\beta$-equilibrium electrically neutral superdense matter} To obtain the hybrid equation of state with a hadron-quark phase transition it is necessary to know not only the equation of state of the hadronic matter and the quark matter individually, but also the type of phase transition between these phases. It has been shown \cite{38} that, because there are two conserved charges, the hadron-quark phase transition may take place with the formation of a mixed phase. In the mixed phase the condition of global electrical neutrality is fulfilled, while the quark matter and the hadronic matter are separately electrically charged. In such a transition the energy density and the baryonic charge density vary continuously, in contrast to an ordinary first-order phase transition, where these quantities change discontinuously while the pressure remains constant.
For an ordinary phase transition with constant pressure, the parameters of the transition are determined by constructing a common tangent to the curves $P(1/n_B)$ of the individual phases (a Maxwell construction). In a transition of this type each of the phases is separately electrically neutral. Which of the above scenarios for the hadron-quark phase transition takes place in reality depends on the electrostatic and surface energies required to form the different geometric structures of the mixed phase consisting of hadronic matter and quark matter. For sufficiently high values of the surface tension coefficient of the quark matter, formation of geometric structures in the mixed phase will be energetically unfavorable and the phase transition will take place in accordance with a Maxwell construction. Since the surface tension coefficient of quark matter is not reliably known, it is impossible to determine uniquely which of the two scenarios of the hadron-quark phase transition actually takes place. In this paper we assume that the deconfinement phase transition takes place in the Maxwell-construction scenario, i.e., at a constant pressure $P = P_0$ and with a discontinuous change in the baryonic density from $n_{H0}$ to $n_{Q0}$. The parameters of the phase transition can be found by solving the equations for the conditions of thermal, mechanical, and chemical equilibrium between the two phases. For cold matter, these equations have the form \begin{equation} \label{eq29} T_{HP}=T_{QP}=0,~~~P_{HP}=P_{QP}=P_0,~~~{\mu_B}^{HP}={\mu_B}^{QP}=\mu_{B0}\,, \end{equation} where ${\mu_B}^{HP}=\mu_n$ is the baryonic chemical potential in the hadronic phase, and ${\mu_B}^{QP}=\mu_u+\mu_d+\mu_s= \mu_u+2\mu_d$ in the quark phase. The top frame of Fig.
1 shows the dependence of the pressure $P$ on the baryonic chemical potential $\mu_B$ for hadronic matter obtained in the RMF model and for the deconfined quark matter obtained in the NJL model for four values of the vector coupling constant: $G_V=0$, $G_V=0.2\,G_S$, $G_V=0.4\,G_S$, and $G_V=0.6\,G_S$. The figure shows that larger values of the vector coupling constant for the interaction between the quarks correspond to a ``stiffer'' equation of state for the quark phase. We note that stiffer equations of state correspond to higher baryonic chemical potentials at equal pressure. The intersection point of the curves for the hadronic and quark phases in the $P-\mu_B$ plane corresponds to the state of equilibrium coexistence of the two phases. It can be seen that increasing the intensity of the vector-type interaction between the quarks leads to an increase in the phase-coexistence pressure $P_0$ and the baryonic chemical potential $\mu_{B0}$ in the state of equilibrium coexistence of the two phases. We note that this kind of behavior is also observed in the modified MIT bag model with a vector interaction \cite{39}. The bottom frame of Fig. 1 shows plots of the dependence of the baryonic charge density $n_B$ on the baryonic chemical potential $\mu_B$ for the cases shown in the top frame of the figure. It is clear that in the state where the phases coexist, which corresponds to the intersection point of the curves in the $P-\mu_B$ plane, the baryonic charge densities of the hadronic phase and the quark phase are different. At the same time, since the pressure $P$ and the baryonic chemical potential $\mu_B$ vary continuously, the baryonic charge density $n_B$ has a discontinuity at the phase-transition point. The energy density behaves in the same way.
\begin{table} \caption{First order phase transition parameters for different ratios of vector and scalar coupling constants.} \centering \begin{tabular}{cccccccc} \toprule \textbf{Quark} & \textbf{$G_V/G_S$} & \textbf{$P_0$} & \textbf{$\varepsilon_{H0}$} & \textbf{$n_{H0}$} & \textbf{$\varepsilon_{Q0}$} & \textbf{$n_{Q0}$} & \textbf{$\mu_{B0}$} \\ \textbf{model} & & MeV/fm$^3$ & MeV/fm$^3$ & fm$^{-3}$ & MeV/fm$^3$ & fm$^{-3}$ & MeV \\ \midrule a & 0 & 150.2 & 646.88 & 0.5841 & 958.56 & 0.8128 & 1364.4 \\ b & 0.2 & 258.5 & 879.39 & 0.7449 & 1347.69 & 1.1343 & 1527.6 \\ c & 0.4 & 409.9 & 1163.47 & 0.9205 & 1515.03 & 1.3189 & 1709.4 \\ d & 0.6 & 659.5 & 1582.41 & 1.1495 & 1680.02 & 1.5420 & 1950.8 \\ \bottomrule \end{tabular} \end{table} Table 1 shows the parameters of the first-order phase transition between hadronic and quark matter obtained by numerically solving the phase-coexistence equations (29) for different values of the vector coupling constant of the quarks. Here $P_0$ is the pressure of the coexisting phases, $\mu_{B0}$ is the baryonic chemical potential, and $n_{H0}$ and $n_{Q0}$ are the baryonic charge densities of the hadronic and quark phases, respectively, while $\varepsilon_{H0}$ and $\varepsilon_{Q0}$ are the energy densities of the hadronic phase and quark phase, respectively, in the state where the phases coexist. When the parameters of the phase transition and the equation of state of both the hadronic phase and the quark phase are known, it is possible to construct a hybrid equation of state of the superdense strongly interacting matter with a quark phase transition. Figure 2 shows plots of the hybrid equations of state according to a Maxwell construction for a compact star with different values of the vector coupling constant for the interaction among the quarks.
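Numerically, the Maxwell construction of Eq. (29) amounts to locating the intersection of the two $P(\mu_B)$ curves. The sketch below is only an illustration of that root-finding step, not the actual RMF/NJL equations of state: each phase is modeled by a toy relation $P = A\mu_B^4 - B$ (free-Fermi-gas-like scaling), with the constants deliberately calibrated to row (a) of Table 1, so that `brentq` recovers the coexistence point and the slopes $n_B = dP/d\mu_B$ reproduce the density jump.

```python
from scipy.optimize import brentq

# Toy equations of state P(mu_B) = A*mu_B**4 - B for each phase.
# The constants are tuned to Table 1, row a (illustration only,
# not derived from the RMF or NJL models).
MU0, P0 = 1364.4, 150.2            # MeV, MeV/fm^3
A_H = 0.5841 / (4 * MU0**3)        # slope fixed by n_H0 = dP/dmu at mu_B0
A_Q = 0.8128 / (4 * MU0**3)        # slope fixed by n_Q0
B_H = A_H * MU0**4 - P0
B_Q = A_Q * MU0**4 - P0

P_H = lambda mu: A_H * mu**4 - B_H         # hadronic phase
P_Q = lambda mu: A_Q * mu**4 - B_Q         # quark phase
n   = lambda A, mu: 4 * A * mu**3          # baryon density n_B = dP/dmu_B

# Coexistence: P_H(mu*) = P_Q(mu*), found by bracketing the sign change
mu_star = brentq(lambda mu: P_Q(mu) - P_H(mu), 1000.0, 1800.0)
print(mu_star, P_H(mu_star))               # coexistence mu_B0 and P_0
print(n(A_H, mu_star), n(A_Q, mu_star))    # density jump n_H0 -> n_Q0
```

With the tabulated RMF and NJL equations of state in place of the toy functions, the same bracketing search yields the entries of Table 1.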
If the deconfinement phase transition follows a Maxwell construction, then a compact star whose central pressure exceeds the phase-coexistence value $P_0$ will have a central core of quark matter surrounded by ordinary matter with a hadronic structure. It is known that the stability of a neutron star is determined by the sign of the derivative $dM/d\varepsilon_c$, where $\varepsilon_c$ is the central energy density. For $dM/d\varepsilon_c>0$ the star is stable; otherwise it is unstable. An ordinary phase transition at constant pressure leads to a kink in the dependence of the neutron-star mass on the central energy density. Depending on the value of the jump parameter $\lambda=\varepsilon_{Q0}/(\varepsilon_{H0}+P_0)$, this kink may lead to a change in the sign of the derivative $dM/d\varepsilon_c$. For $\lambda\leq3/2$ the derivative $dM/d\varepsilon_c$ does not change sign, while for $\lambda>3/2$ it does. This means that an infinitesimally small core of the newly formed phase will be stable for $\lambda\leq3/2$ and unstable for $\lambda>3/2$ \cite{40}. For the hybrid equations of state obtained in this paper, the jump parameter $\lambda$ has the following values: a) $\lambda=1.20$, b) $\lambda=1.18$, c) $\lambda=0.98$, d) $\lambda=0.75$. Thus, the hybrid equations of state with a quark phase transition that we have obtained ensure the stability of a compact-star configuration with an infinitesimally small core of quark matter.
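The jump parameter is a one-line computation from Table 1. The check below recomputes $\lambda=\varepsilon_{Q0}/(\varepsilon_{H0}+P_0)$ for the four parameter sets and confirms that all of them lie below the critical value $3/2$; values recomputed directly from the rounded table entries may differ slightly from the figures quoted in the text.

```python
# Rows a-d of Table 1: (P0, eps_H0, eps_Q0) in MeV/fm^3
rows = {
    "a": (150.2,  646.88,  958.56),
    "b": (258.5,  879.39, 1347.69),
    "c": (409.9, 1163.47, 1515.03),
    "d": (659.5, 1582.41, 1680.02),
}

# Jump parameter lambda = eps_Q0 / (eps_H0 + P0); stability of an
# infinitesimally small quark core requires lambda <= 3/2.
lam = {k: eQ / (eH + P0) for k, (P0, eH, eQ) in rows.items()}
for k, v in lam.items():
    print(k, round(v, 2))
assert all(v < 1.5 for v in lam.values())
```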
\section{The structure of hybrid stars} The hydrostatic equilibrium of spherically-symmetric and isotropic compact stars in the general theory of relativity is described by the Tolman-Oppenheimer-Volkoff (TOV) equations \cite{41,42}: \begin{equation} \label{eq30} \frac{dP}{dr}=-G\frac{(P+\varepsilon)\left( m+4\,\pi r^3P\right)}{r(r-2\,Gm)},~~~~\frac{dm}{dr}=4\,\pi r^2\varepsilon\,, \end{equation} where $G$ is the gravitational constant, $r$ is the distance from the star's center, $m$ is the accumulated mass, i.e., the mass inside a sphere of radius $r$, $P$ is the pressure, and $\varepsilon$ is the energy density at distance $r$ from the star's center. This system of equations becomes closed if the function $\varepsilon(P)$, i.e., the equation of state of the stellar matter, is known. Numerical integration for a given value of the central pressure $P(0) = P_c$ begins at the center, where the boundary condition $m(0) = 0$ is satisfied. The star's radius $R$ is determined from the condition $P(R) = 0$, and the gravitational mass of the star is $M = m(r = R)$. Using the hybrid equations of state obtained in this paper, we have numerically integrated the system of TOV equations and determined such characteristics of compact stars as the mass $M$, the radius $R$, and the pressure and energy-density profiles $P(r)$ and $\varepsilon(r)$, respectively, for different values of the central pressure $P_c > P_0$. For configurations with a central pressure $P_c > P_0$, the radius and mass of the quark core, $R_{core}$ and $M_{core}$, have also been determined. In the range of densities corresponding to the inner and outer crust of the star, the well-known Baym-Bethe-Pethick equation of state \cite{43} was used. Figure 3 shows the dependence of the gravitational mass of a compact star on the central pressure for different values of the vector coupling constant: a) $G_V/G_S=0$, b) $G_V/G_S=0.2$, c) $G_V/G_S=0.4$, d) $G_V/G_S=0.6$.
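The integration scheme just described can be sketched in a few lines. The example below is an illustration only, not our hybrid equation of state: it integrates the TOV system (30) in geometrized units $G=c=1$ (lengths in km) for an assumed $\Gamma=2$ polytrope, $\varepsilon(P)=\sqrt{P/K_p}$ with an illustrative constant $K_p=100\ \mathrm{km}^2$ and central density $\varepsilon_c=5\times10^{-4}\ \mathrm{km}^{-2}$, stopping at the surface where the pressure effectively vanishes.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy polytropic equation of state P = Kp * eps^2 (geometrized units,
# lengths in km); Kp and eps_c are illustrative, not the hybrid EoS.
Kp    = 100.0                 # km^2
eps_c = 5.0e-4                # central energy density, km^-2
P_c   = Kp * eps_c**2         # central pressure

def eos(P):
    """Inverted equation of state eps(P), clamped at the surface."""
    return np.sqrt(max(P, 0.0) / Kp)

def tov(r, y):
    """Right-hand side of the TOV equations (30) with G = c = 1."""
    P, m = y
    eps = eos(P)
    dPdr = -(eps + P) * (m + 4*np.pi*r**3 * P) / (r * (r - 2*m))
    dmdr = 4*np.pi*r**2 * eps
    return [dPdr, dmdr]

surface = lambda r, y: y[0] - 1e-6 * P_c   # stop when P ~ 0 at the surface
surface.terminal = True

# Start slightly off-center to avoid the coordinate singularity at r = 0
r0 = 1e-6
sol = solve_ivp(tov, (r0, 100.0), [P_c, (4/3)*np.pi*r0**3*eps_c],
                events=surface, rtol=1e-8, atol=1e-13, max_step=0.1)
R, M = sol.t[-1], sol.y[1, -1]
print(R, M)    # radius (km) and mass (km; 1 M_sun ~ 1.477 km)
```

Replacing `eos` by the tabulated hybrid equation of state (with its pressure-constant jump at $P_0$) and scanning over $P_c$ produces the $M(P_c)$ curves of Fig. 3.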
The smooth curve corresponds to a compact star consisting of hadronic matter. Hybrid stars correspond to the branches denoted by a, b, c, and d, in accordance with the values of the vector coupling constant introduced above. In this figure the horizontal bands represent the results of measurements of the mass of the pulsar PSR J0348+0432, $M=(2.01\pm 0.04)\, M_\odot$ \cite{2}, and refined data on the mass of PSR J1614-2230, $M=(1.908 \pm 0.016)\, M_\odot$ \cite{1,44,45}. Since the jump parameter $\lambda$ for the hybrid equations of state we have examined satisfies the condition $\lambda<3/2$, for values of the central pressure slightly greater than the phase-coexistence pressure $P_0$ the derivative $dM/d\varepsilon_c$ satisfies the stability condition $dM/d\varepsilon_c>0$. As can be seen from Fig. 3, a fairly narrow, perhaps even unresolvable, range of values of the central pressure corresponds to stable hybrid stars. The maximum mass of hybrid stars is very close to the maximum mass of a hadronic star. A comparison of the results of our calculations with observations of massive pulsars does not rule out the possibility of deconfinement of quarks in the depths of a compact star. Hybrid stars with a quark core of significant size satisfy the instability condition $dM/d\varepsilon_c<0$. The condition for dynamic stability of a cold spherically-symmetric compact star in chemical equilibrium is the non-negativity of the squared frequency of the fundamental mode of small oscillations $( {\omega_0}^2\ge 0)$. Using a variational principle, Chandrasekhar obtained an equation which makes it possible to determine numerically the eigenvalues of the different oscillatory modes \cite{46} and thus clarify the stability question. In deriving that equation it was assumed that the star consists of a single phase and that the energy density inside the star varies continuously.
It is precisely for a continuous energy density inside the star that the square of the fundamental mode ${\omega_0}^2$ was shown to change sign at the extremum point of the function $M(\varepsilon_c)$. The Chandrasekhar equation is not applicable to the case of a compact star with a quark core, since the oscillations of a hybrid star with a sharp boundary between two phases will be accompanied by a process of mutual transformation of one phase into the other. The stability will then depend on the relationship between the characteristic times of the oscillations and of the transformation of one phase into the other \cite{35,47,48,49}. It was shown \cite{35,47,48,49} that, for a slow mutual conversion of the phases, there are stars on the branch of the $M(\varepsilon_c)$ curve for which the condition $dM/d\varepsilon_c<0$ is satisfied but which are nevertheless stable with respect to small radial oscillations. Hybrid stars of this type were referred to as slow-stable stars. Figure 4 shows the mass-radius relationship for compact stars with the different values of the vector-coupling constant $G_V$ listed in Table 1. Compact stars consisting of hadronic matter correspond to the smooth curve, and the hybrid stars to branches a, b, c, and d, in accordance with the above values of the vector-coupling constant. The horizontal bands, as in Fig. 3, show the measured mass of the pulsar PSR J0348+0432 \cite{2} and a refined value of the mass of the pulsar PSR J1614-2230 \cite{44,45}, while the shaded rectangles indicate the regions corresponding to the observed masses and radii of the pulsars PSR J0030+0451 \cite{3} and PSR J0740+6620 \cite{5}. As Fig. 4 shows, increasing the intensity of the vector interaction increases the maximum mass of the compact star and simultaneously reduces the radius of the maximum-mass configuration.
The theoretical calculations in the version of the RMF model that we have used yield a larger neutron-star radius than that inferred from the observational data for the pulsar PSR J0030+0451 in Ref. 3. Our results are close to the upper ends of the observed mass and radius ranges for this pulsar. The results of our calculation are in agreement with the measured mass and radius of the pulsar PSR J0740+6620 \cite{5}. \section{Conclusion} In this paper a local SU(3) NJL model has been used to study the influence of the vector interaction channel on the equation of state of quark matter, on the parameters of the phase transition between hadronic and quark matter, and on the structural characteristics of hybrid stars. The equation of state of the quark phase was found using the NJL-model parameter values obtained in Ref. 23 by fitting to known experimental data on the characteristics of mesons. The only free parameter of the model not determined by this procedure is the vector coupling constant $G_V$. For the numerical calculations we have used four values of this constant: $G_V/G_S=0;~ 0.2;~ 0.4;~ 0.6$. Our calculations showed that for $G_V/G_S> 0.6$ the phase-transition pressure $P_0$ exceeds the central pressure of the maximum-mass compact star with a hadronic structure. To describe the hadronic phase we have used an extended RMF model in which, besides the fields of the $\sigma$-, $\omega$-, and $\rho$-mesons, the field of the scalar-isovector $\delta$-meson was also taken into account. For the numerical calculations we used the values of the parameters of the RMF model obtained in Ref. 11. Assuming that the transition from hadronic matter to quark matter is an ordinary first-order phase transition described by a Maxwell construction, we determined the parameters of the phase transition for different values of the constant $G_V$.
It has been shown that larger values of the vector coupling correspond to ``stiffer'' equations of state for the quark phase; the larger $G_V$ is, the higher the phase transition pressure $P_0$. Using the obtained hybrid equations of state, the TOV equations have been integrated numerically, and for different values of the central pressure $P_c$ the mass and radius have been determined both for a compact star with a hadronic structure and for a hybrid star with a quark core. It has been shown that the larger the vector coupling constant, the higher the maximum mass of the compact star and the smaller the radius of the configuration with the maximum mass. A comparison of the results of the theoretical calculations with the measured mass of the currently known most massive pulsar, PSR J0740+6620, shows that the resulting hybrid equations of state do not conflict with the lower limit $M_{max} \ge 2.01\, M_\odot$. The results are also consistent with the measured radius of this pulsar, $R\in [12.2, 16.3]$ km. It has been shown that the density jump parameter $\lambda=\varepsilon_{Q0}/(\varepsilon_{H0}+P_0)$ for all the values of $G_V$ that we have examined satisfies the condition $\lambda<3/2$. This means that at the point of the phase transition in the $M-\varepsilon_c$ plane there is a break such that the sign of the derivative $dM/d\varepsilon_c$ does not change, i.e., $dM/d\varepsilon_c>0$ both for configurations of the hadronic branch and for configurations of the hybrid branch in the immediate vicinity of the critical configuration. A hybrid star with an infinitesimally small central core of quark matter is stable. Hybrid stars with a substantial quark core satisfy the condition $dM/d\varepsilon_c<0$. If the mutual phase conversions take place fast enough, then these configurations are clearly unstable. 
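The mass-radius procedure just described, integrating the TOV equations outward from a chosen central pressure until the pressure vanishes, can be sketched as follows. The $\Gamma=2$ polytrope with the standard test values $K=100$, $\Gamma=2$ (geometric units $G=c=M_\odot=1$) is a stand-in for the hybrid equation of state actually used in the paper:

```python
# Minimal TOV integration for a single Gamma = 2 polytrope, showing how
# one (M, R) point is obtained from a chosen central pressure P_c.
# K = 100, Gamma = 2 are standard test-bed values, not the hybrid EOS.
import numpy as np
from scipy.integrate import solve_ivp

K, Gamma = 100.0, 2.0

def eps(P):
    """Energy density e = rho + P/(Gamma-1) for the polytrope P = K rho^Gamma."""
    P = max(P, 0.0)                     # guard against tiny negative overshoot
    rho = (P / K) ** (1.0 / Gamma)
    return rho + P / (Gamma - 1.0)

def tov_rhs(r, y):
    P, m = y
    e = eps(P)
    dPdr = -(e + P) * (m + 4.0 * np.pi * r**3 * P) / (r * (r - 2.0 * m))
    dmdr = 4.0 * np.pi * r**2 * e
    return [dPdr, dmdr]

def mass_radius(Pc):
    surface = lambda r, y: y[0] - 1e-10 * Pc    # stellar surface: P -> 0
    surface.terminal, surface.direction = True, -1
    sol = solve_ivp(tov_rhs, [1e-6, 50.0], [Pc, 0.0],
                    events=surface, rtol=1e-8, atol=1e-14)
    R = sol.t_events[0][0]                      # areal radius
    M = sol.y_events[0][0][1]                   # gravitational mass
    return M, R

M, R = mass_radius(1.6e-4)   # one point on the M(P_c) curve
print(M, R)
```

Scanning `mass_radius` over a range of central pressures traces out the $M(R)$ and $M(\varepsilon_c)$ curves discussed above; for a hybrid star one would additionally switch between the two equations of state at $P = P_0$.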
In the case of a slow conversion, obtained by neglecting the mutual phase transformations at the interface between the phases during the time of small oscillations of a hybrid star, the instability condition $dM/d\varepsilon_c<0$ is inapplicable (see, e.g., \cite{35}). In this case we are dealing with a slow-stable configuration. \section*{Acknowledgments} This work was carried out in the scientific-research laboratory for the physics of superdense stars in the department of applied electrodynamics and modeling at Yerevan State University, financed by the Science Committee of the Ministry of Education, Science, Culture, and Sport of the Republic of Armenia.
Title: Binary neutron star mergers in Einstein-scalar-Gauss-Bonnet gravity
Abstract: Binary neutron star mergers, which can lead to less massive black holes relative to other known astrophysical channels, have the potential to probe modifications to general relativity that arise at smaller curvature scales compared to more massive compact object binaries. As a representative example of this, here we study binary neutron star mergers in shift-symmetric Einstein-scalar-Gauss-Bonnet gravity using evolutions of the full, non-perturbative evolution equations. We find that the impact on the inspiral is small, even at large values of the modified gravity coupling (as expected, as neutron stars do not have scalar charge in this theory). However, post-merger there can be strong scalar effects, including radiation. When a black hole forms, it develops scalar charge, impacting the ringdown gravitational wave signal. In cases where a longer-lived remnant star persists post-merger, we find that the oscillations of the star source levels of scalar radiation similar to the black hole formation cases. In remnant stars, we further find that at coupling values comparable to the maximum value for which black hole solutions of the same mass exist, there is significant nonlinear enhancement in the scalar field, which, if sufficiently large, leads to a breakdown in the evolution, seemingly due to loss of hyperbolicity of the underlying equations.
https://export.arxiv.org/pdf/2208.09488
\title{ Binary neutron star mergers in Einstein-scalar-Gauss-Bonnet gravity } \author{William E. East} \email{weast@perimeterinstitute.ca} \affiliation{% Perimeter Institute for Theoretical Physics, Waterloo, Ontario N2L 2Y5, Canada }% \author{Frans Pretorius} \email{fpretori@princeton.edu} \affiliation{Department of Physics, Princeton University, Princeton, New Jersey 08544, USA} \section{Introduction}% Recent breakthroughs in gravitational wave astronomy have allowed for unprecedented tests of general relativity (GR) in the strong field regime~\cite{LIGOScientific:2020tif,LIGOScientific:2021sio}. However, a crucial step in being able to perform the most sensitive searches for modifications to GR, or, in the absence of deviations, to place the most stringent constraints, is obtaining predictions in alternative theories, in particular in the strong field regime. A common feature of many proposed modifications to GR is that they show the strongest effects in the presence of the shortest curvature lengths. This is a natural consequence of adding additional curvature terms to the Einstein-Hilbert action multiplied by constants whose dimensions are positive powers of length, as in dynamical Chern-Simons gravity~\cite{Alexander:2009tp}, the most generic of the Horndeski class of theories~\cite{1974IJTP...10..363H}, or theories that add terms constructed out of higher powers of the Riemann tensor without introducing additional light degrees of freedom~\cite{Endlich:2017tqa,deRham:2020ejn}. An ideal way to look for evidence of, or to constrain, such theories is by observing the smallest mass compact objects. The vast majority of observed galactic black holes have masses $>5\ M_{\odot}$~\cite{Miller:2014aaa}, with the candidate lowest mass black hole having a mass $3.3_{-0.7}^{+2.8}\ M_{\odot}$~\cite{Thompson:2018ycv}, leading to a hypothesized so-called lower-mass gap between the highest mass neutron star and the lowest mass black hole. 
The gravitational wave event GW190814 from a binary with a $2.6\ M_{\odot}$ compact object~\cite{LIGOScientific:2020zkf}, which could potentially be a neutron star or black hole, has renewed debate about the lower-mass gap, though population models currently have difficulty explaining such a low-mass black hole~\cite{Zevin:2020gma}. Although there are a number of speculative or exotic formation channels that could lead to low mass black holes, one likely way to form a black hole of mass $\sim 3\ M_{\odot}$ is from the merger of a binary neutron star. In this work, we study how binary neutron star mergers can be used to probe a representative modified gravity theory, Einstein-scalar-Gauss-Bonnet (ESGB) theory, which introduces modifications to GR at small curvature length scales (corresponding to sufficiently high curvature). There have been numerous studies of neutron star mergers in theories that do not modify the principal part of the Einstein equations, in particular scalar-tensor theories. There, it is the introduction of a new scalar degree of freedom that mediates a prescribed conformal rescaling of the metric, rather than a modification of the Einstein equations themselves, that can lead to novel physics. For example, neutron stars typically develop scalar charge, which can lead to dipole radiation in a binary system containing a charged neutron star. The lack of any observed signatures of this in binary pulsar systems gives tight constraints on such scalar-tensor theories~\cite{Damour:1996ke,Will:2014kxa}. However, there are some notable examples where such pulsar systems may not be strongly affected by scalar modifications at their current separations, yet where there could be significant modifications to the late inspiral or merger phase. 
Examples include scalar-tensor theories with screening mechanisms~\cite{terHaar:2020xxb,Bezares:2021yek,Bezares:2021dma}, and the class of scalar-tensor theories developed by Damour and Esposito-Far\`{e}se~\cite{Damour:1992we,Damour:1993hw}, where in some cases only neutron stars above a certain mass develop scalar charge (so-called ``spontaneous scalarization''), or only develop this charge in the late stages of inspiral (``dynamical scalarization'')~\cite{Barausse:2012da,Palenzuela:2013hsa}. Though the observation of a $\sim2\ M_{\odot}$ neutron star in orbit with a white dwarf severely constrains even this class of scalar-tensor theory~\cite{Antoniadis:2013pzd}, there is still some theoretical maneuvering that can evade these constraints, for example by giving the scalar field a small mass~\cite{Ramazanoglu:2016kul}. In contrast, full compact object mergers in modified theories that do change the principal part of the Einstein equations have been less well studied, in part because of difficulties with finding well-posed formulations of the evolution equations of such theories. In this work, we take advantage of recent advances in solving the full equations of shift-symmetric ESGB gravity to study binary neutron star mergers, as well as the collapse of isolated, hypermassive neutron stars to black holes. In particular, we use the modified harmonic formulation~\cite{Kovacs:2020pns,Kovacs:2020ywu} and the methods developed in Ref.~\cite{East:2020hgw} for evolving binary black holes in Horndeski theories. For a recent, detailed review, see Ref.~\cite{Ripley:2022cdh}. To our knowledge, the only prior numerical study of the dynamics of neutron stars within ESGB gravity is the work of Ref.~\cite{R:2022cwe}, where the collapse of a neutron star to a black hole in the decoupling limit of ESGB gravity was considered (see related earlier work in Ref.~\cite{Benkel:2016rlz} where Oppenheimer-Snyder collapse of a pressureless fluid was examined). 
In the decoupling limit, the backreaction of the ESGB scalar is ignored and the ESGB scalar is evolved on the pure-GR background of a collapsing neutron star spacetime. Though this approach, as detailed in Ref.~\cite{R:2022cwe}, gives important information regarding the growth of scalar hair about the nascent black hole, it is unable to address at least two important questions: what the potential gravitational wave signatures of ESGB gravity are (the scalar radiation is by itself not measurable with present detectors), and what the realm of validity of the small coupling approximation is (including what happens when this approximation is no longer valid). Regarding potential observational signatures, an interesting aspect of ESGB gravity is that neutron stars carry no scalar charge, yet black holes do. (Though note, as discussed below, stationary black hole solutions only exist above a minimum mass set by the coupling scale of the theory.) Similar to the class of Damour-Esposito-Far\`{e}se scalar-tensor theories mentioned above, this then implies ESGB gravity can easily evade binary pulsar system constraints, and instead one would need to look to compact object merger dynamics to uncover signs of it (or hope for the discovery of a galactic black hole-pulsar binary). There has been much work constraining ESGB gravity with binaries containing one or two black holes (see, e.g., Refs.~\cite{Nair:2019iur,Perkins:2021mhb,Lyu:2022gdr} and references therein), with the upshot, as discussed further in Sec.~\ref{sec_id}, that they constrain the relevant coupling length scale ($\sqrt{\alpha_{\rm GB}}$, defined below) to be on the order of a kilometer or less. The effect of ESGB modifications on the neutron star maximum mass and tidal deformability has also been considered~\cite{Saffer:2021gak}, though this is more difficult to separate from the unknown neutron star equation of state. 
Since the smallest compact objects offer the best probes of ESGB gravity, barring the confirmed existence of subsolar mass black holes of primordial or other exotic origin, it seems likely that observing gravitational waves from compact object mergers will continue to be able to place the tightest constraints on ESGB gravity. As the majority of theoretical work has focused on black hole binaries in ESGB gravity~\cite{Yagi:2011xp,Witek:2018dmd,Silva:2020omi,Okounkova:2020rqw, East:2020hgw,East:2021bqk,Shiralilou:2020gah,Shiralilou:2021mfl}, there still is an open question regarding whether binary neutron star mergers could give comparable or better constraints than the typical merger involving a black hole. This could either be due to the formation of a small, scalar-charged black hole post merger, or in the late stages of inspiral, merger, and evolution of a hypermassive neutron star remnant, where nonlinear or strong coupling effects could be significant (and note that, unlike with spontaneous scalarization, a neutron star in ESGB gravity will have a scalar cloud around it sourced by the Gauss-Bonnet (GB) curvature---it is just that this cloud falls off much more rapidly than the $1/r$ decay that would be required for the neutron star to register a scalar charge). The main goal of this paper is to begin to address the questions just posed. Qualitatively, the answers suggested by our results are mixed in this regard. On the optimistic side, the apparent breakdown of hyperbolicity in the evolution for large values of the ESGB coupling suggests that a typical binary neutron star merger, even without assuming black hole formation, pushes shift-symmetric ESGB past the breaking point of the theory unless $\sqrt{\alpha_{\rm GB}} \lesssim 1$ km, comparable to the best existing constraints from mergers containing a black hole. The outlook is less optimistic if one hopes to do better than this by measuring details of the gravitational wave emission. 
We find that the effects of ESGB on the gravitational wave emission show up primarily in the postmerger signal: for a hypermassive remnant, the oscillating high density core can excite the scalar field, and for prompt collapse to a black hole the ringdown signal is affected by the development of scalar charge. However, even for strong couplings close to the maximum allowed, these appear sufficiently minor that it may be difficult to disentangle the effects of departures from GR from parameter uncertainties or limited knowledge of the neutron star equation of state (though a more quantitative analysis, beyond the scope of this paper, would be needed for more conclusive answers). Adding to the challenge, these parts of the gravitational wave signal are at high frequencies that ground-based detectors are less sensitive to. In earlier, full nonlinear studies of collapse and black holes in ESGB gravity~\cite{Ripley:2019hxt,Ripley:2019irj,Ripley:2020vpk,East:2020hgw,East:2021bqk,Franchini:2022ukz}, it was found that when the coupling is made too large, the hyperbolicity of the evolution equations breaks down prior to any singular behavior developing in the metric or scalar field. Here we find evidence this can happen in neutron star mergers not only when a black hole forms, but also during the postmerger oscillations of a remnant star, with apparent breakdown in the latter occurring at comparable but somewhat larger values of the coupling constant compared to when it does during black hole formation. (Though unlike the spherically symmetric studies in~\cite{Ripley:2019hxt,Ripley:2019irj}, here we do not explicitly compute the characteristics of the full system, and only surmise that this is the cause of the breakdown of our numerical evolutions.) 
In other words, even though exceeding the weak coupling limit in ESGB gravity has dire consequences for well-posedness of the theory, approaching this limit in a dynamical setting does not appear to be preceded by novel or dramatically different spacetime/scalar field dynamics compared to far-from maximum coupling. An outline of the remainder of this paper is as follows. We review shift-symmetric ESGB, the gravity theory we consider here, in Sec.~\ref{sec:esgb}; we describe our methods for numerically evolving this theory coupled to hydrodynamics and analyzing the results in Sec.~\ref{sec:methods}; we present results from our study of neutron star mergers and collapse of unstable hypermassive neutron stars in Sec.~\ref{sec:results}; and we discuss these results and conclude in Sec.~\ref{sec:discuss}. Unless otherwise noted, we use geometric units with $G=c=1$. \section{Shift-Symmetric Einstein Scalar Gauss Bonnet} \label{sec:esgb} The action for shift-symmetric ESGB gravity is given by \begin{align} \label{eq:action} S = \frac{1}{8\pi} \int d^4x\sqrt{-g} \left( \frac{1}{2}R -\frac{1}{2}\left(\nabla\phi\right)^2 + \lambda\phi\mathcal{G} \right) + S_{\rm matter} , \end{align} where $g$ is the determinant of the spacetime metric, $\mathcal{G}$ is the GB scalar, given in terms of the Riemann tensor and its contractions as \begin{align} \mathcal{G} := R^2-4R^{ab}R_{ab}+R^{abcd}R_{abcd}, \end{align} $\lambda$ is a coupling constant with dimensions of length squared, $\phi$ is the scalar field, and $S_{\rm matter}$ is the action for any other matter (in our case, the neutron star fluid). 
The equations of motion are given by \begin{align} \label{eq:eom_scalar} \Box\phi + \lambda \mathcal{G} &= 0 ,\\ \label{eq:eom_metric} R_{ab} - \frac{1}{2}g_{ab}R + 2\lambda\delta^{efcd}_{ijg(a}g_{b)d}R^{ij}{}_{ef} \nabla^g\nabla_c \phi &= 8\pi T_{ab} , \end{align} where $\delta^{abcd}_{efgh}$ is the generalized Kronecker delta and $T_{ab}=T^{\rm matter}_{ab}+T_{ab}^{\rm SF}$ with \begin{align} T_{ab}^{\rm SF} := \frac{1}{8\pi}\left( \nabla_a\phi\nabla_b\phi-\frac{1}{2}g_{ab}\nabla_c \phi \nabla^c \phi \right) \ . \end{align} The other matter equations of motion are not affected by the GB term, and are the same as in GR. In this theory, stationary black holes have nonzero scalar charge $Q_{\rm SF}$. That is, at large radius, the scalar field falls off like $\phi=Q_{\rm SF}/r+\mathcal{O}(1/r^2)$. Furthermore, studies have found that for a given black hole mass and spin there is a maximum value of $\lambda$, above which stationary solutions no longer exist. For a nonspinning black hole, $\lambda \lessapprox 0.23 M^2$~\cite{Sotiriou:2014pfa}, where $M$ is the total mass, as measured at infinity, while for dimensionless black hole spins $a=0.7$ and 0.8, $\lambda/M^2 \lessapprox 0.19$ and $0.16$, respectively~\cite{Delgado:2020rev}. Neutron stars, in contrast to black holes, do not have scalar charge in ESGB gravity. Recalling the argument given in Ref.~\cite{Yagi:2011xp}, if one assumes a stationary, asymptotically flat star solution and integrates Eq.~\eqref{eq:eom_scalar} over the four-dimensional spacetime manifold, this gives \begin{equation} \int \Box\phi \sqrt{-g} d^4x = -\lambda \int \mathcal{G} \sqrt{-g} d^4x = 0 , \end{equation} with the last equality following from the fact that the integral of the Gauss-Bonnet curvature is a topological invariant. 
Using stationarity to drop the time integration, and applying Stokes' theorem to the remaining spatial volume integral, we obtain a surface integral at spatial infinity contracted with the unit normal to the surface \begin{equation} \int \hat{n}^i (\partial_i \phi) \sqrt{-g} dS \propto Q_{\rm SF}=0 \ . \end{equation} Note that this argument does not apply to the black hole case due to the breakdown of the regularity of the solution in the black hole interior. \section{Methodology\label{sec:methodology}}% \label{sec:methods} \subsection{Evolution} We evolve the full, nonperturbative, shift-symmetric ESGB equations in the modified generalized harmonic formulation~\cite{Kovacs:2020pns,Kovacs:2020ywu} using the implementation and methods of Ref.~\cite{East:2020hgw}. In this formulation, there are two additional auxiliary metrics $\hat{g}_{ab}$ and $\tilde{g}_{ab}$, which, respectively, determine the light cone for the gauge and constraint propagating modes. As in Ref.~\cite{East:2020hgw}, we choose $\tilde{g}_{ab}=g_{ab}-(1/5)n_an_b$ and $\hat{g}_{ab}=g_{ab}-(2/5)n_an_b$, where $g_{ab}$ is the physical metric, and $n_a$ is the future-directed unit normal to slices of constant time. The gauge we use is the modified (by the auxiliary metric) version of the damped harmonic gauge~\cite{Lindblom:2009tu,Choptuik:2009ww}. We model the neutron stars using ideal hydrodynamics. The Euler equations are unmodified from the GR case (only the metric going into the equations will be different than in GR), and we use the hydrodynamics code of Ref.~\cite{East:2011aa} to evolve the fluid; in particular, we use the same methods and parameters for evolving binary neutron stars as in Ref.~\cite{East:2019lbk}. In the Appendix, we provide details on the numerical resolution and convergence. 
\subsection{Initial data and cases considered}\label{sec_id} We use quasicircular binary neutron star initial data constructed with the Compact Object CALculator~({\tt COCAL})~\cite{Tsokaros:2015fea,Tsokaros:2018dqs}. For the scalar field, we choose $\phi=\partial_t\phi=0$ on the initial time slice, in which case the constraint equations of ESGB are the same as in GR. This means that at the beginning of the evolution there will be a short transient associated with the scalar field evolving to a nonzero value in the presence of the neutron stars. For the binary neutron stars, we use a piecewise polytropic form of the DD2 EOS~\cite{Banik:2014qja}. We focus on equal mass binary neutron stars with an initial separation of 45 km, approximately four orbital periods before merger. We consider two values for the total mass of the system: $M=3.0\ M_{\odot}$, which gives rise to a longer-lived hypermassive remnant; and $M=3.45\ M_{\odot}$, which promptly collapses to a black hole postmerger. We consider ESGB coupling parameters approaching, and in some cases exceeding, the values at which our evolutions break down, which depend on whether black hole formation occurs. For the longer-lived remnant cases, we consider ESGB coupling parameters $\lambda/M^2=0$, 0.04, 0.08, 0.2, 0.25, and 0.3, while for the prompt black hole cases we consider smaller values of $\lambda/M^2=0$, 0.02, and 0.03. We also consider the axisymmetric collapse of uniformly rotating hypermassive neutron stars. For initial data, we use a stationary (in GR) but unstable star solution constructed using the RNS code~\cite{Stergioulas:1994ea} with the piecewise polytropic representation of the ENG EOS~\cite{Engvik:1995gn} from~\cite{Read:2008iy}, with a mass $M=2.64\ M_{\odot}$ and a dimensionless spin of $0.7$. The collapse of this model in GR was previously considered in Ref.~\cite{Zhang:2020qlh}. For this scenario, we consider ESGB coupling parameters $\lambda/M^2=0$, 0.05, 0.065, and 0.08. 
For ease of comparison with other works, we convert our coupling $\lambda$ into the $\alpha_{\rm GB}:=\lambda/\sqrt{8\pi}$ used in, e.g., Refs.~\cite{Perkins:2021mhb,Lyu:2022gdr},\footnote{Some other references~\cite{Witek:2018dmd,Blazquez-Salcedo:2016enn,Pierini:2021jxd,Pierini:2022eim} use a convention that gives a value of $\alpha_{\rm GB}$ that is $16\sqrt{\pi}$ times higher.} and restore physical units. We have that \begin{equation} \sqrt{\alpha_{\rm GB}}\approx 1.98 \ {\rm km} \left(\frac{\lambda^{1/2}}{M}\right)\left(\frac{M}{3 \ M_{\odot}}\right) \ . \end{equation} For reference, in Ref.~\cite{Lyu:2022gdr}, a constraint of $\sqrt{\alpha_{\rm GB}} \lesssim 1.2$ km (90\% confidence level) is found by comparing several black hole-neutron star and binary black hole gravitational wave signals to post-Newtonian results for ESGB. \subsection{Diagnostic quantities} To determine the gravitational wave signal, we compute the Newman-Penrose scalar $\Psi_4$ on coordinate spheres at large radii ($r=100M$), and decompose this quantity into spin $-2$ weighted spherical harmonics. In addition to the gravitational waves, we also analyze several quantities related to the scalar field. Considering just the canonical scalar stress-energy tensor, we calculate several associated quantities, including the associated energy \begin{align} E_{\rm SF} := -\int \left(T_{t}^t\right)^{\rm SF} \sqrt{-g}d^3x \ , \end{align} and the energy flux through a surface in the wave zone \begin{align} \dot{E}_{\rm SF} \equiv -\int \alpha \left(T_t^i\right)^{\rm SF} dA_i \ , \end{align} where $\alpha$ is the lapse. We note that $T_{ab}^{\rm SF}$ is not conserved, and, for example, even for an isolated black hole with scalar charge in ESGB, $E_{\rm SF}$ will only account for a fraction of the difference between the global mass and the black hole horizon mass. 
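The quoted coefficient in the unit conversion can be checked directly; the only inputs are $\alpha_{\rm GB}=\lambda/\sqrt{8\pi}$ and the geometric solar-mass length $GM_\odot/c^2\approx 1.4766$ km:

```python
# Check of the conversion sqrt(alpha_GB) = lambda^{1/2} / (8*pi)^{1/4}
# restored to physical units, using G*M_sun/c^2 ~ 1.4766 km.
import math

M_sun_km = 1.47664              # geometric solar-mass length in km
M_km = 3.0 * M_sun_km           # total mass M = 3 M_sun as a length

# For lambda^{1/2}/M = 1 and M = 3 M_sun, the coupling length is:
coeff_km = M_km / (8.0 * math.pi) ** 0.25
print(round(coeff_km, 2))       # coefficient in the formula, ~1.98 km

# e.g. lambda/M^2 = 0.2, one of the remnant-star cases considered:
print(math.sqrt(0.2) * coeff_km)   # ~0.885 km, the ~0.89 km quoted later
```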
We also consider the value of $\phi$ on a sphere at large radius $r=100M$, using its average value to calculate the scalar charge, as well as calculating the value of other (spin 0) spherical harmonics. \section{Results} \label{sec:results} We follow the evolution of three different scenarios: a binary neutron star that promptly collapses to a black hole after merger, a binary neutron star that forms a massive remnant star at merger, and the collapse of an unstable uniformly rotating hypermassive neutron star. The last mentioned case approximates the scenario where a postmerger remnant star collapses to a black hole on long time scales (on the order of 100 ms~\cite{Hotokezaka:2013iia}), after sufficient cooling and the dissipation of differential rotation. For all these scenarios, we vary the ESGB coupling $\alpha_{\rm GB}$ all the way up to near the maximum value where we are able to carry out the evolution, and analyze the impact on the gravitational wave and scalar radiation. The more massive binary neutron star merger ($M=3.45\ M_{\odot}$) is shown in Fig.~\ref{fig:collapse_rad}. After $\sim 3$--4 orbits, the binary merges and promptly forms a black hole which rings down. The $\ell=m=2$ component of the scalar field (bottom panel of Fig.~\ref{fig:collapse_rad}) shows similar behavior to the gravitational waves in both the inspiral and ringdown. However, the scalar radiation is not significant enough to lead to any noticeable dephasing in the inspiral for these parameters, and the gravitational wave signals for different values of $\alpha_{\rm GB}$ are indistinguishable on the scale of the plot, except during the ringdown. This is consistent with the fact that the neutron stars do not have a scalar charge, and that scalar charge only develops after the black hole forms. This is illustrated in Fig.~\ref{fig:charge} where we show $Q_{\rm SF}$, as measured from the average scalar field value at large distances. 
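A toy illustration of this charge extraction, with synthetic field values in place of simulation data (the charge, the $1/r^2$ coefficient, and the radii below are made up): since $\phi = Q_{\rm SF}/r + \mathcal{O}(1/r^2)$, the sphere-averaged field at a single extraction radius gives $Q_{\rm SF}\approx r\langle\phi\rangle$ up to an $\mathcal{O}(1/r)$ error, which a fit in $1/r$ over several radii can remove (the fit is an illustration, not necessarily the procedure used in the paper):

```python
# Extracting a scalar charge from the monopole of phi at large radius.
# Q_true, b, and the radii are hypothetical values, not simulation data.
import numpy as np

Q_true, b = 0.05, 0.3                       # charge and 1/r^2 coefficient
radii = np.array([60.0, 80.0, 100.0])       # extraction radii in units of M
phi_avg = Q_true / radii + b / radii**2     # synthetic sphere-averaged phi(r)

# Naive estimate at a single radius picks up an O(1/r) error ...
Q_naive = radii[-1] * phi_avg[-1]

# ... which a two-term fit in x = 1/r removes (phi = c2*x^2 + c1*x + c0):
coeffs = np.polyfit(1.0 / radii, phi_avg, 2)
Q_fit = coeffs[1]                           # coefficient of 1/r
print(Q_naive, Q_fit)
```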
There it can be seen that the scalar charge only settles to its final value $\sim 1$ ms after the peak of the gravitational waves, while the period of the gravitational waves during ringdown is $\approx 0.2$ ms. Perturbation theory~\cite{Blazquez-Salcedo:2016enn,Pierini:2021jxd,Pierini:2022eim} predicts that the real frequency of the fundamental $\ell=2$, $m=2$ quasinormal mode of a black hole in ESGB gravity decreases as the coupling increases, and that the effect should be $<1\%$ for the values we consider here.\footnote{ We note that the results of Refs.~\cite{Blazquez-Salcedo:2016enn,Pierini:2021jxd,Pierini:2022eim} are obtained for Einstein-dilaton-Gauss-Bonnet gravity, which is equivalent to ESGB only for small values of $\phi$, and make use of a small black hole spin expansion, and thus are only approximately applicable to the cases studied here. } Though the effect on the frequency and decay rate (imaginary frequency) of the ringdown is small, and difficult to reliably quantify here, the most noticeable effect is a suppression in the overall amplitude of the ringdown gravitational wave signal with increasing GB coupling, as shown in the bottom panel of Fig.~\ref{fig:charge}, which occurs as the black hole develops a scalar charge. The highest value of the ESGB coupling we consider for the prompt collapse case is $\sqrt{\alpha_{\rm GB}}\approx 0.39$ km. This should be compared to the maximum value for which there exist stationary black hole solutions with the same mass and spin ($a_{\rm BH}\approx 0.8$ here), which is $\sqrt{\alpha_{\rm GB}}\approx 0.91$ km. We also consider a less massive binary neutron star merger with $M=3\ M_{\odot}$ that forms an oscillating hypermassive remnant star. We show the gravitational and scalar radiation in Fig.~\ref{fig:llived_rad}. 
Without evolving to presumed late-time black hole formation, we are able to evolve cases with significantly larger values of $\alpha_{\rm GB}$ in comparison to the prompt collapse case. In the top panel of Fig.~\ref{fig:llived_rad}, starting slightly before merger, and continuing to the postmerger oscillations, there is some noticeable dephasing in the gravitational waves for the highest coupling case with $\sqrt{\alpha_{\rm GB}}\approx 0.89$ km.\footnote{Achieving small phase errors in the postmerger phase of binary neutron star simulations is still an open problem, see, e.g., Ref.~\cite{Raithel:2022san}, and this comparison should be treated as an upper bound on the gravitational wave dephasing assuming that the dominant truncation error is similar comparing ESGB to GR simulations performed at the same resolution. } This difference will show up at high gravitational wave frequencies (in the kilohertz regime). We note that a value of $\sqrt{\alpha_{\rm GB}}\approx 0.95$ km would exclude even a nonspinning (static) black hole solution with mass $3\ M_{\odot}$. The scalar radiation also tracks the neutron star oscillations evident in the gravitational waves. In this $\sqrt{\alpha_{\rm GB}}\approx 0.89$ km case, the initial data transient from the scalar field going from zero to nonzero in the vicinity of the star also induces measurable (yet small) oscillations in the fundamental fluid mode of the star, known as the f-mode, which in turn cause scalar radiation during the inspiral. This is evident in the bottom panel of Fig.~\ref{fig:llived_rad}. (N.B. the larger vertical axis scale in Fig.~\ref{fig:llived_rad} compared to Fig.~\ref{fig:collapse_rad}.) In this case, these f-mode oscillations are an artifact of the initial conditions, though similar oscillations can arise through tidal excitations, for example in eccentric neutron star mergers~\cite{1977ApJ...216..914T,Gold:2011df,East:2011xa,East:2012ww}. 
We further compare the collapsing and longer-lived remnant star cases in Fig.~\ref{fig:power}. In both cases, the luminosity of the scalar radiation is always subdominant to the gravitational radiation, and the former peaks after the latter (top panel). In the longer-lived remnant case, for higher values of the GB coupling than discussed above, in particular for $\sqrt{\alpha_{\rm GB}}\gtrsim 1$ km, we find a nonlinear enhancement in the scalar field, which reaches values $>0.1$ (in units of the Planck mass) postmerger, and causes our evolution to break down before there is any sign of collapse to a black hole. This is illustrated in Fig.~\ref{fig:nonlinear}, where we show the scalar field energy and maximum field magnitude for several values of the coupling. Postmerger, these quantities oscillate with the remnant star. After rescaling for the test-field dependence on the coupling, we can see that there is a mild nonlinear enhancement in these quantities for $\sqrt{\alpha_{\rm GB}}\approx 0.89$ km, which becomes strongly nonlinear for $\sqrt{\alpha_{\rm GB}}\approx 1.0$ km. For the highest coupling considered ($\sqrt{\alpha_{\rm GB}}\approx 1.1$ km), the blow-up in the scalar quantities happens during the first oscillation, while for a slightly smaller value ($\sqrt{\alpha_{\rm GB}}\approx 1.0$ km) it happens during the second oscillation. In both of these cases, we are unable to continue the evolution further. This could be related to a breakdown in the hyperbolicity of the ESGB equations, either in the theory itself, or in our particular formulation and choice of gauge, though further work would be needed to demonstrate this. Assuming this is due to a breakdown of hyperbolicity, then, similar to arguments constraining $\sqrt{\alpha_{\rm GB}}$ based on the smallest observed black hole, the observation of a binary neutron star postmerger without apparent anomalies can set a constraint of $\sqrt{\alpha_{\rm GB}}\lesssim 1$ km. 
However, an alternative perspective might be that ESGB is only an approximation to a more complete gravity theory, and these cases may merely lie in the regime where additional corrections need to be taken into account. We show snapshots of the density, GB curvature, and scalar field around the time $|\phi|$ reaches a local maximum during the oscillations in the postmerger remnant in Fig.~\ref{fig:snapshot}. At the center of the star, coincident with high density, the GB curvature reaches a magnitude that is only a factor of 2 smaller than the value at the horizon of a nonspinning black hole ($\mathcal{G}\approx2\times10^{-3}$ km$^{-4}$ for a Schwarzschild black hole with $M=3\ M_{\odot}$), though with the opposite sign. In turn, the scalar field is also negative with largest magnitude at the center of the star. The maximum positive value of the GB curvature is $\sim 4\times$ smaller in magnitude than the maximum negative value and occurs near the surface of the star. \subsection{Collapse of isolated hypermassive neutron stars} One possible outcome for a binary neutron star merger is that the remnant star undergoes a delayed collapse to a black hole, which happens only after gravitational radiation, cooling, viscosity, and other dissipative effects have sufficiently reduced the differential rotation and thermal support of the star. To cover this scenario, we consider the collapse of a uniformly rotating hypermassive neutron star with mass 2.64 $M_{\odot}$ and dimensionless spin $0.7$. The star is an unstable equilibrium solution in GR and rapidly collapses to a black hole, with the collapse induced either by truncation error (when $\alpha_{\rm GB}=0$) or by the perturbation imposed on the star by the modified gravity (when $\alpha_{\rm GB}\neq 0$). As above, in ESGB gravity the compact object develops a scalar charge as it collapses to a black hole and rings down to a stationary black hole (with scalar hair) solution. 
As was also found in the neutron star mergers, the scalar field is negative, with growing magnitude, at the center of the collapsing star, coinciding with the negative GB curvature. However, as the black hole forms, this region is hidden, and the magnitude of $\phi$ is peaked at a positive value in the vicinity of the black hole horizon, which grows towards its asymptotic value as the black hole settles down. This is illustrated in Fig.~\ref{fig:rns_scalar}. Similar to the prompt collapse following a neutron star merger (Fig.~\ref{fig:charge}), the development and settling of the scalar charge to its final value takes place over $\approx0.5$--1 ms. This transition is accompanied by a burst of scalar radiation, as shown in Fig.~\ref{fig:rns_scalar_flux}. In this case, where the gravitational wave radiation is almost entirely from black hole ringdown, the peak scalar radiation slightly precedes the peak gravitational luminosity (as opposed to the gravitational wave signal being peaked at merger, with the peak scalar radiation following, as in Fig.~\ref{fig:power}). The gravitational wave ringdown, and its dependence on the GB coupling, is illustrated in Fig.~\ref{fig:rns_psi4}. There it can be seen that as the coupling is increased, the gravitational wave amplitude also increases, which may in part be an artifact of using as initial conditions a solution that is an unstable stationary solution when $\alpha_{\rm GB}=0$, so that the development of a scalar field hastens the collapse to a black hole. We are not able to discern the expected shift in the frequency of the quasinormal mode here---in fact the trend in Fig.~\ref{fig:rns_psi4} is towards a small decrease in period between successive peaks for larger coupling. 
This is most likely because the biggest effect of changing the GB coupling here, as in the binary merger case above, is just the amplitude at which different quasinormal modes (including overtones) are excited, which could swamp a small effect on the frequency of the fundamental mode of the final black hole. \section{Discussion and Conclusion}% \label{sec:discuss} We have used numerical evolutions of the full equations of ESGB gravity to study binary neutron star mergers, motivated by the fact that the smaller masses of such binaries, relative to black hole binaries, may probe modifications to GR at smaller curvature scales. We find that during the inspiral, there is scalar radiation, but its amplitude is suppressed due to the fact that neutron stars do not have scalar charge in this theory, and the impact on the gravitational wave signal is negligible. This is true even for GB couplings large enough that black hole solutions with the same total mass no longer exist. We note in passing that the scalar radiation may be enhanced if the stars become tidally perturbed: we found that it was significantly larger for stars that exhibited f-mode oscillations. Though here the excitation of the oscillations was an unphysical artifact of the initial conditions, in nature this can occur (for example) during close encounters in neutron star binaries with orbital eccentricity~\cite{1977ApJ...216..914T,Gold:2011df,East:2011xa,East:2012ww}. When the neutron stars merge, the effects due to the ESGB modifications of GR become more important. The GB curvature in the remnant star has a maximum magnitude that is only a factor of a few less than that of a black hole of the same mass, but since there is no horizon, it is peaked at the center of the star with negative value. This gives rise to a scalar field profile that is also peaked at the center of the star, and with opposite sign from a black hole. 
In the case of a longer-lived remnant star, the density oscillations of the star also cause oscillations in the scalar field and produce scalar radiation. At larger values of the GB coupling, there is a small decrease in the frequency of the postmerger oscillations, which in turn affects the phase of the postmerger gravitational waves. In shift-symmetric ESGB, there is a minimum mass, in units of the coupling parameter, for stationary black hole solutions, and there have been attempts to use the putative observation of the smallest mass black holes to constrain the theory. It has previously been shown that, from the perspective of evolution, starting with a vacuum black hole or collapsing to a black hole with mass below this threshold leads to a breakdown in the hyperbolicity of the evolution equations~\cite{Ripley:2019irj,Ripley:2019aqj}. Here, we find evidence that something similar may happen in a hypermassive remnant star. In particular, we find that for a value of the GB coupling only $\sim 30\%$ larger than the value that would exclude a black hole of the same mass, and that is still marginally consistent with observations, there is a strong nonlinear enhancement in the scalar field magnitude, and a breakdown in our numerical evolution. This suggests that we are near the strong-coupling regime where the ESGB evolution equations may become elliptic, though a more detailed analysis would be needed to establish this. We also considered several cases where a black hole forms, both promptly following the merger of a binary neutron star, and through the collapse of a uniformly rotating hypermassive star, the latter of which approximates the delayed collapse of a remnant after the dissipation of differential rotation. In both cases, following the appearance of an apparent horizon, the scalar field on the horizon and the scalar charge at large distances grow and settle towards their final values on timescales of $\sim 0.5$--1 ms. 
These cases also allow us to self-consistently study the effect of modifications to GR on the ringdown gravitational wave signal of newly formed black holes. Much attention has been focused on the change in the ringdown frequency of the final black hole in modified theories of gravity, since this is a simple quantity that can be calculated in perturbation theory without a detailed understanding of the merger dynamics in the modified theory. However, for the cases considered here, the frequency shift is small, and we find that the dominant effect is actually a change in the amplitude of the black hole perturbation that leads to the ringdown signal. This is an additional observational signature of modified gravity that can potentially be leveraged, but it also illustrates the complications in ringdown tests of GR that come from including all the ways in which the modifications will affect the ringdown signal. The gravity modification can shift the amplitude of the ringdown, including the relative amplitude at which different overtone modes are excited, impacting when the dominant quasinormal mode frequency can be cleanly extracted using a finite time interval following the peak of the gravitational wave signal, as well as potentially changing the mass and spin of the remnant black hole compared to GR. Unfortunately, for binary neutron star mergers, the postmerger oscillations and, to an even greater degree, the ringdown of the final black hole are at kilohertz frequencies that are too high for current ground-based detectors to be very sensitive to, so directly observing this regime will likely require third generation detectors~\cite{Hild:2010id,LIGOScientific:2016wof} or detectors that specifically target high frequencies~\cite{Martynov:2019gvu}. We defer a more detailed study of the detectability of the modified gravity effects we find here to future work. 
An important aspect of assessing this would be to determine how degenerate these effects are with different binary parameters, and how robust they are to different choices for the unknown neutron star equation of state. \acknowledgments We thank Vasileios Paschalidis and Antonios Tsokaros for their assistance constructing the binary neutron star initial data used here. W.E. acknowledges support from an NSERC Discovery grant. This research was supported in part by Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Research, Innovation and Science. Computational resources were provided by XSEDE under Grant TG-PHY100053, as well as by Calcul Qu\'ebec (www.calculquebec.ca) and Compute Canada (www.computecanada.ca), and the Symmetry cluster at Perimeter Institute. F.P. acknowledges support from NSF Grant No. PHY-2207286, the Simons Foundation, and the Canadian Institute For Advanced Research (CIFAR). \appendix \section{Numerical resolution and convergence} \label{app:conv} For all of the binary neutron star merger cases considered in the main text, we perform simulations with six levels of adaptive mesh refinement where the finest level has a linear grid spacing of $dx\approx 0.05M$, and each successive level has a grid spacing that is twice as coarse. For the case with $M=3 \ M_{\odot}$ and $\sqrt{\alpha_{\rm GB}} \approx 0.89$ km, we also perform a convergence study with grid spacings that are $4/3\times$ and $2/3\times$ as large. Unless otherwise stated, all results are from the highest resolution. In the top panel of Fig.~\ref{fig:conv}, we show how the canonical scalar field energy postmerger (as in the top panel of Fig.~\ref{fig:nonlinear}) varies with resolution. 
There it can be seen that the difference in the amplitude of the first peak among all resolutions, and in the timing and amplitude of subsequent peaks for the two highest resolutions, is small (e.g. compared to the nonlinear effects in Fig.~\ref{fig:nonlinear}), though there is some noticeable difference in the lowest resolution after the oscillation. For the simulations of the collapse of isolated hypermassive stars, we assume axisymmetry, which makes the computational domain two-dimensional, and use seven levels of mesh refinement with $dx\approx 0.01 M$ on the finest level. We perform a resolution study for $\sqrt{\alpha_{\rm GB}}\approx 0.44$ km, running simulations with grid spacings $2\times$ and $4/3\times$ coarser. In the bottom panel of Fig.~\ref{fig:conv}, we show the norm of the modified generalized harmonic constraint~\cite{East:2020hgw} \begin{equation} C^a:=H^a-\tilde{g}^{bc}\nabla_b \nabla_c x^a \end{equation} integrated over the domain as a function of time for the three resolutions. Though at early times the order of convergence is closer to first order, presumably from scalar-induced perturbations engaging the shock-capturing scheme, as the star collapses to a black hole and rings down, the convergence is consistent with approximately second order (which is assumed in the scaling of the lower panel of Fig.~\ref{fig:conv}), as expected from our numerical scheme in the absence of shocks. \bibliographystyle{apsrev4-1} \bibliography{mod_grav,ref}
Title: Spectroscopic performance of flight-like DEPFET sensors for Athena's WFI
Abstract: The Wide Field Imager for the Athena X-ray telescope is composed of two back side illuminated detectors using DEPFET sensors operated in rolling shutter readout mode: A large detector array featuring four sensors with 512x512 pixels each and a small detector that facilitates the high count rate capability of the WFI for the investigation of bright, point-like sources. Both sensors were fabricated in full size featuring the pixel layout, fabrication technology and readout mode chosen in a preceding prototyping phase. We present the spectroscopic performance of these flight-like detectors for different photon energies in the relevant part of the targeted energy range from 0.2 keV to 15 keV with respect to the timing requirements of the instrument. For 5.9 keV photons generated by an iron-55 source the spectral performance expressed as Full Width at Half Maximum of the emission peak in the spectrum is 126.0 eV for the Large Detector and 129.1 eV for the Fast Detector. A preliminary analysis of the camera's signal chain also allows for a first prediction of the performance in space at the end of the nominal operation phase.
https://export.arxiv.org/pdf/2208.04178
\keywords{Athena WFI, DEPFET, Silicon detector, Flight-like sensor, X-ray camera, Imager, Spectral performance} \section{INTRODUCTION} \label{sec:intro} % The DEPFET (DEpleted P-channel Field-Effect Transistor) \cite{kemmer87a} is the chosen X-ray detection and first signal amplification principle for the MPE-led development of the Wide Field Imager (WFI) of Athena, a large class mission of ESA's Cosmic Vision program.\cite{nandra13,rau16} The WFI consists of two units.\cite{meidinger20} The Large Detector Array, composed of four 512×512 pixel matrices with a field of view of \SI{40}{\arcminute}×\SI{40}{\arcminute}, has a timing requirement of \SI{\le 5}{\ms}, which demands a readout time of \SI{\le 9.8}{\us} per sensor row for the applied rolling shutter readout mode. For a second, smaller sensor with 64×64 pixels for the observation of bright, point-like sources, a row-wise readout speed of \SI{2.5}{\us} as well as a two-row parallel readout is needed to fulfill the pile-up and throughput requirements. After the pixel layout, the fabrication technology and the readout mode for the detector were fixed in a preceding prototyping phase,\cite{treberspurg16,treberspurg17,treberspurg18} DEPFET arrays with the full flight size were fabricated. A fully assembled Large Detector is shown in \autoref{fig:ld-module}. The pixel size of both detectors is \SI{130}{\um}×\SI{130}{\um}. In a DEPFET active pixel sensor for X-ray astrophysics, the electrons generated in the sensitive volume of the back side illuminated device are collected under the DEPFET channel and influence its conductivity by inducing mirror charges into the transistor channel. This enables a measurement of their number, which corresponds to the energy of the incident photon that generated the charge. Quasi Fano-limited spectral performance in combination with a high readout speed can be achieved. 
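The timing figures quoted above are mutually consistent, which can be verified with a little arithmetic. The following sketch is illustrative only, using the row counts, row times, and requirements stated in the text:

```python
# Cross-check of the WFI readout timing figures quoted in the text.

ROWS_LD = 512            # rows per Large Detector sensor
FRAME_REQ_LD = 5.0e-3    # s, timing requirement of the Large Detector Array

ROWS_FD = 64             # rows of the Fast Detector
ROW_TIME = 2.5e-6        # s, row readout time of the implemented readout chain
PARALLEL_FD = 2          # two-row parallel readout of the Fast Detector

# A <=5 ms frame over 512 rows requires <=9.77 us per row,
# consistent with the <=9.8 us row time stated in the text.
row_time_req_ld = FRAME_REQ_LD / ROWS_LD

# With the 2.5 us row time of the readout chain, the minimum
# Large Detector exposure time is 512 * 2.5 us = 1.28 ms ...
frame_time_ld = ROWS_LD * ROW_TIME

# ... and the Fast Detector, reading two rows in parallel,
# reaches a frame time of 32 * 2.5 us = 80 us.
frame_time_fd = (ROWS_FD // PARALLEL_FD) * ROW_TIME

print(f"required LD row time: {row_time_req_ld * 1e6:.2f} us")
print(f"LD minimum frame time: {frame_time_ld * 1e3:.2f} ms")
print(f"FD frame time: {frame_time_fd * 1e6:.0f} us")
```

The 1.28 ms and 80 µs values recovered here are exactly the exposure times that appear in the performance tables below.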
To realize a good quantum efficiency even at \SI{15}{\keV}, the sensitive volume is maximized by a full depletion of the silicon semiconductor over the entire sensor thickness of \SI{450}{\um}. \section{Measurement and Analysis Method} \label{sec:method} Apart from the standard calibration source used for most of the measurements throughout the development phase---a radioactive \textsuperscript{55}Fe source---all the emission lines were produced with an X-ray tube available at the MPE laboratories. A filament is heated to emit electrons. They are accelerated and focused on a target of the desired material. Atoms in the target are ionized and emit characteristic radiation. The drawback is a contamination of the spectrum by bremsstrahlung emitted while the electrons are decelerated in the target material. The bremsstrahlung adds a continuous contribution up to the maximum energy $E_{max}$ that the electrons gained in the accelerating electric field. Its distribution is described by Kramers' law\cite{kramers23} \begin{equation} \label{eq:kramers} \Psi(E) = \frac{K}{2 \pi c} \left(E_{max} - E\right) \end{equation} with $K$ proportional to the atomic number of the target element, and $c$ the speed of light in vacuum. To reduce the influence of the continuum on the spectrum, filters are placed into the beam path. In addition, the on-chip optical blocking filter, which reduces the optical loading on the detector during operation in space, affects the spectrum that is detected by the sensor. Using Kramers' law and filter transmission data from Henke et al.,\cite{henke93} a model for the continuum was generated. Known emission lines---apart from the one of interest---were added, and an uncertainty according to Fano statistics\cite{fano47} was applied to model the measured spectrum. To obtain proper values for the spectral performance, the resulting fit of known components was subtracted from the measurement data. 
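The continuum model just described can be sketched as follows. This is a minimal illustration: the filter transmission here is a simple placeholder (the actual analysis interpolates tabulated data from Henke et al.), and the normalization `k` is arbitrary.

```python
import numpy as np

def kramers_continuum(energy_kev, e_max_kev, k=1.0):
    """Bremsstrahlung continuum after Kramers' law:
    linear in (E_max - E) below E_max, zero above."""
    e = np.asarray(energy_kev, dtype=float)
    return np.where(e < e_max_kev, k * (e_max_kev - e), 0.0)

def filter_transmission(energy_kev):
    # Placeholder shape: transmission rising with energy.
    # The real model uses tabulated filter data (Henke et al. 1993).
    return 1.0 - np.exp(-np.asarray(energy_kev, dtype=float))

# Filtered continuum model over the relevant WFI energy range,
# for an illustrative tube acceleration voltage of 10 kV.
energies = np.linspace(0.1, 15.0, 300)   # keV
continuum = kramers_continuum(energies, e_max_kev=10.0) * filter_transmission(energies)
```

Known emission lines would then be added on top of this continuum, each broadened according to Fano statistics, before fitting the composite model to the measured spectrum.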
The remaining emission line including charge losses is then used to determine the Full Width at Half Maximum (FWHM). The widths of the manganese emission lines were determined without a background fit and its subtraction. In \autoref{fig:fitexpl}, an example of such a fit to measured data is given. All measurements were performed at a temperature of \SI{-60}{\celsius} at the DEPFET sensor and at about \SI{0}{\celsius} at the front-end electronics, which comprises the steering and readout ASICs (Application-Specific Integrated Circuits). To avoid icing and the absorption of the X-ray photons before they hit the sensor, the entire detector modules are operated in vacuum. All measurements were performed in continuous rolling shutter mode of the full sensor frame. The readout process and the multiplexing of the readout ASIC output data\cite{herrmann22} were always set to the slowest feasible speed to achieve the best performance possible at a given exposure time. The readout of the Fast Detector is split into two halves to gain a factor of two in speed. For the spectra, event patterns of up to 2×2 pixels were considered. Larger patterns can only be generated by pile-up (possibly combined with excess noise) or by massive particles entering the sensor, and are therefore discarded. All spectroscopic performance results are given for all the accepted patterns. The performance of events that deposit all their charge carriers in a single pixel is typically a few electronvolts better. Restricting the analysis to such single-pixel events would, however, discard a large fraction of the detected photons and is not an option for a detector used in astrophysics. \section{Results} \label{sec:results} The pre-flight production for Athena's WFI delivered DEPFET sensors of full size featuring the pixel layout, fabrication technology and readout mode designated for the flight modules. 
Using detailed inspection methods and repair effort on pixel level as well as improved fabrication steps, the yield was significantly increased and the overall homogeneity improved.\cite{bonholzer22} This resulted in the first functioning DEPFET sensors of this size. The obtained spectral performances of the two different detectors types designated for the WFI of Athena using the method described in \autoref{sec:method} are summarized in \autoref{tab:specperf}. \begin{table}[ht] \caption{The spectral performance of a Large (LD) and Fast Detector (FD) as determined from the measurements with different emission lines. The theoretical Fano limit is given for comparison. The Large Detector can be operated with a higher readout speed than required in exchange for a degradation of the performance. Missing values are due to failed measurements which need to be redone.} \label{tab:specperf} \begin{center} \begin{tabular}{l|r|r|r|r|r|r} \multicolumn{3}{c|}{~} & \multicolumn{3}{c|}{LD FWHM} & FD FWHM \\ Emission line & \multicolumn{1}{l|}{Energy} & Fano limit & $t_{exp} = \SI{5.00}{\ms}$ & $t_{exp} = \SI{2.00}{\ms}$ & $t_{exp} = \SI{1.28}{\ms}$ & $t_{exp} = \SI{80}{\us}$ \\ \hline C K$\alpha_{1,2}$ & \SI{0.2770}{\keV} & \SI{26}{\eV} & \SI{53.9}{\eV} & \SI{59.9}{\eV} & \SI{65.4}{\eV} & \SI{59.5}{\eV} \\ O K$\alpha_{1,2}$ & \SI{0.5249}{\keV} & \SI{36}{\eV} & \SI{56.0}{\eV} & \SI{62.7}{\eV} & \SI{67.5}{\eV} & \SI{60.6}{\eV} \\ Zn L$\alpha_{1,2}$ & \SI{1.0117}{\keV} & \SI{50}{\eV} & \SI{61.1}{\eV} & \SI{68.4}{\eV} & \SI{74.9}{\eV} & \SI{66.4}{\eV} \\ Al K$\alpha_1$ & \SI{1.4867}{\keV} & \SI{60}{\eV} & \SI{69.8}{\eV} & \SI{75.3}{\eV} & \SI{79.6}{\eV} & \SI{74.7}{\eV} \\ Ag L$\alpha_1$ & \SI{2.9843}{\keV} & \SI{85}{\eV} & \SI{90.2}{\eV} & \SI{94.2}{\eV} & \SI{97.9}{\eV} & \SI{93.3}{\eV} \\ Ti K$\alpha_1$ & \SI{4.5108}{\keV} & \SI{105}{\eV} & \SI{111.2}{\eV} & \SI{117.2}{\eV} & \SI{119.4}{\eV} & \SI{113.8}{\eV} \\ Cr K$\alpha_1$ & \SI{5.4147}{\keV} & \SI{115}{\eV} & \SI{121.2}{\eV} 
& \multicolumn{1}{c|}{---} & \SI{128.6}{\eV} & \SI{121.2}{\eV} \\ Mn K$\alpha_{1,2}$ & \SI{5.8951}{\keV} & \SI{120}{\eV} & \SI{126.0}{\eV} & \SI{129.5}{\eV} & \SI{132.2}{\eV} & \SI{129.1}{\eV} \\ Fe K$\beta_{1,3}$ & \SI{7.0580}{\keV} & \SI{131}{\eV} & \SI{138.4}{\eV} & \SI{140.2}{\eV} & \SI{142.8}{\eV} & \SI{139.4}{\eV} \\ \end{tabular} \end{center} \end{table} The Fano-limited theoretical performance is calculated via the following expression: \begin{equation} \label{eq:fano} \text{FWHM} = 2 \sqrt{2 \ln{2}} \sqrt{F \omega E} \end{equation} with the material-dependent Fano factor $F = 0.118$ and the mean electron--hole pair creation energy $\omega = \SI{3.7}{\eV}$ for the operation temperature of \SI{-60}{\celsius} of the silicon sensor.\cite{lowe07} The Large Detector suffers from a few noisy columns as well as rows with noisy pixels. In both cases, the origins are still unknown but are located in the DEPFET sensor. The additional noise in the columns can be eliminated with a modified common mode correction. Even though the design of the readout chain is limited to \SI{2.5}{\us} per row, resulting in a minimum exposure time of \SI{1.28}{\ms} for the Large Detector, the flexible laboratory setup allows for higher speeds. For a frame time of \SI{1}{\ms}, a spectral performance of \SI{136.3}{\eV}~FWHM (\SI{129.5}{\eV} for single pixel events) at \SI{5.9}{\keV} photon energy was achieved. \section{Performance Analysis} \label{sec:analysis} All results presented so far are obtained from laboratory measurements. Such measurements demonstrate the capabilities of the detectors under test but do not account for additional effects which might degrade the performance in a relevant environment. To be able to predict the spectroscopic potential, the entire signal chain and all aspects that might influence it were analyzed. 
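As a numerical illustration of the Fano limit formula above, a minimal sketch using the silicon values quoted in the text ($F = 0.118$, $\omega = 3.7$ eV):

```python
import math

F = 0.118        # Fano factor of silicon at the -60 C operating point
OMEGA_EV = 3.7   # mean electron-hole pair creation energy in eV

def fano_fwhm_ev(energy_ev):
    """Fano-limited FWHM in eV: 2 * sqrt(2 ln 2) * sqrt(F * omega * E)."""
    return 2.0 * math.sqrt(2.0 * math.log(2.0)) * math.sqrt(F * OMEGA_EV * energy_ev)

# Mn K-alpha at 5.8951 keV: reproduces the ~120 eV Fano limit in the table;
# C K-alpha at 0.2770 keV: reproduces the ~26 eV Fano limit.
print(f"{fano_fwhm_ev(5895.1):.1f} eV, {fano_fwhm_ev(277.0):.1f} eV")
```

Evaluating this at the other line energies reproduces the Fano limit column of the performance table to within rounding.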
For the degradation over time, first Total Ionizing Dose (TID) tests were performed.\cite{emberger22} The results of the first noise component analysis are summarized in \autoref{tab:perfana}. There are two types of contributions to the spectral performance: noise components $\sigma_i$ and further, non-noise effects $\Delta \text{FWHM}_i$ that broaden an emission line linearly. \begin{equation} \label{eq:fwhm} \text{FWHM}(E) = 2 \sqrt{2 \ln{2}} \sqrt{F \omega E + \omega^2 \Sigma_i \sigma_i^2} + \Sigma_i \Delta \text{FWHM}_i(E) \end{equation} Most of the components listed in \autoref{tab:perfana} already influence the measurements taken in our laboratories. Only the degradation of shot and read noise, the photon background and potential electromagnetic (EM) emission from other parts of the Athena satellite will decrease the performance further. To get an impression of their impact, those components were added to the measured data from \autoref{tab:specperf}. The resulting performances are shown in \autoref{tab:specperfdeg}. Because noise components add in quadrature, the relative effect on smaller values and, therefore, on the performance at lower energies is slightly larger. While the degradation at low energies is about \SI{3}{\eV}, it is below \SI{2}{\eV} for higher energies. Another finding of the detailed analysis was that the difference between the theoretical Fano limit and the measured data---which cannot be explained just by a line-broadening due to noise---is caused by threshold effects during the event recombination, which themselves depend on the noise. \begin{table}[ht] \caption{The individual components that contribute to the noise and, thereby, the spectral performance of the two detectors. The analysis is done at \SI{1}{\keV} and \SI{7}{\keV}, the energies at which performance requirements for the WFI instrument exist. They are \SI{\le 80}{\eV}~FWHM and \SI{\le 170}{\eV}~FWHM, respectively. 
The values represent the contribution to one event which typically spreads over multiple pixels. The factors multiplied to the noise of a single pixel are $1.26$ and $1.42$ for \SI{1}{\keV} and \SI{7}{\keV}, respectively.} \label{tab:perfana} \begin{center} \begin{tabular}{l|r|r|r|r} & \multicolumn{2}{c|}{LD} & \multicolumn{2}{c}{FD} \\ & \multicolumn{1}{c|}{\SI{1}{\keV}} & \multicolumn{1}{c|}{\SI{7}{\keV}} & \multicolumn{1}{c|}{\SI{1}{\keV}} & \multicolumn{1}{c}{\SI{7}{\keV}} \\ \hline Fano & \SI{21.5}{\eV}~RMS & \SI{56.0}{\eV}~RMS & \SI{21.5}{\eV}~RMS & \SI{56.0}{\eV}~RMS \\ Shot Noise & \SI{7.1}{\eV}~RMS & \SI{7.9}{\eV}~RMS & \SI{0.9}{\eV}~RMS & \SI{1.0}{\eV}~RMS \\ Photon Background & \SI{3.8}{\eV}~RMS & \SI{3.8}{\eV}~RMS & \SI{0.2}{\eV}~RMS & \SI{0.2}{\eV}~RMS \\ \hline Power Supplies & \SI{2.0}{\eV}~RMS & \SI{2.0}{\eV}~RMS & \SI{2.0}{\eV}~RMS & \SI{2.0}{\eV}~RMS \\ \hline Switcher & \SI{0.0}{\eV}~RMS & \SI{0.0}{\eV}~RMS & \SI{0.0}{\eV}~RMS & \SI{0.0}{\eV}~RMS \\ Read Noise & \SI{10.8}{\eV}~RMS & \SI{12.1}{\eV}~RMS & \SI{11.3}{\eV}~RMS & \SI{12.8}{\eV}~RMS \\ Veritas & \SI{7.1}{\eV}~RMS & \SI{8.0}{\eV}~RMS & \SI{5.7}{\eV}~RMS & \SI{6.4}{\eV}~RMS \\ Bandwidth Limits & \SI{0.2}{\eV}~RMS & \SI{0.2}{\eV}~RMS & \SI{0.2}{\eV}~RMS & \SI{0.2}{\eV}~RMS \\ Ext. 
EM Emission & \SI{0.1}{\eV}~RMS & \SI{0.1}{\eV}~RMS & \SI{0.1}{\eV}~RMS & \SI{0.1}{\eV}~RMS \\ \hline ADC & \SI{3.0}{\eV}~RMS & \SI{3.0}{\eV}~RMS & \SI{3.0}{\eV}~RMS & \SI{3.0}{\eV}~RMS \\ OnBoard Pipeline & \SI{0.7}{\eV}~RMS & \SI{0.7}{\eV}~RMS & \SI{0.7}{\eV}~RMS & \SI{0.7}{\eV}~RMS \\ \hline Ground Pipeline & \SI{1.5}{\eV}~RMS & \SI{6.0}{\eV}~RMS & \SI{1.5}{\eV}~RMS & \SI{6.0}{\eV}~RMS \\ \hline Charge Losses & \SI{6.0}{\eV}~FWHM & \SI{6.0}{\eV}~FWHM & \SI{6.0}{\eV}~FWHM & \SI{6.0}{\eV}~FWHM \\ Non-Linearity & \SI{0.5}{\eV}~FWHM & \SI{0.5}{\eV}~FWHM & \SI{0.5}{\eV}~FWHM & \SI{0.5}{\eV}~FWHM \\ Energy Misfits & \SI{0.1}{\eV}~FWHM & \SI{0.1}{\eV}~FWHM & \SI{0.1}{\eV}~FWHM & \SI{0.1}{\eV}~FWHM \\ \end{tabular} \end{center} \end{table} \begin{table}[ht] \caption{Estimated mean spectral performance for the measured data from \autoref{tab:specperf} at the end of the nominal operation phase.} \label{tab:specperfdeg} \begin{center} \begin{tabular}{l|r|r|r|r|r} \multicolumn{2}{c|}{~} & \multicolumn{3}{c|}{LD FWHM} & FD FWHM \\ Emission line & \multicolumn{1}{l|}{Energy}& $t_{exp} = \SI{5.00}{\ms}$ & $t_{exp} = \SI{2.00}{\ms}$ & $t_{exp} = \SI{1.28}{\ms}$ & $t_{exp} = \SI{80}{\us}$ \\ \hline C K$\alpha_{1,2}$ & \SI{0.2770}{\keV} & \SI{57.1}{\eV} & \SI{62.8}{\eV} & \SI{68.0}{\eV} & \SI{61.4}{\eV} \\ O K$\alpha_{1,2}$ & \SI{0.5249}{\keV} & \SI{59.2}{\eV} & \SI{65.5}{\eV} & \SI{70.1}{\eV} & \SI{62.5}{\eV} \\ Zn L$\alpha_{1,2}$ & \SI{1.0117}{\keV} & \SI{64.2}{\eV} & \SI{71.1}{\eV} & \SI{77.4}{\eV} & \SI{68.3}{\eV} \\ Al K$\alpha_1$ & \SI{1.4867}{\keV} & \SI{72.7}{\eV} & \SI{77.9}{\eV} & \SI{82.1}{\eV} & \SI{76.5}{\eV} \\ Ag L$\alpha_1$ & \SI{2.9843}{\keV} & \SI{92.6}{\eV} & \SI{96.5}{\eV} & \SI{100.1}{\eV} & \SI{94.8}{\eV} \\ Ti K$\alpha_1$ & \SI{4.5108}{\keV} & \SI{113.1}{\eV} & \SI{119.0}{\eV} & \SI{121.2}{\eV} & \SI{115.1}{\eV} \\ Cr K$\alpha_1$ & \SI{5.4147}{\keV} & \SI{123.0}{\eV} & \multicolumn{1}{c|}{---} & \SI{130.3}{\eV} & \SI{122.4}{\eV} \\ Mn K$\alpha_{1,2}$ & 
\SI{5.8951}{\keV} & \SI{127.7}{\eV} & \SI{131.2}{\eV} & \SI{133.8}{\eV} & \SI{130.2}{\eV} \\ Fe K$\beta_{1,3}$ & \SI{7.0580}{\keV} & \SI{140.0}{\eV} & \SI{141.8}{\eV} & \SI{144.3}{\eV} & \SI{140.4}{\eV} \\ \end{tabular} \end{center} \end{table} \section{Summary} \label{sec:summary} For the first time, the two detectors designated for the Wide Field Imager of Athena have been tested at their full size and with their final fabrication technology, pixel layout and readout mode. Besides some noise issues still to be clarified, they already show an excellent performance over the relevant energy range. A first rough performance analysis indicates that the energy resolution degrades by only a few electronvolts by the end of the nominal operation phase. Nevertheless, the spectral resolution is given here only as the mean value over all pixels. To assess the overall performance, a pixel-specific analysis is necessary in the future to quantify the number of non-compliant pixels. \acknowledgments % Development and production of the DEPFET sensors for the Athena WFI is performed in a collaboration between MPE and the MPG Semiconductor Laboratory (HLL). We warmly thank all who helped to make the presented measurements possible. The work was funded by the Max-Planck-Society and the German space agency DLR (FKZ: 50 QR 1901). \bibliography{report} % \bibliographystyle{spiebib} %
Title: Variability Signatures of a Burst Process in Flaring Gamma-ray Blazars
Abstract: Blazars exhibit stochastic flux variability across the electromagnetic spectrum, often exhibiting heavy-tailed flux distributions, commonly modeled as lognormal. However, Tavecchio et al. (2020) and Adams et al. (2022) found that the high-energy gamma-ray flux distributions of several of the brightest flaring Fermi-LAT flat spectrum radio quasars (FSRQs) are well modeled by an even heavier-tailed distribution, which we show is the inverse gamma distribution. We propose an autoregressive inverse gamma variability model in which an inverse gamma flux distribution arises as a consequence of a shot-noise process. In this model, discrete bursts are individually unresolved and averaged over within time bins, as in the analysis of Fermi-LAT data. Stochastic variability on timescales longer than the time bin duration is modeled using first-order autoregressive structure. The flux distribution becomes approximately lognormal in the limiting case of many weak bursts. The fractional variability is predicted to decrease as the time bin duration increases. Using simulated light curves, we show that the proposed model is consistent with the typical gamma-ray variability properties of FSRQs and BL Lac objects. The model parameters can be physically interpreted as the average burst rate, the burst fluence, and the timescale of long-term stochastic fluctuations.
https://export.arxiv.org/pdf/2208.03614
\title{Variability Signatures of a Burst Process in Flaring Gamma-ray Blazars} \correspondingauthor{Aryeh Brill} \email{aryeh.brill@nasa.gov} \author{A.~Brill} \affiliation{NASA Postdoctoral Program Fellow, NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA} \section{Introduction} Blazars are a class of active galactic nuclei (AGN) which are thought to possess relativistic jets oriented nearly along our line of sight \citep{Urry1995}. Blazars make up the vast majority of extragalactic gamma-ray sources that have been detected in the GeV band by \textit{Fermi}-LAT \citep{Ajello2020}. Blazars are highly variable objects at essentially all wavelengths, with variability observed in the GeV band at timescales of years down to several minutes \citep[e.g.][]{Ackermann2016}. Observations of variability can help us understand the physical processes that give rise to the high-energy emission from these objects. Characteristic timescales of variability constrain the apparent size of the emission region by placing upper bounds on the light crossing time, and are a key input for modeling the physical processes giving rise to gamma-ray emission. Blazars can be divided into two source classes, flat spectrum radio quasars (FSRQs) and BL Lac objects. FSRQs have greater synchrotron luminosity and lower peak synchrotron frequency than BL Lac objects. Although these two source classes are often considered to form a continuous ``blazar sequence'' \citep{Fossati1998}, they may be better understood as belonging to two intrinsically distinct AGN populations, of high-power and low-power objects, respectively \citep{Keenan2021}. In the GeV gamma-ray band, FSRQs are typically more luminous and more variable than BL Lac objects \citep{Abdo2010}. Blazars typically have a power spectral density (PSD) with a power-law shape, indicating that the emission can be described as a stochastic process, or random walk \citep[e.g.][]{Abdo2010}. 
Characteristic variability timescales may appear as features or spectral breaks in the PSD. Previous works have found such breaks (when present) at short timescales of $\sim$1 day, potentially connected to the timescale of successive flare events giving rise to the high-energy emission, and at long timescales of $\sim$1 year, which may be related to processes in the accretion disk \citep[e.g.][]{Kataoka2001, Ryan2019, Tarnopolski2020}. However, gamma-ray blazars are also notable for undergoing intermediate-duration flaring periods which can last for many days, during which the gamma-ray flux can in some cases increase by an order of magnitude or more. One example of the flaring phenomenon comes from the bright FSRQ 3C~279, which has undergone several such flares \citep[e.g.][]{Ackermann2016, Adams2022}. In particular, \citet{Adams2022} analyzed its sub-daily variability during 10 bright flaring periods lasting between 1 and 11 days, for a total of 54 days of observations. From modeling using exponential profiles, each flare was resolved into between 1 and 4 separate components, for a total of 24 distinct components that had rise and decay times ranging from days to less than 1 hour. The fluences of the flare components were found to have a dynamic range about an order of magnitude narrower than that of their rise and decay timescales, with a median fluence of $0.85 \times 10^{-3}$~erg~cm$^{-2}$. Blazar variability can be usefully studied in the time domain using autoregressive processes, a powerful class of time series models well suited for describing stationary stochastic processes \citep[see][and references therein]{Scargle1981}. In an autoregressive (AR) process of order $p$, the value $X_i$ at a given time step $i$ is a linear combination of the values at the previous $p$ time steps, to which a stochastic ``innovation'' term $\epsilon$ is added. 
In the related moving average (MA) process of order $q$, a linear combination of the values of the innovations at the previous $q$ time steps is used instead. Combining these processes results in a so-called ARMA process of order $(p, q)$. In this paper, we restrict our attention to AR(1) processes, that is, the case $p = 1$, $q = 0$, and we will use the term ``autoregressive process'' broadly to refer to the class of ARMA processes in general. Variability can also be studied through the flux distribution, that is, the probability density function (PDF) describing the observed flux values. Blazars are generally found to have heavy-tailed flux distributions, with excesses at high fluxes compared to a normal distribution. These distributions are commonly modeled as a lognormal distribution, which can arise through the multiplicative combination of many independent events \citep{Uttley2005}. Variability consistent with lognormality has been observed in many studies of blazars using data in multiple wavebands \citep[e.g.][]{Giebels2009, Kushwaha2017, Valverde2020, Acciari2021, Bhatta2021}. A commonly observed feature of variable emission from AGN, first discussed in depth by \citet{Uttley2001}, is an approximately linear relationship between the mean flux and standard deviation calculated within subintervals of the light curve, known as the linear RMS-flux relation. \citet{Uttley2005} showed that a linear RMS-flux relation can be associated with lognormal variability, suggesting that the long-term memory needed to produce a linear RMS-flux relation must be connected with an underlying multiplicative process, ruling out an additive origin for variability, such as shot noise \citep{Lehto1989}. 
Importantly, however, \citet{Scargle2020} showed that an approximately linear RMS-flux relation arises as a general statistical property of any light curve that can be modeled as an autoregressive process, with the exact form of the relation depending on the shape of the innovation. An underlying multiplicative process is therefore not required to explain the observed properties of blazar light curves. In this work, we propose a model in which blazar variability results from an underlying process of Poisson-distributed bursts. Critically, the shapes of the individual bursts are taken to be unresolved. The variability then arises from the statistics of the burst production process, giving rise to an inverse gamma flux distribution. We derive a model that yields an inverse gamma flux distribution with AR(1) autoregressive structure. The model has only three free parameters, representing the average rate of the putative bursts, the burst fluence, and an autocorrelation timescale representing long-term fluctuations in the burst rate. These parameters can be interpreted in terms of physical processes, such as plasmoid-powered flares in a relativistic magnetic reconnection scenario. In Section~\ref{sec:background}, we provide theoretical background on autoregressive processes and the inverse gamma distribution, and discuss related work. In Section~\ref{sec:model}, we motivate the autoregressive inverse gamma model, derive its properties, and discuss important limiting cases. In Section~\ref{sec:simulation}, we use the proposed process to simulate light curves representative of several source classes. In Section~\ref{sec:discussion}, we discuss some of the model's limitations, extensions, and implications; connections to theoretical models; comparisons to previous work; and applications to blazar classification. Finally, we summarize our conclusions in Section~\ref{sec:conclusion}. 
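As a numerical illustration of the point made by \citet{Scargle2020}, the following short sketch (standard-library Python; all parameter values are illustrative and not fit to any source) simulates an AR(1) process with positive, exponentially distributed innovations, a simple additive shot-noise-like process, and measures the correlation between the mean and standard deviation computed in subintervals of the light curve:

```python
import math
import random
import statistics

random.seed(42)

# AR(1) flux light curve with positive, skewed (exponential) innovations:
# an additive process with no multiplicative mechanism involved.
phi = 0.9                      # autoregression coefficient (illustrative)
n = 20_000
x = 1.0 / (1.0 - phi)          # start near the stationary mean
flux = []
for _ in range(n):
    x = phi * x + random.expovariate(1.0)   # innovation ~ Exp(1)
    flux.append(x)

# RMS-flux relation: mean and standard deviation within subintervals
chunk = 25
means = [statistics.mean(flux[i:i + chunk]) for i in range(0, n, chunk)]
rmss = [statistics.stdev(flux[i:i + chunk]) for i in range(0, n, chunk)]

# Pearson correlation between subinterval mean and RMS
mx, my = statistics.mean(means), statistics.mean(rmss)
cov = sum((a - mx) * (b - my) for a, b in zip(means, rmss))
r = cov / math.sqrt(sum((a - mx) ** 2 for a in means)
                    * sum((b - my) ** 2 for b in rmss))
print(f"RMS-flux correlation: {r:.2f}")
```

Because the innovation distribution is skewed, subintervals with larger mean flux also show larger scatter, yielding an approximately linear RMS-flux relation from a purely additive process.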
\section{Background and Related Work} \label{sec:background} An ARMA process of order $(p, q)$ can be written as \begin{equation}\label{eq:arma} X_i = \sum_{j=1}^p \phi_j X_{i - j} + \sum_{k=1}^q \theta_k \epsilon_{i - k} + \epsilon_i, \end{equation} \noindent where $X_i$ are the values of the time series, $\epsilon_i$ are stochastic innovation terms, and $\phi_j$, $\theta_k$ are the constant AR and MA coefficients, respectively. In this paper, we consider the special case of AR(1) processes with the form \begin{equation}\label{eq:ar1} X_i = \phi X_{i - 1} + \epsilon_i. \end{equation} An AR(1) process in which the innovation terms are assumed to be normally distributed, such that \begin{equation}\label{eq:normal_innovation} \epsilon \sim \mathcal{N}\left(\mu', \sigma^2 \right), \end{equation} \noindent where $\mu' \equiv (1 - \phi)\mu$ is the mean and $\sigma^2$ is the variance of a Gaussian distribution, is commonly referred to in astrophysics as an Ornstein-Uhlenbeck or damped random walk process. This process and its continuous-time analogue have found widespread application in modeling AGN light curves \citep[e.g.][]{Kelly2009, Moreno2019, Burd2021}. The continuous-time analogue of the general ARMA process, called CARMA, has been fruitfully deployed as well \citep{Kelly2014, Ryan2019}. An AR(1) process is associated with an expected marginal distribution\footnote{The marginal distribution is identical to the expected flux distribution if the modeled time series is the flux light curve. These distributions will be different, though closely related, if a different time series is being considered, such as that of the logarithm of the flux.}. If the innovation is normally distributed according to Eq.~\ref{eq:normal_innovation}, the marginal distribution is normally distributed as well, such that \begin{equation}\label{eq:gaussian_ar1} X \sim \mathcal{N}\left(\mu, \frac{\sigma^2}{1 - \phi^2}\right). 
\end{equation} A lognormal flux distribution is usually obtained from an AR(1) model with a normal innovation by fitting the model to the logarithm of the flux, $X = \log F$. Conveniently, this transformation also prevents the possibility of generating nonphysical negative fluxes. One way to interpret the flux distribution is as the result of the combination of two distinct underlying processes, represented by the autoregressive term parameterized by $\phi$ and the innovation term parameterized by $\mu$ and $\sigma$, which may in general have different physical interpretations and characteristic timescales. The innovation term contributes short-timescale variability, which is enhanced by the long-timescale variability provided by the autoregressive term. These are both sources of intrinsic variability, which are independent of any extrinsic uncertainty contributed by measurement error. Recently, \citet{Tavecchio2020} proposed a model of blazar variability based on a stochastic differential equation (SDE) that exhibits nonlinear dynamics. Their model is physically motivated by considering a magnetically arrested accretion disk. The accumulation of magnetic energy is modeled as a deterministic, equilibrium-reverting process (the autoregressive term) and its dissipation as a stochastic process (the innovation term). In this model, the standard deviation of the innovation term depends linearly on the flux, so that its discrete representation is equivalent to Eq.~\ref{eq:ar1} now with \begin{equation}\label{eq:tavecchio_innovation} \epsilon \sim \mathcal{N}\left(\mu', (\sigma X_{i - 1})^2 \right).
\end{equation} Assuming that $X > 0$, the marginal distribution of this process is given by \citep[][Eq.~A8]{Tavecchio2020} \begin{equation}\label{eq:tavecchio_pdf} f(X) = \frac{(\lambda\mu)^{1 + \lambda}}{\Gamma(1 + \lambda)} \frac{e^{-\lambda\mu/X}}{X^{\lambda + 2}}, \end{equation} \noindent where $\lambda \equiv 2\phi/\sigma^2$ and $\Gamma(x)$ is the Gamma function\footnote{In the notation of \citet{Tavecchio2020}, $\lambda \equiv 2\theta/\sigma^2$.}. \citet{Tavecchio2020} showed that this PDF provides a satisfactory representation of the gamma-ray flux distributions of six bright FSRQs observed by \textit{Fermi}-LAT using the light curves analyzed by \citet{Meyer2019}. Subsequently, \citet{Adams2022} demonstrated that this model provides a better fit than a lognormal PDF to the \textit{Fermi}-LAT flux distributions of three bright FSRQs, 3C~279, PKS~1222+216, and Ton~599. In fact, Eq.~\ref{eq:tavecchio_pdf} has the form of a well-known probability distribution, the inverse gamma distribution, which has the PDF \begin{equation}\label{eq:inverse_gamma} f(X) = \frac{\beta^\alpha}{\Gamma(\alpha)} \frac{e^{-\beta/X}}{X^{\alpha + 1}}. \end{equation} It can be seen that Eq.~\ref{eq:tavecchio_pdf} is equivalent to Eq.~\ref{eq:inverse_gamma} with $\alpha = 1 + \lambda$ and $\beta = \lambda\mu$. The inverse gamma distribution and gamma distribution are intimately related. As the names of these two distributions suggest, if $X \sim \mathrm{InvGamma}(\alpha, \beta)$, then $1/X \sim \mathrm{Gamma}(\alpha, \beta)$, and vice-versa\footnote{We will also use the notation $\Gamma(\alpha, \beta)$ with two arguments to denote the Gamma distribution, and $\Gamma(x)$ with one argument for the Gamma function.}. The PDF of the gamma distribution is given by \begin{equation} f(X) = \frac{X^{\alpha - 1} e^{-\beta X} \beta^\alpha}{\Gamma(\alpha)}. \end{equation} The inverse gamma and gamma distributions have positive support, that is, $X \in (0, \infty)$. 
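The reciprocal relationship between the two distributions is easy to check by direct sampling. A minimal sketch (standard-library Python, with illustrative parameter values) uses the fact that the mean of $\mathrm{InvGamma}(\alpha, \beta)$ is $\beta/(\alpha - 1)$ for $\alpha > 1$:

```python
import random
import statistics

random.seed(1)

alpha, beta = 5.0, 4.0   # shape and rate (illustrative values)

# Draw X ~ Gamma(alpha, rate=beta); note that random.gammavariate is
# parameterized by shape and *scale*, so the rate enters as 1/beta.
gamma_samples = [random.gammavariate(alpha, 1.0 / beta)
                 for _ in range(100_000)]

# Then 1/X ~ InvGamma(alpha, beta), whose mean is beta/(alpha - 1) = 1.0 here
inv_samples = [1.0 / x for x in gamma_samples]
mean_invgamma = statistics.mean(inv_samples)
print(f"sample mean of 1/X: {mean_invgamma:.3f}")
```

The sample mean of the reciprocal draws converges to $\beta/(\alpha - 1)$, consistent with the inverse gamma distribution.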
For both distributions, $\alpha > 0$ is a shape parameter determining the form of the distribution. Figure~\ref{fig:inverse_gamma} shows inverse gamma PDFs plotted for a range of $\alpha$ values. The inverse gamma distribution is highly skewed, with a high-flux tail that is heavier than that of a lognormal distribution and resembles a power law. In the regime $\alpha \gg 1$, the distribution becomes approximately (but not exactly) lognormal. The scale of the inverse gamma distribution is determined by $\beta > 0$, which for the gamma distribution is an inverse scale (or rate) parameter. In a Poisson process, the interarrival time, that is, the time difference between successive events, is an independent random variable following an exponential distribution. The sum of $\alpha$ independent exponential variables with rate parameter $\beta$ is distributed as $\mathrm{Gamma}(\alpha, \beta)$. For this reason, the gamma distribution is closely connected with Poisson processes. In this work, we propose an autoregressive inverse gamma model that shares several commonalities with the model of \citet{Tavecchio2020}, including an inverse gamma flux distribution and autoregressive time structure. However, the proposed model has a different physical motivation and mathematical basis deriving from an underlying emission process of discrete unresolved bursts, rather than from the nonlinear stochastic term of Eq.~\ref{eq:tavecchio_innovation}. These differences lead to several novel predictions. In the proposed model, variability is controlled primarily by the average burst rate. The model produces isolated ``loner flares'' when the average burst rate is low; flaring light curves reminiscent of highly variable FSRQs and BL~Lac objects when it is moderate; and approximately lognormal variability when it is high. In addition, high-energy gamma-ray light curves are typically analyzed in time bins of a set duration.
The proposed model predicts that the fractional variability of a source will decrease proportionately to the square root of the time bin duration used in the analysis, so long as the bin duration is longer than the timescale at which individual bursts can be resolved. \section{Autoregressive Inverse Gamma Model}\label{sec:model} \subsection{Inverse Gamma Flux Distribution}\label{sec:derivation} We consider a scenario in which the gamma-ray emission is dominated by discrete bursts (which might also be called flares or shots) distributed as a Poisson process. We denote the average arrival rate of the bursts as $r$ and suppose for simplicity that each burst has the same fluence $\mathcal{F}$. We consider a light curve binned in time bins of duration $\Delta T$, such that the observed flux is averaged within each time bin. Therefore, $\Delta T$ is the minimum timescale at which information about variability in the source can be determined. We can roughly associate some characteristic timescale $t_\mathrm{shape}$ with the shape of each burst, for example, an exponential rise or decay time. We make the key assumption that $t_\mathrm{shape} \ll \Delta T$, that is, the bursts are individually unresolved on the timescale of the observations. In this scenario, all variability driven by the shapes of the individual bursts is washed out. Instead, the flux distribution derives from the Poisson statistics of the process producing the bursts. The average number of bursts occurring in each time bin is given by \begin{equation}\label{eq:alpha} \alpha = r \Delta T. \end{equation} As a first approximation, we assume that each bin has exactly $\alpha$ bursts. We denote the interarrival time for burst $i$ to occur as $t_{\mathrm{wait},i}$. 
Because the observed flux in each time bin is obtained by averaging over all of the fluxes produced by bursts in that bin, each burst will contribute a portion of the observed flux given by \begin{equation}\label{eq:t_wait} F_i = \frac{\mathcal{F}}{t_{\mathrm{wait},i}}. \end{equation} By taking the fluence of burst $i$ to be spread out uniformly on the timescale $t_{\mathrm{wait},i}$ in its contribution to the observed flux, we have essentially further assumed that the burst shape is perfectly unresolved. We discuss the effects of modifying these assumptions in Sections~\ref{sec:discretization} and \ref{sec:discussion_limitations}. On average, then, each burst will contribute flux $F = \mathcal{F} r$. If the binned light curve is normalized to units of flux $F_\mathrm{scale}$, where $F_\mathrm{scale}$ may be chosen arbitrarily, the average normalized flux of each burst is \begin{equation} \beta_\mathrm{burst} = \frac{r\mathcal{F}}{F_\mathrm{scale}}. \end{equation} From Eq.~\ref{eq:t_wait}, for burst $i$, the corresponding reciprocal of the flux $F^{-1}_i$ is directly proportional to the interarrival time $t_{\mathrm{wait},i}$. In a Poisson process, the interarrival time for a single event is distributed as an exponential distribution. The same is therefore true for the reciprocal of the flux, $F^{-1} \sim \mathrm{Exp}(\beta_\mathrm{burst})$. Using the relationship between the exponential and gamma distributions, the reciprocal of the flux averaged over $\alpha$ bursts in a bin is \begin{equation} F^{-1} \sim \frac{1}{\alpha} \Gamma(\alpha, \beta_\mathrm{burst}) = \Gamma(\alpha, \alpha\beta_\mathrm{burst}) = \Gamma(\alpha, \beta), \end{equation} \noindent where $\Gamma(\alpha, \beta)$ is the gamma distribution with shape parameter $\alpha$ and rate parameter $\beta$, where \begin{equation}\label{eq:beta} \beta = \alpha \beta_\mathrm{burst} = \frac{\alpha r\mathcal{F}}{F_\mathrm{scale}}. 
\end{equation} Since $F = (F^{-1})^{-1}$, it follows from the relationship between the gamma and inverse gamma distributions that \begin{equation} F \sim \mathrm{InvGamma}(\alpha, \beta), \end{equation} \noindent where $\mathrm{InvGamma}(\alpha, \beta)$ is the inverse gamma distribution with shape parameter $\alpha$ and scale parameter $\beta$. From Eqs.~\ref{eq:alpha} and \ref{eq:beta}, the parameters of the inverse gamma distribution are related to the physical characteristics of the bursts as \begin{equation}\label{eq:physical_parameters} \begin{split} r &= \frac{\alpha}{\Delta T}\\ \mathcal{F} &= \frac{\beta}{\alpha^2} F_\mathrm{scale} \Delta T. \end{split} \end{equation} \subsection{Incorporating Autoregression} To model a stationary time series possessing AR(1) autoregressive structure and an inverse gamma flux distribution, we make use of the existing literature on conditional linear AR(1) processes \citep{Grunwald2000}, as the simplest formulation that can produce a non-Gaussian time series with autoregressive structure. We consider the time series of $F^{-1}$, allowing us to take advantage of existing models of gamma-distributed AR(1) processes. The flux light curve, with an inverse gamma flux distribution, can then be obtained by applying a reciprocal transformation to the time series. In the standard AR(1) process, a normal innovation produces a normal marginal distribution, but the gamma distribution does not have this property. A more complex model is required to produce an AR(1) process with a gamma marginal distribution. We model the autoregressive structure of the light curve using the process proposed by \citet{Sim1990}, in which the time dependence has the form \begin{equation}\label{eq:sim1990} \begin{split} &F_i^{-1} = \Gamma(N(F_{i - 1}^{-1}) + \alpha, \beta),\\ &N(F^{-1}) \sim \mathrm{Pois}(\phi \beta F^{-1}). 
\end{split} \end{equation} In this approach, known as thinning, rather than multiplying $F_{i - 1}^{-1}$ by a constant $\phi$, it is instead reduced using a stochastic function taking $\phi$ as a parameter. The process is time-reversible. As shown by \citet{Sim1990}, the autocorrelation function is $\mathrm{Corr}(F_{i + j}^{-1}, F_i^{-1}) = \phi^j$ and the resulting marginal distribution is \begin{equation} f(F^{-1}) = \Gamma\left(\alpha, (1 - \phi)\beta\right). \end{equation} The flux distribution is therefore \begin{equation} f(F) = \mathrm{InvGamma}(\alpha, (1 - \phi)\beta). \end{equation} The autocorrelation parameter $\phi$ can be used to estimate an autocorrelation timescale $\tau$, \begin{equation} \phi = e^{-\Delta T/\tau}, \end{equation} \noindent or as a function of $\phi$, \begin{equation} \tau = \frac{\Delta T}{\ln{(1/\phi)}}. \end{equation} Several other models for a gamma AR(1) process have been proposed, possessing different statistical properties. In the GAR(1) model \citep{Gaver1980, Lawrance1982, Walker2000}, a marginal gamma distribution is obtained by considering an innovation constructed to represent a shot-noise process with an exponential distribution of shot amplitudes. The process is not time-reversible and exhibits peaks followed by geometrically decaying runs of values. While we do not consider this model well suited to describe fluxes averaged over time bins containing multiple unresolved bursts, it may have useful applications in other astrophysical contexts in which fast rising and exponentially decaying flares are individually resolved. In the BGAR(1) process \citep{Lewis1989}, the innovation has a gamma distribution, and a gamma marginal distribution is obtained by replacing the constant autoregression coefficient $\phi$ with a random variable following a beta distribution with expected value $\phi$. The BGAR(1) process is time-reversible. 
However, the marginal distribution resulting from this process is independent of $\phi$, which is inconsistent with the dependence on $\phi$ in the marginal distribution that might be expected by considering the limiting Gaussian case (Eq.~\ref{eq:gaussian_ar1}). Here, the autoregressive structure could be thought of as making it likely that nearby time steps sample similar values of the innovation PDF, which is difficult to interpret physically. \subsection{Accounting for the Discretization of Bursts in Time Bins}\label{sec:discretization} As we have described the model so far, each time bin contains exactly the average number of bursts $\alpha$. More realistically, the number of bursts would vary from time bin to time bin following a Poisson distribution with parameter $\alpha$. This scenario has two main implications. First, a time bin could contain no bursts, so it would have no associated flux. To allow this, we can describe the inverse flux using a mixed distribution, with finite probability mass $p_0 = e^{-\alpha}$ at $F^{-1} = 0$ and continuous probability density integrating to $p_\Gamma = 1 - e^{-\alpha}$ for $F^{-1} > 0$. Second, for bins that do have bursts, because the number of bursts is different in each bin, the continuous part of the inverse-flux distribution is no longer a single gamma distribution with shape parameter $\alpha$, but a weighted mixture of gamma distributions with shape parameters $k = 1, 2, 3$, etc. While the flux distribution resulting from this mixture is not exactly inverse gamma in general, as shown in Figure~\ref{fig:mixture_gamma_approximation}, it can be approximated well by a single inverse gamma distribution. The properties of the resulting process are derived in Appendix~\ref{appendix:derivations}.
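The bookkeeping of Poisson-distributed burst counts can be checked with a short simulation (standard-library Python; a simple Knuth-style Poisson sampler is included because the `random` module provides none, and the value of $\alpha$ is illustrative). The empty-bin fraction should approach $p_0 = e^{-\alpha}$, and the mean burst count in occupied bins should follow the zero-truncated Poisson mean $\alpha/(1 - e^{-\alpha})$:

```python
import math
import random

random.seed(7)

def poisson(lam):
    """Knuth's algorithm; adequate for small lam."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

alpha = 2.0            # mean bursts per time bin (illustrative)
n_bins = 100_000
counts = [poisson(alpha) for _ in range(n_bins)]

empty_fraction = counts.count(0) / n_bins        # estimate of p0 = e^{-alpha}
occupied = [c for c in counts if c > 0]
mean_occupied = sum(occupied) / len(occupied)    # zero-truncated Poisson mean

print(f"p0 ~ {empty_fraction:.3f} (exact {math.exp(-alpha):.3f})")
print(f"mean bursts in occupied bins ~ {mean_occupied:.2f} "
      f"(exact {alpha / (1 - math.exp(-alpha)):.2f})")
```

The zero-truncated mean is the average shape parameter of the gamma mixture describing the inverse flux in occupied bins.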
The distribution of $F^{-1} > 0$ is given by a gamma distribution with shape parameter \begin{equation} \alpha_\mathrm{obs} = \frac{A(\alpha)}{1 - e^{-\alpha}}\alpha, \end{equation} \noindent and rate parameter \begin{equation} \beta_\mathrm{obs} = A(\alpha)(1 - \phi)\beta, \end{equation} \noindent where \begin{equation} A(\alpha) = \left( \frac{\alpha}{(1 - e^{-\alpha})^2} \sum_{l = 1}^{\infty} \frac{\alpha^l e^{-\alpha}}{l!} \frac{1}{l} \right)^{-1}. \end{equation} In Figure~\ref{fig:a_alpha_obs}, $A(\alpha)$ and $\alpha_\mathrm{obs}$ are plotted as a function of $\alpha$. As shown in the figure, $A(\alpha) \lesssim 1$ for all $\alpha$, reaching a minimum of $A(\alpha) \approx 0.73$ at $\alpha \approx 2.98$, and approaching 1 in the limits $\alpha \to 0$ and $\alpha \to \infty$. As a result, $\alpha_\mathrm{obs}$ approaches 1 as $\alpha \to 0$ and approaches $\alpha$ as $\alpha \to \infty$. For time bins containing bursts, the behavior of the time series is described by Eq.~\ref{eq:sim1990} with a simple change of parameters. Since $\alpha_\mathrm{obs}$ and $\beta_\mathrm{obs}$ are the parameters of the marginal distribution, the process parameters are $\alpha_\mathrm{obs}$ and $\beta_\mathrm{obs}/(1 - \phi) = A(\alpha)\beta$, giving \begin{equation}\label{eq:sim1990_modified} \begin{split} &F_i^{-1} = \Gamma(N(F_{i - 1}^{-1}) + \alpha_\mathrm{obs}, A(\alpha)\beta),\\ &N(F^{-1}) \sim \mathrm{Pois}(\phi A(\alpha)\beta F^{-1}). \end{split} \end{equation} To accommodate bins with $F^{-1} = 0$ while preserving the AR(1) autocorrelation structure, the modified process can be described as a Markov process with two states, $F^{-1} = 0$ for bins without bursts and $F^{-1} \sim \Gamma(\alpha_\mathrm{obs}, \beta_\mathrm{obs})$ for bins with bursts. 
The state transition probabilities $s_i \in \{0, \Gamma\} \to s_{i + 1} \in \{0, \Gamma\}$ are given by \begin{equation}\label{eq:transition_probs} \begin{split} p_{00} &= 1 - p(\phi, \alpha)(1 - e^{-\alpha})\\ p_{0\Gamma} &= p(\phi, \alpha)(1 - e^{-\alpha})\\ p_{\Gamma0} &= p(\phi, \alpha)e^{-\alpha}\\ p_{\Gamma\Gamma} &= 1 - p(\phi, \alpha)e^{-\alpha}, \end{split} \end{equation} \noindent where \begin{equation} p(\phi, \alpha) = \frac{1 - \phi}{1 + \frac{\phi}{\alpha}A^{-1}(\alpha)(1 - e^{-\alpha})}. \end{equation} \subsection{``Loner Flares'' at Low Burst Rates} When the burst rate is very low, most time bins contain no bursts, and the light curve consists of isolated flares separated by intervals of no emission. The model parameters $\alpha$, $\beta$, and $\phi$ can be extracted from the properties of the isolated flares by considering the limiting case where $\alpha \ll 1$. Specifically, to first order in $\alpha$, the parameters of the observed distribution become \begin{subequations} \begin{align} \alpha_\mathrm{obs} &\approx 1 + \frac{\alpha}{4},\\ \beta_\mathrm{obs} &\approx \left(1 - \frac{\alpha}{4} \right)(1 - \phi)\beta,\label{eq:lowalpha_beta}\\ p_\Gamma &\approx \alpha,\label{eq:lowalpha_pgamma}\\ p(\phi, \alpha) &\approx \frac{1 - \phi}{1 + \phi}\left(1 + \frac{\phi}{1 + \phi}\frac{\alpha}{4} \right). \end{align} \end{subequations} The expected fraction of time bins that contain flaring gamma-ray emission equals $p_\Gamma$ and so provides a direct estimate of $\alpha$ from Eq.~\ref{eq:lowalpha_pgamma}. The average length of flares in units of $\Delta T$ is given by \begin{equation} N_\mathrm{avg} = \frac{1}{p_{\Gamma 0}} \approx \frac{1 + \phi}{1 - \phi}\left(1 + \frac{\alpha}{1 + \phi} \right). \end{equation} In particular, if $\alpha$ is very small, $\phi$ can be roughly estimated as \begin{equation} \phi \approx \frac{N_\mathrm{avg} - 1}{N_\mathrm{avg} + 1}. 
\end{equation} A measurement of $\beta_\mathrm{obs}$, thereby giving an estimate of $\beta$ from Eq.~\ref{eq:lowalpha_beta}, can then be obtained by fitting an inverse gamma distribution to the observed flaring flux distribution. \subsection{Lognormal Behavior at High Burst Rates}\label{sec:lognormal_approximation} When the burst rate is very high, the law of large numbers applies: every time bin contains a number of bursts close to the expected value, and the dynamic range of variability is reduced. The inverse flux distribution approaches a normal distribution, resulting in a flux distribution that is approximately lognormal. Specifically, if $\alpha$ is large, the parameters of the observed distribution become $\alpha_\mathrm{obs} \approx \alpha$, $\beta_\mathrm{obs} \approx (1 - \phi)\beta$, and $p_\Gamma \approx 1$, and \begin{equation} \Gamma(\alpha_\mathrm{obs}, \beta_\mathrm{obs}) \to \mathcal{N}\left(\frac{\alpha_\mathrm{obs}}{\beta_\mathrm{obs}}, \frac{\alpha_\mathrm{obs}}{\beta_\mathrm{obs}^2}\right), \end{equation} \noindent where $\mathcal{N}(\mu, \sigma^2)$ is the normal distribution with mean $\mu$ and variance $\sigma^2$. Then, to a level of $s$ standard deviations, the flux $F$ is constrained such that \begin{equation} \left| \frac{1}{F} - \frac{\alpha_\mathrm{obs}}{\beta_\mathrm{obs}} \right| \lesssim \frac{s \sqrt{\alpha_\mathrm{obs}}}{\beta_\mathrm{obs}}, \end{equation} \noindent from which we obtain \begin{equation} \left| \frac{\beta_\mathrm{obs}}{\alpha_\mathrm{obs}}\frac{1}{F} - 1 \right| \lesssim \frac{s}{\sqrt{\alpha}}. \end{equation} Letting $\Delta \equiv \beta_\mathrm{obs}/(\alpha_\mathrm{obs}F) - 1$, then within a significance level of $\sim5$ standard deviations, $\left| \Delta \right| \lesssim 1$ when $\alpha \gtrsim 25$. For a random variable $X$ with PDF $f(X)$, the reciprocal variable $Y = 1/X$ has the PDF $g(Y) = Y^{-2}f(1/Y)$. 
Since the inverse flux is normally distributed, the flux $F$ has the PDF \begin{equation} \begin{split} F &\sim \frac{1}{F^2} \frac{1}{(\sqrt{\alpha_\mathrm{obs}}/\beta_\mathrm{obs})\sqrt{2\pi}} e^{-\frac{1}{2}\left(\frac{(1/F) - (\alpha_\mathrm{obs}/\beta_\mathrm{obs})}{\sqrt{\alpha_\mathrm{obs}}/\beta_\mathrm{obs}}\right)^2}\\ &= (1 + \Delta)\frac{1}{F}\sqrt{\frac{\alpha_\mathrm{obs}}{2\pi}} e^{-\frac{1}{2}\alpha_\mathrm{obs}\Delta^2}\\ &\approx \frac{1}{F}\sqrt{\frac{\alpha_\mathrm{obs}}{2\pi}} e^{-\frac{1}{2}\alpha_\mathrm{obs}\ln^2{(1 + \Delta)}}\\ &= \frac{1}{F}\sqrt{\frac{\alpha_\mathrm{obs}}{2\pi}} e^{-\frac{1}{2}\alpha_\mathrm{obs}(\ln{F} - \ln{(\beta_\mathrm{obs}/\alpha_\mathrm{obs}}))^2}\\ &= \mathrm{Lognormal}(\mu, \sigma^2), \end{split} \end{equation} \noindent where \begin{equation}\label{eq:lognormal_approximation} \begin{split} \mu = \ln{\frac{\beta_\mathrm{obs}}{\alpha_\mathrm{obs}}} &\approx \ln{\frac{(1 - \phi)\beta}{\alpha}},\\ \sigma^2 = \frac{1}{\alpha_\mathrm{obs}} &\approx \frac{1}{\alpha}. \end{split} \end{equation} To first order, the flux distribution can be approximated as a lognormal distribution, the parameters of which can be interpreted in terms of the underlying burst process. Using Eq.~\ref{eq:physical_parameters}, the burst parameters can be estimated from the lognormal fit as \begin{equation} \begin{split} r &\approx \left(\sigma^2 \Delta T \right)^{-1}\\ \mathcal{F} &\approx \frac{1}{1 - \phi} \sigma^2 e^\mu F_\mathrm{scale} \Delta T. \end{split} \end{equation} \subsection{Fractional Variability} The shape of the flux distribution depends on the shape parameter $\alpha$. From Eq.~\ref{eq:alpha}, $\alpha$ is the product of two factors: the burst rate $r$, which is an intrinsic physical parameter, and the time bin duration $\Delta T$, which is an extrinsic choice made in the analysis. This fact enables us to predict how the fractional variability of a given source should scale when analyzed using time bins of different durations. 
The mean of an inverse gamma distribution is \begin{equation} \mu = \frac{\beta}{\alpha - 1}, \end{equation} \noindent and its variance is \begin{equation} \sigma^2 = \frac{\beta^2}{(\alpha - 1)^2(\alpha - 2)}. \end{equation} The mean is defined only for $\alpha > 1$ and the variance for $\alpha > 2$. Then, for $\alpha > 2$, the expected fractional variability is \begin{equation} \begin{split} F_\mathrm{var} &= \frac{\sigma}{\mu} = (\alpha_\mathrm{obs} - 2)^{-1/2} \approx (\alpha - 2)^{-1/2}\\ &= r^{-1/2}\left(1 - \frac{2}{r\Delta T} \right)^{-1/2} \Delta T^{-1/2}. \end{split} \end{equation} The fractional variability of a given source should scale with the time binning as $F_\mathrm{var} \propto \Delta T^{-1/2}$, at least in the regime $\Delta T \gtrsim \max(2/r, t_\mathrm{shape})$. \section{Representative Light Curves}\label{sec:simulation} To illustrate the process, we show simulated 12-year weekly light curves representing four different source types in Figure~\ref{fig:lightcurves}. The light curves were generated following the formulas given in Eqs.~\ref{eq:sim1990_modified} and \ref{eq:transition_probs}. Each light curve represents only one possible realization of the corresponding stochastic process. To generate a representative light curve for an FSRQ, we estimated $r$ and $\mathcal{F}$ from the observations reported by \citet{Adams2022} of flares of 3C~279, giving $r \approx 24/ 54~\mathrm{bursts}~\mathrm{day}^{-1} \approx 5\times10^{-6}~\mathrm{s}^{-1}$ and $\mathcal{F} \approx 0.85\times 10^{-3}~\mathrm{erg}~\mathrm{cm}^{-2}$. We set a fairly high autocorrelation parameter of $\phi = 0.95$, consistent with the absence of a resolvable spectral break in the gamma-ray PSD of 3C~279 as determined both by CARMA modeling \citep{Ryan2019} and by the power spectral response method \citep{Goyal2021}. 
With these parameters, $\alpha_\mathrm{obs} \approx 2.3$, similar to the best-fit values of $\lambda + 1 = 1.7$ and 2.0 reported respectively by \citet{Tavecchio2020} and \citet{Adams2022}. The resulting simulated light curve is shown in the top right of Figure~\ref{fig:lightcurves}. The pattern of variability, showing several flaring periods separated by intervals of relatively quiescent activity, indeed appears to resemble an actual light curve of an FSRQ like 3C~279. The overall normalization of the energy flux during the flares and quiescent periods is also similar to that seen in the actual light curve of that source. Figure~\ref{fig:lightcurves}, top left, illustrates a source exhibiting even more extreme variability, with isolated flares separated by intervals of complete quiescence. For this simulation, the burst rate was decreased by a factor of 50 and the burst fluence increased by the same. This light curve resembles that of a so-called ``loner flare'' blazar characterized by the detection of a single high-flux flaring event in the long-term gamma-ray light curve \citep{Wang2020}. The autoregressive inverse gamma model may also be able to reproduce the less extreme variability associated with BL Lac objects. For example, an extensive analysis of the multiwavelength variability of the flaring high-synchrotron-peaked BL Lac object 1ES~1215+303 was performed by \citet{Valverde2020}, who found that its flux distribution was described better by a lognormal distribution than by a normal distribution. They obtained a lognormal distribution with a best-fit value of $\sigma = 0.43 \pm 0.02$, which, using Eq.~\ref{eq:lognormal_approximation}, can be reproduced by an inverse gamma distribution with $\alpha \approx 5.4$. 
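The conversion is a one-line consequence of Eq.~\ref{eq:lognormal_approximation}, $\sigma^2 \approx 1/\alpha$; a quick numerical check, together with the corresponding burst-rate estimate $r \approx (\sigma^2 \Delta T)^{-1}$ (the weekly bin duration used below is an assumption made here only for illustration, not a value quoted in the text):

```python
# Best-fit lognormal width reported by Valverde et al. (2020)
sigma = 0.43
alpha = 1.0 / sigma ** 2      # lognormal approximation: sigma^2 ~ 1/alpha
print(round(alpha, 1))        # -> 5.4

# Illustrative burst-rate estimate, assuming weekly time bins
delta_t = 7 * 86400.0         # one week in seconds (assumed for illustration)
rate_est = 1.0 / (sigma ** 2 * delta_t)
print(f"r ~ {rate_est:.1e} bursts per second")
```

The recovered shape parameter matches the value $\alpha \approx 5.4$ quoted above.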
With this motivation, we generated simulated light curves with parameters chosen to represent two ``BL Lac'' objects, a ``flaring'' source with $\alpha_\mathrm{obs} \approx 5$ intended to be reminiscent of a source like 1ES~1215+303, and a ``constant'' source with $\alpha_\mathrm{obs} \approx 60$. These simulated light curves are shown in Figure~\ref{fig:lightcurves}, bottom left and right, respectively. Again, the simulated light curves appear to resemble real light curves. The flux distributions corresponding to the simulated light curves shown in Figure~\ref{fig:lightcurves} are plotted in Figure~\ref{fig:flux_distributions}, along with the expected PDF for each. We examined the RMS-flux relation for each of the simulated light curves. The light curves were binned in intervals of 25 data points and the mean and standard deviation were calculated within each interval. The standard deviation for each interval is plotted against the mean in Figure~\ref{fig:rms_flux}. An approximately linear RMS-flux relation can be observed in each case. The ``constant BL Lac'' light curve, with the least skewed flux distribution, exhibits the RMS-flux relation with the most scatter. \section{Discussion}\label{sec:discussion} \subsection{Applicability and Extensions of the Model}\label{sec:discussion_limitations} The autoregressive inverse gamma model for blazar variability presented in this work is based on several simplifying assumptions which should be considered when interpreting the model and the resulting flux distribution. An in-depth exploration of the effects of modifying these assumptions is beyond the scope of the simple statistical model presented in this work, and may be better addressed in the context of detailed physical simulations. First, we have assumed that all bursts have exactly the same fluence. More realistically, the fluence may vary from burst to burst.
An illustration of how varying the fluence can affect the flux distribution is shown in Figure~\ref{fig:fluence_variability}. Distributions are shown for $\alpha = 1$, 2, 5 and 60, similar to the simulated sources considered in Section~\ref{sec:simulation}, scaled such that $\beta = \alpha$. We numerically simulated a modification of the basic scenario described in Sec.~\ref{sec:derivation}, without autoregression, for variable burst fluence by multiplying the mean fluence $\mathcal{F}$ by a random scale factor for each burst. The scale factors were drawn from a truncated Gaussian distribution with $\mu=1$ and $\sigma=0.275$. The distribution was truncated at $-3 \sigma$ to avoid negative fluctuations and at $+3 \sigma$ to preserve symmetry. We chose a value of $\sigma=0.275$ to yield a fluence distribution varying by one order of magnitude. Allowing the fluence to vary produces an excess at the low-flux end of the flux distribution, causing it to cut off less sharply, while preserving the heavy high-flux tail. This occurs because the fluence fluctuations are dominated by the burst arrival-time fluctuations except at the lowest fluxes, at least for small $\alpha$. For the much narrower distribution with $\alpha=60$, the entire distribution is noticeably shifted, while retaining an approximately lognormal shape. An additional low-flux component could also arise in the flux distribution for other reasons. Such a component might come about, for example, through the flux contributed by a subdominant population of weak bursts with low fluence. Also, if the bursts have relatively long rise or decay times, ``leakage'' of the flux from one bin to the next could result in a similar effect. The effect of such a component would be most apparent as an excess at the low end of the flux distribution, in addition to potentially adding a small flux component to the time bins that are considered here to have no flux. 
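The scale-factor draw described above can be sketched as follows; this is a minimal illustration of the stated truncation and width, not the exact simulation code:

```python
import numpy as np
from scipy.stats import truncnorm

mu, sigma = 1.0, 0.275
# truncnorm takes the truncation bounds in units of sigma about loc,
# so (-3, 3) truncates the Gaussian at mu -/+ 3*sigma.
scales = truncnorm.rvs(-3, 3, loc=mu, scale=sigma, size=100000, random_state=0)

lo, hi = mu - 3 * sigma, mu + 3 * sigma      # 0.175 and 1.825
print(f"fluence scale factors span a factor of {hi / lo:.1f}")  # ~10
assert scales.min() > 0.0                    # truncation prevents negative fluences
```

The upper-to-lower bound ratio of roughly 10 is what motivates the choice $\sigma = 0.275$ for a fluence distribution varying by one order of magnitude.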
However, an additional low-flux component may be difficult to identify in data, as it must be disentangled from potential artifactual contributions, such as statistical fluctuations or imperfectly modeled gamma-ray flux coming from nearby sources in the field of view. In addition, as shown in Figure~\ref{fig:mixture_gamma_approximation}, the exact Poisson-weighted mixture distribution has in general heavier tails than the best-fit inverse gamma distribution. For this reason, a pure inverse gamma flux distribution may somewhat underestimate the true expected variability. We have also assumed that the burst arrival times follow a Poisson process, but in reality there may be physical constraints on how often bursts can occur. For example, if a lower bound exists on the time that must elapse between two bursts, that would place an upper bound on the imputed flux of a single burst as well as on the number of bursts that can occur in a time bin, inducing a cut-off at the high-flux end of the flux distribution. The high-flux end of the flux distribution is critical for distinguishing the inverse gamma distribution from other proposed distributions, such as the lognormal. However, high-flux time bins are rare. The longer the light curve, the more effectively the high-flux part of the flux distribution can be studied. Some previous studies that compared normal and lognormal distributions have excluded flaring intervals from the flux distribution so as not to overly favor the heavier-tailed lognormal distribution \citep[e.g.][]{Giebels2009, Valverde2020}. However, when only considering heavy-tailed distributions, these intervals should not be excluded. The derivation of the statistical model presented here requires that the bursts be individually unresolved. For this reason, the model is only applicable at time scales longer than the putative burst shape timescale $t_\mathrm{shape}$.
For very bright sources, it may be possible to estimate $t_\mathrm{shape}$ from an analysis of short-timescale variability during high-amplitude flares, but for weaker sources, this quantity may be more difficult to measure. At timescales shorter than $t_\mathrm{shape}$, the observed variability would depend on the characteristics of the bursts as well as on the statistics of their stochastic production process. For example, in the study done by \citet{Adams2022}, a time binning of 1 day was used when studying the flux distribution of 3C~279, which is shorter than the longest timescale of $\sim$100 hr found by fitting exponential profiles to individual flare components. In this case, care must be taken when interpreting the inverse gamma fit parameters using the model presented in this work. In this paper, we exclusively considered AR(1) processes. Some gamma-ray blazars may exhibit possible year-scale \mbox{(quasi-)periodicities} \citep[e.g.][]{Penil2020, Rueda2022} or trends \citep[e.g.][]{Valverde2020}. Future work could investigate the possibility of extending the model by adding deterministic trend or periodic terms, higher-order autoregressive terms, or moving average terms. Sources with possible year-scale quasi-periodic oscillations could be modeled using a second-order autoregressive process. Alternatively, one could study nonlinear time-series structures in which the innovation depends on the flux, such as in the SDE proposed by \citet{Tavecchio2020} or the autoregressive gamma process (ARG) of \citet{Gourieroux2006}. \subsection{Connection to Theoretical Models} The simulations shown in Section~\ref{sec:simulation} suggest that a single stochastic process can describe both flares and quiescent periods in light curves, consistent with the previous studies that have modeled blazar light curves using autoregressive processes \citep[e.g.][]{Ryan2019}. 
The process proposed in this work has three free parameters, connected to the average burst rate $r$, the typical burst fluence $\mathcal{F}$, and the characteristic autocorrelation timescale $\tau$. These quantities can be interpreted in terms of the physical processes producing the gamma-ray emission. One theoretical scenario that may give rise to a burst process of gamma-ray emission is magnetic reconnection, in which compact magnetized structures called plasmoids are produced that give rise to fast (minutes to days) gamma-ray flares \citep{Giannios2013}. Magnetic reconnection models based on particle-in-cell simulations predict that large, slow-moving plasmoids and much smaller but relativistic ones both produce flares with comparable fluence \citep{Petropoulou2016}. This finding suggests a motivation for the assumption applied in this work that all bursts have approximately the same fluence. Simulations of plasmoid growth and mergers have found that in a single reconnection event, the bolometric light curve is dominated by flares from a few large plasmoids, along with a superposition of very fast variability contributed by smaller plasmoids \citep{Petropoulou2018, Christie2019}. Fast variability is especially evident in the high-energy gamma-ray light curve \citep{Acciari2020} in comparison to other wave bands, such as optical \citep{Zhang2022}. Simulations of magnetic reconnection have been shown to yield flux levels and variability patterns compatible with gamma-ray observations of flaring blazars, particularly FSRQs \citep{Meyer2021}. The autocorrelation timescale is most likely associated with longer-timescale variability. It may be directly connected to the same underlying physics as the burst process, such as very long duration flares produced by large plasmoids in a magnetic reconnection scenario \citep{Giannios2013}, or it may be indirectly related and associated with a different physical process.
In particular, the variability of the jet emission may originate in processes in the accretion disk. In this case, $\tau$ may reflect one of several timescales associated with dynamical, thermal, or viscous processes in the disk \citep{Czerny2006}. Alternatively, quasi-periodic variability with a characteristic timescale of approximately 1 to 100 days could result from a ``striped jet'' characterized by magnetic fields of alternating polarity \citep{Giannios2019}. The variability of the light curve may be driven by different physical processes at different time scales. If so, $\tau$ may change when the light curve is examined using different choices of $\Delta T$. \subsection{Comparison to Previous Work} The model presented in this paper shares several important similarities with the SDE proposed by \citet{Tavecchio2020}, along with a number of key differences. The model of \citet{Tavecchio2020} is motivated by the consideration of processes in a magnetically arrested accretion disk. In the model proposed in this work, the form of the flux distribution instead derives from the statistics of a burst process giving rise to the gamma-ray emission, although the long-term variations captured by the autoregressive structure may be considered to be related to processes in the accretion disk. Both models predict AR(1) time structure and flux distributions with the form of an inverse gamma distribution. However, the models exhibit different behavior when the shape parameter of the distribution becomes small. In the autoregressive inverse gamma model, as $\alpha \to 0$, $\alpha_\mathrm{obs} \to 1$ and $\beta_\mathrm{obs} \to (1 - \phi)\beta_\mathrm{burst}$. Only a few time bins have nonzero flux, resulting in a light curve characteristic of a loner flare blazar. 
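The small-$\alpha$ limit quoted above can be verified numerically from the method-of-moments expressions for $\alpha_\mathrm{obs}$ and $\beta_\mathrm{obs}$ derived in Appendix~\ref{appendix:derivations} (a sketch under those expressions, not part of any fitting procedure):

```python
import numpy as np
from scipy.stats import poisson

def A(alpha, lmax=200):
    # A(alpha) as defined in the Appendix
    l = np.arange(1, lmax + 1)
    s = np.sum(poisson.pmf(l, alpha) / l)
    return (alpha / (1.0 - np.exp(-alpha))**2 * s)**-1

for a in [1.0, 0.1, 0.01]:
    alpha_obs = A(a) * a / (1.0 - np.exp(-a))   # method-of-moments shape
    # A(a) is the ratio beta_obs / [(1 - phi) * beta_burst]
    print(f"alpha = {a:5.2f}: alpha_obs = {alpha_obs:.4f}, A = {A(a):.4f}")
# Both alpha_obs and A approach 1 as alpha -> 0, so beta_obs -> (1 - phi) beta_burst.
```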
In the model of \citet{Tavecchio2020}, in the limiting case of $\lambda \to 0$ indicating the predominance of the stochastic term, the resulting flux distribution approaches a pure power-law distribution, $f(X) \propto X^{-2}$. In addition, the model proposed by \citet{Tavecchio2020} is represented as an SDE. The process is fundamentally continuous, although discrete approximations are numerically useful for generating simulations and fitting the model to data. The quality of the discrete approximation would be expected to increase for data that are increasingly finely binned in time (up to the effective time resolution of the instrument). On the other hand, the model proposed in this work is inherently discrete in character, as it is derived from a stochastic process of individual bursts. At short timescales, it would be expected to break down as the time structure of the bursts themselves begins to dominate the light curve. Mathematically, this discreteness comes about because the innovation is not Gaussian, which would be required to obtain a continuous Markov process \citep{Gillespie1996}. In the model proposed by \citet{Tavecchio2020}, which is intended only as a simplified representation of the real dynamics, the stochastic term consists of Gaussian fluctuations multiplied by the normalized flux. As such, stochastic fluctuations can drive the flux below zero, especially when the flux is large. The requirement that the flux remain positive must therefore be imposed as an implicit additional constraint when deriving an inverse gamma flux distribution. In the model proposed in this work, the flux is inherently nonnegative, since the probability distributions have positive support. The autoregressive inverse gamma model predicts a heavy-tailed flux distribution and approximately linear RMS-flux relation, similar to lognormal variability models based on multiplicative processes \citep{Uttley2005}.
However, it explains these phenomena with an additive burst process modulated by long-term stochastic variability, without invoking multiplicative processes. In this way, the proposed model is more similar in its physical interpretation to shot-noise models that explain the light curve as resulting from the additive superposition of many overlapping bursts \citep[e.g.][]{Lehto1989, Tanihata2001}. However, there is a significant difference: because the bursts are unresolved, the flux distribution and power spectrum are determined by the statistics of the burst arrival process and the long-term stochastic variability, rather than by the shapes and amplitudes of the bursts themselves. \citet{Duda2021} studied gamma-ray blazar variability using the log-stable family of probability distributions, which generalizes the lognormal distribution. Most of the blazars in their sample displayed variability consistent with lognormality; intriguingly, however, a subset of the sources had extremely heavy-tailed flux distributions leading to infinite variance. This finding of infinite variance is characteristic of the inverse gamma distribution with $\alpha_\mathrm{obs} < 2$. Interestingly, the sample of sources with infinite variance examined by \citet{Duda2021} contained both BL Lac objects and FSRQs. Another model class that produces a heavy-tailed marginal distribution is that of generalized autoregressive conditional heteroskedasticity (GARCH) models, commonly used in finance for modeling time series of price fluctuations \citep{Engle1982, Bollerslev1986, Francq2019}. GARCH models describe white-noise processes in which the variance of the innovation changes over time, following an ARMA process. The time-varying variance is called the volatility. The autoregressive inverse gamma model proposed in this work differs significantly from GARCH models, as it has a fixed innovation distribution. 
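For concreteness, a minimal GARCH(1,1) simulation (with illustrative parameter values chosen here, not fitted to any blazar) shows how a time-varying innovation variance produces a heavy-tailed marginal distribution even though each innovation is Gaussian:

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(1)
omega, a1, b1 = 0.1, 0.2, 0.7              # illustrative GARCH(1, 1) parameters
n = 100000
x = np.zeros(n)
var = np.full(n, omega / (1.0 - a1 - b1))  # start at the unconditional variance
for t in range(1, n):
    # the volatility follows an ARMA-type recursion in the squared series
    var[t] = omega + a1 * x[t - 1]**2 + b1 * var[t - 1]
    x[t] = np.sqrt(var[t]) * rng.standard_normal()

# Positive excess kurtosis: heavier tails than the Gaussian innovations alone.
print(f"excess kurtosis = {kurtosis(x):.2f}")
```

In a GARCH model the heavy tail arises from volatility clustering, whereas in the model proposed here the innovation distribution itself is fixed.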
The model of \citet{Tavecchio2020}, which incorporates volatility in its stochastic term, is related to GARCH models. However, it differs from a standard first-order GARCH model by also having a mean-reverting autoregressive term. The GARCH(1, 1) process has an inverse gamma marginal distribution of volatility, giving the resulting time series a Student's \textit{t} marginal distribution \citep{Nelson1990}. \citet{Golan2013} proposed an alternative model for autocorrelated inverse gamma volatility in price fluctuations, in which exogenous events influencing trading activity play a similar role to the burst process considered in this work. The model proposed in this work goes beyond that of \citet{Golan2013} by allowing the number of bursts (or events) to fluctuate from time bin to time bin, including the possibility of no bursts (Sec.~\ref{sec:discretization}); enforcing an exact AR(1) autocorrelation structure; and having separate parameters modeling the underlying burst rate and autocorrelation timescale. \subsection{Blazar Classification} In the autoregressive inverse gamma model, the extreme variability typical of bright flaring FSRQs, such as that studied by \citet{Tavecchio2020} and \citet{Adams2022}, is associated with a small burst rate $r$. Relating high-energy gamma-ray variability to physically interpretable quantities may help characterize the differences observed between FSRQs and BL Lac objects, increasing our understanding of the relationship between these blazar classes. For example, \citet{Uemura2020} identified differences between the optical light curves of FSRQs and BL Lac objects by modeling them using Ornstein-Uhlenbeck processes. If FSRQs and BL Lac objects make up distinct classes rather than a continuous sequence, we might expect the burst parameters of these objects to belong to distinct clusters, with the opposite being true otherwise. In some cases, blazar classifications are uncertain or disputed.
For example, \citet{Padovani2019} have argued that the blazar TXS~0506+056, which has been associated with the detection of high-energy neutrinos, is an object intrinsically of the FSRQ type masquerading as a BL Lac object. Gamma-ray variability modeling may provide evidence to help us better understand these sources. \section{Summary and Conclusions}\label{sec:conclusion} Models of gamma-ray variability offer a critical tool for understanding the physical processes producing high-energy emission in blazars, potentially helping us characterize the relationship between FSRQs and BL Lac objects. In this work, we have proposed an autoregressive inverse gamma model of blazar variability that provides a simple framework for interpreting gamma-ray blazar variability on multiple timescales. In the proposed model, an inverse gamma flux distribution derives from an emission process in which gamma rays are produced in discrete bursts arriving as a Poisson process. Importantly, the bursts are taken to be individually unresolved within the time bins used in the analysis; only the average flux of each time bin is observed. Long-timescale stochastic variations are modeled by incorporating first-order autoregressive, or AR(1), structure. The autoregressive inverse gamma model has three free parameters, representing the average burst rate, the burst fluence, and the autocorrelation timescale. These parameters can be interpreted in terms of physical quantities, such as by associating the bursts with plasmoid-powered flares in a magnetic reconnection scenario. Furthermore, in the proposed model, flares and quiescent periods lasting from days to months are caused by random fluctuations in the arrival rate of the bursts. This intermediate-timescale activity naturally emerges from the interaction between two underlying physical processes, the short-timescale burst process and the long-timescale stochastic variations.
Flaring and quiescent emission therefore have the same origin. The autoregressive inverse gamma model yields simulated gamma-ray light curves consistent with multiple blazar source classes. The variability characteristics are controlled primarily by the burst rate. In particular, the use of parameters estimated from flare observations of the FSRQ 3C~279 yields a simulated light curve that demonstrates the variability behavior characteristic of that source, with bright flares separated by longer relatively quiescent periods, along with a similar overall flux normalization. Decreasing the burst rate generates a light curve consisting of isolated ``loner flares'', while increasing it produces light curves more closely resembling those of BL Lac objects. In this case, flaring activity is less prominent, and the flux distribution is approximately lognormal. The proposed model is based on an emission process of discrete bursts, resulting in novel predictions that differentiate it from previous work, including the model proposed by \citet{Tavecchio2020} that also features an inverse gamma flux distribution and autoregressive time structure. In particular, the fractional variability is predicted to decrease as a larger time binning is used, scaling as $F_\mathrm{var} \propto \Delta T^{-1/2}$. Because the model has AR(1) structure, the PSD has the form of a power law with a low-frequency break. A high-frequency break may also occur, depending on the characteristics of the unresolved burst process. \begin{acknowledgements} Thanks to Alberto Dom\'{i}nguez, Reshmi Mukherjee, Jeremy Perkins, Jeff Scargle, and Janeth Valverde for helpful discussions, comments, and suggestions on this work. A.B. is supported by the NASA Postdoctoral Program at Goddard Space Flight Center, administered by USRA and ORAU. \end{acknowledgements} \software{ NumPy \citep{Harris2020}, Matplotlib \citep{Hunter2007}, SciPy \citep{Virtanen2020}. 
} \appendix \section{Derivation of an AR(1) Gamma Process with Poisson-distributed Burst Counts}\label{appendix:derivations} In this section, we re-derive the gamma process proposed by \citet{Sim1990} to incorporate a more realistic model of burst counts. To do so, we replace the fixed shape parameter $\alpha = r \Delta T$, which represents the average number of bursts in each bin, with a Poisson-distributed burst count $\alpha_\mathrm{bin}$ with mean $\alpha$. Since $\alpha_\mathrm{bin}$ is Poisson distributed, we have \begin{equation} \alpha_\mathrm{bin} = \begin{cases} 0, & p = e^{-\alpha}\\ l, & p = \frac{\alpha^l e^{-\alpha}}{l!}, \end{cases} \end{equation} \noindent for natural numbers $l > 0$. With probability $e^{-\alpha}$, a randomly chosen bin will contain no bursts and therefore generate no flux. It follows that time bins with observed flux will always have $\alpha_\mathrm{bin} > 0$, which is also a requirement for the process of \citet{Sim1990} to be well-defined. We therefore wish to construct an AR(1) process that switches between a zero and non-zero state (we will show later that in fact $F^{-1} = 0$ for bins with no bursts). To do so, we take inspiration from the Binary AR(1) process \citep{McKenzie1985}. Writing $F^{-1} \equiv X$ in the remainder of this section for brevity, we consider a Markov process with two states, $X = 0$ and $X \sim \Gamma(\alpha_\mathrm{obs}, \beta_\mathrm{obs})$, with transition probabilities for $X_i \to X_{i + 1}$ notated as \begin{equation} \begin{cases} p_{00}, & 0 \to 0\\ p_{0\Gamma}, & 0 \to \Gamma(\alpha_\mathrm{obs}, \beta_\mathrm{obs})\\ p_{\Gamma0}, & \Gamma(\alpha_\mathrm{obs}, \beta_\mathrm{obs}) \to 0\\ p_{\Gamma\Gamma}, & \Gamma(\alpha_\mathrm{obs}, \beta_\mathrm{obs}) \to \Gamma(\alpha_\mathrm{obs}, \beta_\mathrm{obs}).
\end{cases} \end{equation} To ensure a self-consistent process, we have the relations \begin{subequations} \begin{align} p_{00} + p_{0\Gamma} &= 1 \label{eq:zero_consistency}\\ p_{\Gamma0} + p_{\Gamma\Gamma} &= 1 \label{eq:gamma_consistency}\\ p_0 + p_\Gamma &= 1,\label{eq:prob_consistency} \end{align} \end{subequations} \noindent where $p_0 = e^{-\alpha}$ and $p_\Gamma = 1 - e^{-\alpha}$, as well as \begin{subequations} \begin{align} p_0 p_{00} + p_\Gamma p_{\Gamma0} &= p_0 \label{eq:p0_consistency}\\ p_0 p_{0\Gamma}+ p_\Gamma p_{\Gamma\Gamma} &= p_\Gamma \label{eq:pgamma_consistency}. \end{align} \end{subequations} \begin{remark} Let an \textit{isolated flare} be defined as a contiguous sequence of bins with $X \sim \Gamma$ surrounded by bins with $X = 0$ on both ends. The expected length of such a sequence is given by the expected number of time bins needed for one $\Gamma \to 0$ transition to occur, starting from an initial time bin with $X \sim \Gamma$. The average isolated flare length in units of number of time bins is therefore \begin{equation} L = \frac{1}{p_{\Gamma 0}}. \end{equation} \end{remark} \begin{remark} Let the \textit{effective duration} of a time bin with $X \sim \Gamma$ be defined as the total duration encompassing that time bin and any subsequent time bins with $X = 0$, until the next bin with $X \sim \Gamma$ is reached. The expected length of a sequence of bins with $X = 0$ is $1/p_{0\Gamma}$. Making use of this fact and Eqs.~\ref{eq:pgamma_consistency} and \ref{eq:prob_consistency}, the expected effective duration is \begin{equation} N_\mathrm{eff} = \Delta T \left(1 + \frac{p_{\Gamma 0}}{p_{0 \Gamma}} \right) = \Delta T \left( 1 + \frac{p_0}{p_\Gamma} \right) = \frac{\Delta T}{p_\Gamma} = \frac{\Delta T}{1 - e^{-\alpha}}\label{eq:nominal_duration}. 
\end{equation} \end{remark} We now focus on the case $\alpha_\mathrm{bin} > 0$, that is, the process that takes place when the transition $\Gamma \to \Gamma$ occurs, and will return later to finish considering the remaining cases. Renormalizing the probability distribution of $\alpha_\mathrm{bin}$ for $\alpha_\mathrm{bin} > 0$ yields \begin{equation} \alpha_\mathrm{bin,obs} = \begin{cases} l, & p = \frac{1}{1 - e^{-\alpha}} \frac{\alpha^l e^{-\alpha}}{l!}. \end{cases} \end{equation} For a time bin with $\alpha_\mathrm{bin,obs} = l$, the corresponding observed rate is \begin{equation} \beta_\mathrm{bin,obs} = \frac{\Delta T}{N_\mathrm{eff}} \beta_\mathrm{burst} l = \frac{1 - e^{-\alpha}}{\alpha} \beta l, \end{equation} \noindent where two modifications have been incorporated. First, the flux is averaged over $l$ bursts rather than $\alpha$ bursts. Second, the average flux is reduced by a ``leakage'' factor of $\Delta T/N_\mathrm{eff}$ because the fluence produced in a time bin is supposed to be averaged over the burst interarrival time, which is the entire effective duration $N_\mathrm{eff}$, but the flux is only collected within that bin itself, of duration $\Delta T$. The process mean $\mathbb{E}(X)$ and variance $\mathrm{Var}(X)$ can be calculated as follows. 
From the law of total expectation, we have \begin{equation}\label{eq:mu} \begin{split} \mathbb{E}(X) &= \mathbb{E}(X_{i + 1})\\ &= \mathbb{E}_{X_i}\left[ \sum_{k = 0}^{\infty} \sum_{l = 1}^{\infty} \frac{(\phi \frac{1 - e^{-\alpha}}{\alpha} \beta l X_i)^k e^{-\phi \frac{1 - e^{-\alpha}}{\alpha} \beta l X_i}}{k!} \frac{1}{1 - e^{-\alpha}} \frac{\alpha^l e^{-\alpha}}{l!} \mathbb{E}_{X_{i + 1}}[\Gamma(X_{i + 1}; k + l, \frac{1 - e^{-\alpha}}{\alpha} \beta l)] \right]\\ &= \frac{1}{(1 - e^{-\alpha})^2} \frac{\alpha}{\beta} \mathbb{E}_{X_i}\left[ \sum_{k = 0}^{\infty} \sum_{l = 1}^{\infty} \frac{(\phi \frac{1 - e^{-\alpha}}{\alpha} \beta l X_i)^k e^{-\phi \frac{1 - e^{-\alpha}}{\alpha} \beta l X_i}}{k!} \frac{\alpha^l e^{-\alpha}}{l!} \left( \frac{k}{l} + 1 \right) \right]\\ &= \frac{1}{(1 - e^{-\alpha})^2} \frac{\alpha}{\beta} \mathbb{E}_{X_i}\left[ \sum_{l = 1}^{\infty} \frac{\alpha^l e^{-\alpha}}{l!} \left( \phi \frac{1 - e^{-\alpha}}{\alpha} \beta X_i + 1 \right) \right]\\ &= \frac{1}{1 - e^{-\alpha}} \frac{\alpha}{\beta} \mathbb{E}_{X_i}\left[ \phi \frac{1 - e^{-\alpha}}{\alpha} \beta X_i + 1 \right]\\ &= \phi \mathbb{E}(X) + \frac{1}{1 - e^{-\alpha}}\frac{\alpha}{\beta}\\ &= \frac{1}{1 - \phi} \frac{1}{1 - e^{-\alpha}} \frac{\alpha}{\beta}, \end{split} \end{equation} \noindent where we have made use of the fact that for a stationary process, $\mathbb{E}(X_{i + 1}) = \mathbb{E}(X_i) = \mathbb{E}(X)$. From the law of total variance, we have \begin{equation}\label{eq:sigma2_1} \begin{split} \mathrm{Var}(X) &= \mathbb{E}(\mathrm{Var}(X_{i + 1} | X_i)) + \mathrm{Var}(\mathbb{E}(X_{i + 1} | X_i))\\ &= \mathbb{E}(\mathrm{Var}(X_{i + 1} | X_i)) + \mathrm{Var}\left(\phi X_i + \frac{1}{1 - e^{-\alpha}}\frac{\alpha}{\beta}\right)\\ &= \mathbb{E}(\mathrm{Var}(X_{i + 1} | X_i)) + \phi^2 \mathrm{Var}(X)\\ &= \frac{1}{1 - \phi^2}\mathbb{E}(\mathrm{Var}(X_{i + 1} | X_i)). 
\end{split} \end{equation} For a mixture of distributions $f_i$ with means $\mu_i$, variances $\sigma^2_i$, and weights $p_i$, the variance of the mixture $f = \sum p_i f_i$ is given by \begin{equation}\label{eq:variance_mixture} \mathrm{Var}(f) = \sum_i p_i \left( \sigma_i^2 + \mu_i^2 \right) - \left(\sum_i p_i \mu_i \right)^2. \end{equation} Using Eq.~\ref{eq:variance_mixture}, we can therefore write \begin{equation}\label{eq:sigma2_2} \begin{split} \begin{aligned}\mathrm{Var}(X_{i + 1} | X_i) = \: \\ \end{aligned} &\overbrace{\begin{aligned}\sum_{k = 0}^{\infty} \sum_{l = 1}^{\infty} \frac{(\phi \frac{1 - e^{-\alpha}}{\alpha} \beta l X_i)^k e^{-\phi \frac{1 - e^{-\alpha}}{\alpha} \beta l X_i}}{k!} \frac{1}{1 - e^{-\alpha}} \frac{\alpha^l e^{-\alpha}}{l!} \biggl[ &\mathrm{Var}(\Gamma(X_{i + 1}; k + l, \frac{1 - e^{-\alpha}}{\alpha} \beta l)) \\ &+ \mathbb{E}(\Gamma(X_{i + 1}; k + l, \frac{1 - e^{-\alpha}}{\alpha} \beta l))^2 \biggr] \end{aligned}}^{(A)}\\ &- \underbrace{\left(\sum_{k = 0}^{\infty} \sum_{l = 1}^{\infty} \frac{(\phi \frac{1 - e^{-\alpha}}{\alpha} \beta l X_i)^k e^{-\phi \frac{1 - e^{-\alpha}}{\alpha} \beta l X_i}}{k!} \frac{1}{1 - e^{-\alpha}} \frac{\alpha^l e^{-\alpha}}{l!} \mathbb{E}(\Gamma(X_{i + 1}; k + l, \frac{1 - e^{-\alpha}}{\alpha} \beta l)) \right)^2}_{(B)}, \end{split} \end{equation} \noindent where \begin{equation}\label{eq:a} \begin{split} (A) &= \frac{1}{1 - e^{-\alpha}} \sum_{k = 0}^{\infty} \sum_{l = 1}^{\infty} \frac{(\phi \frac{1 - e^{-\alpha}}{\alpha} \beta l X_i)^k e^{-\phi \frac{1 - e^{-\alpha}}{\alpha} \beta l X_i}}{k!} \frac{\alpha^l e^{-\alpha}}{l!} \frac{(k + l) + (k + l)^2}{\left(\frac{1 - e^{-\alpha}}{\alpha} \beta l \right)^2 }\\ &= \frac{1}{1 - e^{-\alpha}} \frac{\alpha}{\beta^2} \frac{\alpha}{(1 - e^{-\alpha})^2} \sum_{k = 0}^{\infty} \sum_{l = 1}^{\infty} \frac{(\phi \frac{1 - e^{-\alpha}}{\alpha} \beta l X_i)^k e^{-\phi \frac{1 - e^{-\alpha}}{\alpha} \beta l X_i}}{k!} \frac{\alpha^l e^{-\alpha}}{l!} \frac{k^2 + 2kl + 
l^2 + k + l}{l^2}\\ &= \frac{1}{1 - e^{-\alpha}} \frac{\alpha}{\beta^2} \frac{\alpha}{(1 - e^{-\alpha})^2} \sum_{l = 1}^{\infty} \frac{\alpha^l e^{-\alpha}}{l!} \left[ \left(\phi \frac{1 - e^{-\alpha}}{\alpha} \beta X_i\right)^2 + 2\phi \frac{1 - e^{-\alpha}}{\alpha} \beta X_i + 1 + \frac{2\phi \frac{1 - e^{-\alpha}}{\alpha} \beta X_i + 1}{l} \right]\\ &= (\phi X_i)^2 + \frac{2\phi\alpha X_i}{(1 - e^{-\alpha})\beta} + \left(\frac{\alpha}{(1 - e^{-\alpha})\beta}\right)^2 + \frac{1}{1 - e^{-\alpha}} \frac{\alpha}{\beta^2} \left( 2\phi \frac{1 - e^{-\alpha}}{\alpha} \beta X_i + 1 \right) A^{-1}(\alpha), \end{split} \end{equation} \noindent with \begin{equation} A(\alpha) = \left( \frac{\alpha}{(1 - e^{-\alpha})^2} \sum_{l = 1}^{\infty} \frac{\alpha^l e^{-\alpha}}{l!} \frac{1}{l} \right)^{-1}, \end{equation} \noindent and \begin{equation}\label{eq:b} \begin{split} (B) &= \frac{1}{(1 - e^{-\alpha})^2} \frac{\alpha^2}{\beta^2}\left[\sum_{k = 0}^{\infty} \sum_{l = 1}^{\infty} \frac{(\phi \frac{1 - e^{-\alpha}}{\alpha} \beta l X_i)^k e^{-\phi \frac{1 - e^{-\alpha}}{\alpha} \beta l X_i}}{k!} \frac{1}{1 - e^{-\alpha}} \frac{\alpha^l e^{-\alpha}}{l!} \left(\frac{k}{l} + 1 \right) \right]^2\\ &= \left(\phi X_i + \frac{1}{1 - e^{-\alpha}}\frac{\alpha}{\beta} \right)^2 \mathrm{(From~Eq.~\ref{eq:mu})}\\ &= (\phi X_i)^2 + \frac{2\phi\alpha X_i}{(1 - e^{-\alpha})\beta} + \left(\frac{\alpha}{(1 - e^{-\alpha})\beta}\right)^2.
\end{split} \end{equation} Combining Eqs.~\ref{eq:sigma2_1}, \ref{eq:sigma2_2}, \ref{eq:a}, and \ref{eq:b} and making use of Eq.~\ref{eq:mu} in the second step, \begin{equation} \begin{split} \mathrm{Var}(X) &= \frac{1}{1 - \phi^2}\frac{1}{1 - e^{-\alpha}} \frac{\alpha}{\beta^2}A^{-1}(\alpha)\mathbb{E}\left[2\phi\frac{1 - e^{-\alpha}}{\alpha} \beta X_i + 1 \right]\\ &= \frac{1}{1 - \phi^2}\frac{1}{1 - e^{-\alpha}} \frac{\alpha}{\beta^2}A^{-1}(\alpha)\left(2\phi\frac{1 - e^{-\alpha}}{\alpha} \beta \frac{1}{1 - \phi} \frac{1}{1 - e^{-\alpha}} \frac{\alpha}{\beta} + 1 \right)\\ &= \frac{1}{1 - \phi^2}\frac{1}{1 - e^{-\alpha}} \frac{\alpha}{\beta^2}A^{-1}(\alpha)\left(\frac{2\phi}{1 - \phi} + 1 \right)\\ &= \frac{1}{(1 - \phi)^2}\frac{1}{1 - e^{-\alpha}} \frac{\alpha}{\beta^2}A^{-1}(\alpha). \end{split} \end{equation} The resulting mixture of gamma distributions does not have the exact form of a gamma distribution, but it can be reasonably well approximated as one and approaches an exact gamma distribution in the limits of $\alpha \to 0$ and $\alpha \to \infty$. The best-fit parameters of the mixture to a gamma distribution can be estimated using the method of moments, \begin{equation} \alpha_\mathrm{obs} = \frac{\mathbb{E}(X)^2}{\mathrm{Var}(X)} = \frac{A(\alpha)}{1 - e^{-\alpha}}\alpha, \end{equation} \noindent and \begin{equation} \beta_\mathrm{obs} = \frac{\mathbb{E}(X)}{\mathrm{Var}(X)} = A(\alpha)(1 - \phi)\beta. \end{equation} We can now validate our assumption that $F^{-1} = 0$ for bins with no bursts. 
The mean of the average process $X_\mathrm{avg} \sim \Gamma(\alpha, \beta)$ is \begin{equation}\label{eq:mu_total} \mathbb{E}(X_\mathrm{avg}) = \frac{1}{1 - \phi}\frac{\alpha}{\beta}, \end{equation} \noindent which should be equal to the mean of the combined process, \begin{equation} \begin{split} \mathbb{E}(X_\mathrm{avg}) &= e^{-\alpha}\mathbb{E}(X_0) + (1 - e^{-\alpha})\mathbb{E}(X_\mathrm{obs})\\ \frac{1}{1 - \phi}\frac{\alpha}{\beta} &= e^{-\alpha}\mathbb{E}(X_0) + (1 - e^{-\alpha})\frac{1}{1 - \phi} \frac{1}{1 - e^{-\alpha}} \frac{\alpha}{\beta}, \end{split} \end{equation} \noindent from which we obtain $\mathbb{E}(X_0) = 0$. Since $F^{-1} \geq 0$, it follows that $F^{-1} = 0$ for all bins with no bursts. The flux for bins with no bursts is therefore undefined. We now return to the full process, considering both bins with and without bursts. To impose an AR(1) correlation structure with autocorrelation constant $\phi$ on the process, we use the condition \begin{equation}\label{eq:phi_total} \phi = \frac{\mathrm{Cov}(X_{i + 1}, X_i)}{\mathrm{Var}(X)} = \frac{\mathbb{E}(X_{i + 1} X_i) - \mathbb{E}(X)^2}{\mathrm{Var}(X)}. 
\end{equation} From the law of total expectation, \begin{equation}\label{eq:cond_expectation_total} \begin{split} \mathbb{E}(X_{i + 1} X_i) &= p_0 p_{00} \cdot 0^2 + (p_0 p_{0\Gamma} + p_\Gamma p_{\Gamma 0}) \cdot 0 \cdot \mathbb{E}(X_\Gamma) + p_\Gamma p_{\Gamma \Gamma} \mathbb{E}(X_{\Gamma, i} X_{\Gamma, i + 1})\\ &= p_\Gamma p_{\Gamma \Gamma} \mathbb{E}(X_{\Gamma, i} X_{\Gamma, i + 1})\\ &= p_\Gamma p_{\Gamma \Gamma} \left[\phi \mathrm{Var}(X_\Gamma) + \mathbb{E}(X_\Gamma)^2 \right]\\ &= p_{\Gamma \Gamma} \frac{1}{(1 - \phi)^2} \frac{\alpha^2}{\beta^2} \left( \frac{\phi}{\alpha A(\alpha)} + \frac{1}{1 - e^{-\alpha}} \right). \end{split} \end{equation} Using Eq.~\ref{eq:variance_mixture}, \begin{equation}\label{eq:variance_total} \begin{split} \mathrm{Var}(X) &= p_0 \left( \mathrm{Var}(X_0) + \mathbb{E}(X_0)^2 \right) + p_\Gamma \left( \mathrm{Var}(X_\Gamma) + \mathbb{E}(X_\Gamma)^2 \right) - \left( p_0 \mathbb{E}(X_0) + p_\Gamma \mathbb{E}(X_\Gamma) \right)^2\\ &= p_\Gamma \left[ \mathrm{Var}(X_\Gamma) + \mathbb{E}(X_\Gamma)^2 - p_\Gamma \mathbb{E}(X_\Gamma)^2 \right]\\ &= \frac{1}{(1 - \phi)^2} \frac{\alpha^2}{\beta^2} \left( \frac{1}{\alpha A(\alpha)} + \frac{e^{-\alpha}}{1 - e^{-\alpha}} \right). \end{split} \end{equation} Combining Eqs.~\ref{eq:phi_total}, \ref{eq:cond_expectation_total}, \ref{eq:mu_total}, and \ref{eq:variance_total} and simplifying yields \begin{equation} p_{\Gamma\Gamma} = 1 - \frac{(1 - \phi)e^{-\alpha}}{1 + \frac{\phi}{\alpha}A^{-1}(\alpha)(1 - e^{-\alpha})} = 1 - p(\phi, \alpha)e^{-\alpha}, \end{equation} \noindent where \begin{equation} p(\phi, \alpha) = \frac{1 - \phi}{1 + \frac{\phi}{\alpha}A^{-1}(\alpha)(1 - e^{-\alpha})}. 
\end{equation} From Eqs.~\ref{eq:zero_consistency} and \ref{eq:gamma_consistency}, and Eq.~\ref{eq:p0_consistency} or \ref{eq:pgamma_consistency}, we obtain the full set of transition probabilities, \begin{equation} \begin{split} p_{00} &= 1 - p(\phi, \alpha)(1 - e^{-\alpha})\\ p_{0\Gamma} &= p(\phi, \alpha)(1 - e^{-\alpha})\\ p_{\Gamma0} &= p(\phi, \alpha)e^{-\alpha}\\ p_{\Gamma\Gamma} &= 1 - p(\phi, \alpha)e^{-\alpha}. \end{split} \end{equation} \bibliography{bibliography}
Title: Accretion Burst Echoes as Probes of Protostellar Environments and Episodic Mass Assembly
Abstract: Protostars likely accrete material at a highly time variable rate; however, measurements of accretion variability from the youngest protostars are rare, as they are still deeply embedded within their envelopes. Sub-mm/mm observations can trace the thermal response of dust in the envelope to accretion luminosity changes, allowing variations in the accretion rate to be quantified. In this paper, we present contemporaneous sub-mm/mm light curves of variable protostars in Serpens Main, as observed by the ALMA ACA, SMA, and JCMT. The most recent outburst of EC 53 (V371 Ser), an $\sim 18$ month periodic variable, is well-sampled in the SMA and JCMT observations. The SMA light curve of EC 53 is observed to peak weeks earlier and exhibit a stronger amplitude than at the JCMT. Stochastic variations in the ACA observations are detected for SMM 10 IR with a factor $\sim 2$ greater amplitude than seen by the JCMT. We develop a toy model of the envelope response to accretion outbursts to show that EC 53's light curves are plausibly explained by the delay associated with the light travel time across the envelope and the additional dilution of the JCMT response by the incorporation of cold envelope material in the beam. The larger JCMT beam can also wash out the response to rapid variations, which may be occurring for SMM 10 IR. Our work thus provides a valuable proof of concept for the usage of sub-mm/mm observations as a probe of both the underlying accretion luminosity variations and the protostellar environment.
https://export.arxiv.org/pdf/2208.13568
\usepackage{amsmath} \usepackage{natbib} \newcommand{\vdag}{(v)^\dagger} \newcommand\aastex{AAS\TeX} \newcommand\latex{La\TeX} \newcommand{\lf}[1]{\textcolor{blue}{\textbf{LF:} #1}} \defcitealias{Baek2020}{B20} \newcommand\dij{\textcolor{orange}} \shorttitle{} \shortauthors{Francis et al.} \graphicspath{{./}{figures/}} \begin{document} \title{Accretion Burst Echoes as Probes of Protostellar Environments and Episodic Mass Assembly} \correspondingauthor{Logan Francis} \email{loganfrancis3@uvic.ca} \author[0000-0001-8822-6327]{Logan Francis} \affiliation{Department of Physics and Astronomy, University of Victoria, 3800 Finnerty Road, Elliot Building, Victoria, BC, V8P 5C2, Canada} \affiliation{NRC Herzberg Astronomy and Astrophysics, 5071 West Saanich Road, Victoria, BC, V9E 2E7, Canada} \author[0000-0002-6773-459X]{Doug Johnstone} \affiliation{NRC Herzberg Astronomy and Astrophysics, 5071 West Saanich Road, Victoria, BC, V9E 2E7, Canada} \affiliation{Department of Physics and Astronomy, University of Victoria, 3800 Finnerty Road, Elliot Building, Victoria, BC, V8P 5C2, Canada} \author[0000-0003-3119-2087]{Jeong-Eun Lee} \affiliation{School of Space Research, Kyung Hee University, 1732, Deogyeong-daero, Giheung-gu, Yongin-si, Gyeonggi-do 17104, Republic of Korea} \author{Gregory J. Herczeg} \affiliation{Kavli Institute for Astronomy and Astrophysics, Peking University, Yiheyuan 5, Haidian Qu, 100871 Beijing, China} \affiliation{Department of Astronomy, Peking University, Yiheyuan 5, Haidian Qu, 100871 Beijing, China} \author[0000-0002-7607-719X]{Feng Long} \affiliation{Center for Astrophysics, Harvard \& Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA} \author[0000-0002-6956-0730]{Steve Mairs} \affiliation{SOFIA Science Center, Universities Space Research Association, NASA Ames Research Center, Moffett Field, California 94035, USA} \affiliation{East Asian Observatory, 660 N. 
A`oh\={o}k\={u} Place, Hilo, Hawai`i, 96720, USA} \author[0000-0003-1894-1880]{Carlos Contreras-Pe\~na} \affiliation{Department of Physics and Astronomy, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 08826, Republic of Korea} \affiliation{School of Space Research, Kyung Hee University, 1732, Deogyeong-daero, Giheung-gu, Yongin-si, Gyeonggi-do 17104, Republic of Korea} \author{Gerald Moriarty-Schieven} \affiliation{NRC Herzberg Astronomy and Astrophysics, 5071 West Saanich Road, Victoria, BC, V9E 2E7, Canada} \collaboration{99}{The JCMT Transient Team} \keywords{} \section{Introduction} \label{sec:intro} The infall of material from the envelopes surrounding the youngest protostars inevitably produces a circumstellar accretion disk, which serves as the primary reservoir of material for further stellar growth. Accretion through the disk is expected to be highly episodic in nature, and indeed, a wide variety of variability and accretion outburst phenomena are commonly observed in the more evolved T Tauri stars (see the recent review by \citealt{Fischer2022}). Owing to their deeply embedded nature, however, accretion variability in protostars is less often observed, and consequently more poorly understood. Yet this earliest period of the stellar life cycle is when the majority of the final mass ($\gtrsim 90\%$, \citealt{Fischer2022}) is assembled, and up to $\sim 25\%$ of the mass may be accumulated in outbursts \citep{McKee2011,fischer2019}. Constraining accretion behaviours in protostars is thus of fundamental importance for star formation theory. The lack of emission at near-IR to UV wavelengths from protostars necessitates alternative diagnostics for accretion variability, as traditional measures such as emission line strength and UV-excess measurement \citep{Hartmann2016} are inaccessible. Indirectly, the clumpy structure of outflows (e.g. 
\citealt{Plunkett2015,Jhan2022}) and the envelope chemistry \citep[e.g.][]{Jorgensen2013} can provide a fossil record of past accretion activity. Changes in accretion rate produce a directly observable response in the envelope emission at far-IR and longer wavelengths, as the shorter wavelength emission from the accretion luminosity is absorbed and re-radiated. This was first shown for simple spherically symmetric envelope models and parameterized outbursts by \cite{Johnstone2013}, while \cite{MacFarlane2019a,MacFarlane2019b} considered the observability of outbursts produced in hydrodynamical simulations of unstable disks. The response at far-IR ($\sim 100 \mu$m) wavelengths should be approximately proportional to the accretion luminosity change, whereas the sub-mm/mm response instead traces the temperature change in the envelope \citep{ContrerasPena2020}. A variety of bright outbursts have thus been detected at mid-IR to millimeter wavelengths. The first class 0 protostar for which a strong outburst was detected was HOPS 383, which brightened by a factor of $\sim35$ at 24 $\mu$m \citep{Safron2015}. Similar mid-IR outbursts from other class 0 protostars have since been found for HOPS 12 and HOPS 124 \citep{Zakri2022} and V2775 Ori (HOPS 223) \citep{fischer2019}. Outbursts have also been detected towards high-mass protostars. For example, the NGC 6334-I star forming region was found from the comparison of millimeter observations to have increased in luminosity by a factor of $\sim 70$ \citep{Hunter2006,Hunter2017}, while multiple outbursts of the massive protostar M17 MIR have been observed in the mid-infrared \citep{Chen2021}, the most recent of which corresponds to a luminosity change of a factor $\sim 6$. Although the aforementioned outbursts were all detected serendipitously, systematic efforts to quantify lower-level variability and constrain the frequency of bright outbursts from embedded protostars have recently begun. 
In the ongoing James Clerk Maxwell Telescope (JCMT) Transient Survey \citep{Herczeg2017}, 8 nearby ($<500$ pc) star forming regions are being monitored with a monthly or better cadence at 450 and 850 $\mu$m. An analysis of the first 4 years of Transient observations found 18 of 83 class 0 or I protostars to exhibit moderate secular variability on timescales of a few years, though the estimated mass accreted during these variations was at most a few percent of the stellar mass \citep{Lee2021}. A search for mid-IR variability in young stellar objects from the 6.5 yr Near-Earth Object Wide-field Infrared Survey Explorer (NEOWISE) All Sky Survey \citep{Mainzer2011} found $\sim 1700$ out of $\sim 5400$ protostars varying with a wide range of behaviours, with the youngest protostars the most likely to exhibit variability \citep{Park2021}. While mid-IR variability can also be attributed to changes in extinction and viewing geometry, a comparison of varying sources detected in both the Transient Survey and NEOWISE observations found that the sub-mm variable sources typically showed similar variability in the mid-IR. Furthermore, these sources constituted about 22\% of the overall sample, suggesting accretion variability is responsible for the mid-IR flux changes in many cases \citep{ContrerasPena2020}. While these large monitoring campaigns are important for establishing the distribution of accretion variability events, their low angular resolution precludes a detailed investigation of how the envelope structure may influence the response to accretion luminosity variations. Modeling of the envelope response predicts that higher resolution observations may be more sensitive to accretion rate changes, as the change in brightness of the outer envelope is likely to be dominated by heating from the interstellar radiation field \citep{Johnstone2013}. 
The details of the envelope structure, including the likely presence of outflow cavities and dust sublimation fronts, may also impact the observed response \citep[][hereafter B20]{Baek2020}. High resolution monitoring of embedded protostars with millimeter wavelength facilities should thus be able to probe the envelope structure, in addition to identifying variability behaviour. In this paper, we thus analyze observations from the Atacama Large Millimeter/Submillimeter Array (ALMA) and Submillimeter Array (SMA) of variable protostars in Serpens Main monitored by the JCMT Transient Survey. Particular emphasis is placed on EC 53 (V371 Ser), which exhibits $\sim 18$ month periodic accretion bursts, the most recent of which was observed at high cadence with the SMA and JCMT. We interpret the light curves from our monitoring programs using a simple toy model of the propagation of accretion bursts through the envelope, which we also use to further explore how the properties of the envelope and observational setup affect the burst response. The remainder of this paper is organized as follows: In Section \ref{sec:obs}, we describe our variable protostar targets and the ALMA, SMA, and JCMT observations, while in Section \ref{sec:dr}, we provide the details of our data reduction and present light curves of our targets. In Section \ref{sec:ec53_toy_modeling}, we develop a toy model to interpret the SMA and JCMT observations of EC 53, followed in Section \ref{sec:further_modeling} by further exploration with the toy model of the observation of generic accretion bursts. We then discuss our results and observational/modeling caveats in Section \ref{sec:disc}, and finish with a brief summary of our major conclusions in Section \ref{sec:conc}. 
\section{Observations and Standard Calibration} \label{sec:obs} In this section, we describe the details of the ALMA Atacama Compact Array (ACA), Submillimeter Array (SMA), and James Clerk Maxwell Telescope (JCMT) observations used in this paper. The targets of our observations are all located in the Serpens Main star forming region, at a distance of 436 pc \citep{Ortiz-Leon2017,herczeg2019}. An overview of Serpens Main as it appears in a co-add of the Transient Survey observations is shown in Figure \ref{fig:finder_chart}. The ACA observations monitor the thermal dust continuum of 3 known variable protostars (EC 53/V371 Ser, Serpens SMM 1, Serpens SMM 10 IR) and 5 YSOs (young stellar objects) intended for use as stable calibrators which were identified by the JCMT Transient Survey \citep{Lee2021}. The coordinates and basic details of each target are listed in Table \ref{tab:targets}. The strongest variations in the Transient Survey are seen from EC 53 (V371 Ser), a class I protostar which undergoes periodic outbursts lasting $\sim 6$ weeks, followed by a slow decline in brightness until the next outburst every $\sim$ 18 months \citep{Yoo2017}. Modeling of multi-wavelength observations of EC 53 associates a reddening of the protostar just prior to the burst with the build-up and subsequent draining of material in the inner disk \citep{Lee2020yh}. Radiative transfer simulations of EC 53's envelope and comparison with the 850 $\mu$m Transient Survey observations suggest that the accretion luminosity increases by a factor of $\sim 3.3$ during each outburst \citepalias{Baek2020}, although the strength and duration between bursts varies to some degree \citep{Lee2020yh}. Our other targets selected from the Transient Survey are Serpens SMM 1, an intermediate-mass protostar which shows a steady rise in brightness over several-year timescales, and Serpens SMM 10 IR, which exhibits stochastic variations in brightness \citep{Johnstone2018,Lee2021}. 
The other five YSOs were intended to be monitored to provide a stable reference flux for relative calibration (Section \ref{sec:rel_calibration}), however, two were found to be unsuitable for this purpose. SMM 9 is now found to be moderately variable in the Transient Survey \citep{Lee2021}, while SMM 2 is too faint and extended at the resolution of the ACA to provide a relative calibration, as discussed by \cite{Francis2020}. The remaining three YSOs (SMM 3, SMM 4, and SMM 11) are thus the only ones used for relative calibration, and their brightness has remained stable (RMS $<2\%$) over the lifetime of the JCMT Transient Survey. The SMA observations were taken with the specific goal of capturing the 2021 outburst of EC 53 with a high cadence, and thus only targeted EC 53 and the 3 stable calibrator sources used by the ACA (see Table \ref{tab:targets}). \begin{deluxetable*}{lhllcc}[h] \label{tab:targets} \tablecaption{Science Targets and YSO Calibrators} \tablehead{\colhead{Name} & \nocolhead{ALMA Name} & \colhead{Other mm Source Names} & \colhead{ACA field center (ICRS)}\tablenotemark{a} & \colhead{SMA Target?} & \colhead{Target Type}} \startdata EC 53 (V371 Ser) & Serpens\_Main\_850\_02 & Ser-emb 21 & 18:29:51.18 +01:16:40.4 & Y & V \\ Serpens SMM 1 & Serpens\_Main\_850\_00 & Ser-emb 6, FIRS1 & 18:29:49.79 +01:15:20.4 & N & V \\ Serpens SMM 9 & Serpens\_Main\_850\_01 & Ser-emb 8, SH2-68N & 18:29:48.07 +01:16:43.7 & N & V\tablenotemark{b} \\ Serpens SMM 10 IR & Serpens\_Main\_850\_03 & Ser-emb 12 & 18:29:52.00 +01:15:50.0 & N & V \\ Serpens SMM 2 & Serpens\_Main\_850\_10 & Ser-emb 4(N) & 18:30:00.30 +01:12:59.4 & N & S\tablenotemark{c} \\ Serpens SMM 3 & Serpens\_Main\_850\_09 & - & 18:29:59.32 +01:14:00.5 & Y & S \\ Serpens SMM 4 & Serpens\_Main\_850\_08 & - & 18:29:56.72 +01:13:15.6 & Y & S \\ Serpens SMM 11 & Serpens\_Main\_850\_11 & - & 18:30:00.38 +01:11:44.6 & Y & S \enddata \tablecomments{V=JCMT Variable; S=JCMT Stable, intended to be used for calibration. 
\tablenotetext{a}{Our shared SMA and ACA targets have the same field center.} \tablenotetext{b}{Serpens SMM 9 was originally identified as stable by the JCMT Transient Survey \citep{Johnstone2018} but has since been found to exhibit moderate variability \citep{Lee2021}.} \tablenotetext{c}{Serpens SMM 2 is bright and stable in the JCMT observations, but is too faint and extended at the resolution of the ACA to be used as a calibrator.} } \end{deluxetable*} \subsection{ALMA ACA 850 $\mu$m Observations} \label{ssec:alma_obs} Our ALMA programs (2018.1.00917.S, 2019.1.00475.S, PI: Logan Francis) observe the targets in Table \ref{tab:targets} using the ACA, also known as the Morita Array, a sub-array of ALMA consisting of twelve 7m diameter antennas in a fixed configuration. We acquired 10 epochs of observations over 3 years, typically taken 1-3 months apart, except for a $\sim$1.5 year gap between the first 7 and last 3 due to the shutdown of ALMA in 2020. The details of the observations for each epoch are summarized in Table \ref{tab:aca_obs}. For the first 7 epochs, the ACA correlator was configured in time division mode using the default Band 7 continuum settings, which provides an integration time of 1.01s and sets the local oscillator frequency to 343.5 GHz with four spectral windows centered at 336.5 GHz, 338.5 GHz, 348.5 GHz, and 350.5 GHz. Each spectral window provides 1.875 GHz of bandwidth across 128 channels, for a total bandwidth of 7.5 GHz. For the last 3 epochs, we requested the frequency division mode of the correlator with improved spectral resolution for better emission line characterization. Specifically, in these epochs the spectral windows have the same bandwidth and frequency center, but 2048 channels across 1.875 GHz of bandwidth, the trade-off being a longer integration time of 10.1s. The typical resolution of our observations is $\sim 4\arcsec$. 
Our continuum sensitivity is typically $\lesssim 1$ mJy per epoch; in Table \ref{tab:aca_obs} we also provide values of the sensitivity for individual epochs estimated using the visibility weights with the \texttt{CASA} task \texttt{apparentsens}. Our visibility data for all epochs were calibrated in \texttt{CASA}\footnote{\url{https://casa.nrao.edu/}} 5.6.1 \citep{McMullin2007} using the ALMA pipeline. For each target, we extracted the continuum by creating dirty image cubes, then sigma-clipping the spectra measured in a 3\arcsec\ diameter aperture centered on the brightest source to determine the line-free channels. To produce deep and high fidelity images of our targets (Figure \ref{fig:aca_deep_gallery}), we first combined the data from every epoch and brought the amplitude scales of the visibilities into agreement using the relative calibration factors discussed in Section \ref{sec:rel_calibration}, then created images with the \texttt{tclean} task in \texttt{CASA} using the multifrequency synthesis deconvolution and a Briggs robust weighting of 0.5. We performed phase-only self-calibration for each epoch and all of our targets except Serpens SMM 2, which was too faint to obtain useful gain solutions. Self-calibration can slightly increase the measured flux of our targets by reducing the effect of phase decorrelation and is important for accurate determination of the average flux scale, but does not significantly affect the relative calibration accuracy \citep{Francis2020}. Three rounds of self-calibration were applied using solution intervals of a scan length, 20.2s, and 5.05s (first 7 epochs) or 10.1s (last 3 epochs). 
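The sigma-clipping step used to identify line-free continuum channels can be sketched as follows. This is our own minimal illustration (the function name and the $3\sigma$/iteration choices are ours, not those of the pipeline), not the actual reduction script:

```python
import numpy as np

def line_free_channels(spectrum, nsigma=3.0, max_iter=10):
    """Iteratively sigma-clip an aperture-averaged spectrum to flag
    line channels; returns a boolean mask of line-free channels."""
    spectrum = np.asarray(spectrum, dtype=float)
    mask = np.ones(spectrum.size, dtype=bool)
    for _ in range(max_iter):
        mu, sigma = spectrum[mask].mean(), spectrum[mask].std()
        new_mask = np.abs(spectrum - mu) <= nsigma * sigma
        if np.array_equal(new_mask, mask):  # converged
            break
        mask = new_mask
    return mask
```

Channels flagged by the mask would then be excluded when forming the continuum visibilities.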
\begin{deluxetable}{cccccccc} \label{tab:aca_obs} \tablecaption{ACA Observations} \tablehead{\colhead{Date\tablenotemark{a}} & \colhead{Antennas} &\colhead{$uv$ range (m)} & \colhead{Beam Size (\arcsec)\tablenotemark{b}} & \colhead{Sensitivity (mJy)} &\colhead{Spectral Setup\tablenotemark{c}} & \colhead{Calibrators\tablenotemark{d}}} \startdata 2018-08-14 & 10 & 7.0 - 45.1 & 5.8 $\times$ 2.7 & 0.63 & A & J1924-2914, J1851+0035 \\ 2019-03-06 & 11 & 8.1 - 44.2 & 5.3 $\times$ 2.3 & 0.67 & A & J1751+0939, J1743-0350 \\ 2019-04-07 & 11 & 7.8 - 47.2 & 5.8 $\times$ 2.2 & 0.52 & A & J1517-2422, J1751+0939 \\ 2019-05-18 & 11 & 8.7 - 44.9 & 4.9 $\times$ 2.8 & 0.50 & A & J1924-2914, J1743-0350 \\ 2019-08-04 & 9 & 7.3 - 44.9 & 4.8 $\times$ 2.9 & 0.41 & A & J1924-2914, J1743-0350 \\ 2019-09-20 & 9 & 7.3 - 45.4 & 6.4 $\times$ 2.8 & 0.51 & A & J1924-2914, J1851+0035 \\ 2019-10-29 & 11 & 7.0 - 44.7 & 5.2 $\times$ 2.9 & 0.43 & A & J1924-2914, J1851+0035 \\ 2021-07-01 & 8 & 8.5 - 42.6 & 5.4 $\times$ 2.6 & 0.63 & B & J1924-2914, J1851+0035 \\ 2021-08-02 & 8 & 8.2 - 42.0 & 5.5 $\times$ 3.5 & 0.95 & B & J1924-2914, J1851+0035 \\ 2021-09-05 & 8 & 7.2 - 44.6 & 5.8 $\times$ 3.4 & 0.62 & B & J1924-2914, J1851+0035 \\ \enddata \tablecomments{ \tablenotetext{a}{The dates provided are at the start date of each track in UTC time in year-month day format.} \tablenotetext{b}{Beam size is the full width at half maximum of the major $\times$ minor axes of the restoring beam in tclean.} \tablenotetext{c}{Our two spectral setups both have a local oscillator frequency of 343.5 GHz and four 2GHz wide spectral windows centered at 336.5 GHz, 338.5 GHz, 348.5 GHz, and 350.5 GHz. The correlator mode differs, however, and the spectral resolution and integration time for each setup are:\\ A: Time division mode, integration time 1.01s, 128 channels. 
\\ B: Frequency division mode, integration time 10.1s, 2048 channels \\} \tablenotetext{d}{The sources listed are in the order: flux and bandpass calibrator, gain calibrator. } } \end{deluxetable} \subsection{SMA 1.3 mm Observations} \label{ssec:sma_obs} Our observations with the SMA, an 8 element interferometer with 6m dishes, consist of 16 tracks (2020B-S044, 2021A-S056, PI: Logan Francis) used to simultaneously monitor EC 53 during its 2021 accretion outburst and 3 additional YSO calibrators (Table \ref{tab:targets}). The details of each track are summarized in Table \ref{tab:sma_obs}. Our observations were nominally requested to use either the compact or subcompact array configuration, and the 230/240 GHz RxA/RxB receiver combination with local oscillator tunings of 232.5 GHz and 244.5 GHz; however, to facilitate scheduling some flexibility was allowed, and any local oscillator tuning of the receivers within $238.5\pm10$ GHz was permitted for all but the first and final epochs. The typical beam size is $\sim 3\arcsec$, while the continuum sensitivity is $\lesssim 1$ mJy. We estimate the continuum sensitivity for each epoch in the same manner as for the ACA observations, the values of which are provided in Table \ref{tab:sma_obs}. The observations were performed using the upgraded SWARM correlator \citep{Primiani2016}, which simultaneously processes data from both receivers. The centers of the upper and lower sideband of each receiver are separated from the local oscillator frequency by $\pm10$ GHz; each sideband is divided into 6 spectral windows with 2GHz bandwidth and 140kHz channels each, providing a total processed bandwidth of up to 48 GHz from both receivers. Several initial epochs were taken in early 2021 to establish the pre-outburst flux of EC 53; after the outburst was detected to have begun at the SMA and JCMT in mid-April, an approximate 10 day cadence for the remaining observations during the EC 53 rise was requested. 
The compact configuration of the SMA was used for all but the final observation, which used the sub-compact configuration instead. For most epochs, only 6 of the 8 SMA antennas were available. Data calibration was performed using standard SMA procedures with the \texttt{MIR} software\footnote{\url{https://lweb.cfa.harvard.edu/~cqi/mircook.html}}, which we briefly describe here. The raw data was first binned down by a factor of 8 in spectral resolution using the SMARechunker tool\footnote{\url{https://github.com/Smithsonian/SMARechunker}}. This binning reduces the computational resources needed for reduction while still allowing good differentiation between emission lines and continuum. Baseline correction was performed if needed, and periods of noisy/corrupted data were flagged out. Spectral spikes in the data were corrected, followed by system temperature correction and bandpass calibration. Flux calibration was performed by measuring the gain calibrator flux using the brightest solar system object available during the track, then transferring the flux of the gain calibrator to the science targets during phase and amplitude calibration. The final data for each science target was exported to \texttt{CASA} measurement set format. Deep images of each SMA target were constructed by the same procedure as for the ACA observations (Section \ref{ssec:alma_obs}), by first extracting the continuum from the dirty cubes, then applying relative calibration factors for each SMA epoch (Section \ref{sec:rel_calibration}) and imaging the combined data from all 16 epochs using \texttt{tclean} with a Briggs robust value of 0.5. Due to the lower S/N on individual integrations with the SMA compared with the ACA, we did not apply self-calibration to our SMA observations. The final deep images are shown in Figure \ref{fig:sma_deep_gallery} in the Appendix. 
The visual appearance of our shared targets at the SMA and ACA is similar; however, the SMA resolution and imaging fidelity are somewhat better than at the ACA, owing to the improved uv-coverage provided by the more favorable location of the SMA for targets near the celestial equator, additional epochs, and longer observation lengths, typically 5-8\,hrs for each SMA track versus $\sim$1 hr for the ACA. Additionally, the primary beam of the SMA is much larger than that of the ACA ($\sim 55\arcsec$ vs $\sim 30\arcsec$), providing better sensitivity to sources away from the field center. \begin{deluxetable}{ccccccc} \tablecaption{SMA Observations} \label{tab:sma_obs} \tablehead{\colhead{Date} & \colhead{Antennas} &\colhead{$uv$ range (m)} & \colhead{Beam Size (\arcsec)} & \colhead{Sensitivity (mJy)} & \colhead{Spectral Setup\tablenotemark{a}} & \colhead{Calibrators\tablenotemark{b}}} \startdata 2021-02-19 & 6 & 16.2 - 75.3 & 2.8 $\times$ 2.3 & 0.94 & A & Vesta, 3c279 \\ 2021-03-05 & 7 & 11.6 - 74.9 & 3.1 $\times$ 2.7 & 0.38 & B & Vesta, 3c279 \\ 2021-03-19 & 8 & 14.6 - 74.8 & 2.9 $\times$ 2.7 & 0.34 & B & Vesta, 3c279 \\ 2021-04-02 & 7 & 14.5 - 75.8 & 3.0 $\times$ 2.6 & 0.39 & B & MWC349a, 3c279 \\ 2021-04-18 & 6 & 16.4 - 72.0 & 2.9 $\times$ 2.5 & 0.44 & C & MWC349a, 3c279 \\ 2021-05-29 & 6 & 13.9 - 75.9 & 3.2 $\times$ 2.4 & 0.32 & B & Callisto, 3c279 \\ 2021-06-09 & 6 & 15.9 - 76.0 & 3.0 $\times$ 2.5 & 0.27 & B & Ganymede, 3c279 \\ 2021-06-18 & 6 & 15.2 - 76.2 & 3.0 $\times$ 2.4 & 0.47 & B & MWC349a, 3c279 \\ 2021-06-27 & 6 & 16.6 - 75.4 & 2.9 $\times$ 2.5 & 0.48 & B & MWC349a, 3c84 \\ 2021-07-06 & 6 & 14.5 - 76.2 & 2.9 $\times$ 2.4 & 0.65 & B & Titan, 3c84 \\ 2021-07-18 & 6 & 14.5 - 75.8 & 2.8 $\times$ 2.4 & 0.31 & B & Callisto, 3c84 \\ 2021-08-03 & 5 & 10.4 - 59.5 & 5.1 $\times$ 3.0 & 0.55 & B & Callisto, 3c84 \\ 2021-08-11 & 6 & 14.7 - 74.6 & 2.8 $\times$ 2.3 & 0.61 & A & Vesta, 3c279 \\ 2021-08-12 & 6 & 14.5 - 76.0 & 2.8 $\times$ 2.5 & 0.84 & A & Callisto, 3c279 \\ 
2021-08-18 & 6 & 14.5 - 69.1 & 5.2 $\times$ 2.4 & 0.36 & A & Callisto, BL Lac \\ 2021-09-21 & 7 & 8.1 - 69.0 & 3.5 $\times$ 2.7 & 0.31 & A & Callisto, BL Lac \\ \enddata \tablecomments{ The date and beam size are formatted in the same manner as Table \ref{tab:aca_obs}. \tablenotetext{a}{Our three spectral setups are as follows:\\ A: Our requested default tuning, used for first and last 4 epochs. LO Tunings: Rx230 = 232.500 GHz, Rx240 = 244.500 GHz\\ B: A ``Standard" SMA continuum tuning, used for most epochs. LO Tunings: Rx230 = Rx240 = 225.538 GHz\\ C: Unique tuning used for 5th epoch only. LO Tunings: Rx230 = Rx240 = 215.100 GHz} \tablenotetext{b}{The sources listed are in the order: flux calibrator, bandpass calibrator. We also use 1743-038 for gain calibration in every epoch.} } \end{deluxetable} \subsection{JCMT 850 $\mu$m Observations} \label{ssec:jcmt_obs} The JCMT Transient Survey \citep[M16AL001, M20AL007;][]{Herczeg2017,Lee2021} monitors eight Gould Belt star forming regions in the 450 and 850 $\mu$m continuum bands using the SCUBA-2 instrument \citep{Holland2013}. Each map is observed to a uniform depth $\sim$ 12 mJy per beam at 850 $\mu$m for ease of comparison across epochs \citep[see][]{Mairs2017Cal}. The weather sensitivity of the 450 $\mu$m observations, however, yields a large, order of magnitude, range in observation depth, making the shorter wavelength monitoring significantly more complicated (Mairs et al.\ in preparation). In this paper we concentrate on the 850 $\mu$m measurements and only briefly discuss the 450 $\mu$m light curve for EC 53 (Section \ref{ssec:caveats}). Each star forming region is observed with a 30 arcminute diameter circular footprint, using the Pong 1800 scanning mode \citep{kackley2010}. For Serpens Main the map is centered at (R.A., Decl.) = (18:29:49,+01:15:20, J2000). 
All of our ACA and SMA targets are in this region, which, when visible, has been observed at a monthly or better cadence since February 2016, and with an approximate one week cadence during the 2021 outburst of EC 53. The effective beam size at the JCMT is 14.4\arcsec\ at 850 $\mu$m and 10.0\arcsec\ at 450 $\mu$m \citep{mairs2021}; however, to better measure the peak fluxes and account for variations in the telescope beam between epochs, the maps are convolved to 15.6\arcsec\ and 10.8\arcsec\ at 850 and 450 $\mu$m, respectively. The enhanced, second generation, JCMT relative flux calibration strategy is described in detail by {Mairs et al.\ (in preparation)} and offers a small improvement over the original relative calibration scheme \citep{Mairs2017Cal}. This calibration method is similar to that described below for the ACA and SMA. However, given the large number of JCMT epochs to date (81 as of March 14, 2022), every non-robust variable in the region is included as an epoch calibrator, weighted by its flux uncertainty across all epochs. An iterative approach is used to determine the best relative flux calibration for each epoch. For Serpens Main, the epoch calibration uncertainty at 850 $\mu$m is measured to be $1.5\%$. The expected flux uncertainty for a source in a given epoch includes both the map noise, 12 mJy\,bm$^{-1}$, and the flux calibration uncertainty applied to the source, added in quadrature. A deep image of Serpens Main is produced by stacking all the JCMT epochs. At 850 $\mu$m this results in an image with a noise $\sim$ 1.1 mJy\,bm$^{-1}$. The central region of Serpens Main is presented in Figure \ref{fig:finder_chart} along with the locations of the ACA and SMA targets. The four stable calibrator sources (Table \ref{tab:targets}) have measured fractional RMS brightness values across the 81 monitored epochs of 1.5\%, with the exception of SMM 2 (1.8\%). 
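The convolution to a common effective beam follows from the fact that Gaussian FWHMs add in quadrature under convolution: smoothing a map with native beam $\theta_\mathrm{b}$ by a Gaussian kernel of FWHM $\theta_\mathrm{k}$ yields an effective beam of $\sqrt{\theta_\mathrm{b}^2 + \theta_\mathrm{k}^2}$. A short sketch (our own helper, assuming Gaussian beams) of the kernel sizes implied by the numbers above:

```python
import math

def smoothing_kernel_fwhm(target_fwhm, beam_fwhm):
    """FWHM of the Gaussian kernel that smooths a map with a (Gaussian)
    beam of FWHM `beam_fwhm` to an effective beam of FWHM `target_fwhm`,
    using the quadrature addition of Gaussian FWHMs under convolution."""
    if target_fwhm < beam_fwhm:
        raise ValueError("target beam must be at least as large as the native beam")
    return math.sqrt(target_fwhm**2 - beam_fwhm**2)
```

For the values quoted here, the implied kernels are $\sqrt{15.6^2 - 14.4^2} = 6.0\arcsec$ at 850 $\mu$m and $\sqrt{10.8^2 - 10.0^2} \approx 4.1\arcsec$ at 450 $\mu$m.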
\section{ACA and SMA Data Reduction} \label{sec:dr} \subsection{Relative Flux Calibration} \label{sec:rel_calibration} Flux calibration at submm/mm wavelengths is typically accurate to 10-20\%, as calibration is usually performed with solar system objects or quasars, which have uncertain fluxes owing to difficulties in modeling the emission and to their inherent time variability \citep[see for example,][]{Francis2020}. To achieve high precision measurements of protostellar variability, our ACA and SMA observations use a relative flux calibration strategy pioneered for the JCMT Transient Survey \citep{Mairs2017Cal}, whereby a number of bright and stable calibrator YSOs are used to correct the flux scales at each epoch. This calibration strategy, applied to interferometric observations, is described extensively by \citet{Francis2020}, where it was used to independently assess the calibration accuracy of the ALMA pipeline. Here, we provide an overview of this method as applied to our ACA and SMA observations. Our ACA and SMA relative calibrations both use SMM 3, SMM 4, and SMM 11 (Table \ref{tab:targets}). Although the ACA observed two additional sources that were initially classified as stable in the JCMT Transient Survey, SMM 2 and SMM 9, we do not include them in the calibration. While SMM 2 is bright and stable in the Transient Survey owing to the large $\sim15\arcsec$ JCMT beam, at the $\sim4\arcsec$ ACA resolution it is too faint and extended to obtain a useful calibration. Serpens SMM 9 was originally identified as stable in the JCMT Transient Survey \citep{Johnstone2018} but after four years of monitoring was found to exhibit moderate variability \citep{Lee2021}, and we thus also exclude it as a calibrator. To obtain a relative calibration, we first perform a $\chi^2$ fit of a point source model in the $uv$-plane to each calibrator and epoch using the \texttt{lmfit} library\footnote{\url{https://lmfit.github.io/lmfit-py/}}.
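A point source at the phase centre has a constant, purely real visibility equal to its flux, so the per-epoch fit reduces to a one-parameter least-squares problem. The paper performs this step with \texttt{lmfit}; the sketch below uses \texttt{scipy.optimize.least\_squares} instead to stay self-contained, and the flux and noise values are made up:

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic visibilities for a point source at the phase centre:
# V(u,v) = flux (constant and real) on every baseline, plus thermal noise.
rng = np.random.default_rng(0)
true_flux = 0.8  # Jy; illustrative value, not a measured flux
vis = true_flux + rng.normal(0.0, 0.05, 500) + 1j * rng.normal(0.0, 0.05, 500)

def residual(p):
    # Stack real and imaginary residuals so the optimizer sees real numbers.
    r = vis - p[0]
    return np.concatenate([r.real, r.imag])

fit = least_squares(residual, x0=[0.5])
fitted_flux = fit.x[0]  # recovers approximately 0.8 Jy
```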
For the SMA data, we additionally fit visibility data from each receiver (Rx230 and Rx240) independently to take into account possible systematic differences between instruments. Our choice of a point source model is reasonable, as at the resolution of the ACA and SMA, our chosen calibrator sources are approximately point-like with some much fainter extended emission (see Figures \ref{fig:aca_deep_gallery} and \ref{fig:sma_deep_gallery}), and moreover, we find it sufficient for precise relative calibration. Performing our model fitting in the $uv$-plane also renders error analysis more tractable by avoiding the uncertainties common to image reconstruction algorithms. For our ACA observations, we proceed by using our point source fits to calculate the mean flux across all epochs for each calibrator. For each epoch and calibrator, we determine the ratio required to correct its flux to the mean across epochs. Then, we calculate a relative flux calibration factor for each epoch by taking the average across calibrators. Our procedure for the SMA data is the same, but performed independently for each receiver. The resulting relative flux calibration factors for all of our ACA and SMA epochs are shown by the black symbols in Figures \ref{fig:aca_calibration} and \ref{fig:sma_calibration} respectively. By examining the distribution of all mean correction factors (colored symbols) across calibrators, we estimate the uncertainty in our relative flux calibration factors to be $3\%$ for the ACA, the same accuracy achieved using the first 7 epochs by \cite{Francis2020}, and 2.5\% (per receiver) for the SMA. For both telescopes, the range of the mean correction factors is $\sim20$\%, while the standard deviation is 7-10\%, as expected. Our relative calibration method thus allows us to construct light curves for our targets that are sensitive to much smaller relative variations in flux than would otherwise be possible with conventional mm flux calibration strategies.
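The per-epoch calibration factors described above amount to a few array operations. A minimal sketch, with invented fluxes for three calibrators (rows are epochs, columns are calibrators):

```python
import numpy as np

# Hypothetical point-source fluxes (Jy); values are illustrative only.
fluxes = np.array([
    [1.02, 0.51, 0.76],   # epoch 1
    [0.98, 0.49, 0.73],   # epoch 2
    [1.05, 0.52, 0.78],   # epoch 3
])

mean_flux = fluxes.mean(axis=0)        # mean across epochs, per calibrator
ratios = mean_flux / fluxes            # per-epoch correction, per calibrator
cal_factors = ratios.mean(axis=1)      # relative calibration factor per epoch
```

Multiplying each epoch's measured fluxes by its calibration factor brings the calibrators back to their long-term means; the scatter of the ratios within an epoch gives an estimate of the calibration uncertainty.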
\subsection{Calibrated Flux Measurements of Known JCMT Variables} In order to accurately measure the flux of our variable targets (EC 53, SMM 1, SMM 10) in the $uv$-plane at the scale of the beam size in our ACA and SMA observations, we model and remove the extended emission (i.e., the outer envelope and outflow cavity edges), which we assume to exhibit little to no variability. This approach avoids systematic errors resulting from biasing the flux of the central source towards higher values when including the high amplitude visibilities on the shortest baselines. For the purposes of our relative calibration, we have found that point source models alone are sufficient for delivering a high calibration accuracy. We construct a model for the extended emission by separating out the bright central source in the deconvolved deep images of our targets which combine data from all epochs (see Figures \ref{fig:aca_deep_gallery} and \ref{fig:sma_deep_gallery}). This is carried out by examining the distribution of flux over all pixels in the \texttt{.model} files produced by \texttt{tclean}, which contain the collection of point sources representing real emission before convolution by the clean beam. We first remove any model pixels with negative flux which may have been introduced during the cleaning process, and then identify the bright outlier pixels representing the central source by assigning a flux cutoff threshold using a multiple of the standard deviation of the pixel distribution, typically between 15-60$\sigma$. Because the default \texttt{tclean} algorithm represents all emission using point sources, models of extended structures usually appear patchy or speckled. We thus smooth the extended structure with a small circular Gaussian of approximately half the beam size: $1.5\arcsec$ for the SMA and $2\arcsec$ for the ACA. Finally, we transform our smoothed images of the extended emission to the $uv$-plane using the \texttt{galario} library.
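The separation of the bright central source from the faint extended emission in the \texttt{tclean} model image can be sketched as follows (a simplified, hypothetical implementation: the threshold and the smoothing width, expressed here in pixels rather than arcseconds, are placeholders):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def split_model_image(model, nsigma=30.0, smooth_pix=3.0):
    """Split a clean-component (tclean .model) image into a bright central
    source and a smoothed extended-emission map, as described in the text."""
    m = np.where(model > 0.0, model, 0.0)     # drop negative components
    cutoff = nsigma * m.std()                 # bright-outlier threshold
    bright = np.where(m > cutoff, m, 0.0)     # central point source(s)
    extended = gaussian_filter(m - bright, smooth_pix)  # smooth the residual
    return bright, extended
```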
We fit the visibility data without relative calibration for each target and epoch (and for the SMA, each receiver) using a model consisting of the extended emission model plus a point source representing the bright central source. The extended emission model is matched to the flux scale of each epoch using the relative flux calibration factors and held fixed, while the amplitude of the point source is allowed to vary. Although the extended emission may also exhibit variability as changes in accretion luminosity propagate to the outer envelope, we assume its flux to be constant as it is faint in comparison to the central source. We perform our $\chi^2$ fit of the models in the $uv$-plane using \texttt{lmfit}, and then use the relative flux calibration factors to correct the resulting point source flux scales. \subsection{Light Curves of Known JCMT Variables} \label{sec:light_curves} The final light curves for all the variable targets observed by the ACA are shown in Figure \ref{fig:aca_lc}. The error bars include both the source measurement uncertainty as well as the 3\,\% relative calibration uncertainty. For comparison, we overlay the contemporaneous JCMT 850 $\mu$m light curves of the same targets, with the data scaled such that the mean of the JCMT light curve is equal to the mean at the ACA, indicated by the dashed horizontal line. Similarly, we show the light curves from the SMA and JCMT for EC 53 during the 2021 outburst in Figure \ref{fig:sma_lc}. The average SMA flux across receivers is shown on the left-hand axis. On the right-hand axis, the normalized flux for the SMA and JCMT is shown, which is calculated by normalizing the data to the minimum observed flux during the outburst. These minima occur on 2021-04-02 and 2021-04-08 for the SMA and JCMT respectively. At the JCMT, Serpens SMM 1 has steadily brightened from early 2019 to Spring 2021, followed by a somewhat steeper decay until the end of 2021.
Our ACA light curve of SMM 1 is consistent with a constant flux within observational uncertainties. In the most recent ACA observations, we either caught the decline earlier than at the JCMT owing to the light travel time through the envelope (see Section \ref{sec:further_modeling}), or have simply not significantly detected the rise in flux given the relatively larger uncertainty in the ACA flux (3\,\%) compared to the JCMT ($<2$\,\%). In the ACA observations of EC 53, we detect a decline in amplitude from late 2018 until the beginning of its outburst in late 2019, and a rise and fall in flux over 3 months of observations roughly centered on the burst peak as seen by the JCMT. The overall amplitude of the EC 53 light curve appears twice as strong as that seen by the JCMT when comparing the 2019 minimum flux and peak in Summer 2021. The individual outbursts of EC 53 vary in amplitude \citep{Lee2020yh}, so the lack of data near the 2019 outburst peak and 2021 outburst minimum precludes a robust comparison of the observed response to individual bursts between the ACA and JCMT (see Section \ref{ssec:caveats}). At the SMA, the outburst of EC 53 is observed to peak weeks earlier and decline more rapidly than in the JCMT observations. Furthermore, the amplitude of the outburst in the SMA observations, $\sim$35\%, is stronger than for the JCMT, $\sim$30\%, though this difference is only somewhat larger than the uncertainty in the flux measurements of a few percent. The variability of SMM 10 at the JCMT is characterized by a many-month brightening and fading event \citep{Lee2021} that peaked in late 2017 and early 2018, before the ACA observations. Since that time SMM 10 has remained a stochastic source, where the standard deviation of the flux, $3\%$, exceeds the variations expected from noise and calibration errors, $\sim$1.5\%.
Curiously, in the contemporaneous ACA observations we detect much stronger evidence of variability: the standard deviation of the ACA data is 0.014 Jy\,bm$^{-1}$ or $\sim10$\,\%, a factor of 2 greater than that seen for the JCMT data. This variability is uncorrelated with the JCMT light curve but exhibits a similar stochastic behaviour. \section{Modeling the SMA and JCMT Light Curves of EC 53} \label{sec:ec53_toy_modeling} The amplitude and structure of the ACA and SMA light curves in comparison with the simultaneous JCMT observations (Figures \ref{fig:aca_lc} and \ref{fig:sma_lc}) suggest that the spatial resolution of the observations, and thus the physical envelope scales probed, play an important role in determining the observed light curve. While the ACA and SMA probe the inner envelope ($\sim$1500\,au) response to accretion variability where densities and temperatures are higher, the much larger beam of the JCMT includes a significant amount of colder and less dense envelope material ($\sim$6000\,au). This material is additionally subject to heating of the outer envelope by the external interstellar radiation field, and is located at much larger light travel times from the central protostar, which may have a significant effect on the shape of the JCMT light curve. In this section, we present our toy model for the EC 53 envelope structure and periodic outburst to explain the observed time delay between the SMA and JCMT. We then explore how varying the parameters of the toy model affects the observed light curves for EC 53. \label{sec:toy_model} \subsection{EC 53 Model Description} We consider a protostar with a varying luminosity surrounded by a spherically symmetric envelope with a fixed density structure.
Based on the previous radiative transfer modeling of EC 53 by \cite{Baek2020}, the envelope is assumed to have a radial power law density profile, \begin{equation} \rho(r) = \rho_0 (r/r_0)^\alpha, \label{eqn:density} \end{equation} truncated at an inner and outer radius, $r_\mathrm{in}$ and $r_\mathrm{out}$. The dust temperature profile follows a radial power law with a floor temperature below which the envelope can not fall, mimicking the effects of external heating by the (local) interstellar radiation field: \begin{equation} T(t,r) = \begin{cases} T_0(t)\left[r / r_0\right]^{-2/(4 + \beta_\mathrm{em})},& \text{if } T\geq T_\mathrm{floor}\\ T_\mathrm{floor}, & \text{otherwise} \end{cases}. \label{eqn:dust_temp} \end{equation} The power law portion of this temperature profile implicitly assumes the dust opacity also follows a power law with frequency, $\kappa_\nu \propto \nu^{\beta_\mathrm{em}}$, over the frequencies $\nu$ responsible for the bulk of the dust emission. Following \citet[][Section 6.2]{ContrerasPena2020}, the dust temperature profile in the envelope is related to the steady state luminosity by \begin{equation} T_0(t) = C\left(\frac{L(t)}{L_\odot}\right)^{(1/(4 + \beta_\mathrm{em}))}, \label{eqn:T_0_steady} \end{equation} where $L(t)$ is the time dependent accretion luminosity of the source and $C$ is a conversion constant which depends on the assumed dust properties and the reference radius $r_0$. When the accretion luminosity of the protostar changes, the time required for the light pulse to travel outward through the envelope and heat the dust will result in a lag in the observed dust temperature response. \cite{Johnstone2013} performed analytic radiative transfer calculations of the propagation of accretion bursts through protostellar envelopes, and found the light travel time to be significantly longer than the dust heating timescale. We therefore assume that changes in the dust temperature propagate at the speed of light. 
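The temperature prescription (equations \ref{eqn:dust_temp} and \ref{eqn:T_0_steady}) collapses into a short function. A sketch using the fiducial values adopted later in the text ($C = 63$ K at $r_0 = 100$ au, $\beta_\mathrm{em} = 1.5$, $T_\mathrm{floor} = 24$ K):

```python
import numpy as np

def dust_temperature(r_au, L_lsun, C=63.0, r0_au=100.0, beta_em=1.5,
                     T_floor=24.0):
    """Envelope dust temperature: a radial power law scaled by the
    luminosity, with a floor mimicking external heating by the
    interstellar radiation field (equations 2-3 in the text)."""
    T0 = C * L_lsun ** (1.0 / (4.0 + beta_em))  # temperature at r0
    T = T0 * (np.asarray(r_au) / r0_au) ** (-2.0 / (4.0 + beta_em))
    return np.maximum(T, T_floor)
```

With the quiescent luminosity $L = 3.6\,L_\odot$ this profile reaches the floor near $r \sim 2700$ au, consistent with the radius quoted in the parameter-space discussion below.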
Photons emitted by the heated dust will then reach the observer with various lags, depending on their place of origin in the envelope. To simplify our modeling of the envelope, we use a cylindrical coordinate system with azimuthal symmetry, where the positive $x$ direction points towards the observer, the positive $z$ direction is North on the sky, and the distance from the protostar is $r=\sqrt{x^2+z^2}$. We then calculate a lookback time for any position in the envelope as a combination of the time for light to propagate from the protostar to that location, $r/c$, minus the travel time from the relative offset of the position along the line of sight, $x/c$. Combined, \begin{equation} t_\mathrm{lb} = (r - x)/c, \label{eqn:tlb} \end{equation} where $t_\mathrm{lb}=0$ is the time at which photons from a burst first reach the observer. During the burst, the observer sees a given position in the envelope to be experiencing a luminosity from the protostar of $L(t-t_\mathrm{lb})$, which modulates the dust temperature response, equation \ref{eqn:T_0_steady}, such that \begin{equation} T_0(t) = C \left(\frac{L(t-t_\mathrm{lb})}{L_\odot}\right)^{(1/(4 + \beta_\mathrm{em}))}. \label{eqn:T_0_mod} \end{equation} To illustrate how the lookback time affects the observed light curve, in Figure \ref{fig:lookback_time} we show lookback time contours in days over the $30000 \times 30000$ au spatial domain used for our models, where a typical envelope of radius of $10000$ au is denoted by a dashed red circle, and the representative sizes of the SMA and JCMT (850 $\mu$m) beams at the distance of Serpens are overlaid. Regardless of the beam size, the timescale for the observer to see the entire spherical envelope respond to an instantaneous burst is simply the light travel time across the envelope diameter, $\approx 115$ days. 
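Equation \ref{eqn:tlb} is a one-liner; the sketch below reproduces the $\approx 115$ day crossing time quoted above for a 10000 au radius envelope:

```python
import numpy as np

C_AU_PER_DAY = 173.145  # speed of light in au/day

def lookback_days(x_au, z_au):
    """Lookback time t_lb = (r - x)/c, with x pointing toward the observer
    and r = sqrt(x^2 + z^2), as in the lookback-time equation in the text."""
    r = np.hypot(x_au, z_au)
    return (r - x_au) / C_AU_PER_DAY
```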
However, the beam size of the telescope will modulate the observed response of the envelope: a larger beam will incorporate more envelope material with larger lookback times $t_\mathrm{lb}$, producing a more pronounced lag in the response relative to a smaller beam size. To compute a model light curve from a given luminosity function, we discretize our model over a 2D $xz$-grid, where $x_j$ and $z_k$ denote the coordinates at the center of each cell, and sample the luminosity at timesteps $t_i$ with interval $\Delta_t$. The grid cells have a constant width of $\Delta_\mathrm{cell}$ in units of au, while the grid is $n_\mathrm{cell}$ and $n_\mathrm{cell}/2$ cells in the $x$ and $z$ directions respectively, owing to the symmetry about the $z=0$ axis. For each timestep, we calculate the density and dust temperature using equations \ref{eqn:density}, \ref{eqn:dust_temp}, and \ref{eqn:T_0_mod}. The envelope is assumed to be optically thin throughout, such that the dust in each model cell emits as a blackbody with intensity \begin{equation} I_\nu(t_i,r_{j,k}) = \kappa_\nu \rho(r_{j,k}) B_\nu(\nu, T(t_i,r_{j,k})), \label{eqn:model_intensity} \end{equation} where $r_{j,k} = \sqrt{x_j^2 +z_k^2}$, $B_\nu$ is the Planck function and $\nu$ is the observing frequency. Given that we are only interested in relative variations in the brightness of the protostellar envelope, fixed quantities in equation \ref{eqn:model_intensity}, such as $\kappa_\nu$, can be ignored or set to unity. The flux is then calculated for a given telescope and instrument by integrating the model intensity in each cell within a representative circular beam: \begin{equation} F(t_i) = \sum_{j=0}^{n_\mathrm{cell}} \sum_{k=0}^{k_\mathrm{beam}} 2\pi z_k I_\nu(t_i,r_{j,k})\Delta_\mathrm{cell}^2, \end{equation} where $k_\mathrm{beam} = \lfloor \frac{\theta_\mathrm{beam}/2}{ \Delta_\mathrm{cell}} \rfloor$ and $\theta_\mathrm{beam}$ is the beam FWHM in au. 
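As a sketch of this integration, the following evaluates the beam sum above for a static envelope using the fiducial density and temperature profiles (lookback-time modulation is omitted, $\kappa_\nu$ and $\rho_0$ are set to unity, and the grid is coarser than in the text; fluxes are in arbitrary units):

```python
import numpy as np

H, KB, C_SI = 6.62607015e-34, 1.380649e-23, 2.99792458e8

def model_flux(nu_hz, beam_fwhm_au, L_lsun=3.6, n_cell=600, dcell_au=50.0,
               r_out=10000.0, alpha=-1.5, beta_em=1.5, C_T=63.0,
               T_floor=24.0):
    """Optically thin flux within a circular beam: the double sum over ring
    volumes 2*pi*z_k*dcell^2 described in the text, for a static envelope."""
    x = (np.arange(n_cell) - n_cell / 2 + 0.5) * dcell_au   # toward observer
    z = (np.arange(n_cell // 2) + 0.5) * dcell_au           # sky direction
    X, Z = np.meshgrid(x, z, indexing="ij")
    r = np.hypot(X, Z)
    rho = np.where(r <= r_out, (r / 100.0) ** alpha, 0.0)   # density profile
    T0 = C_T * L_lsun ** (1.0 / (4.0 + beta_em))
    T = np.maximum(T0 * (r / 100.0) ** (-2.0 / (4.0 + beta_em)), T_floor)
    B = 2 * H * nu_hz**3 / C_SI**2 / np.expm1(H * nu_hz / (KB * T))  # Planck
    k_beam = int((beam_fwhm_au / 2.0) // dcell_au)
    ring = 2.0 * np.pi * z * dcell_au**2                    # ring volumes
    return (rho * B * ring)[:, :k_beam].sum()
```

As expected, a JCMT-sized beam (6540 au FWHM at the distance of Serpens) collects more envelope flux than an SMA-sized one (1308 au) at the same frequency.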
The representative observing frequencies and beam sizes used for our models are listed in Table \ref{tab:model_telescopes}. \begin{deluxetable}{cccc} \tablecaption{Toy Model Observing Properties} \label{tab:model_telescopes} \tablehead{\colhead{Observatory} & \colhead{Beam size (\arcsec)} & \colhead{Beam radius (au)} & \colhead{Frequency (GHz)}} \startdata JCMT 850 \micron & 15 & 3270 & 352.94 \\ JCMT 450 \micron & 10 & 2180 & 666.66 \\ SMA 1300 \micron & 3 & 654 & 230.00 \\ ACA 850 \micron & 4 & 872 & 343.50 \enddata \tablecomments{Beam size is the full width at half maximum. The beam radius is half this value converted to au assuming the model source lies at the distance to Serpens Main of 436 pc \citep{Ortiz-Leon2017}.} \end{deluxetable} \subsection{Fiducial EC53 Envelope Model} The parameters for our fiducial EC 53 model are based on the best-fit 2D radiative transfer models by \citetalias{Baek2020}, with some simplifications. \citetalias{Baek2020} fit the structure of EC 53 using the observed SED during the quiescent phase, and altered the luminosity of the central protostar to reproduce the observed brightening at 850 $\mu$m in the JCMT Transient Survey. We use the same density structure for the envelope as \citetalias{Baek2020} (equation \ref{eqn:density}), with an $r_\mathrm{out}$ of 10000 au and $\alpha=-1.5$. While \citetalias{Baek2020} also included a 90~au circumstellar disk and $20^{\circ}$ opening-angle outflow cavity in their models, we do not include either, as these features are primarily important for variability in the near- to mid-infrared emission and should only have a modest effect on the broad shape and time delay seen in the submm emission at the angular scales probed by the JCMT and ACA. Thus, in this paper we only consider spherically symmetric density profiles, leaving more complex geometries to a future work. 
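The beam radii in Table \ref{tab:model_telescopes} follow from the small-angle relation (1\arcsec\ subtends 1 au at 1 pc):

```python
def beam_radius_au(fwhm_arcsec, distance_pc=436.0):
    """Half the beam FWHM converted to au, using 1 arcsec = 1 au at 1 pc
    and the 436 pc distance to Serpens Main."""
    return fwhm_arcsec / 2.0 * distance_pc
```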
To convert our model luminosities to a dust temperature profile, we set a value for the constant $C$ in equation \ref{eqn:T_0_steady}, which implicitly depends on the details of the dust opacity. We derive a value $C = 63$ K (at $r_0 = 100$ au) from a by-eye fit of the temperature profiles for the \citetalias{Baek2020} radiative transfer models in the outbursting and quiescent phases of EC 53 (Figure 10 of \citetalias{Baek2020}) using equations \ref{eqn:dust_temp} and \ref{eqn:T_0_steady}. For the time-dependent luminosity, we follow the brightness model of EC 53 used by \citet[their equation 23]{Lee2020yh} to interpret the phase-folded mid-infrared and submm light curves of EC 53. This luminosity function uses a periodic exponential rise and decay and is motivated by the shape of the near infrared light curves, which should approximately trace changes in the protostellar luminosity. Thus, our luminosity function is: \begin{equation} L(t) = L_\mathrm{max}\left( e^{-t_\mathrm{mod} / \tau_\mathrm{fall}} + f e^{(t_\mathrm{mod} - P) / \tau_\mathrm{rise}} \right), \label{eqn:EC53_lum} \end{equation} where $P$ is the period, $t_\mathrm{mod} = t\pmod P$, and $f = (1 - e^{-P / \tau_\mathrm{fall}})$ is a scaling factor which ensures a continuous transition between the rise and fall parts of the function. We set the rise and fall timescales to the values determined empirically by \cite{Lee2020yh} from fits to the near infrared light curves of $\tau_\mathrm{rise} = 35$ days and $\tau_\mathrm{fall} = 270$ days. The period and maximum luminosity are allowed to vary freely, however, as the exact duration between past outbursts of EC 53 and their peak brightness exhibit some variability \citep{Lee2020yh}.
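Equation \ref{eqn:EC53_lum} with the fiducial parameters can be evaluated directly; note that the minimum luminosity is not a free parameter but follows from minimizing the function (for $L_\mathrm{max} = 17\,L_\odot$ it comes out near the $3.6\,L_\odot$ quoted below):

```python
import numpy as np

def luminosity(t_days, L_max=17.0, P=573.0, tau_rise=35.0, tau_fall=270.0):
    """Periodic exponential rise-and-decay luminosity function for EC 53
    (in units of L_sun), with the fiducial parameters as defaults."""
    t_mod = np.mod(t_days, P)
    f = 1.0 - np.exp(-P / tau_fall)  # enforces a continuous rise/fall join
    return L_max * (np.exp(-t_mod / tau_fall)
                    + f * np.exp((t_mod - P) / tau_rise))
```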
To compare the effect of our model light curve against the submm observations, we compute the model flux for each telescope in Table \ref{tab:model_telescopes} over a grid with $\Delta_\mathrm{cell} = 10$ au and $n_\mathrm{cell} = 3000$, and sample the luminosity function at timesteps of length $\Delta_t=0.5$ days. Since our model does not include the details of the dust mass and opacity needed to output the brightness in physical units, we normalize the model light curves to their minima. To fit the observations, we vary the date of the peak luminosity ($t=0$ in equation \ref{eqn:EC53_lum}), period, floor temperature, and maximum luminosity of the model to match the observed light curves by eye. We find good agreement using $t_\mathrm{peak} = $ 2021-06-16, $P=573$ days, $T_\mathrm{floor} = 24$ K, and $L_\mathrm{max} = 17 L_\odot$; the corresponding minimum luminosity is $L_\mathrm{min} = 3.6 L_\odot$ and thus $L_\mathrm{max}/L_\mathrm{min} = 4.7$. The parameters of our fiducial model are summarized in Table \ref{tab:ec53_fiducial}, the model luminosity, temperature at $r_0=100$ au, and SMA/JCMT (850 $\mu$m) light curves are shown in Figure \ref{fig:ec_53_fid_fiducial_model}, and comparison of the observed SMA/JCMT and model light curves is given in Figure \ref{fig:ec_53_fid_model_fit}. We do not perform formal model fitting, as our purpose is only to qualitatively reproduce the relative amplitude and time delay of the EC 53 outburst rather than constrain exact parameter values. We have also computed model light curves for the ACA and the JCMT 450 $\mu$m observations, however, comparison with the EC 53 observations is more complicated owing to the paucity of data during the 2021 outburst and additional observational uncertainties; we thus defer our discussion of these models to Section \ref{sec:disc}. Our model light curve is in good agreement with the JCMT 850 $\mu$m observations for both the most recent decay in 2020 and the subsequent outburst in 2021. 
We also reproduce the relative amplitude and earlier rise of the SMA observations with respect to the JCMT, providing strong evidence for modulation of the accretion burst during its propagation through the envelope. Our fiducial model underestimates the rate of brightness decline at the SMA after the burst, however, suggesting that some details of the luminosity function and/or envelope properties are missing, which we explore further in Section \ref{sec:further_modeling}. \begin{deluxetable*}{ll} \tablecaption{EC 53 Fiducial Model} \label{tab:ec53_fiducial} \tablehead{\colhead{Parameter} & \colhead{Value}} \startdata Model Grid & \\ \hline $n_\mathrm{cell}$ & 3000 \\ $\Delta_\mathrm{cell}$ & 10 au \\ \hline Envelope Properties & \\ \hline $r_\mathrm{in}$ & 0 au \\ $r_\mathrm{out}$ & 10000 au \\ $r_\mathrm{0}$ & 100 au \\ $\rho_0$ & 1.0 (arb. units) \\ $\alpha$ & -1.5 \\ $\beta_\mathrm{em}$ & 1.5 \\ $T_\mathrm{floor}$ & 24 K \\ $C$ & 63 K $(r_0 = 100~\mathrm{au})$ \\ \hline Burst Properties and Time Sampling & \\ \hline $L_\mathrm{max}$ & 17 $L_\odot$ \\ $L_\mathrm{min}\tablenotemark{a}$ & 3.6 $L_\odot$ \\ $\tau_\mathrm{rise}$ & 33 days \\ $\tau_\mathrm{fall}$ & 270 days \\ $P$ & 573 days \\ $\Delta_t$ & 0.5 days \\ $t_\mathrm{peak}$ & 2021-06-16 (UTC) \enddata \tablecomments{\tablenotetext{a}{The minimum luminosity is dependent on the other parameters of the luminosity function; see equation \ref{eqn:EC53_lum}.} } \end{deluxetable*} \subsection{EC 53 Fiducial Model Parameter Space Exploration} In this section, we explore how variations in selected parameters of the fiducial EC 53 model affect the relative amplitude and time lag of the observed light curves to evaluate the robustness of the fiducial model and illustrate the role each parameter plays.
\subsubsection{Varying EC 53 Envelope Properties} In Figure \ref{fig:envelope_comp} (top row), we show the effect on the model SMA and JCMT (850 $\mu$m) light curves of varying the envelope floor temperature $T_\mathrm{floor}$, envelope density radial index $\alpha$, outer envelope radius $r_\mathrm{out}$, and inner envelope radius $r_\mathrm{in}$ with the maximum amplitude in each model light curve marked by a triangle. The floor temperature in the model is most important for the JCMT response, as the radius in the fiducial model where $T_\mathrm{floor}=24$ K is reached ($\sim 2700$ au at burst minimum) is fully enclosed within the JCMT beam. With a higher floor temperature of $T_\mathrm{floor}= 30 $ K, this radius is significantly smaller ($\sim 1500$ au), resulting in less of a temperature response from the outer envelope, and therefore significantly reducing the JCMT burst amplitude. A weaker reduction in amplitude is seen for the SMA, as the material near the floor temperature is only located along the front and back of the beam column, rather than the sides as in the case of the JCMT. With a lower floor temperature of $T_\mathrm{floor}= 20$ K the JCMT burst amplitude is significantly increased, as the radius at which $T_\mathrm{floor}$ is reached ($\sim 4300$ au) now occurs well outside the JCMT beam. The time delay between the SMA and JCMT light curves slightly increases with a lower $T_\mathrm{floor}$, due to the increase in the light travel time to radii where the floor temperature is important. We next vary the radial density power-law index (Figure \ref{fig:envelope_comp}, second row from the top). We note that the mass within the envelope is not conserved when varying the density index, however, our normalization of the light curves to the minimum flux removes the differences in envelope luminosity resulting from the differing masses. With a steeper density profile, $\alpha=-2.5$, little change from the fiducial model is seen in the SMA light curve. 
The amplitude of the JCMT light curve increases, however, and there is less delay between its peak and that of the SMA. The modified JCMT light curve is the result of concentrating more of the envelope within the center of the JCMT beam, reducing the influence of cold outer envelope material with temperatures close to $T_\mathrm{floor}=24$ K and longer lookback times. When the density profile is shallower, $\alpha=-0.5$, the peak amplitude for both the SMA and JCMT light curves is significantly reduced and the delay time between the peaks is slightly shorter. The reduction in normalized amplitude occurs because there is a significant increase in the amount of colder dust at large radii, which has a weak brightness response due to the temperature floor. For both the JCMT and SMA, the lag between the protostellar burst and the observed peak increases as the outer envelope becomes more important. This lag is more pronounced for the SMA, resulting in a shortening of the relative lag between the JCMT and the SMA. The effect of changes to the envelope outer radius $r_\mathrm{out}$ (Figure \ref{fig:envelope_comp}, second row from the bottom) depends on its size relative to the model beams (refer to Table \ref{tab:model_telescopes}). For $r_\mathrm{out}= 15000$ au, very little change from the fiducial is seen other than a slight amplitude reduction, which is the result of adding only a small amount of cold material to the front and back of the beam columns along the observer's line of sight. For a smaller than fiducial envelope ($r_\mathrm{out} = 5000$ au), the SMA amplitude only slightly increases, while the JCMT amplitude increases significantly due to the removal of cold material close to the temperature floor. In the extreme case of a very small, $r_\mathrm{out} = 1000$ au envelope, which is fully enclosed within the JCMT beam, the shapes of the JCMT and SMA light curves become nearly identical and there is a negligible time delay.
The JCMT reaches a higher peak amplitude due to the removal of all envelope material at large radii where the lookback times are long and the temperature approaches the floor. While the envelope in our fiducial model extends all the way to the grid center, we experiment with truncating the envelope some distance from the central protostar to determine if envelope substructure, e.g., low density central cavities carved by disks, may have an identifiable effect on the light curve (Figure \ref{fig:envelope_comp}, bottom row). With an inner cavity of radius $r_\mathrm{in} = 100$ au, the light curves for both the SMA and JCMT are unchanged. Despite the radial power law density and temperature structure of the envelope, the submm brightness within 100 au is negligible in comparison with the contribution from the envelope material out to $r=10000$ au along the observer's line of sight and within either beam. For an $r_\mathrm{in} = 500$ au cavity, the removed material becomes more significant, slightly reducing the SMA peak amplitude and shifting the peak date earlier, while the JCMT model light curve is still unchanged. If the cavity is sufficiently large to be resolved by the SMA beam ($r_\mathrm{in} = 1000$ au), a large reduction in the SMA peak amplitude occurs, and a ``kink'' appears in the light curve at $t=0$. The large hole effectively disconnects the emission from the near and far sides into two pulses separated in time, with the near-side modulated by a narrow, 10 day spread, while the far-side begins later and has a much longer response spread (see Figure \ref{fig:lookback_time}). At the scale of 1000 au, the cavity is still unresolved by the JCMT beam, but a slight reduction in peak amplitude occurs. This highlights the possibility for envelope substructure to modulate the response to an accretion burst in an observable way, provided sufficiently high-resolution monitoring.
In reality, outflow cavities are likely significantly more complex than the simple spherical holes we have used here; however, more accurate 3D modeling is beyond the scope of this paper. \subsubsection{Varying EC 53 Burst Properties} \label{ssec:ec53_burst_prop} In Figure \ref{fig:burst_comp} we consider the effect on the model SMA and JCMT (850 $\mu$m) light curves of changing the maximum luminosity, $L_\mathrm{max}$, and the rise timescale, $\tau_\mathrm{rise}$, of our fiducial luminosity function. Modifying the maximum luminosity has essentially no effect on the amplitude of the outburst seen by the SMA, as the minimum luminosity reached in equation \ref{eqn:EC53_lum} remains a fixed fraction of the peak luminosity and the majority of the emission in the SMA beam is from material radiating at high enough temperatures (see Equation \ref{eqn:T_0_mod}) to always be in the Rayleigh-Jeans limit. The amplitude of the burst at the JCMT is slightly increased, as more of the outer envelope stays above the floor temperature of 24 K, and thus undergoes a greater temperature change during the outburst. The delay time between the SMA and JCMT peak remains unchanged. The rise timescale affects the slope of the light curve and burst amplitude for both the SMA and JCMT. A shorter rise time increases the burst amplitude, as the decay from the maximum luminosity will continue for a longer time before the exponential rise takes over, and the minimum luminosity --- at which our light curves are normalized --- will be lower. Similar changes to the burst amplitude occur if the period and fall timescales are modified, but are less dramatic because these timescales are much longer than the rise timescale. While the burst peaks earlier at both the SMA and JCMT with a shorter rise time, the time delay between the SMA and JCMT is not strongly affected.
\section{Observing Generic Accretion Bursts} \label{sec:further_modeling} In this section, motivated by the differences in outburst behaviour seen between the SMA, ACA, and JCMT light curves of our variable targets, we explore how different types of accretion bursts propagate through our fiducial EC 53 envelope model (Table \ref{tab:ec53_fiducial}). To survey the prospects for monitoring protostellar variability with other telescopes, we also experiment with how the telescope beam size and observing wavelength determine the burst response. \subsection{Modulation of Short Bursts, Brightness Jumps, and Periodic Variability} \label{ssec:washouts} Our fiducial model, based on the best-fit 2D radiative transfer analysis by \citetalias{Baek2020}, reproduces the mean behaviour very well. After the peak, however, the modeled SMA decay for EC 53 is less steep than observed (Figure \ref{fig:ec_53_fid_model_fit}). Near infrared observations of EC 53 with a high cadence have shown structure in the light curve during the peak \citep[][Figure 5]{Lee2020yh}, suggesting that there is additional behaviour in the luminosity variability that our simple model (equation \ref{eqn:EC53_lum}) does not capture. Similarly, in the ACA monitoring of SMM 10, strong stochastic variability is found, whereas the contemporaneous JCMT light curves show only moderate variability. For both sources, these empirical differences could be the result of a poorer sensitivity to rapid variations in the envelope brightness obtained with the larger beam of the JCMT, $\sim15\arcsec$, compared with the $\sim3-4\arcsec$ beams of the SMA and ACA. We demonstrate the variation in observed response at the SMA and JCMT to various types of luminosity changes in Figure \ref{fig:gauss_sin_comp}, using the fiducial EC 53 envelope model.
Here we compare fixed amplitude Gaussian, sigmoid, and sine wave luminosity functions (top row) with varying FWHM (1--9 days), rise timescale $\tau$ (0.3--3 days, where $L \propto 1/(1+e^{-t/\tau})$) or period (5--30 days), respectively, against the resulting model light curves for the SMA (middle row) and JCMT (bottom row). For the case of a Gaussian outburst, we mark the time at which the model brightness peaks and the time during the decay when 20\% of the maximum is reached, for both telescopes. Similarly, for the sigmoid outburst we mark the time at which the model brightness reaches 80\% of the maximum. The Gaussian outbursts of shorter duration yield lower observed light curve amplitudes for both the SMA and JCMT; however, in all cases the observed amplitude is a factor of $\sim 2$ smaller for the JCMT. Furthermore, the lag timescale for the flux to return to the 20\% level is significantly longer, 17--21 days for the JCMT versus 4--9 days for the SMA. For the sigmoid outbursts, we see a similar difference in amplitude between the SMA and JCMT light curves as in the Gaussian case, and for all values of $\tau$, the SMA model brightness reaches the 80\% level sooner than the JCMT (15--18 vs.\ 20--22 days). For both SMA and JCMT observations, differences in the rise timescale are easier to detect early in the burst than later, as the light curve response at early times is dominated by large changes in brightness in the inner envelope where the lookback time is shorter. Conversely, after the 50\% amplitude level is reached, differences in the rise timescale $\tau$ are harder to detect, as the response is diluted by envelope material at larger radii with long lookback times. With a sine wave luminosity function, the amplitudes of the model light curves are similarly reduced for shorter periods. The longer lag time of the JCMT response, however, results in an additional washing out of the light curve amplitude.
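The three idealized luminosity functions compared above can be written down compactly. A minimal sketch in Python (normalizations and the chosen parameter values are illustrative; only the functional forms follow the text):

```python
import numpy as np

def gaussian_burst(t, fwhm, peak=1.0):
    """Gaussian outburst centred at t = 0 with the given FWHM (days)."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return peak * np.exp(-0.5 * (t / sigma) ** 2)

def sigmoid_burst(t, tau, peak=1.0):
    """Sigmoid brightness jump, L ~ 1/(1 + e^{-t/tau}), as in the text."""
    return peak / (1.0 + np.exp(-t / tau))

def sine_variation(t, period, mean=1.0, amp=0.5):
    """Fixed-amplitude sinusoidal luminosity variation (period in days)."""
    return mean + amp * np.sin(2.0 * np.pi * t / period)

t = np.linspace(-15.0, 15.0, 301)
# the shortest cases explored in the text:
L_gauss = gaussian_burst(t, fwhm=1.0)
L_sig = sigmoid_burst(t, tau=0.3)
L_sin = sine_variation(t, period=5.0)
```

The shortest-period sine variations are the cases most strongly washed out by the slow envelope response, particularly in the larger JCMT beam.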
Thus, it is entirely plausible that rapid, $< 30$ day, variations in protostellar luminosity may be responsible for both the observed structure in the post-peak decay of the SMA EC 53 light curve (Figure \ref{fig:sma_lc}), and the factor of $\sim 3$ stronger stochastic variability observed in the ACA compared with the JCMT (850 $\mu$m) light curves of SMM 10 (Figure \ref{fig:aca_lc}, bottom panel). \subsection{Effect of Beam Size and Observing Wavelength} \label{ssec:beam_size_and_nu_comp} The observed envelope response to a change in protostellar luminosity is sensitive to both the telescope beam size and observing wavelength, which differ between our ACA, SMA, and JCMT measurements. To illustrate the effect of these properties on the observations, in Figure \ref{fig:obs_comp} we show model light curves for a Gaussian luminosity function (FWHM = 5 days, top panel) with varying beam size (center panel) and observing wavelength (bottom panel). The range of beam sizes corresponds to high resolution observations with an interferometer, such as the ALMA 12m array ($0.1\arcsec$), up to single dish observations with the envelope fully unresolved ($30 \arcsec$). For Serpens, this corresponds to physical sizes ranging from 40 to 12000\,au. With a beam size $\leq 1.0 \arcsec$, very little modulation of the luminosity function occurs, and the model light curves closely trace the underlying accretion luminosity variations. For increasing beam sizes, the time for the burst to propagate through a larger portion of the envelope introduces a significant reduction in amplitude ($< 50\%$) and an additional lag in the light curve. A decrease in observing wavelength has little effect on the light curve lag, but does produce a stronger amplitude for the response, e.g., the response in the far infrared (0.3 mm) is $1.6-1.8$ times stronger than at typical mm observing wavelengths (1--3 mm).
This increase in response amplitude is due to the fact that at shorter wavelengths a larger fraction of the envelope lies below the temperature for which the response is Rayleigh-Jeans (i.e.\ the brightness responds as $T^\alpha$, with $\alpha > 1$ rather than $\alpha \sim 1$; \citealt{Johnstone2013,ContrerasPena2020}). \section{Discussion} \label{sec:disc} \subsection{Constraining Envelope Models and Accretion Burst Properties} \label{ssec:general_disc} Our work demonstrates that variations in the protostellar accretion rate produce a response in the dust temperature throughout the envelope, which yields a light curve modulated by the envelope structure and properties of the observations. Contemporaneous observations at multiple resolutions and wavelengths combined with modeling of the burst propagation through the envelope can thus recover detailed information about the envelope structure and identify the underlying accretion variations. Constraining the envelope structure used in modeling of the burst response is important for separating the accretion luminosity variations from their modulation by the envelope. The \citetalias{Baek2020} model for EC 53 used here as a basis for our toy model was determined through fitting of the observed SED with radiative transfer simulations; without such a template, forward modeling of the accretion luminosity variations would be much more uncertain (Section \ref{ssec:ec53_burst_prop}) due to the poorly constrained envelope properties. 
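The scaling just described can be evaluated directly from the Planck function: the logarithmic temperature response is $\alpha = \partial \ln B_\nu/\partial \ln T = x e^{x}/(e^{x}-1)$ with $x = h\nu/kT$, which tends to 1 in the Rayleigh-Jeans limit and grows at shorter wavelengths. A sketch (the 24 K fiducial temperature follows the model floor temperature; the function name is ours):

```python
import numpy as np

HC_OVER_K_UM_K = 14387.77  # h c / k_B in micron * Kelvin

def planck_temperature_index(wavelength_um, T):
    """alpha = dln B_nu / dln T = x e^x / (e^x - 1), with x = h nu / k T.
    alpha -> 1 in the Rayleigh-Jeans limit (x << 1)."""
    x = HC_OVER_K_UM_K / (wavelength_um * T)
    return x * np.exp(x) / np.expm1(x)

T_floor = 24.0  # K, the model's envelope floor temperature
alpha_850 = planck_temperature_index(850.0, T_floor)  # ~1.4
alpha_300 = planck_temperature_index(300.0, T_floor)  # ~2.3
```

At 24 K this gives $\alpha \approx 1.4$ at 850 $\mu$m versus $\alpha \approx 2.3$ at 0.3 mm, a ratio of $\sim$1.7, consistent with the 1.6--1.8 times stronger far infrared response quoted above.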
High resolution observations are particularly important for identifying rapid variations (Section \ref{ssec:washouts}, Figure \ref{fig:gauss_sin_comp}); the response to a short burst with a large beam suffers from a longer lag time due to long lookback times, a smaller amplitude owing to the inclusion of envelope material subject to non-variable heating by the ISRF, and potentially destructive interference in the response due to the propagation of multiple burst components within the beam. Rapid variations in the source brightness have a special physical interest, as they trace dynamical scales tied to the innermost disk where star-disk interface instabilities likely manifest \citep[e.g.][]{dangelo2012,armitage2016}. Our ACA observations of SMM 10 (Figure \ref{fig:aca_lc}) and the near infrared light curve of EC 53 \citep{Lee2020yh} both show evidence for enhanced variations not witnessed in the lower resolution JCMT observations, suggesting that washing out of higher frequency variability is indeed occurring. Variability in the mid-infrared on timescales of days to months has been observed for many YSOs \citep[e.g.][]{Rebull2014, Park2021}, though a wide range of phenomena besides accretion variability can be responsible (e.g., rotating disk warps and starspots). Several notable examples show variability attributed to periodic accretion instabilities (L1634 IRS 7, \citealt{Hodapp2015}; V347 Aurigae, \citealt{Dahm2020}; see also \citealt{Guo2022}), pulsed accretion from interaction with a binary (L1527 IRS, \citealt{Cook2019}), or rotation of a warped inner disk (LRLL 54361, \citealt{Muzerolle2013}). Investigating such sources with higher cadence submm/mm observations is important for identifying the nature and distribution of rapid accretion variability in protostars, but also requires high angular resolution.
High resolution submm/mm observations of the dust continuum can also provide valuable measurements of the dust density structure of the inner envelope and protoplanetary disk for modeling the burst propagation. Future observing campaigns can benefit from employing higher resolution, shorter wavelengths, and a variety of cadences. Long baseline observations with the ALMA 12m array would trace spatial scales associated with the dynamical timescales in the inner disk, and should have a stronger response to rapid variations in accretion luminosity (Section \ref{ssec:beam_size_and_nu_comp}). Furthermore, resolving the disk would allow its response to the burst to be separated from that of the envelope and therefore carefully quantified. Multi-wavelength monitoring can also provide complementary information on the physics of accretion variability. In particular, the typical dust temperatures in the envelope are such that far infrared observations should produce a stronger light curve response, roughly proportional to the change in accretion luminosity, whereas submm/mm wavelength observations probe changes in brightness more closely tied to the induced dust temperature variations (see Section \ref{ssec:beam_size_and_nu_comp} and \citealt{Johnstone2013}). Near infrared observations should also trace protostar variability, but are subject to the effects of potentially time-varying extinction and emission from the hot inner disk \citep{Lee2020yh, hillenbrand2018, hillenbrand2019}. The combination of simultaneous observations at a variety of wavelengths should thus allow more accurate modeling of the accretion luminosity variations and circumstellar environment; such an approach was used by \cite{Lee2020yh} to identify an increased buildup of material in the inner disk of EC 53 prior to outburst.
Future planned ground-based observatories operating at the shortest submm wavelengths, such as CCAT-prime \citep{ccat2021}, and potential far infrared space-based missions \citep{andre2019, fischer2019} will be ideal for such extended monitoring. \subsection{Observational Limitations and Model Caveats} \label{ssec:caveats} The observations and modeling of accretion variability presented in this paper have a variety of limitations, which we detail here. Although our relative flux calibration schemes provide an unprecedented level of accuracy ($\sim 3$\% vs.\ 10--20\% with standard calibration schemes, see Section \ref{sec:rel_calibration}), they are still the dominant contribution to the error budget in our flux measurements, as all of our targets are mm-bright and typically have an S/N $> 100$ per epoch. Observations of additional sources for relative calibration and relaxing the approximation of point-like calibrators may enable an even higher relative flux calibration accuracy. Our approach to flux measurement with the ACA and SMA observations is robust but simplistic, and only considers the point-like emission within the brightest source in the map. We assume the extended emission surrounding the variable targets to be stable and remove it during model fitting; in principle, however, light echoes from the propagation of the accretion burst through the extended emission may be observable provided sufficient imaging fidelity. Such a measurement is challenging given the $uv$-coverage with the limited number of baselines at the ACA and SMA. Observations with improved $uv$-coverage using the 12m ALMA array and imaging techniques better suited to the reconstruction of resolved sources than the standard \texttt{tclean} \citep[e.g.][]{Akiyama2017,Chael2018,Honma2014} may provide improved models of our sources and allow measurements of variability over the entire field of view.
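Schematically, relative flux calibration of the kind described above rescales each epoch using sources in the field that are assumed to be non-variable. A toy sketch (the median-ratio estimator, function names, and mock numbers are illustrative assumptions, not the actual pipeline of Section \ref{sec:rel_calibration}):

```python
import numpy as np

def relative_scale_factors(fluxes):
    """fluxes: (n_epochs, n_calibrators) array of measured fluxes for
    sources assumed non-variable. Returns one multiplicative scale per
    epoch that aligns each calibrator with its mean flux."""
    mean_flux = fluxes.mean(axis=0)         # per-calibrator reference
    ratios = fluxes / mean_flux             # epoch-to-mean flux ratios
    return 1.0 / np.median(ratios, axis=1)  # robust per-epoch scale

rng = np.random.default_rng(0)
true_gain = rng.normal(1.0, 0.15, size=8)   # mock 15% per-epoch gain errors
cal_fluxes = true_gain[:, None] * rng.normal(1.0, 0.03, size=(8, 5))
scales = relative_scale_factors(cal_fluxes)
# scales * true_gain ~ constant: the residual scatter is set by the ~3%
# per-source noise, far below the raw 15% epoch-to-epoch gain variation.
```

The residual accuracy is limited by the per-source measurement noise and the number of calibrators, which is why observing additional calibration sources could further improve the scheme.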
For the single-dish observations, better understanding of the JCMT beam is likely required before any significant further improvement in the relative flux calibration is possible, especially at 450 \micron\ \citep{Dempsey2013,mairs2021}. Our toy model of EC 53 is very simplistic. It is based on the envelope structure determined by \citetalias{Baek2020}, which assumes spherical symmetry, and we ignore the conical outflow required by their SED fitting to recreate the near infrared emission. In reality, protostellar envelopes can be flattened by the effects of self-gravity and rotation, and they may contain substructures such as protostellar disks and outflow cavities, all of which influence the observed light curve. Full 3D modeling of the envelope would allow an exploration of these effects and their possible application to observations. Such ``reverberation mapping'' is often used to probe the structure of accretion disks surrounding highly variable active galactic nuclei \citep{Peterson1993}, and has also been employed to measure the radius of an inner dust hole in a protoplanetary disk by comparing the time delay of optical accretion variability with the near infrared response of the dust disk \citep{Meng2016}. When modeling the emission, we assume that the envelope is optically thin throughout, which is reasonable for submm/mm observations but breaks down at shorter wavelengths. Furthermore, the accretion disk around the embedded protostar is possibly bright and optically thick at submm/mm wavelengths \citep{Galvan-Madrid2018,Li2017}. Given the beam sizes explored in this paper (Table \ref{tab:model_telescopes}), the disk should play only a small role in the brightness variations; with higher angular resolution, however, the disk would dominate, both complicating the modeling and providing a potentially unique constraint on the disk physical properties. We now consider the agreement between our EC 53 model and the observational constraints.
The free parameters of our fiducial burst model are the date and value of the maximum luminosity, the floor temperature, and the outburst period, the fitted values of which are listed in Table \ref{tab:ec53_fiducial}. The minimum and maximum luminosities in our fiducial model are 3.6 $L_\odot$ and 17 $L_\odot$, quite similar to the values of 4 $L_\odot$ and 17 $L_\odot$ used by \citetalias{Baek2020} to fit the quiescent and outbursting SED of EC 53 when the effects of heating by the ISRF are included. Our fiducial floor temperature of $T_\mathrm{floor}$ = 24 K is somewhat higher than in the radiative transfer models of \citetalias{Baek2020}, where the temperature at 10000 au is $\sim17$ K. As shown in Figure \ref{fig:envelope_comp}, the amplitude of the JCMT 850 $\mu$m light curve is particularly sensitive to the value of $T_\mathrm{floor}$ in our models. The discrepancy is likely due to the difference in the manner in which we apply $T_\mathrm{floor}$; for simplicity, our calculations fix a lower temperature threshold rather than properly calculating the local temperature due to both the incoming ISRF and the outgoing accretion luminosity. Alternatively, the outer envelope may have a steeper radial density power-law, $\alpha \sim -2$, which would also reduce the contribution of the larger scales probed by the JCMT. Despite the simplifications, the burst model predicts amplitudes for the SMA and JCMT 850 $\mu$m light curves that are good fits to the observations (see Figure \ref{fig:ec_53_fid_model_fit}). We have also produced model light curves for the parameters of the ACA and JCMT 450 $\mu$m observations (Table \ref{tab:model_telescopes}). Comparison of the model and observed ACA light curves is significantly complicated by the sparsity of data during the outbursts of EC 53 (Figure \ref{fig:aca_lc}).
Comparing the late 2019 minimum and Summer 2021 maximum flux at the ACA suggests an outburst amplitude of $\sim 1.6$, whereas our model predicts an amplitude of only 1.35. There is, however, clear intrinsic variability in the strength of the EC 53 outbursts \citep{Lee2020yh}, and thus a comparison that relies on measurements spread over two burst cycles is questionable. Interestingly, our model for the JCMT 450 $\mu$m observations predicts an amplitude of $\sim 1.4$, which is stronger than the 850 $\mu$m amplitude of $\sim 1.3$ but weaker than the estimate from the 450 $\mu$m observations of the 2021 outburst of $\sim 1.6$. Two caveats must be noted, however. First, like the ACA observations, the JCMT 450 $\mu$m observations are relatively sparse across the 2021 outburst, making a direct comparison more difficult than is the case for the SMA and JCMT 850 $\mu$m observations. Second, the beam shape of the JCMT 450 $\mu$m observations is more complex than the simple Gaussian beam in our model, and includes an additional contribution from a larger 40\arcsec\ error beam \citep{difrancesco2008,mairs2021} which introduces additional modulation. \subsection{Variable Molecular Line Emission} In this work, we have focused entirely on the response of the thermal dust continuum to accretion bursts; however, a variety of submm/mm molecular emission lines may also be sensitive to thermal and chemical changes in the circumstellar environment. At envelope scales, \citet{Johnstone2013} showed that the equilibrium time for the molecular gas component is significantly longer than that of the dust response, but as one probes the smallest scales, the highest densities, and the highest temperatures, this imbalance diminishes. Thus, with higher resolution observations, transitions of CO and its isotopologues may trace variations in gas temperature, while CH$_3$OH, SO, SO$_2$, and SiO transitions might trace warm gas and shocked material affected by the outburst.
Furthermore, HCO$^{+}$, CN, and HCN respond to increases in X-ray and UV luminosity in the accretion shock potentially triggered by the outburst. Indeed, variable H$^{13}$CO$^+$ J=3--2 emission possibly connected with an X-ray flare from a magnetic reconnection event has been detected in ALMA observations of a T Tauri star protoplanetary disk \citep{Cleeves2017}. Our SMA and ACA monitoring observations have detected a variety of these lines, which we plan to analyze for indications of variability in a forthcoming paper. \section{Conclusions} \label{sec:conc} In this paper, we have interpreted contemporaneous sub-mm/mm light curves of variable protostars as observed by the ALMA ACA, SMA, and JCMT using a toy model of the envelope dust response to accretion luminosity variations. Our major results are as follows: \begin{itemize} \item Relative flux calibration is vital to this work, and accurate to about 3\% for the ACA/SMA (Section \ref{sec:dr}). This is a significant improvement over the typical flux calibration accuracy of sub-mm/mm interferometers of 10--15\%. \item We have robustly detected variability in our ACA observations of EC 53 (V371 Ser) and SMM 10, two known submm variables based on JCMT Transient Survey monitoring (Figure \ref{fig:aca_lc}). \item A delay between the peak amplitude of the EC 53 (V371 Ser) outburst at the SMA and JCMT is seen, and the amplitude of the SMA outburst is somewhat stronger (Figure \ref{fig:sma_lc}). \item We have developed a toy model of the envelope response in EC 53 to show that the delay and difference in amplitude between the SMA and JCMT (850 \micron) are plausibly explained by (1) the light travel time delay through the envelope, and (2) the dilution of the envelope response at the JCMT by the incorporation of more cold envelope material in the beam (Section \ref{sec:toy_model}). The JCMT amplitude is particularly sensitive to the heating by the ISRF, described by an envelope floor temperature in our model.
\item We have further explored the effects of a variety of burst and observational properties in our toy model (Section \ref{sec:further_modeling}), and shown that high-frequency bursts can be washed out by the lag in the response with a larger beam, which may be occurring for SMM 10. A stronger envelope response, tied more closely to the luminosity variations, is expected at shorter far infrared wavelengths. \end{itemize} The authors are grateful for the excellent support provided by the ALMA and SMA staff for our observing programs. We would particularly like to thank David Wilner, Mark Gurwell, and Charlie Qi for useful discussions on the planning and execution of our SMA observations. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. The James Clerk Maxwell Telescope is operated by the East Asian Observatory on behalf of The National Astronomical Observatory of Japan; Academia Sinica Institute of Astronomy and Astrophysics; the Korea Astronomy and Space Science Institute; the Operation, Maintenance and Upgrading Fund for Astronomical Telescopes and Facility Instruments, budgeted from the Ministry of Finance (MOF) of China and administrated by the Chinese Academy of Sciences (CAS), as well as the National Key R\&D Program of China (No. 2017YFA0402700). Additional funding support is provided by the Science and Technology Facilities Council of the United Kingdom and participating universities in the United Kingdom and Canada. Additional funds for the construction of SCUBA-2 were provided by the Canada Foundation for Innovation.
The James Clerk Maxwell Telescope has historically been operated by the Joint Astronomy Centre on behalf of the Science and Technology Facilities Council of the United Kingdom, the National Research Council (NRC) of Canada and the Netherlands Organisation for Scientific Research. The JCMT Transient Survey project codes are M16AL001 and M20AL007. This paper makes use of the following ALMA data: ADS/JAO.ALMA\#2018.1.00917.S and ADS/JAO.ALMA\#2019.1.00475.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), NSC and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. The Submillimeter Array is a joint project between the Smithsonian Astrophysical Observatory and the Academia Sinica Institute of Astronomy and Astrophysics and is funded by the Smithsonian Institution and the Academia Sinica. This research used the facilities of the Canadian Astronomy Data Centre operated by NRC Canada with the support of the Canadian Space Agency. D.J.\ is supported by NRC Canada and by an NSERC Discovery Grant. G.J.H.\ is supported by general grant 12173003 awarded by the National Science Foundation of China. \bibliography{paper}{} \bibliographystyle{aasjournal} \appendix \section{Deconvolved Deep SMA and ACA images of Targets} In Figures \ref{fig:aca_deep_gallery} and \ref{fig:sma_deep_gallery} we present the deep ACA and SMA images of each Serpens Main source, combining all epochs and with the relative calibration of Section \ref{sec:rel_calibration} applied.
Title: Dark Energy Star in Gravity's Rainbow
Abstract: The concept of dark energy can be a candidate for preventing the gravitational collapse of compact objects to singularities. According to the usefulness of gravity's rainbow in UV completion of general relativity (by providing a new description of spacetime), it can be an excellent option to study the behavior of compact objects near phase transition regions. In this work, we obtain a modified Tolman-Openheimer-Volkof (TOV) equation for anisotropic dark energy as a fluid by solving the field equations in gravity's rainbow. Next, to compare the results with general relativity, we use a generalized Tolman-Matese-Whitman mass function to determine the physical quantities such as energy density, radial pressure, transverse pressure, gravity profile, and anisotropy factor of the dark energy star. We evaluate the junction condition and investigate the dynamical stability of dark energy star thin shell in gravity's rainbow. We also study the energy conditions for the interior region of this star. We show that the coefficients of gravity's rainbow can significantly affect this non-singular compact object and modify the model near the phase transition region.
https://export.arxiv.org/pdf/2208.07063
\title{Dark Energy Star in Gravity's Rainbow} \author{A. Bagheri Tudeshki$^{1}$, G. H. Bordbar$^{1}$\footnote{ email address: ghbordbar@shirazu.ac.ir}, and B. Eslam Panah$^{2,3,4}$\footnote{ email address: eslampanah@umz.ac.ir}} \address{$^{1}$ Department of Physics and Biruni Observatory, Shiraz University, Shiraz 71454, Iran \\ $^{2}$ Department of Theoretical Physics, Faculty of Science, University of Mazandaran, P. O. Box 47415-416, Babolsar, Iran\\ $^{3}$ ICRANet-Mazandaran, University of Mazandaran, P. O. Box 47415-416, Babolsar, Iran\\ $^{4}$ ICRANet, Piazza della Repubblica 10, I-65122 Pescara, Italy} \section{Introduction} The existence of a strange cosmic fluid with negative pressure, known as dark energy (DE), explains the universe's accelerating expansion and demands special consideration in other structures as well, including compact objects. In such cases, it also appears that modified gravity offers a route around the incompatibility between quantum mechanics and general relativity. With Chapline's proposed link between general relativity, quantum mechanics, and condensed matter physics \cite{Chapline}, a new era in the development of alternative theories of black holes began. The surface of a black hole (the event horizon) may exhibit behavior similar to that of a Bose gas, in the sense that the surface could be the result of a quantum phase transition. One of the alternative candidates to the black hole is the gravastar. The concept of the gravastar was first introduced by Mazur and Mottola \cite{MazurM2004}.
They introduced a compact object with three regions and different spacetimes: the interior vacuum region, which is related to the cosmological constant; the middle region, a shell of finite thickness governed by the equation of state of a perfect fluid; and the exterior region, which contains a true vacuum with zero pressure \cite{MazurM2004}. The advantage of this model is that there is no singularity at the center of this compact object, and there is also no horizon \cite{MazurM2004}. Many studies have been done on gravastars. A $5$-layer model, similar to the gravastar introduced in Ref. \cite{MazurM2004} plus a junction interface in the middle region, was investigated by Visser and Wiltshire \cite{VisserW2004}. Gravastars with anisotropic pressures were first studied by Cattoen et al. \cite{Cattoen2005}. They showed that the Tolman-Oppenheimer-Volkoff (TOV) equation is not satisfied in the shell of gravastar-type objects with isotropic pressure, and anisotropic pressure must be considered. The concept of the dark energy star (DES) was first introduced by Chapline \cite{ChaplinearXiv}. This idea holds that at a critical surface, the infalling matter is converted into vacuum energy that is much larger than the cosmic vacuum energy, creating a negative pressure which acts against gravity \cite{ChaplinearXiv}. Therefore, a singularity does not occur inside the star. The observational results suggest that dark energy is very homogeneous and not very dense. However, anisotropic dark energy still receives attention. Koivisto and Mota, in two works \cite{KoivistoI,KoivistoII}, proposed a universe full of dark energy and studied its features in detail. Also, in order to investigate the low quadrupole in the CMB oscillations, an anisotropic equation of state was attributed to dark energy \cite{Campanelli2011}.
But the concept of anisotropy in the study of compact objects is derived from the role of physical events such as phase transitions \cite{Sokolov1980}, pion condensation \cite{Hartle1975}, the presence of strong magnetic \cite{Bordbar2022} and electric \cite{Usov2004} fields, a solid core, etc. Lobo and Crawford \cite{LoboC2005} investigated the general behavior of thin shells by applying the Lanczos and Gauss-Codazzi equations. Then, by generalizing this method, they discussed the stability of thin shells around black holes and wormholes. Following this model, the dynamical stability of the DES was studied in several cases. Lobo \cite{Lobo2006} selected two models, a constant energy density and the Tolman-Matese-Whitman mass function, for a dark energy star with a transverse pressure. He showed that there are stable regions near the surface of the star. Ghezzi \cite{Ghezzii2011} proposed a model of a compact object with fermionic matter coupled to inhomogeneous, anisotropic, variable dark energy. He then obtained the TOV equation and physical quantities, such as the mass, as functions of the coupling parameter. Considering the phantom scalar field as a model of dark energy, Yazadjiev \cite{Yazadjiev2011} provided an exact solution for the interior of the DES in the presence of matter. The stability condition and various physical properties of a special type of dark energy star with five regions were discussed in Ref. \cite{BharR2015}. By introducing a mass function, Bhar et al. \cite{Bharetal2018} studied the structure and stability of dark energy stars and compared the results with observations. The time-dependent equations of motion for dark energy stars were studied in Ref. \cite{BeltracchiG2019}. In this study, it is assumed that the pressure of the fluid is positive at the beginning of the infall and then, as the star reaches its final stage of collapse, the negative pressure prevails in the system.
Using the Finch-Skea metric potentials in the equations of motion, Banerjee et al. \cite{Banerjee2020} were able to provide an exact solution for the dark energy star. The effects of slow rotation on the configuration of a dark energy star governed by the Chaplygin equation of state were investigated by Panotopoulos et al. in Ref. \cite{Panotopoulos2021}. In most studies, the stability of the dark energy star has been confirmed, but it has been shown that a dark energy star can be physically unstable in the presence of a phantom field \cite{Sakti2021}. The study of dark energy stars in modified gravity is underway. The physical quantities of the DES in Einstein-Gauss-Bonnet gravity were determined by Malaver et al. \cite{Malaver2021}. In another work, Bhar \cite{Bhar2021} studied physical properties of dark energy stars, such as the density, pressure, mass function, surface redshift, and maximum mass, using the Tolman-Kuchowicz (TK) metric potentials, and showed that all of these quantities are regular. The gravitational and quantum behaviors in the phase transition layer of the dark energy star are a good reason to use modified gravity. Magueijo and Smolin \cite{MagueijoS2004}, by introducing rainbow functions, suggested that the metric in a dual spacetime can depend on energy, and as a result the equations of motion also change. Recently, in several studies, the effects of rainbow functions on different physical phenomena have been investigated using the theory of gravity's rainbow \cite{Galan2004,Hackett2006,Aloisio2006,Ling2007,Garattini2014a,Chang2015,Santos2015}. Also, by attributing energies to the particles inside and outside the horizon of a black hole, Ali et al. \cite{Ali2015} showed that information can be transferred from inside the black hole to the outside. The thermodynamic behavior of black holes in the presence of gravity's rainbow has been studied in Refs.
\cite{Galan2006,LingZ2007,Ali2014,HendiPEM2016,KimKim2016,HendiFEP2016,Gangopadhyay2016,Hendi2017,Alsaleh2017,Feng2017,EslamPanah2018,Upadhyay2018,EslamPanah2019,Morais2022,Hamil2022}. The energy dependence of such a geometry can produce important modifications to non-singular compact objects \cite{HendiJCAP2016,Garattini2017,EslamPanah2017,Debnath2021,Mota2022}. Our goal in this work is to study the behavior of dark energy star properties in a modified theory of gravity called gravity's rainbow. We are interested in a comparison between the energy-dependent physical quantities in gravity's rainbow and the energy-independent quantities in general relativity. The plan of this paper is as follows: after this introduction, in section 2 we obtain the field equations in gravity's rainbow; in section 3 the junction condition and the dynamic stability of the thin shell are introduced; in section 4 we determine the energy conditions; and finally, a discussion of the results is provided in section 5. \section{Basic Equations} The interior spacetime ($-$) and exterior spacetime ($+$) for a spherically symmetric metric in gravity's rainbow are given by
\begin{equation}
ds_{\pm }^{2}=-\frac{e^{2\phi _{\pm }\left( r_{\pm }\right) }}{l_{\varepsilon }^{2}}dt^{2}+\frac{e^{2\lambda _{\pm }\left( r_{\pm }\right) }}{h_{\varepsilon }^{2}}dr^{2}+\frac{r_{\pm }^{2}\left( d\theta ^{2}+\sin ^{2}\theta d\varphi ^{2}\right) }{h_{\varepsilon }^{2}}, \label{metric}
\end{equation}
where $e^{2\phi _{\pm }\left( r_{\pm }\right) }$ and $e^{2\lambda _{\pm }\left( r_{\pm }\right) }$ are the metric potentials, and $l_{\varepsilon }^{2}$ and $h_{\varepsilon }^{2}$ are rainbow functions. Here $\varepsilon =E/E_{P}$, where $E$ is the energy that an observer with zero acceleration measures for a test particle of mass $m$, and $E_{P}$ is the Planck energy.
The modified energy-momentum dispersion relation is \cite{MagueijoS2004}
\begin{equation}
E^{2}l_{\varepsilon }^{2}-p^{2}h_{\varepsilon }^{2}=m^{2}.
\end{equation}
The equation of motion in gravity's rainbow is given by \cite{MagueijoS2004}
\begin{equation}
G_{\mu \nu }(\varepsilon )=\frac{8\pi G(\varepsilon )}{c^{4}(\varepsilon )}T_{\mu \nu }\left( \varepsilon \right) , \label{GReq}
\end{equation}
where $G_{\mu \nu }(\varepsilon )$ is the Einstein tensor, and $G(\varepsilon )$ and $c(\varepsilon )$ are the energy-dependent gravitational constant and the energy-dependent speed of light, respectively. In quantum gravity, a renormalized gravitational coupling that depends on energy at high energy scales is defined, so $G$ is also a function of energy. If we rewrite the line element of Eq. (\ref{metric}) in the form $ds^{2}=-\frac{dt^{2}}{l_{\varepsilon }^{2}}+\frac{\left( dx^{i}\right) ^{2}}{h_{\varepsilon }^{2}}$, it can be seen that the speed of light, $c\left( \varepsilon \right) =\frac{dx}{dt}=\frac{h_{\varepsilon }}{l_{\varepsilon }}$, depends on the energy in gravity's rainbow through the rainbow functions. Note that in the limit of low energies, $G\left( \varepsilon \right) $ and $c\left( \varepsilon \right) $ tend to their universal values $G$ and $c$, respectively \cite{MagueijoS2004}. Also, $T_{\mu \nu }\left( \varepsilon \right) $ is the stress-energy tensor, which acts as the source of spacetime curvature. Here, we assume $G(\varepsilon )=c(\varepsilon )=1$. According to the line element (Eq. (\ref{metric})), we assume that the interior spacetime of the DES is filled with dark energy behaving as a fluid with the equation of state $p_{r}\left( r\right) =\omega \rho \left( r\right) $, where $\omega $ is the dark energy parameter.
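The modified dispersion relation can be illustrated with a short numerical sketch (the particular numbers for $E$, $m$, $l_{\varepsilon }$ and $h_{\varepsilon }$ below are arbitrary illustrative assumptions, not values from the text): solving $E^{2}l_{\varepsilon }^{2}-p^{2}h_{\varepsilon }^{2}=m^{2}$ for the momentum and taking the massless limit recovers the effective speed of light $c(\varepsilon )=h_{\varepsilon }/l_{\varepsilon }$ quoted above.

```python
import math

def momentum(E, m, l_eps, h_eps):
    """Momentum from the rainbow-modified dispersion relation
    E^2 l_eps^2 - p^2 h_eps^2 = m^2."""
    return math.sqrt(E**2 * l_eps**2 - m**2) / h_eps

# Massless limit: E * l_eps = p * h_eps, so the effective speed of
# light is c(eps) = E/p = h_eps / l_eps.
E, l_eps, h_eps = 10.0, 0.8, 1.2
p = momentum(E, 0.0, l_eps, h_eps)
print(E / p)                          # 1.5 = h_eps / l_eps
print(momentum(5.0, 3.0, 1.0, 1.0))   # GR limit: sqrt(25 - 9) = 4.0
```

In the low-energy limit $l_{\varepsilon }=h_{\varepsilon }=1$ the usual special-relativistic relation $p=\sqrt{E^{2}-m^{2}}$ is recovered, consistent with the statement that $G(\varepsilon )$ and $c(\varepsilon )$ tend to their universal values.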
Note that $-1<\omega <-1/3$, $\omega =-1$ and $\omega <-1$ refer to the dark energy regime, the cosmological constant and the phantom energy regime, respectively. Since the surface of a dark energy star is where the phase transition occurs, we are interested in treating dark energy as an anisotropic fluid with transverse and radial pressures. Generally, for an anisotropic distribution of matter, the stress-energy tensor can be written as \cite{Bayin1986}
\begin{eqnarray}
T_{\mu \nu }\left( \varepsilon \right) &=&\left[ \rho \left( r\right) +p_{t}\left( r\right) \right] u_{\mu }u_{\nu }+p_{t}\left( r\right) g_{\mu \nu } \notag \\ && \notag \\ &&+\left[ p_{r}\left( r\right) -p_{t}\left( r\right) \right] x_{\mu }x_{\nu }, \label{T4}
\end{eqnarray}
where $\rho \left( r\right) $, $p_{r}\left( r\right) $ and $p_{t}\left( r\right) $ are the energy density, the radial pressure and the transverse pressure, respectively. The transverse pressure $p_{t}\left( r\right) $ is perpendicular to the direction of the radial pressure $p_{r}\left( r\right) $ of the fluid. Also, $u_{\mu }$ represents the four-velocity vector with $u_{\mu }u^{\mu }=-1$, and $x_{\mu }$ is the unit spacelike vector in the radial direction, defined by $x^{\mu }=\sqrt{\left( g_{rr}\right) ^{-1}}\,\delta _{~~r}^{\mu }$ \cite{Lobo2006}, with $x_{\mu }x^{\mu }=1$. In order to obtain the modified relations in gravity's rainbow, the metric coefficients are converted into the following form
\begin{eqnarray}
g_{tt} &\longrightarrow &\frac{g_{tt}}{l_{\varepsilon }^{2}}, \label{5} \\ && \notag \\ g_{kk} &\longrightarrow &\frac{g_{kk}}{h_{\varepsilon }^{2}}, \label{6}
\end{eqnarray}
where $g_{tt}$ and $g_{kk}$ are the metric coefficients in the general line element, and the index $k$ refers to $r$, $\theta $ and $\varphi $. Thus, by using the interior line element Eq. (\ref{metric}) and Eqs.
(\ref{T4})--(\ref{6}), the modified components of the stress-energy tensor are obtained as
\begin{eqnarray}
T_{tt}\left( \varepsilon \right) &=&\frac{\rho \left( r\right) e^{2\phi _{-}\left( r\right) }}{l_{\varepsilon }^{2}}, \\ && \notag \\ T_{rr}\left( \varepsilon \right) &=&\frac{p_{r}\left( r\right) e^{2\lambda _{-}\left( r\right) }}{h_{\varepsilon }^{2}}, \\ && \notag \\ T_{\theta \theta }\left( \varepsilon \right) &=&\frac{p_{t}\left( r\right) r^{2}}{h_{\varepsilon }^{2}}, \\ && \notag \\ T_{\varphi \varphi }\left( \varepsilon \right) &=&\frac{p_{t}\left( r\right) r^{2}\sin ^{2}\theta }{h_{\varepsilon }^{2}},
\end{eqnarray}
and the mixed diagonal elements of the stress-energy tensor are given by
\begin{equation}
T_{\nu }^{\mu }=\mathrm{diag}\left[ -\rho \left( r\right) ,p_{r}\left( r\right) ,p_{t}\left( r\right) ,p_{t}\left( r\right) \right] . \label{T}
\end{equation}
The $tt$ component of the field equations (\ref{GReq}) provides the following equation,
\begin{equation}
M_{eff}\left( r,\varepsilon \right) =\int_{0}^{r}\frac{4\pi r^{\prime 2}\rho \left( r^{\prime }\right) dr^{\prime }}{h_{\varepsilon }^{2}}=\frac{m\left( r\right) }{h_{\varepsilon }^{2}},
\end{equation}
where $M_{eff}\left( r,\varepsilon \right) $ is the effective mass and $m\left( r\right) $ refers to the mass function.
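The structure of the effective mass, an ordinary mass integral suppressed by $h_{\varepsilon }^{2}$, can be checked with a minimal numerical sketch. For a uniform density $\rho _{0}$ (an arbitrary illustrative assumption, not a profile used in the text) the integral gives $M_{eff}=\frac{4}{3}\pi r^{3}\rho _{0}/h_{\varepsilon }^{2}$:

```python
import math

def m_eff(r, h_eps, rho, n=20000):
    """M_eff(r, eps) = (1/h_eps^2) * int_0^r 4 pi r'^2 rho(r') dr',
    evaluated with a simple midpoint rule."""
    dr = r / n
    total = sum(4.0 * math.pi * ((i + 0.5) * dr) ** 2 * rho((i + 0.5) * dr)
                for i in range(n)) * dr
    return total / h_eps**2

rho0 = 0.01            # arbitrary constant density (illustrative)
r, h_eps = 2.0, 1.3
analytic = (4.0 / 3.0) * math.pi * r**3 * rho0 / h_eps**2
print(abs(m_eff(r, h_eps, lambda _: rho0) - analytic) < 1e-8)  # True
```

The $1/h_{\varepsilon }^{2}$ factor simply rescales the usual mass function $m(r)$, which is why the text writes $M_{eff}=m(r)/h_{\varepsilon }^{2}$.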
By calculating the $rr$ component of the field equations (\ref{GReq}), we get the following relation
\begin{equation}
\frac{d\phi _{-}}{dr}=g\left( r\right) =\frac{M_{eff}\left( r,\varepsilon \right) h_{\varepsilon }^{2}+4\pi r^{3}p_{r}\left( r\right) }{r\left( r-2M_{eff}\left( r,\varepsilon \right) \right) h_{\varepsilon }^{2}}, \label{phii}
\end{equation}
where $g\left( r\right) $ is called the ``gravity profile'' \cite{Lobo2006}. It is related to the local acceleration in gravity's rainbow, represented by $A=\sqrt{h_{\varepsilon }^{2}e^{-2\lambda \left( r\right) }}g\left( r\right) $, and to the redshift function by $\phi \left( r\right) =-\int_{r}^{\infty }g\left( \widetilde{r}\right) d\widetilde{r}$ \cite{Lobo2006,Morris1988}. If $g\left( r\right) >0$, the local acceleration due to gravity of the interior solution is attractive, and if $g\left( r\right) <0$, the local acceleration is repulsive. According to the dark energy equation of state, we can rewrite the gravity profile in terms of the dark energy parameter
\begin{equation}
g\left( r\right) =\frac{M_{eff}\left( r,\varepsilon \right) +r\omega \left( \frac{\partial M_{eff}\left( r,\varepsilon \right) }{\partial r}\right) }{r\left( r-2M_{eff}\left( r,\varepsilon \right) \right) }. \label{g(r)I}
\end{equation}
Using the conservation law $\triangledown ^{\mu }T_{\mu \nu }=0$ and inserting Eq. (\ref{phii}) into it, we obtain the TOV equation for an anisotropic distribution of matter in gravity's rainbow
\begin{eqnarray}
\frac{dp_{r}\left( r\right) }{dr} &=&-\frac{\left( 4\pi r^{3}p_{r}\left( r\right) +h_{\varepsilon }^{2}M_{eff}\left( r,\varepsilon \right) \right) \left[ p_{r}\left( r\right) +\rho \left( r\right) \right] }{r\left( r-2M_{eff}\left( r,\varepsilon \right) \right) h_{\varepsilon }^{2}} \notag \\ && \notag \\ &&+\frac{2\left[ p_{t}\left( r\right) -p_{r}\left( r\right) \right] }{r}.
\label{ModTOVI} \end{eqnarray} We can define the anisotropy factor $\Delta \left( r\right) =p_{t}\left( r\right) -p_{r}\left( r\right) $ and write both sides of the above equation in terms of $M_{eff}\left( r,\varepsilon \right) $ and the dark energy parameter; hence $\Delta \left( r\right) $ is written as follows
\begin{eqnarray}
\Delta \left( r\right) &=&\frac{\omega h_{\varepsilon }^{2}}{8\pi r^{2}}\left[ r\left( \frac{\partial ^{2}M_{eff}\left( r,\varepsilon \right) }{\partial r^{2}}\right) -2\left( \frac{\partial M_{eff}\left( r,\varepsilon \right) }{\partial r}\right) \right. \notag \\ && \notag \\ &&\left. +\left( \frac{\left( 1+\omega \right) r}{\omega }\right) r\left( \frac{\partial M_{eff}\left( r,\varepsilon \right) }{\partial r}\right) g\left( r\right) \right] . \label{Delta}
\end{eqnarray}
The quantity $\frac{\Delta \left( r\right) }{r}$ represents a force caused by the anisotropic behavior of the stellar model. If $\Delta \left( r\right) >0$, this force is repulsive, but if $\Delta \left( r\right) <0$, this force is attractive. In order to have a standard solution for dark energy stars, $\Delta \left( r\right) $ should be positive. Both the gravity profile $g\left( r\right) $ and the anisotropy factor $\Delta \left( r\right) $ depend on $h_{\varepsilon }^{2}$. To solve the field equations, we have to choose a suitable mass function. To compare the results of two different gravitational models, general relativity and gravity's rainbow, with the same mass function, let us use the Tolman--Matese--Whitman (TMW) mass function that Lobo previously used in his study \cite{Lobo2006}. Thus we consider a modified TMW mass function for gravity's rainbow,
\begin{equation}
M_{eff}\left( r,\varepsilon \right) =\frac{b_{0}r^{3}}{2\left( 1+2b_{0}r^{2}\right) h_{\varepsilon }^{2}}, \label{MII}
\end{equation}
where $b_{0}$ is a positive constant \cite{Lobo2006}; this mass function is regular at the origin as $r\longrightarrow 0$. We use Eqs.
(\ref{g(r)I})--(\ref{MII}) to calculate the physical quantities of the dark energy star in gravity's rainbow, which are
\begin{eqnarray}
\rho \left( r\right) &=&\frac{b_{0}\left( 2b_{0}r^{2}+3\right) }{8\pi \left( 2b_{0}r^{2}+1\right) }, \label{rhoo} \\ && \notag \\ p_{r}\left( r\right) &=&\frac{\omega b_{0}\left( 2b_{0}r^{2}+3\right) }{8\pi \left( 2b_{0}r^{2}+1\right) }, \label{Pp} \\ && \notag \\ g\left( r\right) &=&\frac{2\omega b_{0}r^{2}+2b_{0}r^{2}+3\omega +1}{2r\left( 2b_{0}r^{2}+1\right) \left( \frac{h_{\varepsilon }^{2}\left( 2b_{0}r^{2}+1\right) }{b_{0}r^{2}}-1\right) }, \\ && \notag \\ \Delta \left( r\right) &=&\frac{\omega h_{\varepsilon }^{2}\mathcal{A}_{1}-\frac{b_{0}^{2}r^{4}\left( \mathcal{A}_{2}+\frac{\mathcal{A}_{3}}{b_{0}r^{2}}\right) }{2}-\frac{3\mathcal{A}_{4}}{8}}{\frac{4\pi \left( 2b_{0}r^{2}+1\right) ^{3}}{-b_{0}}\left( \frac{h_{\varepsilon }^{2}\left( 2b_{0}r^{2}+1\right) }{b_{0}r^{2}}-1\right) }, \label{Deltaa}
\end{eqnarray}
where $\mathcal{A}_{1}$, $\mathcal{A}_{2}$, $\mathcal{A}_{3}$ and $\mathcal{A}_{4}$ are
\begin{eqnarray*}
\mathcal{A}_{1} &=&4b_{0}^{2}r^{4}+12b_{0}r^{2}+5, \\ && \\ \mathcal{A}_{2} &=&\omega ^{2}+6\omega +1, \\ && \\ \mathcal{A}_{3} &=&3\omega ^{2}+15\omega +2, \\ && \\ \mathcal{A}_{4} &=&\omega ^{2}+4\omega +1.
\end{eqnarray*}
Note that using the energy density relation Eq. (\ref{rhoo}) and defining the central energy density $\rho _{c}$ at $r=0$, the constant $b_{0}$ is obtained as $b_{0}=8\pi \rho _{c}/3$. Figs. \ref{Fig1} and \ref{Fig2} show the behavior of the energy density $\rho \left( r\right) $ and the radial pressure $p_{r}\left( r\right) $ as functions of the distance from the center of the star, respectively. Note that in order to make the distance dimensionless, the parameter $\beta $ is defined by $\beta =\sqrt{b_{0}}r$. Both $\rho \left( r\right) $ and $p_{r}\left( r\right) $ are independent of the rainbow function. Fig.
\ref{Fig2} illustrates that as the value $\omega $ of increases, the magnitude of the radial pressure increases. The negative radial pressure is one of the characteristics of dark energy. To maintain the gravitational stability of DES, $g\left( r\right) $ should be negative. The gravity profile behavior is plotted versus $\omega $ and $% \beta $ in both dark energy and phantom energy regimes for different values of $h_{\varepsilon }$ in Fig. \ref{Fig3}. The range of $\beta $ is numerically determined according to the standard $g\left( r\right) $ range and $\omega $ values. Gravity profile values in the vicinity $\omega =-1/3$ are positive. As $h_{\varepsilon }$ increases, the range of $g\left( r\right) $ becomes more constrained and its positive values decrease. By reducing or removing the positive values of gravity profile, the model gets closer to the standard model of the dark energy star. Note that $% h_{\varepsilon }=1$, refers to the gravity profile in general relativity \cite{Lobo2006}. The anisotropy factor is shown in Figs. \ref{Fig4} and \ref{Fig5}. It is observed that the anisotropy factor is positive for all $\omega $\ values. There is also a slight difference between the anisotropy factor scheme with $% h_{\varepsilon }=1$ (general relativity) and $h_{\varepsilon }=1.1$,$~1.2$, and $1.3$ (gravity's rainbow). \section{Junction Condition and Dynamic Stability of Thin Shell} \subsection{General Relativity} According to Darmois-Israel formalism in general relativity \cite{Israel1966}% , we visualize two manifolds $M_{+}$ and $M_{-}$ with metrics $g_{\mu \nu }^{\pm }\left( x_{\pm }^{\mu }\right) $. These are matched together by two hypersurfaces $\Sigma _{\pm }$ with induced metrics $g_{ij}^{\pm }\left( \xi \right) $, where $\xi $ is the intrinsic coordinate of hypersurfaces. Note $% \mu ,\nu =0,1,2,3$ refer the coordinates of the $4-$dimensional manifold, and $i,j=1,2,3$ refer the coordinates of the $3-$dimensional shell. 
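Before specializing the matching conditions, the closed-form interior profiles of the previous section, Eqs. (\ref{rhoo})--(\ref{Deltaa}), can be spot-checked numerically. A minimal sketch follows, in which the sample values of $b_{0}$, $\omega $ and $h_{\varepsilon }$ are illustrative assumptions rather than fitted values:

```python
import math

def profiles(r, w, h_eps, b0=1.0):
    """Evaluate rho, p_r, g and Delta from the closed-form interior
    solution (Eqs. (rhoo)-(Deltaa) of the text)."""
    x = b0 * r * r
    rho = b0 * (2 * x + 3) / (8 * math.pi * (2 * x + 1))
    p_r = w * rho
    denom_fac = h_eps**2 * (2 * x + 1) / x - 1.0
    g = (2 * w * x + 2 * x + 3 * w + 1) / (2 * r * (2 * x + 1) * denom_fac)
    A1 = 4 * x * x + 12 * x + 5
    A2 = w * w + 6 * w + 1
    A3 = 3 * w * w + 15 * w + 2
    A4 = w * w + 4 * w + 1
    num = w * h_eps**2 * A1 - 0.5 * x * x * (A2 + A3 / x) - 3 * A4 / 8
    den = (4 * math.pi * (2 * x + 1) ** 3 / (-b0)) * denom_fac
    return rho, p_r, g, num / den

# Dark-energy regime, GR limit h_eps = 1: radial pressure negative,
# anisotropy factor positive, gravity profile negative for
# sufficiently negative omega (cf. Figs. 3-5 of the text).
_, p_r, g, Delta = profiles(1.0, -0.8, 1.0)
print(p_r < 0, Delta > 0, g < 0)
```

At this sample point the check reproduces the qualitative claims above: $p_{r}<0$, $\Delta >0$, and $g<0$ away from $\omega \approx -1/3$.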
The induced metric on the junction surface is defined by the following relation \cite{Israel1966}
\begin{equation}
g_{ij}=\left[ g_{\mu \nu }\frac{\partial x^{\mu }}{\partial \xi ^{i}}\frac{\partial x^{\nu }}{\partial \xi ^{j}}\right] _{\pm }. \label{imetric}
\end{equation}
We select the parametric equation for a timelike hypersurface $\Sigma $ in the form $f\left( r,\tau \right) =r-a\left( \tau \right) =0$. The junction radius $a\left( \tau \right) $ is a function of the proper time $\tau $. It is notable that $ds^{2}$ must be continuous across the junction. Using Eq. (\ref{imetric}), the intrinsic metric on $\Sigma $ is written as \cite{Lobo2004}
\begin{equation}
ds_{\Sigma }^{2}=-d\tau ^{2}+a^{2}\left( \tau \right) \left( d\theta ^{2}+\sin ^{2}\theta d\varphi ^{2}\right) .
\end{equation}
According to the parametric equation, it can be shown that the unit normal $n_{\mu }$ to the junction surface is defined as follows \cite{Poisson2004}
\begin{equation}
n_{\mu }=\pm \frac{\frac{\partial f}{\partial x^{\mu }}}{\sqrt{g^{\alpha \beta }\frac{\partial f}{\partial x^{\alpha }}\frac{\partial f}{\partial x^{\beta }}}}, \label{etaI}
\end{equation}
where $n^{\mu }n_{\mu }=+1$ and $u^{\mu }n_{\mu }=0$. Let us use the extrinsic curvature tensor $\kappa _{ij}$ of the junction surface \cite{LoboC2005}
\begin{equation}
\kappa _{ij}=-n_{\mu }\left( \frac{\partial ^{2}x^{\mu }}{\partial \xi ^{i}\partial \xi ^{j}}+\Gamma _{\alpha \beta }^{\mu \pm }\frac{\partial x^{\alpha }}{\partial \xi ^{i}}\frac{\partial x^{\beta }}{\partial \xi ^{j}}\right) .
\label{kappaI} \end{equation} The cause of the discontinuity in the extrinsic curvature is the presence of matter in the shell \cite{Mansouri1996}; thus, the discontinuity in the extrinsic curvature is defined as \cite{Lobo2004}
\begin{equation}
\chi _{ij}=\kappa _{ij}^{+}-\kappa _{ij}^{-},
\end{equation}
and we can define the surface stress-energy tensor on $\Sigma $ \cite{Visser1989,Poisson1995}
\begin{equation}
s_{\ j}^{i}=\frac{-1}{8\pi }\left( \chi _{~j}^{i}-\delta _{~j}^{i}\chi _{~k}^{k}\right) . \label{sform}
\end{equation}
This relation is known as the Lanczos equation, which describes the dynamical behavior of the thin shell. We can obtain the non-zero components of the extrinsic curvature tensor $\kappa _{ij}$ \cite{LoboC2005} by using Eqs. (\ref{etaI}) and (\ref{kappaI})
\begin{eqnarray}
\kappa _{\theta }^{\theta \pm } &=&\frac{\sqrt{e^{-2\lambda _{\pm }\left( r\right) }+\overset{.}{a}^{2}}}{a}, \label{kappa2} \\ && \notag \\ \kappa _{\tau }^{\tau \pm } &=&\frac{\phi _{\pm }^{^{\prime }}\left( e^{-2\lambda _{\pm }\left( r\right) }+\overset{.}{a}^{2}\right) +\ddot{a}+\overset{.}{a}^{2}\lambda _{\pm }^{^{\prime }}}{\sqrt{e^{-2\lambda _{\pm }\left( r\right) }+\overset{.}{a}^{2}}}. \label{kappa3}
\end{eqnarray}
Here, we consider $s_{\ j}^{i}=\mathrm{diag}\left( -\sigma ,P,P\right) $, where $\sigma $ is the surface energy density and $P$ is the tangential surface pressure. Also, the prime denotes a derivative with respect to the junction radius $a$ and the overdot denotes a derivative with respect to the proper time $\tau $. By using the Lanczos equation and Eqs.
(\ref{kappa2}) and (\ref{kappa3}), the surface energy density and the surface pressure can be written as follows
\begin{eqnarray}
\sigma &=&-\frac{\chi _{\theta }^{\theta }}{4\pi }=\frac{\sqrt{e^{-2\lambda _{-}\left( r\right) }+\overset{.}{a}^{2}}-\sqrt{e^{-2\lambda _{+}\left( r\right) }+\overset{.}{a}^{2}}}{4\pi a}, \\ && \notag \\ P &=&\frac{\chi _{\tau }^{\tau }+\chi _{\theta }^{\theta }}{8\pi } \notag \\ && \notag \\ &=&\frac{\left[ \frac{\left( 1+\phi ^{^{\prime }}a\right) \left( e^{-2\lambda \left( r\right) }+\overset{.}{a}^{2}\right) +a\ddot{a}+\lambda ^{^{\prime }}a\overset{.}{a}^{2}}{\sqrt{e^{-2\lambda \left( r\right) }+\overset{.}{a}^{2}}}\right] ^{\pm }}{8\pi a},
\end{eqnarray}
where the notation $\left[ X\right] ^{\pm }=X^{+}\left\vert \Sigma \right. -X^{-}\left\vert \Sigma \right. $ is used. Poisson and Visser \cite{Poisson1995} defined the parameter $\eta =\frac{\sigma ^{^{\prime }}}{P^{^{\prime }}}$, where $\sqrt{\eta }$ is the speed of sound; in the surface layer, it should lie in the range $0<\eta \leq 1$. By determining $\eta $, the stability regions can be identified. \subsection{Gravity's Rainbow} To study the thin shell and junction conditions in gravity's rainbow, we can define the intrinsic metric on $\Sigma $ from Eq. (\ref{imetric}), which is given by \cite{Amirabi2018}
\begin{equation}
ds_{\Sigma (\text{rainbow})}^{2}=-d\tau ^{2}+\frac{a^{2}\left( \tau \right) }{h_{\varepsilon }^{2}}\left( d\theta ^{2}+\sin ^{2}\theta d\varphi ^{2}\right),
\end{equation}
where $\left( \tau ,\theta ,\varphi \right) $ refer to the intrinsic coordinates. The line element should be continuous across $\Sigma $; therefore $\overset{.}{t}=\frac{\partial t}{\partial \tau }$ is given by
\begin{equation}
\overset{.}{t}_{(\text{rainbow})}=l_{\varepsilon }e^{\left( \lambda -\phi \right) _{\pm }}\sqrt{e^{-2\lambda _{\pm }\left( r\right) }+\frac{\overset{.}{a}^{2}}{h_{\varepsilon }^{2}}}.
\end{equation} The position of the thin shell is given by $x^{\mu }=\left( t\left( \tau \right) ,a\left( \tau \right) ,\theta ,\varphi \right) $; thus the $4$-velocity can be written as
\begin{equation}
u_{\pm (\text{rainbow})}^{\mu }=\left( l_{\varepsilon }e^{\left( \lambda -\phi \right) _{\pm }}\sqrt{e^{-2\lambda _{\pm }\left( r\right) }+\frac{\overset{.}{a}^{2}}{h_{\varepsilon }^{2}}},\overset{.}{a},0,0\right) .
\end{equation}
By using Eqs. (\ref{metric}) and (\ref{etaI}), we obtain
\begin{equation}
n_{\pm (\text{rainbow})}^{\mu }=\left( \frac{l_{\varepsilon }\overset{.}{a}e^{\left( \lambda -\phi \right) _{\pm }}}{h_{\varepsilon }},\sqrt{h_{\varepsilon }^{2}e^{-2\lambda _{\pm }\left( r\right) }+\overset{.}{a}^{2}},0,0\right) .
\end{equation}
The general form of $G_{\mu \nu }\left( \varepsilon \right) $ in gravity's rainbow is the same as in general relativity; hence, for a hypersurface, we can use the Lanczos equation in the same form as Eq. (\ref{sform}). By using Eqs.
(\ref{metric}), (\ref{kappa2}) and (\ref{kappa3}), the extrinsic curvature tensor $\kappa _{ij\left( \text{rainbow}\right) }$ in gravity's rainbow is given by
\begin{eqnarray}
\kappa _{\theta \left( \text{rainbow}\right) }^{\theta \pm } &=&\frac{\sqrt{h_{\varepsilon }^{2}e^{-2\lambda _{\pm }\left( r\right) }+\overset{.}{a}^{2}}}{a}, \\ && \notag \\ \kappa _{\tau \left( \text{rainbow}\right) }^{\tau \pm } &=&\frac{\phi _{\pm }^{^{\prime }}\left( h_{\varepsilon }^{2}e^{-2\lambda _{\pm }\left( r\right) }+\overset{.}{a}^{2}\right) +\ddot{a}+\overset{.}{a}^{2}\lambda _{\pm }^{^{\prime }}}{\sqrt{h_{\varepsilon }^{2}e^{-2\lambda _{\pm }\left( r\right) }+\overset{.}{a}^{2}}},
\end{eqnarray}
and also
\begin{eqnarray}
\sigma _{\left( \text{rainbow}\right) } &=&-\frac{\chi _{\theta }^{\theta }}{4\pi } \notag \\ && \notag \\ &=&\frac{h_{\varepsilon }\left( \sqrt{e^{-2\lambda _{-}\left( r\right) }+\frac{\overset{.}{a}^{2}}{h_{\varepsilon }^{2}}}-\sqrt{e^{-2\lambda _{+}\left( r\right) }+\frac{\overset{.}{a}^{2}}{h_{\varepsilon }^{2}}}\right) }{4\pi a}, \label{sigmaGsR} \\ && \notag \\ P_{\left( \text{rainbow}\right) } &=&\frac{\chi _{\tau }^{\tau }+\chi _{\theta }^{\theta }}{8\pi } \notag \\ && \notag \\ &=&\frac{\left[ \frac{\left( 1+\phi ^{^{\prime }}a\right) \left( h_{\varepsilon }^{2}e^{-2\lambda \left( r\right) }+\overset{.}{a}^{2}\right) +a\ddot{a}+\lambda ^{^{\prime }}a\overset{.}{a}^{2}}{\sqrt{h_{\varepsilon }^{2}e^{-2\lambda \left( r\right) }+\overset{.}{a}^{2}}}\right] ^{\pm }}{8\pi a}. \label{PGsR}
\end{eqnarray}
The interior spacetime in the presence of dark energy should match the exterior vacuum spacetime at a junction of radius $a$. According to Eq.
(\ref{metric}), we can write
\begin{eqnarray}
e^{2\phi _{+}} &=&e^{-2\lambda _{+}}=1-\frac{2M}{r}, \\ && \notag \\ e^{-2\lambda _{-}} &=&1-\frac{2m\left( r\right) }{h_{\varepsilon }^{2}r}=1-\frac{2M_{eff}}{r},
\end{eqnarray}
where the effective mass is $M_{eff}=\frac{m\left( r\right) }{h_{\varepsilon }^{2}}$. Thus, the exterior spacetime reads
\begin{equation}
ds_{+}^{2}=-\frac{1-\frac{2M}{r}}{l_{\varepsilon }^{2}}dt^{2}+\frac{dr^{2}}{h_{\varepsilon }^{2}\left( 1-\frac{2M}{r}\right) }+\frac{r_{+}^{2}\left( d\theta ^{2}+\sin ^{2}\theta d\varphi ^{2}\right) }{h_{\varepsilon }^{2}},
\end{equation}
where $M$ is the total mass. In order to avoid the event horizon in the dark energy star model, the junction radius is placed outside it, $a>2M$. The surface energy density and the surface pressure, Eqs. (\ref{sigmaGsR}) and (\ref{PGsR}), can be obtained in terms of $m(r)$, $M$ and $h_{\varepsilon }$ as follows
\begin{eqnarray}
\sigma _{\left( \text{rainbow}\right) } &=&\frac{h_{\varepsilon }\left( \sqrt{1-\frac{2m\left( a\right) }{ah_{\varepsilon }^{2}}+\frac{\overset{.}{a}^{2}}{h_{\varepsilon }^{2}}}-\sqrt{1-\frac{2M}{a}+\frac{\overset{.}{a}^{2}}{h_{\varepsilon }^{2}}}\right) }{4\pi a}, \label{SigGsRII} \\ && \notag \\ P_{\left( \text{rainbow}\right) } &=&\frac{1}{8\pi a}\left[ \frac{h_{\varepsilon }^{2}\left( 1-\frac{M}{a}\right) +\overset{.}{a}^{2}+\ddot{a}}{\sqrt{h_{\varepsilon }^{2}\left( 1-\frac{2M}{a}\right) +\overset{.}{a}^{2}}}-\mathcal{P}\right] ,
\end{eqnarray}
where $\mathcal{P}=\frac{\left( 1+\phi _{-}^{^{\prime }}a\right) \left( h_{\varepsilon }^{2}\left( 1-\frac{2m\left( a\right) }{ah_{\varepsilon }^{2}}\right) +\overset{.}{a}^{2}\right) +a\ddot{a}+\frac{\left( am^{^{\prime }}\left( a\right) -m\left( a\right) \right) \overset{.}{a}^{2}}{h_{\varepsilon }^{2}\left( a-\frac{2m\left( a\right) }{h_{\varepsilon }^{2}}\right) }}{\sqrt{h_{\varepsilon
}^{2}\left( 1-\frac{2m\left( a\right) }{ah_{\varepsilon }^{2}}\right) +\overset{.}{a}^{2}}}$. From the above equations, it can be seen that $\sigma $ and $P$ depend on the rainbow function $h_{\varepsilon }$ and are independent of $l_{\varepsilon }$. According to Eq. (\ref{6}), we can define the area of the junction surface as $A_{\Sigma }=\frac{4\pi a^{2}}{h_{\varepsilon }^{2}}$; therefore, the mass of the thin shell, $m_{s}=\sigma _{\left( \text{rainbow}\right) }A_{\Sigma }$, in gravity's rainbow is as follows
\begin{equation}
m_{s}=\frac{4\pi \sigma _{\left( \text{rainbow}\right) }a^{2}}{h_{\varepsilon }^{2}}.
\end{equation}
By rewriting Eq. (\ref{SigGsRII}) in terms of $M$, we can obtain the total mass at a static radius $a_{0}$,
\begin{eqnarray}
M &=&M_{eff}\left( a_{0}\right) \notag \\ && \notag \\ &&+m_{s}\left( a_{0}\right) h_{\varepsilon }\left( \sqrt{1-\frac{2M_{eff}\left( a_{0}\right) }{a_{0}}}-\frac{m_{s}\left( a_{0}\right) h_{\varepsilon }}{2a_{0}}\right) . \label{M2}
\end{eqnarray}
As mentioned earlier, the dynamical stability of the thin shell can be examined using the parameter $\eta $. Fig. \ref{Fig6} shows the stability region for the case $\omega =-0.5$ and different values of $h_{\varepsilon }$. It should be noted that we can use $b_{0}=2m\left[ a^{3}\left( 1-4m/a\right) \right] ^{-1}$ as an auxiliary tool \cite{Lobo2006}; thus $a>4m\left( a\right) $. On the other hand, we assumed $a\gtrsim 2M$, so $0<\frac{m}{M}\lesssim \frac{1}{2}$. Note that $h_{\varepsilon }$ is simplified out in these considerations. For $h_{\varepsilon }=1$, all equations reduce to their usual form in general relativity. It can be inferred from Fig. \ref{Fig7} that as the value of the rainbow function $h_{\varepsilon }$ increases, the unstable regions near the Schwarzschild radius move closer to stability. For values $h_{\varepsilon }\geq 1.3$, the whole region is stable.
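The static relation Eq. (\ref{M2}) can be verified by a round-trip numerical check: choose $M_{eff}$, $m_{s}$, $a_{0}$ and $h_{\varepsilon }$, compute $M$, and recover $m_{s}$ from the static ($\overset{.}{a}=0$) surface density of Eq. (\ref{SigGsRII}). A minimal sketch with arbitrary illustrative values (not fitted to any star model):

```python
import math

def total_mass(M_eff, m_s, a0, h):
    """Total mass at the static radius a0, Eq. (M2) of the text."""
    return M_eff + m_s * h * (math.sqrt(1 - 2 * M_eff / a0) - m_s * h / (2 * a0))

def shell_mass(M_eff, M, a0, h):
    """m_s = 4 pi sigma a0^2 / h^2 with the static surface density sigma."""
    sigma = h * (math.sqrt(1 - 2 * M_eff / a0)
                 - math.sqrt(1 - 2 * M / a0)) / (4 * math.pi * a0)
    return 4 * math.pi * sigma * a0**2 / h**2

M_eff, m_s, a0, h = 1.0, 0.5, 10.0, 1.2   # illustrative values
M = total_mass(M_eff, m_s, a0, h)
print(a0 > 2 * M)                                    # junction outside the horizon
print(abs(shell_mass(M_eff, M, a0, h) - m_s) < 1e-10)  # round trip recovers m_s
```

The round trip closes exactly because Eq. (\ref{M2}) is the inversion of the static surface-density relation; the check also confirms $a_{0}>2M$ for the chosen values, as required to avoid the event horizon.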
\section{Energy Conditions} For the interior region, the energy conditions are given by \cite{Leon1993,Visser1995} \newline i) null energy condition (NEC): $\rho +p_{r}\geq 0$, and $\rho +p_{t}\geq 0$. \newline ii) weak energy condition (WEC): $\rho \geq 0$, $\rho +p_{r}\geq 0$, and~$\rho +p_{t}\geq 0$. \newline iii) strong energy condition (SEC): $\rho +p_{r}+2p_{t}\geq 0$, and~$\rho +p_{t}\geq 0$. \newline iv) dominant energy condition (DEC): $\rho \geq \left\vert p_{r}\right\vert $, and $\rho \geq \left\vert p_{t}\right\vert $. \newline Substituting Eqs. (\ref{rhoo}), (\ref{Pp}), and (\ref{Deltaa}) into the above definitions, the interior energy conditions are obtained
\begin{eqnarray}
\rho \left( r\right) +p_{r}\left( r\right) &=&\frac{b_{0}\left( 2b_{0}r^{2}+3\right) \left( \omega +1\right) }{8\pi \left( 2b_{0}r^{2}+1\right) }, \notag \\ && \notag \\ \rho \left( r\right) -p_{r}\left( r\right) &=&\frac{-b_{0}\left( 2b_{0}r^{2}+3\right) \left( \omega -1\right) }{8\pi \left( 2b_{0}r^{2}+1\right) }, \notag \\ && \notag \\ \rho \left( r\right) +p_{t}\left( r\right) &=&\frac{\frac{b_{0}}{8\pi }\left[ \frac{8\mathcal{A}_{5}\left( b_{0}r^{2}+\frac{1}{2}\right) h_{\varepsilon }^{2}}{b_{0}r^{2}}+\mathcal{A}_{6}\right] }{\left( 2b_{0}r^{2}+1\right) ^{3}\left[ \frac{\left( 2b_{0}r^{2}+1\right) h_{\varepsilon }^{2}}{b_{0}r^{2}}-1\right] }, \notag \\ && \notag \\ \rho \left( r\right) -p_{t}\left( r\right) &=&\frac{\frac{b_{0}}{8\pi }\left[ \frac{8\mathcal{A}_{7}\left( b_{0}r^{2}+\frac{1}{2}\right) h_{\varepsilon }^{2}}{b_{0}r^{2}}-\mathcal{A}_{8}\right] }{\left( 2b_{0}r^{2}+1\right) ^{3}\left[ \frac{\left( 2b_{0}r^{2}+1\right) h_{\varepsilon }^{2}}{b_{0}r^{2}}-1\right] }, \notag \\ && \notag \\ \rho \left( r\right) +p_{r}\left( r\right) +2p_{t}\left( r\right) &=&\frac{\frac{b_{0}}{4\pi }\left[ \frac{4\mathcal{A}_{9}\left( b_{0}r^{2}+\frac{1}{2}\right) h_{\varepsilon }^{2}}{b_{0}r^{2}}+\mathcal{A}_{10}\right] }{\left( 2b_{0}r^{2}+1\right) ^{3}\left[
\frac{\left( 2b_{0}r^{2}+1\right) h_{\varepsilon }^{2}}{b_{0}r^{2}}-1\right] },
\end{eqnarray}
where $\mathcal{A}_{5}$, $\mathcal{A}_{6}$, $\mathcal{A}_{7}$, $\mathcal{A}_{8}$, $\mathcal{A}_{9}$ and $\mathcal{A}_{10}$ are
\begin{eqnarray*}
\mathcal{A}_{5} &=&b_{0}^{2}r^{4}-\frac{b_{0}r^{2}\left( \omega -4\right) }{2}+\frac{3}{4}\left( \omega +1\right) , \\ && \\ \mathcal{A}_{6} &=&b_{0}^{2}r^{4}\left( \omega -1\right) \left( \omega +3\right) \\ &&+3b_{0}r^{2}\left( \omega -\frac{2}{3}\right) \left( \omega +3\right) +\frac{9\left( \omega ^{2}-1\right) }{4}, \\ && \\ \mathcal{A}_{7} &=&b_{0}^{2}r^{4}+\frac{b_{0}r^{2}\left( \omega +4\right) }{2}-\frac{3}{4}\left( \omega -1\right) , \\ && \\ \mathcal{A}_{8} &=&b_{0}^{2}r^{4}\left( \omega ^{2}+2\omega -5\right) \\ &&+3b_{0}r^{2}\left( \omega ^{2}+\frac{7\omega }{3}+\frac{10}{3}\right) +\frac{3\left( 3\omega ^{2}+5\right) }{4}, \\ && \\ \mathcal{A}_{9} &=&b_{0}^{2}r^{4}\left( \omega +1\right) +b_{0}r^{2}\left( \omega +2\right) +\frac{3\left( 3\omega +1\right) }{4}, \\ && \\ \mathcal{A}_{10} &=&b_{0}^{2}r^{4}\left( \omega ^{2}-1\right) \\ &&+3b_{0}r^{2}\left( \omega ^{2}+\omega -\frac{2}{3}\right) +\frac{3\left( 3\omega ^{2}-2\omega -1\right) }{4}.
\end{eqnarray*}
The energy conditions are demonstrated in Fig. \ref{Fig8} for the cases $h_{\varepsilon }=1$ and $h_{\varepsilon }=1.3$, respectively. It can be seen that in gravity's rainbow, similar to general relativity, all energy conditions are obeyed in our calculations for $\omega =-\frac{1}{3}$. For $-1\leq \omega \leq -\frac{1}{3}$, all conditions are satisfied except the SEC. Violation of the strong energy condition is a feature of dark energy. For $\omega <-1$, the SEC and NEC are violated. Violation of the null energy condition is a feature of phantom energy.
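The radial parts of these conditions can be checked directly from Eqs. (\ref{rhoo}) and (\ref{Pp}): since $p_{r}=\omega \rho $ with $\rho >0$, the radial NEC combination $\rho +p_{r}=\rho (1+\omega )$ changes sign exactly at $\omega =-1$, while the radial DEC combination $\rho -p_{r}=\rho (1-\omega )$ always holds for $\omega <1$. A minimal sketch (the sample $b_{0}$ and $r$ values are illustrative):

```python
import math

def rho(r, b0=1.0):
    """Energy density, Eq. (rhoo) of the text."""
    x = b0 * r * r
    return b0 * (2 * x + 3) / (8 * math.pi * (2 * x + 1))

def nec_radial(r, w):   # rho + p_r = rho * (1 + w)
    return rho(r) * (1 + w)

def dec_radial(r, w):   # rho - p_r = rho * (1 - w)
    return rho(r) * (1 - w)

r = 1.0
print(nec_radial(r, -0.5) > 0)           # dark energy: radial NEC holds
print(abs(nec_radial(r, -1.0)) < 1e-15)  # cosmological constant: marginal
print(nec_radial(r, -1.2) < 0)           # phantom: radial NEC violated
print(dec_radial(r, -1.2) > 0)           # radial DEC holds for w < 1
```

This reproduces the sign pattern stated above: the NEC violation for $\omega <-1$ is the defining feature of the phantom regime, independent of $r$ and $h_{\varepsilon }$.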
\section{Conclusions} In this study, by assuming the existence of an anisotropic distribution of dark energy in the interior of a spherically symmetric spacetime, we obtained a modified TOV equation for an anisotropic distribution in gravity's rainbow. In order to solve the energy-dependent field equations, we used the modified Tolman--Matese--Whitman mass function. We considered the dark energy equation of state to obtain the properties of dark energy stars. We showed that the final solution is independent of the rainbow function $l_{\varepsilon }$ and depends only on $h_{\varepsilon }$. As the value of $h_{\varepsilon }$ increased, the gravity profile $g\left( r\right) $ became more constrained, so the model is closer to the standard definition of a dark energy star, and the anisotropy factor remains positive. When the rainbow function equals one, this solution reduces to general relativity \cite{Lobo2006}. We also investigated the dynamical stability of the thin shell for the dark energy star in this gravity by generalizing the Darmois--Israel formalism to gravity's rainbow. We showed that by increasing the rainbow function $h_{\varepsilon }$, the unstable regions near the event horizon decrease. The strong energy condition (SEC) is violated in the interior of the dark energy star. It seems that employing gravity's rainbow has the greatest effect near the so-called phase-transition zone, which is located near the event horizon, where high-energy particles decay as they cross the critical surface \cite{Chapline,ChaplinearXiv}. Modified gravity with quantum gravity backgrounds will therefore significantly help in studying the behavior of a dark energy star, especially near the critical region. \begin{acknowledgements} A. Bagheri Tudeshki and G. H. Bordbar wish to thank the Shiraz University research council. B. Eslam Panah thanks the University of Mazandaran. The University of Mazandaran has supported the work of B.
Eslam Panah by title "Evolution of the masses of celestial compact objects in various gravity". \end{acknowledgements}
Title: Faraday Rotation Measure Variations of Repeating Fast Radio Burst Sources
Abstract: Recently, some fast radio burst (FRB) repeaters were reported to exhibit complex, diverse variations of Faraday rotation measures (RMs), which implies that they are surrounded by an inhomogeneous, dynamically evolving, magnetized environment. We systematically investigate some possible astrophysical processes that may cause RM variations of an FRB repeater. The processes include (1) a supernova remnant (SNR) with a fluctuating medium; (2) a binary system with stellar winds from a massive/giant star companion or stellar flares from a low-mass star companion; (3) a pair plasma medium from a neutron star (including pulsar winds, pulsar wind nebulae and magnetar flares); (4) outflows from a massive black hole. For the SNR scenario, a large relative RM variation during a few years requires that the SNR is young with a thin and locally anisotropic shell, or that the size of dense gas clouds in the interstellar/circumstellar medium around the SNR is extremely small. If the RM variation is caused by the companion medium in a binary system, it is more likely from stellar winds of a massive/giant star companion. The RM variation contributed by stellar flares from a low-mass star is disfavored, because this scenario predicts an extremely large relative RM variation during a short period of time. The scenarios invoking a pair plasma from a neutron star can be ruled out due to their extremely low RM contributions. Outflows from a massive black hole could provide a large RM variation if the FRB source is in the vicinity of the black hole.
https://export.arxiv.org/pdf/2208.08712
\label{firstpage} \pagerange{\pageref{firstpage}--\pageref{lastpage}} \begin{keywords} (transients:) fast radio bursts -- (stars:) pulsars:general -- radio continuum: transients -- ISM: structure \end{keywords} \section{Introduction} Fast radio bursts (FRBs) are mysterious radio transients with millisecond durations and extremely high brightness temperatures at cosmological distances. So far, over 600 FRB sources have been detected, dozens of which exhibited a repeating behavior \citep[e.g.,][]{CHIME21}. However, their physical origin is still not well understood due to the complexity and diversity of the observations \citep[e.g.,][]{Cordes19,Zhang20,Xiao21}. For example, a Galactic FRB, FRB 200428, was detected in association with the magnetar SGR J1935+2154 \citep{Bochenek20,CHIME20,Mereghetti20,Li20,Ridnaia20,Tavani20}, implying that at least some FRBs originate from magnetars born from the core collapse of massive stars \citep{Katz16,Murase16,Beloborodov17,Kumar17,Yang18,Yang21,Metzger19,Wadiasingh19,Lu20,Margalit20,Zhang22,Wang22b,Qu22}. However, such a magnetar formation channel is challenged by the observation of another nearby FRB, FRB 20200120E, which was localized to a globular cluster of the nearby galaxy M81 \citep{Bhardwaj21,Kirsten22}. The extremely old age of the globular cluster implies that this FRB is more likely produced by an old object or a system associated with a compact binary merger \citep{zhang20c,Kremer21,Lu22}. Therefore, multiple physical origins for the FRB population seem increasingly likely. In addition to the FRB sources themselves, propagation effects, e.g., dispersion, Faraday rotation, temporal scattering, scintillation, depolarization, and gravitational/plasma lensing, also play important roles in interpreting FRB observations and constraining the properties of the FRB environment \citep[e.g.,][]{Xu16,Cordes17,Yang17,Li18b,Yang20b,Er20,Beniamini22,Yang22,Kumar22}.
Dispersion measure (DM) and Faraday rotation measure (RM) are two of the most important measurable quantities for FRBs. For a repeating FRB source, the variations of its DM and RM would provide clues to study the properties of its near-source plasma. \cite{Yang17} studied various possible origins of DM variations of an FRB repeater and concluded that the plasma local to the FRB source is the most likely cause. Different from the DM, which usually has a significant contribution from the intergalactic medium, the observed RM of an FRB, especially when it is large, is likely contributed by a highly magnetized environment near the FRB source, because the RM contribution from the intergalactic medium is very small with a typical value of $|{\rm RM_{IGM}}|\ll 10~{\rm rad~m^{-2}}$ \citep{Akahori16} and because the contribution from the interstellar medium in the Milky Way is usually $|{\rm RM_{MW}}|\lesssim100~{\rm rad~m^{-2}}$ at high latitudes \citep{Hutschenreuter22}. The first known repeater, FRB 121102, shows the largest RM among all FRBs, $|{\rm RM}|\sim10^5~{\rm rad~m^{-2}}$ \citep{Michilli18}, which decreased by $\sim30\%$ within one year \citep{Hilmarsson21}. This may be caused by the expansion of a young supernova remnant (SNR) \citep{Piro18}, a magnetar nebula \citep{Margalit18}, or an ejecta from a compact binary merger \citep{Zhao21} around the FRB source, although other scenarios (see discussion below) may also be possible. Another active repeater, FRB 190520B, has an extremely large host DM with ${\rm DM_{host}}\sim900~{\rm pc~cm^{-3}}$ \citep{Niu22}, which is nearly an order of magnitude higher than those of other FRBs. Meanwhile, its RM value reaches $\sim10^4~{\rm rad~m^{-2}}$, second only to FRB 121102, and an RM sign reversal appeared over a few months \citep{Anna-Thomas22,Dai22}. Such a large RM and significant RM reversal directly suggest that the FRB environment is magnetized and dynamically evolving.
It is worth noting that both FRB 121102 and FRB 190520B are associated with a compact persistent radio source with a wide emission spectrum \citep{Chatterjee17,Niu22}, which implies that a persistent radio source and a large RM are likely physically related to each other \citep{Yang20a,Yang22}. Other FRB repeaters also show complex and diverse RM variations. FRB 20201124A showed an irregular RM variation over one month. Some bursts appeared to have circular polarization and frequency-dependent oscillating polarization properties \citep{Xu21}. FRB 180916B, with a 16.33-day periodic activity \citep{CHIME20b}, exhibited RM variations with a stochastic component and a secular component over different periods \citep{Mckinven22}. In summary, RM variations seem to be a common feature for all FRB repeaters. Such significant RM variations suggest that repeating FRBs are likely surrounded by an inhomogeneous and dynamically evolving Faraday screen. When a radio burst propagates in the screen, two important effects are involved: 1) the radio bursts would be depolarized due to different RMs along different paths. Very recently, \citet{Feng22} reported that active FRB repeaters exhibit conspicuous frequency-dependent depolarization that can be well described by the multi-path propagation effect in an inhomogeneous magnetized plasma \citep{Yang22}. 2) a significant circular polarization would be generated due to the superposition of electromagnetic waves with different phases and polarization angles from different paths \citep{Beniamini22}. In this paper, we investigate the possible physical mechanisms that may cause RM variations from a repeating FRB source, and discuss the physical implications of the observed RM variations. The paper is organized as follows. We discuss the necessary conditions and theoretical implications of Faraday rotation in Section \ref{general}.
The physical origins of random and secular RM evolution are generally analyzed in Section \ref{FaradayScreen}, where the random RM variation is caused by the relative motion between the FRB source and a Faraday screen with an inhomogeneous magnetized medium, and the secular RM evolution may be caused by an expanding shell or the orbital motion in a binary system. In Section \ref{scenarios}, we discuss different astrophysical scenarios, including SNRs with an inhomogeneous medium in Section \ref{SNR}, stellar winds from a massive/giant star companion in Section \ref{stellarwind}, stellar flares from a low-mass star companion in Section \ref{stellarflare}, pulsar winds, pulsar wind nebulae and magnetar flares in Section \ref{pair}, and magnetized outflows from a massive black hole in Section \ref{BH}. The results are discussed and summarized in Section \ref{discussion}. Some detailed calculations are presented in the Appendices. \section{Rotation measure: a general discussion}\label{general} Observationally, the Faraday rotation measure, ${\rm RM_{obs}}$, is measured through the frequency(wavelength)-dependent polarization angle of linearly polarized waves \be \psi={\rm RM_{obs}}\lambda^2, \ee where $\psi$ is the polarization angle of the electromagnetic wave of wavelength $\lambda$ with respect to that of infinite frequency. The necessary conditions to measure ${\rm RM_{obs}}$ of a source include: 1) the electromagnetic waves must contain a significant linear polarization component; 2) the polarization angle must satisfy $\psi\propto\lambda^2$. For an extragalactic FRB, the observed RM can be decomposed in terms of the contributions of various plasma components along the line of sight, i.e.
\be {\rm RM_{obs}}={\rm RM_{\rm ion}}+{\rm RM_{MW}}+{\rm RM_{IGM}}+\frac{{\rm RM_{host}}}{(1+z)^2}+\frac{{\rm RM_{loc}}}{(1+z)^2}, \ee where $z$ is the redshift of the host galaxy, ${\rm RM_{\rm ion}}$ is the contribution from the Earth's ionosphere, which is of the order of $|{\rm RM_{\rm ion}}|\sim(0.1-1)~{\rm rad~m^{-2}}$ \citep{Mevius18,Mckinven22}, ${\rm RM_{MW}}$ is the contribution from the interstellar medium in the Milky Way, which has a typical absolute value $|{\rm RM_{MW}}|\lesssim100~{\rm rad~m^{-2}}$ at high latitudes \citep{Hutschenreuter22}, ${\rm RM_{IGM}}$ is the contribution from the intergalactic medium, which has a very small value of $|{\rm RM_{IGM}}|< 10~{\rm rad~m^{-2}}$ \citep{Akahori16}, ${\rm RM_{host}}$ is the contribution from the interstellar medium in the FRB host galaxy, which might be of the same order of magnitude as that of the Milky Way, and ${\rm RM_{loc}}$ is the contribution from the local plasma near the FRB source. Therefore, an observed RM with a large absolute value of $|{\rm RM_{obs}}|\gtrsim10^3~{\rm rad~m^{-2}}$ is expected to be mainly contributed by the local plasma ${\rm RM_{loc}}$. On the other hand, since both the intergalactic medium and the interstellar medium cannot have short-term evolution \citep{Yang17}, the observed significant RM variations can only be attributed to the local plasma. In the following discussion, we are only interested in ${\rm RM_{loc}}$, and hereafter directly use the symbol ${\rm RM}$ to denote ${\rm RM_{loc}}$.
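As a quick numerical illustration of this budget, the decomposition can be inverted for the source-local term. The helper below is our own sketch, not from the paper; the function name and the default foreground values are illustrative, chosen within the ranges quoted above.

```python
# Our own sketch: invert the RM budget for the source-local (rest-frame) term.
# RM_obs = RM_ion + RM_MW + RM_IGM + (RM_host + RM_loc) / (1+z)^2
# Default foreground values are illustrative, within the quoted ranges.

def rm_local(rm_obs, z, rm_ion=0.5, rm_mw=30.0, rm_igm=5.0, rm_host=30.0):
    """Return RM_loc in rad m^-2 (rest frame of the host) given the observed
    RM, the host redshift z, and estimates of the foreground terms."""
    return (rm_obs - rm_ion - rm_mw - rm_igm) * (1 + z)**2 - rm_host
```

For an FRB 121102-like observation with $|{\rm RM_{obs}}|\sim10^5~{\rm rad~m^{-2}}$, any plausible choice of foregrounds leaves the local term dominating the budget by orders of magnitude, as the text argues.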
For a non-relativistic magneto-ionic (ions+electrons) cold plasma with magnetic field $B$ and electron density $n_e$, the RM can be calculated from the dispersion relation of the electromagnetic wave, and the classical result is \begin{align} {\rm RM}&=\frac{e^3}{2\pi m_e^2c^4}\int n_eB_\parallel ds \nonumber\\ &\sim0.81~{\rm rad~m^{-2}}\fractionz{\left<n_e\right>_L}{1~{\rm cm^{-3}}}\fractionz{\left<B_\parallel\right>_L}{1~{\rm \mu G}}\fractionz{s}{1~{\rm pc}} \nonumber\\ &\sim0.81~{\rm rad~m^{-2}}\fractionz{\left<B_\parallel\right>_L}{1~{\rm \mu G}}\fractionz{{\rm DM}}{1~{\rm pc~cm^{-3}}},\label{rm} \end{align} where $\left<...\right>_L$ denotes the average along the line of sight in the local plasma near an FRB source. Although there is usually a component of the field that is spatially coherent at the scale of a certain object, the field lines are disordered and there are magnetic fluctuations on scales covering orders of magnitude due to MHD turbulence. Thus, both a large-scale ordered magnetic field and a turbulent magnetic field are involved in various astrophysical scenarios, as shown in Figure \ref{figfield}. The average of $B_\parallel$ depends on the geometric configuration of the magnetic fields along the line of sight. For example, we consider that the electron density and magnetic field strength are approximately uniform but the magnetic geometry might change along the line of sight. If the magnetic field is ordered on large scales (see panel (a) of Figure \ref{figfield}), the average parallel magnetic field $\left<B_\parallel\right>$ would be approximately of the order of the local parallel magnetic field $B_\parallel$, i.e., $\left<B_\parallel\right>_L\sim B_\parallel$.
However, if the magnetic field is totally turbulent with a coherence lengthscale of $s_B$ (in a region with lengthscale $s_B$, the field could be treated as approximately uniform with a certain direction, see panel (b) of Figure \ref{figfield}), the average parallel magnetic field would be $\left<B_\parallel\right>_L\sim (s_B/s)^{1/2} B_\parallel$ due to the Poisson r.m.s. fluctuations of the polarization angle \citep[e.g.,][]{Beniamini22,Yang22}. In general, for a local plasma one may write \begin{align} \left<B_\parallel\right>_L&\simeq1.23~{\rm \mu G}\fractionz{{\rm RM}}{1~{\rm rad~m^{-2}}}\fraction{{\rm DM}}{1~{\rm pc~cm^{-3}}}{-1}\nonumber\\ &\sim \left\{ \begin{aligned} &B_\parallel,&&\text{for a large-scale field},\\ &(s_B/s)^{1/2}B_\parallel,&&\text{for turbulent fields}. \end{aligned} \right.\label{Bfield} \end{align} The above equation has been used in many papers to estimate the strength of the total magnetic field in the emission region from the RM and DM, provided that the RM and DM originate from the same region. However, if the field is totally turbulent, the real local field strength would be enhanced by a factor of $(s/s_B)^{1/2}$. For some FRB repeaters with large RMs, e.g., FRB 121102 and FRB 190520B \citep{Michilli18,Anna-Thomas22,Dai22}, the main contributions to the observed RMs and DMs likely originate from different regions: a large part of the DM is contributed by the intergalactic medium, but a large RM of $\gtrsim10^3~{\rm rad~m^{-2}}$ is more likely contributed by the local plasma in the FRB environment, because the intergalactic magnetic field is extremely weak and disordered on large scales \citep[e.g.,][]{Akahori16}. Some authors \citep[e.g.,][]{Katz21} estimated the magnetic field strength of the local plasma by assuming that the variations of RMs and DMs originate from the same region, so that $\left<B_\parallel\right>_L\simeq1.23~{\rm \mu G}(\delta{\rm RM}/1~{\rm rad~m^{-2}})(\delta{\rm DM}/1~{\rm pc~cm^{-3}})^{-1}$.
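The estimate in Eq.(\ref{Bfield}) is easy to script; the following sketch (our own helper functions, not from the paper) returns the DM-averaged parallel field and, for a fully turbulent field, the turbulence-corrected local strength.

```python
# Our own sketch of Eq. (Bfield): <B_par> ~ 1.23 uG * (RM / rad m^-2) * (DM / pc cm^-3)^-1.

def mean_b_parallel_muG(rm_rad_m2, dm_pc_cm3):
    """DM-averaged parallel magnetic field in microgauss."""
    return 1.23 * rm_rad_m2 / dm_pc_cm3

def local_field_muG(rm_rad_m2, dm_pc_cm3, s_over_sB=1.0):
    """If the field is fully turbulent with coherence length s_B << s,
    the true local strength is enhanced by a factor (s/s_B)^(1/2)."""
    return mean_b_parallel_muG(rm_rad_m2, dm_pc_cm3) * s_over_sB**0.5
```

For example, ${\rm RM}\sim10^5~{\rm rad~m^{-2}}$ with ${\rm DM}\sim1~{\rm pc~cm^{-3}}$ gives $\left<B_\parallel\right>_L\sim0.1~{\rm G}$, the value used later in the text for FRB 121102.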
This formula can be easily applied to the observational data, because both $\delta {\rm RM}$ and $\delta {\rm DM}$ are measurable quantities for a repeating source, and the contributions from the ISM and IGM may be estimated for both DM and RM. However, such a formula is inapplicable to the scenario in which the RM variation is caused by a magnetic configuration variation, which is a likely cause of RM variations and has been confirmed by the RM reversal observed in FRB 190520B \citep{Anna-Thomas22,Dai22}. For example, let us consider a case in which the electron density and magnetic field strength are unchanged but the magnetic field configuration is changing. This gives $\delta {\rm DM}\sim 0$ and $|\delta {\rm RM}|>0$, leading to the unphysical conclusion $\left<B_\parallel\right>_L\rightarrow\infty$ with the above formula. The correct treatment is to differentiate Eq.(\ref{rm}) and obtain \be \frac{\delta {\rm RM}}{{\rm RM}}\simeq\frac{\delta {\rm DM}}{{\rm DM}}+\frac{\delta \left<B_\parallel\right>_L}{\left<B_\parallel\right>_L}, \label{eq:deltaRM} \ee which suggests that the relative RM variation is the sum of the relative variations of the DM and of the average parallel magnetic field. For example, the RM of FRB 121102 decreased by $|\delta{\rm RM}|\sim |{\rm RM}|\sim10^{5}~{\rm rad~m^{-2}}$ during a few years \citep{Michilli18,Hilmarsson21}. According to Eq.(\ref{eq:deltaRM}), there are two possible reasons for its large relative RM variation ($|\delta {\rm RM}/{\rm RM}|\sim1$): 1) It is due to a large relative DM variation, $\delta{\rm DM}/{\rm DM}\sim1$. Since the observed DM variation of FRB 121102 is small, $\delta{\rm DM}\sim 1~{\rm pc~cm^{-3}}$ \citep{Hessels19}, the DM of the region contributing the large RM must also be small, i.e. ${\rm DM}\sim\delta{\rm DM}\sim 1~{\rm pc~cm^{-3}}$.
Thus, the estimated average magnetic field strength is $\left<B_\parallel\right>_L\sim 0.1~{\rm G}$ according to Eq.(\ref{Bfield}), suggesting that the magnetic field near FRB 121102 is extremely strong, much stronger than the observed magnetic fields of most SNRs and pulsar wind nebulae in the Milky Way \citep[e.g.,][]{Reynolds12}. 2) It is due to the change of the field configuration, $\left|\delta\left<B_\parallel\right>_L/\left<B_\parallel\right>_L\right|\sim1$, which implies that the magnetic field configuration near the source has been dynamically evolving during the past few years, similar to the scenario of FRB 190520B \citep{Anna-Thomas22,Dai22}. Finally, we note that the condition of $\psi\propto\lambda^2$ for the RM measurement to be relevant requires that $\omega\gg\omega_B,\omega_p$ is satisfied, according to the dispersion relation of circularly polarized waves, where $\omega_B=eB/m_ec$ is the cyclotron frequency, and $\omega_p=(4\pi e^2n_e/m_e)^{1/2}$ is the plasma frequency. For a given observed frequency, the necessary condition for $\psi\propto\lambda^2$ can be translated into a constraint on the magnetic field strength, i.e. \be B\ll B_c=\frac{2\pi m_ec\nu}{e}\simeq360~{\rm G}\fractionz{\nu}{1~{\rm GHz}}.\label{Bc} \ee Therefore, although the FRB engine (e.g. a neutron star, black hole, or even a white dwarf) has a strong magnetic field in its vicinity, the region with $B>B_c$ cannot contribute to the observed RM\footnote{The medium near the engine (e.g. a neutron star) is likely a relativistic pair plasma. The RM contribution by such a pair plasma is not important anyway, see discussion in Section \ref{pair} and Appendix \ref{RMpair}.}. \section{Physical origins of random and secular RM variations}\label{FaradayScreen} Recent observations show that many FRB repeaters appear to have complex, diverse RM variation patterns.
For example, FRB 121102 exhibited a non-linear decrease of its RM absolute value over a few years, with short-term fluctuations on timescales of several weeks \citep{Chatterjee17,Hilmarsson21}. In an active cycle, FRB 20201124A showed an irregular RM variation during the first 36 days, followed by an almost constant RM during the subsequent 18 days \citep{Xu21}. FRB 180916B showed stochastic, small RM variations followed by a significant secular increasing component over a nine-month period \citep{Mckinven22}. It seems that observations show both random fluctuations and systematic secular evolution. In the following, we present some general discussions on these two scenarios. \subsection{Random RM variations} For random RM variations, the most likely scenario is that there is a Faraday screen with an inhomogeneous medium near the FRB source, and the relative motion between the FRB source and the screen causes irregular RM variations. For turbulence-induced inhomogeneity, this requires that the timescale of the relative motion is shorter than the timescale of the turbulence, i.e. the eddy turnover time. The ubiquitous turbulence in astrophysical plasmas naturally induces fluctuations in density and magnetic fields, and hence RM fluctuations \citep{Minter96,Vogt2005,Xu16}. We consider a Faraday screen with thickness $\Delta R$ and assume statistical homogeneity of the medium. The ``RM structure function'' is used to represent the mean-squared RM difference between two paths separated by a transverse distance $l$, i.e., \be D_{\rm RM}(\overrightarrow{l})\equiv\left<[{\rm RM}(\overrightarrow{x}+\overrightarrow{l})-{\rm RM}(\overrightarrow{x})]^2\right>,\label{SF} \ee where $\left<...\right>$ represents an ensemble average. Physically, the RM structure function depends on the power spectrum $P(k)$ of the fluctuations of the RM density $(n_e B_\parallel)(\overrightarrow{x})$, where $k=2\pi/l$ is the spatial wavenumber.
For simplicity, we assume a power-law distribution of\footnote{In the following discussion, we do not separate the RM density fluctuations into fluctuations arising from electron density and magnetic field. If the fluctuations of electron density and magnetic field have different spectral indices, the spectral index of the RM density $(n_e B_\parallel)(\overrightarrow{x})$ would be dominated by the one with the larger relative variation \citep{Xu16}, $\delta X/X$, where $X$ denotes $n_e$ or $B_\parallel$.} $P(k)\propto k^\alpha$ for $2\pi L^{-1}<k<2\pi l_0^{-1}$, where $L$ and $l_0$ correspond to the outer scale and inner scale, respectively. The power spectrum with the 3D spectral index $\alpha<-3$ is called a ``steep spectrum'' (e.g., $\alpha=-11/3$ for the Kolmogorov scaling), and the fluctuations are dominated by the large scale at $\sim L$, which corresponds to the energy injection scale of turbulence; the power spectrum with $\alpha>-3$ is called a ``shallow spectrum'', and inhomogeneity structures are dominated by small-scale fluctuations near $\sim l_0$, which is the energy dissipation scale of turbulence \citep{Lazarian04,Lazarian06,Lazarian16}. Shallow density spectra are commonly seen in cold interstellar phases with supersonic turbulence, where the small-scale density enhancement is caused by turbulent compression \citep{Xu17,Xu20}. For a shallow spectrum, the fluctuations on scales larger than $L$ are model-dependent, and $L$ is most likely the largest scale of the fluid system, $L\sim\Delta R$, for turbulence driven within the system. We define the correlation length scale $l_{\rm RM}$ via the correlation $\kappa^2\left<\delta (n_eB_\parallel)(\overrightarrow{x}+\overrightarrow{l_{\rm RM}})\delta (n_eB_\parallel)(\overrightarrow{x})\right>=\sigma_{\rm RM}^2/2$, where $\sigma_{\rm RM}^2=\kappa^2\left<\delta(n_eB_\parallel)^2\right>$ corresponds to the variance of the RM density fluctuations multiplied by $\kappa=e^3/(2\pi m_e^2c^4)$.
For the steep spectrum ($\alpha<-3$), the correlation scale is $l_{\rm RM}\sim L$; while for the shallow spectrum ($\alpha>-3$), the correlation scale is $l_{\rm RM}\sim l_0$ \citep{Lazarian16,Xu16}. For most astrophysical scenarios, the Faraday screen is considered to be thick\footnote{Here, the definitions of ``thick'' and ``thin'' for a Faraday screen are based on the relation between the correlation length $l_{\rm RM}$ and the screen thickness $\Delta R$ \citep{Lazarian16}. Since the correlation length $l_{\rm RM}$ cannot exceed the largest scale of a system for turbulence driven within it, in most astrophysical scenarios involving a shell with fluctuations of magnetic field and density as the Faraday screen, the thick-screen condition ($l_{\rm RM}<\Delta R$) is usually satisfied.}, i.e., $\Delta R>l_{\rm RM}$. Therefore, the RM structure function of a thick screen can be written as (\cite{Lazarian16,Xu16}; see Appendix \ref{RMSF} for a detailed derivation) \begin{align} D_{\rm RM}(l)\sim \left\{ \begin{aligned} &\sigma_{\rm RM}^2\Delta R l\fraction{l}{l_{\rm RM}}{-(\alpha+3)},&&l_0<l<l_{\rm RM}\sim L,\\ &\sigma_{\rm RM}^2\Delta R l_{\rm RM},&&l>l_{\rm RM}\sim L. \end{aligned} \right.\label{sf1} \end{align} for a steep spectrum ($\alpha<-3$) and $L\sim l_{\rm RM}$, and \begin{align} D_{\rm RM}(l)\sim \left\{ \begin{aligned} &\sigma_{\rm RM}^2\Delta R l\fraction{l}{l_{\rm RM}}{-(\alpha+3)},&&l_0\sim l_{\rm RM}<l<L,\\ &\sigma_{\rm RM}^2\Delta R^2\fraction{\Delta R}{l_{\rm RM}}{-(\alpha+3)},&&l> L. \end{aligned} \right.\label{sf2} \end{align} for a shallow spectrum ($-3<\alpha<-2$) and $L\sim\Delta R$. Therefore, the Kolmogorov scaling with $\alpha=-11/3$ has the RM structure function $D_{\rm RM}(l)\propto l^{5/3}$ in the inertial range and $D_{\rm RM}(l)\sim\text{constant}$ beyond the inertial range. We assume that the relative transverse velocity between the FRB source and the Faraday screen is $v_\perp$.
Based on the definition of $D_{\rm RM}(l)$ in Eq.(\ref{SF}), the r.m.s. variation of RM during a time $t$ is \be |\delta{\rm RM}(t)|\sim \sqrt{D_{\rm RM}(v_\perp t)}. \ee The largest RM amplitude contributed by the Faraday screen can be estimated as $|{\rm RM}|\sim|\delta {\rm RM}(l>L)|$, where $|\delta {\rm RM}(l>L)|\sim\sqrt{D_{\rm RM}(l> L)}$ is given by the last equations of Eq.(\ref{sf1}) and Eq.(\ref{sf2}). Thus, during time $t$, the relative RM variation can be estimated as \begin{align} \left|\frac{\delta {\rm RM}}{{\rm RM}}\right|\sim \left\{ \begin{aligned} &\fraction{v_\perp t}{L}{-(\alpha+2)/2},&&v_\perp t< L,\\ &1,&&v_\perp t> L, \end{aligned} \right.\label{RMvariation} \end{align} for both steep and shallow spectra. We emphasize again that in the above equation $L\sim l_{\rm RM}$ is considered for a steep spectrum, and $L\sim\Delta R$ is considered for a shallow spectrum. The measurements of the RM structure functions of some FRB repeaters revealed $D_{\rm RM}(t)\propto t^{0.2-0.4}$ \citep{Mckinven22}, which implies a power-spectrum index of $\alpha\sim-(2.2-2.4)$. Therefore, the Faraday screens of these FRB repeaters have shallow spectra in the inertial ranges, which means that the variation is dominated by small-scale RM density fluctuations. Notice that, because magnetic fluctuations arising from the nonlinear turbulent dynamo \citep{Xu16} and from turbulent compression have a magnetic energy spectrum that basically follows the turbulent energy spectrum, the magnetic energy spectrum is usually steep. Thus, the observed result implies that a shallow density spectrum is more likely to dominate the RM fluctuations for these particular FRB sources. Physically, a shallow density spectrum naturally arises in supersonic turbulence, e.g., in star-forming regions \citep{Hennebelle12}.
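The scaling in Eq.(\ref{RMvariation}) can be evaluated directly; the sketch below is our own helper (not from the paper), with an approximate ${\rm km~s^{-1}\,yr}\rightarrow{\rm pc}$ conversion factor.

```python
# Our own sketch of Eq. (RMvariation): |dRM/RM| ~ (v_perp*t/L)^(-(alpha+2)/2)
# for v_perp*t < L, saturating at ~1 for v_perp*t > L.

KM_S_YR_TO_PC = 1.0227e-6  # (1 km/s)*(1 yr) expressed in pc, approximate

def relative_rm_variation(v_perp_km_s, t_yr, L_pc, alpha):
    """Relative RM variation over time t for a screen with outer scale L
    and 3D power-spectrum index alpha (valid for -4 < alpha < -2)."""
    l = v_perp_km_s * t_yr * KM_S_YR_TO_PC  # transverse drift in pc
    if l >= L_pc:
        return 1.0
    return (l / L_pc) ** (-(alpha + 2) / 2)
```

For a Kolmogorov screen ($\alpha=-11/3$) with $L\sim10^{-4}$ pc and $v_\perp\sim100~{\rm km~s^{-1}}$, the drift crosses $L$ within about a year, reproducing the order-of-unity variations quoted below.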
On the other hand, many FRB repeaters, e.g., FRB 121102, FRB 190520B, and FRB 180916B, show large RM variations $|\delta{\rm RM}/{\rm RM}|\sim 1$ during a few months to a few years \citep{Michilli18,Hilmarsson21,Anna-Thomas22,Dai22,Mckinven22}, implying that the outer scale of the inertial range satisfies \be L\lesssim v_\perp t\simeq10^{-4}~{\rm pc}\fractionz{v_\perp}{100~{\rm km~s^{-1}}}\fractionz{t}{1~{\rm yr}}.\label{outerscale} \ee \subsection{Secular RM evolution} A secular RM evolution may be attributed to the expansion of a magnetized shell or the orbital motion of a binary system. First, we consider the scenario of an expanding magnetized shell with the magnetic field configuration unchanged in the short term, which is applicable to young SNRs (see Section \ref{SNR}) or companion flares in a binary system (see Section \ref{stellarflare}). In a certain astrophysical environment, the electron density and magnetic field are usually related, e.g. $B\propto n_e^{\gamma_B}$, where $\gamma_B=1/2,2/3,1$ corresponds to an energy-equipartition plasma, a magnetically frozen plasma, or a shock-compressed plasma, respectively \citep[e.g.,][]{Yang22}. Due to the expansion, the electron density might decrease with the shell radius as $n_e\propto r^{-\gamma_n}$, with $\gamma_n=0,2,3$ corresponding to a shock-compressed medium (the upstream medium is assumed to be uniform), a wind medium, and free expansion, respectively. We assume that the time-dependent shell radius is $r\propto t^{\gamma_r}$, with $\gamma_r=1,2/5$ corresponding to free expansion and the Sedov–Taylor phase, respectively. Therefore, the RM evolution satisfies \be {\rm RM}\propto n_eBr\propto t^{\gamma_r(1-\gamma_n-\gamma_B\gamma_n)}. \ee Next, we are interested in the secular RM evolution caused by the orbital motion of a binary system, where the companion could be a stellar object (see Section \ref{stellarwind}) or a massive black hole (see Section \ref{BH}).
Since the large-scale magnetic fields are contributed by the magnetized companion (i.e., a large-scale dipole field for a companion with a weak wind, the magnetic field in the disk of Be stars, etc.), the RM variation would be periodic with the same period as the orbital period, \be P=2\pi\fraction{a^3}{GM_{\rm tot}}{1/2},\label{period} \ee where $a$ is the semi-major axis of the binary system, and $M_{\rm tot}$ is the binary total mass. A large RM variation of $|\delta {\rm RM}/{\rm RM}|\sim1$ should occur on a timescale of $\lesssim P$. Such a scenario could be tested by long-term monitoring of RM variations for particular FRB repeaters. \section{Different astrophysical scenarios generating RM variations}\label{scenarios} In this section, we discuss a list of possible astrophysical processes that might cause RM variations of a particular FRB repeater. \subsection{RM variations contributed by a supernova remnant}\label{SNR} Radio polarization observations of young SNRs suggest that magnetic fields in SNRs are largely disordered, with a small radial preponderance \citep[e.g.,][]{Dickel76,Milne87,Reynolds12}. There are two possible explanations for the radial preponderance \citep{Jun96,Blondin01,Zirakashvili08,Inoue13,West17}: 1. The Rayleigh-Taylor instability stretches the field lines preferentially along the radial direction; 2. Turbulence with a radially biased velocity dispersion may be induced. In older, larger SNRs, the field lines are often disordered but sometimes tangential. The tangential fields could be explained by the shock compression of the upstream medium \citep[e.g.,][]{Dickel76,Milne87,Reynolds12}. In summary, polarization and imaging observations indicate that the magnetic fields in an SNR are turbulent and evolving.
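For the SNR case, the shell-expansion scaling ${\rm RM}\propto t^{\gamma_r(1-\gamma_n-\gamma_B\gamma_n)}$ from the previous section can be tabulated for the parameter choices listed there; the helper below is our own sketch, not from the paper.

```python
# Our own sketch: secular RM exponent for an expanding magnetized shell,
# RM ~ n_e * B * r ~ t^(gamma_r * (1 - gamma_n - gamma_B*gamma_n)).
#   gamma_n: density profile n_e ~ r^-gamma_n (0, 2, 3)
#   gamma_B: field-density relation B ~ n_e^gamma_B (1/2, 2/3, 1)
#   gamma_r: expansion law r ~ t^gamma_r (1 free expansion, 2/5 Sedov-Taylor)

def secular_rm_exponent(gamma_n, gamma_B, gamma_r):
    """Power-law index of RM(t) for the expanding-shell scenario."""
    return gamma_r * (1 - gamma_n - gamma_B * gamma_n)

# E.g. a wind medium (gamma_n = 2) with magnetic freezing (gamma_B = 2/3)
# in free expansion (gamma_r = 1) gives RM ~ t^(-7/3), a steep decline.
```

Negative exponents for all wind/free-expansion combinations show why an expanding young remnant naturally yields a secularly decreasing $|{\rm RM}|$, as invoked for FRB 121102.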
If the magnetic geometry along the line of sight remains unchanged, the RM contribution from an SNR would show a long-term evolution due to the expansion of the SNR \citep[][see the discussion in Section \ref{FaradayScreen}]{Piro18,Zhao21b}. Although this scenario seems to be consistent with the observation of FRB 121102 \citep{Michilli18,Hilmarsson21}, it cannot explain the non-monotonic irregularity exhibited by some FRB repeaters reported recently \citep{Xu21,Anna-Thomas22,Dai22}. In particular, FRB 190520B appears to have a significant RM reversal \citep{Anna-Thomas22,Dai22}, which directly confirms that the field configuration near the FRB source is changing along the line of sight. In the following discussion, we mainly focus on the RM variations contributed by the turbulence in the SNR. When an SNR propagates in a highly inhomogeneous interstellar/circumstellar medium, as shown in Figure \ref{figSNR}, it generates a pair of shocks upon interacting with the dense gas clouds and induces turbulence in the magnetized medium \citep[e.g.,][]{Hu22}. Therefore, for the SNR scenario, one may take the largest outer scale of the turbulence in the SNR as \be L=\min(\xi \Delta R,l_{\rm cloud}),\label{largestscale} \ee where $\Delta R$ is the SNR thickness, $\xi\Delta R$ is the transverse scale of the ejected large-scale clumps in the SNR (see Figure \ref{figSNR}), with the parameter $\xi$ describing the local anisotropy of the SNR (the smaller the value of $\xi$, the more significant the SNR local anisotropy), and $l_{\rm cloud}$ is the typical scale of the gas clouds \citep[e.g.,][]{Heiles03,Inoue09}. The intensity maps of molecular emission show that the density distribution within molecular clouds appears to have shallow spectra characterized by small-scale, high-density structures. The smallest scale of clouds is \citep[see Figure 10 of][]{Hennebelle12} \be l_{\rm cloud}\lesssim 0.1~{\rm pc}.
\ee Notice that the above upper limit on the characteristic size of density structures is due to the limited resolution of observations. We generally discuss a wide range of possible transverse relative velocities, $v_\perp\sim(10-10^4)~{\rm km~s^{-1}}$. The lower limit of $v_\perp\sim10~{\rm km~s^{-1}}$ corresponds to the possible minimum intrinsic velocity of the neutron star FRB source \citep[e.g.,][]{Hansen97}, and the upper limit of $v_\perp\sim10^4~{\rm km~s^{-1}}$ corresponds to the initial velocity of an expanding SNR \citep[e.g.,][]{Yang17}. First, we consider the scenario of $\xi\Delta R< l_{\rm cloud}$. During an observing time $t\sim1~{\rm yr}$, the relative distance is of the order $l\sim v_\perp t\sim(10^{-5}-10^{-2})~{\rm pc}\lesssim l_{\rm cloud}$. According to Eq.(\ref{RMvariation}) and Eq.(\ref{largestscale}), the large relative RM variation $|\delta{\rm RM}/{\rm RM}|\sim 1$ implies $l\sim v_\perp t\sim\xi\Delta R\sim \xi\eta v_{\rm SNR}t_{\rm SNR}$, where $v_{\rm SNR}$ is the SNR expansion velocity, $t_{\rm SNR}$ is the SNR age, and the SNR thickness is $\Delta R=\eta R\sim\eta v_{\rm SNR}t_{\rm SNR}$, with $R$ the SNR radius. Thus, the typical SNR age is given by \be t_{\rm SNR}\sim\fractionz{v_\perp}{\xi\eta v_{\rm SNR}}t. \ee For an SNR with a small radius of $R\sim\eta^{-1}\Delta R<(\xi\eta)^{-1} l_{\rm cloud}$ and for $0.01\lesssim\xi\eta\lesssim1$, the SNR might have a large expansion velocity of $v_{\rm SNR}\gtrsim10^3~{\rm km~s^{-1}}$ \citep[e.g.,][]{Yang17}. Considering that the intrinsic velocity of most neutron stars might not exceed $\sim 10^3~{\rm km~s^{-1}}$ \citep[e.g.,][]{Hansen97}, one may have $v_\perp\lesssim v_{\rm SNR}$ in this scenario. The large relative RM variation $|\delta{\rm RM}/{\rm RM}|\sim 1$ implies that the SNR is young, with an age of $t_{\rm SNR}\lesssim100~{\rm yr}(\xi\eta/0.01)^{-1}(t/1~{\rm yr})$, for significant local anisotropy or a thin shell.
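The age estimate above reduces to simple arithmetic; the helper below (our own sketch, not from the paper) evaluates $t_{\rm SNR}\sim (v_\perp/\xi\eta v_{\rm SNR})\,t$ for given parameters.

```python
# Our own sketch of the SNR age estimate: t_SNR ~ (v_perp / (xi*eta*v_SNR)) * t,
# the age required for |dRM/RM| ~ 1 over an observing span t, where xi*eta
# sets the transverse clump scale relative to the SNR radius.

def snr_age_yr(v_perp_km_s, v_snr_km_s, xi_eta, t_obs_yr):
    """SNR age in years implied by an order-unity RM variation over t_obs."""
    return v_perp_km_s / (xi_eta * v_snr_km_s) * t_obs_yr
```

For $v_\perp\sim v_{\rm SNR}$ and $\xi\eta\sim0.01$ this gives $\sim100$ yr per year of observed RM evolution, matching the limit quoted above.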
On the other hand, if the SNR is thick and almost locally isotropic with $\xi\eta\gtrsim 0.1$, the observations of $|\delta{\rm RM}/{\rm RM}|\sim 1$ would require that the SNR is extremely young, with an age of $t_{\rm SNR}\lesssim (\xi\eta)^{-1}t\sim 10~{\rm yr}(t/1~{\rm yr})$. Such a young SNR would show significant, observable, secular evolution in both DM and RM \citep{Yang17,Piro18,Zhao21b}, and could even be opaque for FRBs. These seem to be inconsistent with the current observations. For the scenario of $\xi\Delta R>l_{\rm cloud}$, according to Eq.(\ref{RMvariation}), Eq.(\ref{outerscale}) and Eq.(\ref{largestscale}), the large relative RM variation $|\delta{\rm RM}/{\rm RM}|\sim 1$ implies that $l_{\rm cloud}\lesssim10^{-4}~{\rm pc}(v_\perp/100~{\rm km~s^{-1}})(t/1~{\rm yr})$. In this case, the SNR could be relatively older; however, it is challenging to resolve such small-scale density structures due to the limited resolution in observations \citep{Hennebelle12}. \subsection{RM variations contributed by stellar winds from a massive/giant star companion}\label{stellarwind} There is some evidence suggesting that an FRB source might be in a binary system: 1) FRB 180916B shows a periodic activity with a period of $16.35$ days \citep{CHIME20b}, which might correspond to the orbital period of a binary system as proposed by some authors \citep{Ioka20b,Dai20,Lyutikov20,Zhang20e,Li21d,wada21}; 2) FRB 20200120E was found to be associated with a globular cluster in the M81 galaxy \citep{Bhardwaj21,Kirsten22}, which contains many close binary systems; 3) PSR B1744-24A, a pulsar in a binary system, displays a complex, magnetized environment with Faraday conversion and attenuation of circular polarization, similar to some FRB repeaters \citep{Xu21,Li22}; 4) RM variations consisting of a constant component and an irregular component have been observed both in FRB repeaters \citep{Xu21,Mckinven22} and in pulsar binary systems \citep{Johnston96,Johnston05}.
The latter requires an elliptical orbit of the pulsar. A similar configuration may apply to FRB repeaters as well \citep[e.g.,][]{Li21d,Wang22}. In this subsection, we consider that the extremely large RM and the significant RM variation of some FRB repeaters are caused by the stellar wind from a companion in a binary system, as shown in Figure \ref{figstellarwind}. In order to provide strong stellar winds, the companion is likely a massive star or a giant star. We assume that the companion star has a mass $M_c$, a radius $R_c$, and a mass loss rate of $\dot M$. The wind velocity can be estimated as the escape velocity, i.e. $v_w\sim(2GM_c/R_c)^{1/2}\sim620~{\rm km~s^{-1}}(M_c/M_\odot)^{1/2}(R_c/R_\odot)^{-1/2}$. The electron density in the stellar wind at a distance $r$ from the star is given by \begin{align} n_w(r)&\simeq\frac{\dot M}{4\pi\mu_mm_pv_wr^2}\simeq1.1\times10^6~{\rm cm^{-3}}\nonumber\\ &\times\fraction{r}{1~{\rm AU}}{-2}\fractionz{\dot M}{10^{-8}M_\odot~{\rm yr^{-1}}}\fraction{v_w}{10^3~{\rm km~s^{-1}}}{-1}, \end{align} where $\mu_m=1.2$ is the mean molecular weight for a solar composition, and the mass loss rate depends on the stellar type, e.g. $\dot M\sim10^{-7}-10^{-5}M_\odot~{\rm yr^{-1}}$ for O stars \citep{Puls96,Muijres12}; $\dot M\sim10^{-11}-10^{-8}M_\odot~{\rm yr^{-1}}$ for Be stars \citep{Snow81,Poe86}; $\dot M\sim10^{-14}-10^{-10}M_\odot~{\rm yr^{-1}}$ for solar-like stars \citep{Wood02}. We assume that the FRB emission region is close to the FRB source. Since the wind density decreases as $r^{-2}$, most of the local DM would be contributed by the wind at\footnote{Except for the scenario in which the companion is in front of the FRB source along the line of sight, leading to a large DM during the eclipsing phase. A similar scenario has been seen in PSR B1744-24A in the globular cluster Terzan 5, as reported by \cite{Li22}.
During the eclipsing phase, PSR B1744-24A shows a significant DM variation and depolarization caused by RM variation.} $r\sim a$, where $a$ is the binary separation. Thus, the DM contributed by the stellar wind is estimated as \begin{align} {\rm DM}_w&\sim n_wa\simeq5.4~{\rm pc~cm^{-3}}\fraction{a}{1~{\rm AU}}{-1}\nonumber\\ &\times\fractionz{\dot M}{10^{-8}M_\odot~{\rm yr^{-1}}}\fraction{v_w}{10^3~{\rm km~s^{-1}}}{-1}. \end{align} In order to estimate the RM contribution of the companion wind, we consider that the magnetic field strength at distance $r$ from the companion center satisfies \begin{align} B(r)\sim \left\{ \begin{aligned} &B_c\fraction{r}{R_c}{-3},&&R_c<r<R_A,\\ &B_c\fraction{R_A}{R_c}{-3}\fraction{r}{R_A}{-\beta_B},&&r>R_A, \end{aligned} \right. \end{align} where $1\lesssim \beta_B\lesssim 2$. Typically, one has $\beta_B\simeq1$ for a toroidal field and $\beta_B\simeq2$ for a radial field. In the above equation, $B\propto r^{-3}$ corresponds to the dipole field near the companion surface and $B\propto r^{-\beta_B}$ corresponds to the magnetic field in the stellar wind outside the Alfv\'en radius $R_A$. In the inner region $r<R_A$, the magnetic field pressure $P_B$ dominates, and the stellar wind moves along the field lines. In the outer region $r>R_A$, due to the large ram pressure of the stellar wind $P_w$, the magnetic field pressure is sub-dominant and the field would be carried outward by the wind. The Alfv\'en radius $R_A$ is defined by \be P_w=\frac{1}{2}\rho_w(R_A)v_w^2\simeq\frac{\dot Mv_w}{8\pi R_A^2}\sim P_B=\frac{1}{8\pi}B(R_A)^2, \ee where $\rho_w=\dot M/(4\pi r^2 v_w)$ is the mass density of the stellar wind.
According to the above equation, one has \begin{align} R_A&\sim\fraction{B_c^2R_c^6}{\dot Mv_w}{1/4}\simeq0.1R_\odot\fraction{B_c}{1~{\rm G}}{1/2}\fraction{R_c}{1R_\odot}{3/2}\nonumber\\ &\times\fraction{\dot M}{10^{-8}M_\odot~{\rm yr^{-1}}}{-1/4}\fraction{v_w}{10^3~{\rm km~s^{-1}}}{-1/4},\label{alfven} \end{align} suggesting that $R_A$ is inside the star for our typical parameters, so that the magnetic field is dominated by the stellar wind at $r>R_c$. In other words, the stellar wind would carry a relatively strong magnetic field extending to large distances. The RM contributed by the companion wind is approximately \begin{align} {\rm RM}_w&\sim\frac{e^3Bn_wa}{2\pi m_e^2c^4} \simeq2\times10^4~{\rm rad~m^{-2}}\fractionz{B_c}{1~{\rm G}}\fractionz{R_c}{1R_\odot}\nonumber\\ &\times\fractionz{\dot M}{10^{-8}M_\odot~{\rm yr^{-1}}}\fraction{a}{1~{\rm AU}}{-2}\fraction{v_w}{10^3~{\rm km~s^{-1}}}{-1},\label{RMw1} \end{align} for a toroidal field with $\beta_B\simeq1$, and \begin{align} {\rm RM}_w&\simeq94~{\rm rad~m^{-2}}\fractionz{B_c}{1~{\rm G}}\fractionz{R_c}{1R_\odot}\nonumber\\ &\times\fractionz{\dot M}{10^{-8}M_\odot~{\rm yr^{-1}}}\fraction{a}{1~{\rm AU}}{-3}\fraction{v_w}{10^3~{\rm km~s^{-1}}}{-1},\label{RMw2} \end{align} for a radial field with $\beta_B\simeq2$. A large RM value of ${\rm RM}\gtrsim10^4~{\rm rad~m^{-2}}$ is consistent with the observations of FRB 121102 and FRB 190520B \citep{Michilli18,Hilmarsson21,Anna-Thomas22,Dai22}. Therefore, the environment of FRB repeaters with large RMs might correspond to the stellar wind of a massive star or a giant star with a toroidal magnetic field configuration. In the following discussion, we will analyze the RM variation within this scenario. When an FRB repeater is in a binary system, both the orbital motion of the binary system and the dynamical evolution of the companion wind could cause RM variation.
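As a sanity check, the fiducial numbers in the wind estimates above (the wind density, ${\rm DM}_w$, $R_A$, and the toroidal-field ${\rm RM}_w$) can be reproduced numerically; the script below is a sketch only, taking $B(r)\simeq B_c(r/R_c)^{-1}$ at $r>R_c$ for the toroidal case, with standard cgs constants:

```python
import math

# cgs constants (standard values)
MSUN, RSUN, AU, PC = 1.989e33, 6.957e10, 1.496e13, 3.086e18
YR, MP, ME, EC, C = 3.156e7, 1.673e-24, 9.109e-28, 4.803e-10, 2.998e10

mu_m = 1.2                       # mean molecular weight (solar composition)
mdot = 1.0e-8 * MSUN / YR        # mass-loss rate, g/s
v_w = 1.0e3 * 1.0e5              # wind speed, cm/s
a = 1.0 * AU                     # binary separation, cm
B_c, R_c = 1.0, 1.0 * RSUN       # surface field (G) and stellar radius (cm)

n_w = mdot / (4 * math.pi * mu_m * MP * v_w * a**2)    # wind density at r ~ a, cm^-3
DM_w = n_w * a / PC                                    # pc cm^-3
R_A = (B_c**2 * R_c**6 / (mdot * v_w))**0.25 / RSUN    # Alfven radius, R_sun
B_a = B_c * R_c / a                                    # toroidal field at r ~ a, G
RM_w = EC**3 / (2 * math.pi * ME**2 * C**4) * B_a * n_w * a * 1e4  # rad m^-2

print(f"n_w ~ {n_w:.1e} cm^-3, DM_w ~ {DM_w:.1f} pc cm^-3")
print(f"R_A ~ {R_A:.2f} R_sun, RM_w ~ {RM_w:.1e} rad m^-2")
```

This recovers $n_w\sim1.1\times10^6~{\rm cm^{-3}}$, ${\rm DM}_w\sim5.4~{\rm pc~cm^{-3}}$, $R_A\sim0.1R_\odot$ and ${\rm RM}_w\sim2\times10^4~{\rm rad~m^{-2}}$, in agreement with Eq.(\ref{alfven}) and Eq.(\ref{RMw1}).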
As discussed in Section \ref{FaradayScreen}, a magnetized companion usually has a large-scale field, and the orbital motion of the radio source in such a large-scale field would cause RM variation \citep{Wang22,Li22}. Such a scenario has indeed been observed in pulsar binary systems. For example, the RM of PSR B1259-63 reaches a few times $10^3~{\rm rad~m^{-2}}$ and significantly reverses sign around the periastron \citep{Johnston96,Johnston05}. Consider that the total mass of the binary system is $M_{\rm tot}$. The orbital period is given by Eq.(\ref{period}), i.e. \be P \simeq0.3~{\rm yr}\fraction{a}{1~{\rm AU}}{3/2}\fraction{M_{\rm tot}}{10~M_\odot}{-1/2}. \ee In this scenario, the RM variation should have the same period as the orbital motion. However, to date there is no evidence that any FRB repeater shows a periodic RM variation. In particular, FRB 180916B exhibits a 16.33-day periodicity in its burst activity \citep{CHIME20b}, and the periodic activity might be due to orbital motion as proposed in some models \citep{Ioka20b,Dai20,Lyutikov20,Zhang20e,Li21d,Wang22}. However, the long-term RM evolution and burst-to-burst RM variations appear unrelated to the periodic activity and the activity cycle phase \citep{Mckinven22}. There are two possible reasons: 1. the periodic activity of radio bursts is not due to orbital motion, and the possible RM variation period corresponding to the orbital period is much longer than the observing time of a few years; 2. the periodic activity of radio bursts is caused by orbital motion, but the long-term RM variation of FRB 180916B is due to the intrinsic evolution of the stellar wind along the line of sight. Turbulence in stellar winds could be caused by an anisotropic distribution or episodic outflow of the stellar wind, which may smear out the apparent periodic RM evolution.
For turbulence of the companion wind at the distance $r\sim a$, the typical outer scale of turbulence may be estimated as \be L\sim \min(\theta_w a, v_w\Delta t_w), \ee where $\theta_w$ is the typical anisotropic distribution angle of the stellar wind, and $\Delta t_w$ is the typical timescale of the wind outflow variation. The episodic wind variation is usually caused by stellar flares, which will be further discussed in Section \ref{stellarflare}. Here we are mainly interested in the case of a persistent wind with $\theta_w a\ll v_w\Delta t_w$, leading to \be L\sim \theta_w a\simeq1~{\rm AU}\fractionz{\theta_w}{1~{\rm rad}}\fractionz{a}{1~{\rm AU}}. \ee The Keplerian velocity of the FRB source around the companion is \be v=\fraction{GM_{\rm tot}}{a}{1/2}\simeq94~{\rm km~s^{-1}}\fraction{M_{\rm tot}}{10M_\odot}{1/2}\fraction{a}{1~{\rm AU}}{-1/2}. \ee Therefore, the time for the FRB source to cross the outer scale $L$ is \be t_L\sim\frac{L}{v}\simeq18~{\rm day}\fractionz{\theta_w}{1~{\rm rad}}\fraction{M_{\rm tot}}{10M_\odot}{-1/2}\fraction{a}{1~{\rm AU}}{3/2}. \ee For the adopted typical parameters, this timescale is slightly shorter than the observing time of RM variations of some FRB repeaters. According to Eq.(\ref{sf1}), Eq.(\ref{sf2}) and Eq.(\ref{RMvariation}), one has $|\delta {\rm RM}/{\rm RM}|\propto t^{-(\alpha+2)/2}$ and $D_{\rm RM}(t)\propto t^{-(\alpha+2)}$ for $t\lesssim t_L$; and $|\delta {\rm RM}/{\rm RM}|\sim 1$ and $D_{\rm RM}(t)\sim\text{constant}$ for $t\gtrsim t_L$. In particular, if the binary orbit is elliptical, a larger RM variation would occur near the periastron due to a stronger magnetic field and a higher electron density, and the RM would remain almost constant far from the periastron, as was observed in PSR B1259-63 \citep{Johnston96,Johnston05}. Some FRB repeaters, e.g., FRB 20201124A, also exhibit similar behavior \citep{Xu21,Wang22}, although periodic evolution of the RM fluctuations has not been detected so far.
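The orbital period, Keplerian velocity and crossing-time estimates above follow from the standard two-body relations; a minimal numerical check with the fiducial parameters ($M_{\rm tot}=10M_\odot$, $a=1~{\rm AU}$, $\theta_w=1~{\rm rad}$):

```python
import math

G, MSUN, AU = 6.674e-8, 1.989e33, 1.496e13  # cgs
YR, DAY = 3.156e7, 8.64e4

M_tot = 10.0 * MSUN    # total binary mass, g
a = 1.0 * AU           # separation, cm
theta_w = 1.0          # wind anisotropy angle, rad

P = 2 * math.pi * math.sqrt(a**3 / (G * M_tot)) / YR  # orbital period, yr
v = math.sqrt(G * M_tot / a)                          # Keplerian speed, cm/s
t_L = theta_w * a / v / DAY                           # time to cross L ~ theta_w*a, day

print(f"P ~ {P:.2f} yr, v ~ {v/1e5:.0f} km/s, t_L ~ {t_L:.0f} day")
```

This gives $P\simeq0.3~{\rm yr}$, $v\simeq94~{\rm km~s^{-1}}$ and $t_L\simeq18~{\rm day}$, matching the scalings quoted in the text.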
It is worth noting that for this scenario, the periodic evolution of RM variations would be easier to identify for an elliptical orbit than for a circular orbit, because the possible significant fluctuations of electron density and magnetic fields in the latter case might smear out the clean periodic signature in the RM variation. \subsection{RM variations contributed by stellar flares from a low-mass star companion}\label{stellarflare} Stellar flares are usually defined as catastrophic releases of magnetic energy leading to particle acceleration and electromagnetic radiation, accompanied by coronal mass ejections (CMEs) \citep[e.g.,][]{Haisch91}. Frequent flaring occurs on stars with an outer convection zone, and the timescale of energetic flares is longer than that of less energetic flares, regardless of the host star \citep[e.g.,][]{Pettersen89}. Over short timescales of minutes to a few hours, they emit energies ranging from $10^{23}~{\rm erg}$ (nanoflares; e.g., \cite{Parnell00}) to $10^{31}-10^{38}~{\rm erg}$ (superflares; e.g., \cite{Shibayama13,Gunther20}). In particular, for low-mass stars, due to strong convection near their surfaces, flares and CMEs are usually frequent. In the following discussion, we will mainly focus on stellar flares/CMEs from low-mass stars. To estimate the impact of stellar flares/CMEs, we apply the empirical relationship between the flare energy in the X-ray band, $E_X$, and the CME mass, $M_{\rm CME}$, found by \cite{Aarnio12}, i.e. $\log M_{\rm CME}=0.63\log E_X-2.57$ in cgs units. We assume that the ratio between the stellar flare X-ray energy $E_X$ and the CME kinetic energy $E_{\rm CME}$ is $\epsilon_X$, i.e. $E_X=\epsilon_X E_{\rm CME}$, and adopt a typical value $\epsilon_X\sim0.01$, considering that the energy emitted bolometrically is typically larger than the X-ray energy by a factor of 100 for the same flare strength \citep{Osten15,Gunther20}.
Therefore, the CME mass is \be M_{\rm CME}\simeq 2.1\times10^{16}~{\rm g}\fraction{E_{\rm CME}}{10^{32}~{\rm erg}}{0.63}, \ee and the CME velocity is estimated as \be v_{\rm CME}\simeq\fraction{2E_{\rm CME}}{M_{\rm CME}}{1/2} \simeq980~{\rm km~s^{-1}}\fraction{E_{\rm CME}}{10^{32}~{\rm erg}}{0.185}. \ee We define $t_{1/2}$ as the decay time from the peak luminosity to half of that level for a stellar flare. The U-filter flare data show that $t_{1/2}$ depends on the U-band flare energy, i.e. $\log t_{1/2}=0.3\log E_U-7.5$ in cgs units \citep{Pettersen89}. We assume $E_U=\epsilon_U E_{\rm CME}$ with the efficiency taken as $\epsilon_U\sim0.1$ for the U band \citep[e.g.,][]{Osten15}. The flare duration $\Delta t$ is then estimated as \be \Delta t\sim t_{1/2}\simeq63~{\rm s}\fraction{E_{\rm CME}}{10^{32}~{\rm erg}}{0.3}. \ee Therefore, the electron density of the CME at the distance $r$ from the center of the companion star is \begin{align} n_{\rm CME}(r)&\simeq\frac{M_{\rm CME}}{4\pi\mu_mm_pv_{\rm CME}r^2\Delta t}\nonumber\\ &\simeq600~{\rm cm^{-3}}\fraction{E_{\rm CME}}{10^{32}~{\rm erg}}{0.145}\fraction{r}{1~{\rm AU}}{-2}. \end{align} Different from the scenario of stellar winds, the DM and RM contributions from a stellar flare depend on the time difference between the stellar flare and the FRB. We consider that an FRB encounters the stellar flare at the distance $r$ from the companion star, as shown in Figure \ref{figstellarflare}. The DM contributed by the flare is approximately \be {\rm DM}_{\rm CME}\sim n_{\rm CME}r\simeq2.9\times10^{-3}~{\rm pc~cm^{-3}}\fraction{E_{\rm CME}}{10^{32}~{\rm erg}}{0.145}\fraction{r}{1~{\rm AU}}{-1}. \ee Similar to the discussion of stellar winds in Section \ref{stellarwind}, the Alfv\'en radius of the stellar flare is given by Eq.(\ref{alfven}), i.e.
\begin{align} R_A&\sim\fraction{B_c^2R_c^6\Delta t}{M_{\rm CME}v_{\rm CME}}{1/4}\nonumber\\ &\simeq0.6R_\odot\fraction{B_c}{1~{\rm G}}{1/2}\fraction{R_c}{1R_\odot}{3/2}\fraction{E_{\rm CME}}{10^{32}~{\rm erg}}{-0.13}. \end{align} Because $r\gg R_A$, the magnetic field in the stellar flare also satisfies $B(r)\propto r^{-\beta_B}$, with $\beta_B$ ranging from 1 to 2. Therefore, the RM contributed by a flare is approximately \begin{align} {\rm RM}_{\rm CME}&\sim \frac{e^3}{2\pi m_e^2c^4} Bn_{\rm CME}r\simeq11~{\rm rad~m^{-2}}\nonumber\\ &\times\fractionz{B_c}{1~{\rm G}}\fractionz{R_c}{1R_\odot}\fraction{E_{\rm CME}}{10^{32}~{\rm erg}}{0.145}\fraction{r}{1~{\rm AU}}{-2}, \end{align} for a toroidal field with $\beta_B\simeq1$, or \be {\rm RM}_{\rm CME}\simeq 0.05~{\rm rad~m^{-2}}\fractionz{B_c}{1~{\rm G}}\fractionz{R_c}{1R_\odot}\fraction{E_{\rm CME}}{10^{32}~{\rm erg}}{0.145}\fraction{r}{1~{\rm AU}}{-3}, \ee for a radial field with $\beta_B\simeq2$. Notice that the above typical values of RM are much smaller than those of stellar winds given by Eq.(\ref{RMw1}) and Eq.(\ref{RMw2}). This is because the absolute mass loss rates of stellar flares/CMEs from low-mass stars are much smaller than those of stellar winds from massive/giant stars. In particular, for a low-mass star with a surface magnetic field $B_c\sim10^3~{\rm G}$ and a radius of $R_c\sim0.1R_\odot$ (due to strong convection near the surface of a low-mass star, its surface magnetic field is usually stronger than that of a massive star; see also \cite{Kochukhov21}), the RM contributed by its flare at $r\sim1~{\rm AU}$ could reach $\sim10^3~{\rm rad~m^{-2}}$ for a toroidal field. Therefore, if the large RMs of $\gtrsim10^3~{\rm rad~m^{-2}}$ are contributed by the companion flare, the separation of such a binary system is required to be small, with $a\lesssim1~{\rm AU}$, and each radio burst is required to be emitted just when the CME sweeps across the FRB source. These requirements likely involve fine tuning.
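The CME scaling chain above (mass, velocity, duration, density, and RM) can be verified numerically; the snippet is a sketch using the fiducial values quoted in the equations, again taking $B(r)\simeq B_c(r/R_c)^{-1}$ for the toroidal case:

```python
import math

# cgs constants (standard values)
AU, RSUN, MP = 1.496e13, 6.957e10, 1.673e-24
ME, EC, C = 9.109e-28, 4.803e-10, 2.998e10

E_cme = 1.0e32                                   # CME kinetic energy, erg
E_x = 0.01 * E_cme                               # X-ray flare energy, eps_X ~ 0.01
M_cme = 10**(0.63 * math.log10(E_x) - 2.57)      # Aarnio et al. mass-energy relation, g
v_cme = math.sqrt(2 * E_cme / M_cme)             # CME velocity, cm/s
E_u = 0.1 * E_cme                                # U-band energy, eps_U ~ 0.1
t_half = 10**(0.3 * math.log10(E_u) - 7.5)       # flare decay time, s

r = 1.0 * AU
n_cme = M_cme / (4 * math.pi * 1.2 * MP * v_cme * r**2 * t_half)  # cm^-3

B_c, R_c = 1.0, 1.0 * RSUN                       # surface field (G), stellar radius (cm)
B_r = B_c * R_c / r                              # toroidal field at r, G
RM_cme = EC**3 / (2 * math.pi * ME**2 * C**4) * B_r * n_cme * r * 1e4  # rad m^-2

print(f"M ~ {M_cme:.1e} g, v ~ {v_cme/1e5:.0f} km/s, t_1/2 ~ {t_half:.0f} s")
print(f"n ~ {n_cme:.0f} cm^-3, RM ~ {RM_cme:.0f} rad m^-2")
```

For $E_{\rm CME}=10^{32}~{\rm erg}$ this recovers $M_{\rm CME}\sim2.1\times10^{16}~{\rm g}$, $v_{\rm CME}\sim980~{\rm km~s^{-1}}$, $\Delta t\sim63~{\rm s}$, $n_{\rm CME}\sim600~{\rm cm^{-3}}$ and ${\rm RM}_{\rm CME}\sim11~{\rm rad~m^{-2}}$, matching the equations above.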
Considering that the burst rate of some FRB repeaters ($\gtrsim 100~\text{bursts}~{\rm day^{-1}~source^{-1}}$ for FRB 121102, \cite{Lid21}) is much larger than the rate of stellar flares ($\lesssim10~\text{flares}~{\rm day^{-1}~source^{-1}}$, e.g., \cite{Osten15,Davenport16}) and that FRB emission and stellar flaring in a binary system should be physically independent, the time differences between FRBs and flares should be randomly distributed. We assume that two radio bursts (Burst A and Burst B) are emitted with a ten-day time delay in a binary system with a separation of $a\sim1~{\rm AU}$ and that the companion star has $B_c\sim10^3~{\rm G}$ and $R_c\sim0.1R_\odot$. Burst A is emitted just when the CME sweeps across the FRB source, leading to ${\rm RM}\sim10^3~{\rm rad~m^{-2}}$. Assuming that the CME velocity is $v\sim1000~{\rm km~s^{-1}}$, the CME would be at $r\sim6~{\rm AU}$ when Burst B crosses it. The RM of Burst B is about ${\rm RM}\sim30~{\rm rad~m^{-2}}$. In summary, the RM varies from $\sim10^3~{\rm rad~m^{-2}}$ to a few tens of ${\rm rad~m^{-2}}$ within ten days. Such an extreme RM variation has not been observed in any FRB repeater, which is inconsistent with the current observations unless flares are frequent enough that Burst B would encounter another newly ejected CME when it is observed. \subsection{RM variations contributed by pulsar winds, pulsar wind nebulae, or magnetar flares}\label{pair} In this subsection, we consider the RM contribution from a pair plasma, including a pulsar wind, a pulsar wind nebula, or a magnetar flare. A pulsar wind could be produced by a neutron star as the companion of the FRB source in a binary system (see panel (a) of Figure \ref{figpulsarwind}) or by the FRB source itself (see panel (b) of Figure \ref{figpulsarwind}). In order to place a strong constraint, we first assume that the FRB source is at the center of the pulsar wind in the following discussion (Figure \ref{figpulsarwind}b).
If the FRB source significantly deviates from the center of the pulsar wind (Figure \ref{figpulsarwind}a), the corresponding RM would be smaller. In general, since the pulsar wind is composed of relativistic electron-positron pairs, its RM contribution would be very small, as shown below. For a neutron star with radius $R$, dipolar magnetic field at the pole $B_p$ and angular velocity $\Omega$, the spin-down power of the neutron star is \be L_{\rm sd} \simeq \frac{B_p^2R^6\Omega^4} {6c^3} \simeq9.6\times10^{36}~{\rm erg~s^{-1}}\fraction{B_{p}}{10^{13}~{\rm G}}{2}\fraction{P}{0.1~{\rm s}}{-4}. \ee The magnetic field is nearly dipolar inside the light cylinder $R_{\rm LC}=c/\Omega$ but becomes toroidal in the pulsar wind. Thus, the field strength at $r>R_{\rm LC}$ is given by \begin{align} B(r)&=\frac{B_p}{2}\left(\frac{R}{R_{\rm LC}}\right)^{3}\left(\frac{R_{\rm LC}}{r}\right)\nonumber\\ &=2.2~{\rm G}\fractionz{B_p}{10^{13}~{\rm G}}\fraction{P}{0.1~{\rm s}}{-2}\fraction{r}{10^{13}~{\rm cm}}{-1},\label{B} \end{align} where $B_p/2$ is the mean surface magnetic field strength. The Goldreich-Julian particle ejection rate from the polar cap may be estimated by \be \dot N_{\rm GJ}=2cA_{\rm cap}n_{\rm GJ}=2.7\times10^{33}~{\rm s^{-1}}\fractionz{B_p}{10^{13}~{\rm G}}\fraction{P}{0.1~{\rm s}}{-2}, \ee where $n_{\rm GJ}=B_p/(Pec)$ is the Goldreich-Julian density at the neutron star pole \citep{Goldreich69}, and $A_{\rm cap}\simeq\pi R^3/R_{\rm LC}$ is the area of the polar cap for $R_{\rm LC}\gg R$.
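The pulsar quantities above (spin-down power, wind field, and Goldreich-Julian ejection rate) can be checked with the fiducial values $B_p=10^{13}~{\rm G}$, $P=0.1~{\rm s}$ and $R=10~{\rm km}$; a minimal numerical sketch:

```python
import math

C, EC = 2.998e10, 4.803e-10      # speed of light, electron charge (cgs)

B_p, P, R = 1.0e13, 0.1, 1.0e6   # polar field (G), period (s), NS radius (cm)
Omega = 2 * math.pi / P
R_lc = C / Omega                 # light-cylinder radius, cm

L_sd = B_p**2 * R**6 * Omega**4 / (6 * C**3)      # spin-down power, erg/s

r = 1.0e13                       # cm
B_r = 0.5 * B_p * (R / R_lc)**3 * (R_lc / r)      # toroidal wind field at r, G

n_gj = B_p / (P * EC * C)                         # GJ density at the pole, cm^-3
A_cap = math.pi * R**3 / R_lc                     # polar cap area, cm^2
Ndot = 2 * C * A_cap * n_gj                       # particle ejection rate, s^-1

print(f"L_sd ~ {L_sd:.1e} erg/s, B(1e13 cm) ~ {B_r:.1f} G, Ndot ~ {Ndot:.1e} s^-1")
```

This recovers $L_{\rm sd}\simeq9.6\times10^{36}~{\rm erg~s^{-1}}$, $B\simeq2.2~{\rm G}$ at $r=10^{13}~{\rm cm}$, and $\dot N_{\rm GJ}\simeq2.7\times10^{33}~{\rm s^{-1}}$, consistent with the equations above.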
For a pair multiplicity $\mathcal{M}$, the electron/positron number density at distance $r$ is \begin{align} n_{e}(r)&=\frac{\mathcal{M}\dot N_{\rm GJ}}{4\pi c r^2}=0.07~{\rm cm^{-3}}\fractionz{\mathcal{M}}{10^3}\nonumber\\ &\times\fractionz{B_p}{10^{13}~{\rm G}}\fraction{P}{0.1~{\rm s}}{-2}\fraction{r}{10^{13}~{\rm cm}}{-2}.\label{n} \end{align} We consider that the bulk Lorentz factor of the pulsar wind is $\gamma$, and that the pair plasma is hot, with thermal Lorentz factor $\gamma_{\rm th}$ in the comoving frame. For the pair plasma, due to the symmetry of positive and negative charges, its RM would be suppressed by the multiplicity $\mathcal{M}$. In other words, only the net charges contribute to Faraday rotation (see Appendix \ref{RMpair} for details). Meanwhile, in the comoving frame, the RM contribution from relativistic hot electrons is suppressed by a factor of $\gamma_{\rm th}^2$ due to the relativistic mass $m_e\rightarrow\gamma_{\rm th} m_e$ \citep{Quataert00}. Therefore, the RM contributed by the pulsar wind is (see Appendix \ref{RMpair} for details) \be {\rm RM}=\frac{e^3}{\pi m_e^2c^4}\frac{1}{\gamma_{\rm th}^2\mathcal{M}}\int_{r_c}^{d}n_eB_\parallel ds.\label{RMw} \ee Notice that the above equation does not directly involve the wind Lorentz factor $\gamma$ (see Appendix \ref{RMpair} for the detailed reason). In Eq.(\ref{RMw}), $d$ corresponds to the radius of a pulsar wind nebula, and $r_c$ is the critical radius where the electron cyclotron frequency equals the wave frequency in the comoving frame, leading to \be B(r_c)\sim \frac{2\pi m_ec}{e}\left(\frac{\nu}{\gamma}\right)\simeq3.6~{\rm G}\fraction{\gamma}{100}{-1}\fractionz{\nu}{1~{\rm GHz}}.\label{Brc} \ee Different from Eq.(\ref{Bc}), here a factor of $1/\gamma$ is involved due to the Doppler effect of the relativistic motion of the pulsar wind and the fact that the parallel field is unchanged in different frames (see Appendix \ref{RMpair}).
In the region with $r<r_c$, the requirement of $\psi\propto\lambda^2$ for the RM measurement could not be satisfied, according to the wave dispersion relation (see Appendix \ref{RMpair}). According to the equations for $B(r)$ and $n_e(r)$ (Eq.(\ref{B}) and Eq.(\ref{n})), one has \be \frac{B}{n_e}\simeq30~{\rm G~cm^3}\fraction{\mathcal{M}}{10^3}{-1}\fractionz{r}{10^{13}~{\rm cm}},\label{Bn} \ee which is independent of the surface magnetic field and period of the neutron star. Due to $n_e\propto r^{-2}$ and $B\propto r^{-1}$, most of the RM is contributed near $r_c$. Thus, according to Eq.(\ref{RMw}), Eq.(\ref{Brc}) and Eq.(\ref{Bn}), the RM contributed by a pulsar wind is estimated as \be {\rm RM}\simeq2.3\times10^{-3}~{\rm rad~m^{-2}}\gamma_{\rm th}^{-2}\fraction{\gamma}{100}{-2}\fraction{\nu}{1~{\rm GHz}}{2},\label{RMpsr} \ee which is very small. Note that in the above equation, the frequency-dependent RM is due to the integral lower limit $r_c$. For an FRB with a finite bandwidth between $(\nu_{\min},\nu_{\max})$, the RM measurement based on the condition $\psi={\rm RM}\lambda^2$ implies that $\nu$ in Eq.(\ref{RMpsr}) should be replaced by the observed minimum frequency $\nu_{\rm min}$. Thus, the RM and its variation contributed by a pulsar wind are very small, independent of the surface magnetic field, period and multiplicity of the neutron star. Note that the above discussion assumes that the parallel component of the magnetic field is of the order of the total field, $B_\parallel\sim B$. For the pulsar wind scenario with the field almost perpendicular to the wind velocity \citep[e.g.,][]{Becker09}, the parallel component $B_\parallel$ would be much smaller, leading to an even smaller RM contribution. We are also interested in the scenarios of pulsar wind nebulae and magnetar flares. A pulsar wind nebula is produced by the interaction between the pulsar wind and the SNR/interstellar medium.
In this process, the kinetic energy of the pulsar wind is transferred to thermal energy, i.e., $\gamma\rightarrow\gamma_{\rm th}$. Meanwhile, more pairs could be generated via magnetic reconnection, but the number of net charges remains unchanged. Because $n_e\propto r^{-2}$ and $B\propto r^{-1}$, at the pulsar wind nebula radius, which is much larger than $r_c$, the RM contribution would be much smaller. For magnetar flares, since part of the flare energy is transferred to relativistic pairs \citep{Thompson95}, their RM contribution is also expected to be very small, for the same reason as in pulsar winds and pulsar wind nebulae. The estimated RM contributions from pulsar winds (nebulae) and magnetar flares are also consistent with the observations of most Galactic radio pulsars and radio-loud magnetars, which have relatively small RMs contributed mainly by the interstellar medium. In particular, FRB 200428 was produced during the active phase of the magnetar SGR J1935+2154 \citep{Bochenek20,CHIME20}, and was associated with a hard X-ray burst \citep{Mereghetti20,Li20,Ridnaia20,Tavani20}. However, its RM is almost consistent with the value during its radio pulsar phase (Zhu et al. 2022, submitted). This implies that a magnetar flare cannot contribute significantly to RM variations. \subsection{RM variations contributed by magnetized outflows from a massive black hole}\label{BH} Extremely large RMs with ${\rm RM}\gtrsim 10^4~{\rm rad~m^{-2}}$ have been observed in the vicinities of massive black holes \citep{Bower03,Marrone07,Eatough13}.
For example, the radio-loud magnetar PSR J1745-2900, which resides just $0.12~{\rm pc}$ from Sgr ${\rm A}^\ast$ \citep{Eatough13}, shows a large but relatively stable DM of $1800~{\rm pc~cm^{-3}}$ (consistent with a source located within $10~{\rm pc}$ of the Galactic center in the framework of the NE2001 free electron density model of the Galaxy; \citealt{Cordes02}) and an RM of $10^4-10^5~{\rm rad~m^{-2}}$ with an RM variability of $\sim 3500~{\rm rad~m^{-2}}$ \citep{Desvignes18}. Thus, it has been suggested that the large RMs observed in some FRB repeaters might result from the source being located in the vicinity of a massive black hole \citep{Michilli18,Zhang18,Anna-Thomas22,Dai22}, as shown in Figure \ref{figblackhole}. Since the wind from a massive black hole is attributed to its accretion, the mass loss rate of the massive black hole can be normalized to the Eddington accretion rate $\dot M_{\rm Edd}$ with a dimensionless parameter $f$, \begin{align} \dot M&=f\dot M_{\rm Edd}=\frac{4\pi Gm_p}{\epsilon_{\rm BH}\sigma_Tc}fM_{\rm BH}\nonumber\\ &\simeq2.2\times10^{-3}M_\odot~{\rm yr^{-1}}f\fractionz{M_{\rm BH}}{10^5M_\odot}, \end{align} where $M_{\rm BH}$ is the black hole mass, and $\epsilon_{\rm BH}\sim0.1$ is the radiative efficiency of a black hole accretion disk. Thus, the electron density at distance $r$ from the massive black hole is \begin{align} n_e(r)&\simeq\frac{\dot M}{4\pi\mu_mm_pvr^2}\nonumber\\ &\simeq2\times10^3~{\rm cm^{-3}}f\fractionz{M_{\rm BH}}{10^5M_\odot}\fraction{v}{0.1c}{-1}\fraction{r}{10^{-2}~{\rm pc}}{-2}.
\end{align} Similar to the discussion on stellar winds (see Section \ref{stellarwind}), the DM contribution is \be {\rm DM}\sim n_ea \simeq20~{\rm pc~cm^{-3}}f\fractionz{M_{\rm BH}}{10^5M_\odot}\fraction{v}{0.1c}{-1}\fraction{a}{10^{-2}~{\rm pc}}{-1}, \ee where $a$ is the separation between the FRB source and the massive black hole, and the RM contribution is \begin{align} {\rm RM}&\sim \frac{e^3}{2\pi m_e^2c^4} B_rn_ea\simeq1.6\times10^4~{\rm rad~m^{-2}}f\nonumber\\ &\times\fractionz{M_{\rm BH}}{10^5M_\odot}\fraction{v}{0.1c}{-1}\fractionz{B_r}{1~{\rm mG}}\fraction{a}{10^{-2}~{\rm pc}}{-1}, \end{align} where $B_r$ is the field strength at $r\sim a$ from the center of the massive black hole. When an FRB source is moving near a massive black hole with a Keplerian velocity of \be v\simeq\fraction{GM_{\rm BH}}{a}{1/2}\simeq210~{\rm km~s^{-1}}\fraction{M_{\rm BH}}{10^5M_\odot}{1/2}\fraction{a}{10^{-2}~{\rm pc}}{-1/2}, \ee the RM variation is accounted for by the changes of the magnetic field or electron density due to orbital motion or the inhomogeneous wind medium, similar to the binary-system scenarios. Different from the stellar wind scenario, the outflow from a massive black hole might interact with the gas clouds in the vicinity of the massive black hole. For turbulence in the outflow at the distance $r\sim a$, the typical turbulence outer scale may be estimated as \be L\sim \min(\theta_{\rm out} a, l_{\rm cloud}), \ee where $\theta_{\rm out}$ is the typical anisotropic distribution angle of the outflow from the massive black hole, and $l_{\rm cloud}$ is the typical size of the clouds for the turbulence induced by the interaction of the outflow with the circumnuclear clouds.
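The black-hole outflow estimates above ($\dot M$, $n_e$, DM, RM, and the Keplerian velocity) follow from the same wind algebra as Section \ref{stellarwind}; a quick numerical check with the fiducial parameters ($f=1$, $M_{\rm BH}=10^5M_\odot$, $v=0.1c$, $a=10^{-2}~{\rm pc}$, $B_r=1~{\rm mG}$):

```python
import math

# cgs constants (standard values)
G, MP, ME, EC, C = 6.674e-8, 1.673e-24, 9.109e-28, 4.803e-10, 2.998e10
SIGMA_T, MSUN, PC = 6.652e-25, 1.989e33, 3.086e18

f, M_bh, eps_bh = 1.0, 1.0e5 * MSUN, 0.1
v, a, B_r = 0.1 * C, 1.0e-2 * PC, 1.0e-3   # outflow speed, separation, field (G)

mdot = f * 4 * math.pi * G * MP * M_bh / (eps_bh * SIGMA_T * C)   # g/s
n_e = mdot / (4 * math.pi * 1.2 * MP * v * a**2)                  # cm^-3 at r ~ a
DM = n_e * a / PC                                                 # pc cm^-3
RM = EC**3 / (2 * math.pi * ME**2 * C**4) * B_r * n_e * a * 1e4   # rad m^-2
v_k = math.sqrt(G * M_bh / a) / 1e5                               # km/s

print(f"mdot ~ {mdot:.1e} g/s, n_e ~ {n_e:.0f} cm^-3")
print(f"DM ~ {DM:.0f} pc cm^-3, RM ~ {RM:.1e} rad m^-2, v_K ~ {v_k:.0f} km/s")
```

This recovers $\dot M\simeq2.2\times10^{-3}M_\odot~{\rm yr^{-1}}$, $n_e\simeq2\times10^3~{\rm cm^{-3}}$, ${\rm DM}\simeq20~{\rm pc~cm^{-3}}$, ${\rm RM}\simeq1.6\times10^4~{\rm rad~m^{-2}}$ and $v\simeq210~{\rm km~s^{-1}}$, matching the equations above.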
If the anisotropy of the outflow is significant, $\theta_{\rm out} a<l_{\rm cloud}$, one has \be L\sim \theta_{\rm out} a\simeq10^{-2}~{\rm pc}\fractionz{\theta_{\rm out}}{1~{\rm rad}}\fractionz{a}{10^{-2}~{\rm pc}}, \ee and the timescale of the FRB source crossing the outer scale $L$ is \be t_L\sim\frac{L}{v}\simeq47~{\rm yr}\fractionz{\theta_{\rm out}}{1~{\rm rad}}\fraction{M_{\rm BH}}{10^5M_\odot}{-1/2}\fraction{a}{10^{-2}~{\rm pc}}{3/2}. \ee If the typical cloud size is small, with $l_{\rm cloud}<\theta_{\rm out} a$, one has \be L\sim l_{\rm cloud}\simeq10^{-2}~{\rm pc}\fractionz{l_{\rm cloud}}{10^{-2}~{\rm pc}}, \ee and the time of the FRB source crossing the outer scale $L$ is \be t_L\sim47~{\rm yr}\fractionz{l_{\rm cloud}}{10^{-2}~{\rm pc}}\fraction{M_{\rm BH}}{10^5M_\odot}{-1/2}\fraction{a}{10^{-2}~{\rm pc}}{1/2}. \ee According to Eq.(\ref{sf1}), Eq.(\ref{sf2}) and Eq.(\ref{RMvariation}), one has $|\delta {\rm RM}/{\rm RM}|\propto t^{-(\alpha+2)/2}$ and $D_{\rm RM}(t)\propto t^{-(\alpha+2)}$ for $t\lesssim t_L$; and $|\delta {\rm RM}/{\rm RM}|\sim 1$ and $D_{\rm RM}(t)\sim\text{constant}$ for $t\gtrsim t_L$. Finally, if the orbit of the FRB source is elliptical, a large RM variation would occur near the periastron and the RM would remain almost constant far from the periastron, so a significant periodic evolution of the RM variation could be tested through long-term monitoring of the FRB sources. In general, the timescale is too long for the massive black hole scenario to interpret the observed short-term RM variations, which requires scaled-down $a$ and $l_{\rm cloud}$; the FRB source must be very close to the black hole in order to produce the observed rapid RM variability. \section{Discussions and Conclusions}\label{discussion} FRBs are mysterious radio transients whose physical origin is still unknown. As cosmological radio transients, their propagation effects (including dispersion, Faraday rotation, temporal scattering, scintillation, depolarization, etc.)
are important probes to reveal the physical properties of the astrophysical environments the radio waves propagate through, including the near-source plasma, the interstellar medium and the intergalactic medium. Different from the DM, which is mainly contributed by the intergalactic medium, a large absolute RM value of $\gtrsim(10^2-10^3)~{\rm rad~m^{-2}}$ observed at a high Galactic latitude can only be contributed by the magnetized environment near an FRB source. Furthermore, since the interstellar medium and intergalactic medium are not expected to vary on short timescales, an observed RM variation can only be attributed to the dynamical evolution, or the relative motion, of the near-source plasma with respect to the FRB source. Very recently, some FRB repeaters were found to show significant RM variations \citep{Michilli18,Hilmarsson21,Xu21,Anna-Thomas22,Dai22,Mckinven22}, and the relative variation amplitudes of some repeaters reach $|\delta{\rm RM}/{\rm RM}|\sim1$ over a few months to a few years, e.g., FRB 121102 and FRB 190520B \citep{Michilli18,Anna-Thomas22,Dai22}. The RM variations of FRB repeaters reflect that the near-source environments of FRB repeaters are dynamically evolving (e.g., SNR, stellar flare, etc.) or that there is a significant relative motion between the FRB source and the environment (e.g., an FRB source in a binary system or in the vicinity of a massive black hole). If the magnetized environment is inhomogeneous, when an FRB propagates through it, the electromagnetic waves will be depolarized due to multi-path propagation effects \citep{Beniamini22,Yang22}, which has recently been confirmed by observations of some FRB repeaters \citep{Feng22}.
In this work, we have investigated several astrophysical processes that may cause RM variations of an FRB repeater, including SNRs, winds and flares from a companion in a binary system, pair plasma (pulsar winds, pulsar wind nebulae and magnetar flares), and outflows from massive black holes, as well as the turbulence induced in these processes. First, we presented a general discussion of the statistical properties of random RM variations. We considered a power spectrum of the RM density ($n_eB_\parallel$) fluctuations satisfying $P(k)\propto k^{\alpha}$; the RM structure function is then $D_{\rm RM}(t)\propto t^{-(\alpha+2)}$ for $v_\perp t<L$ and $D_{\rm RM}(t)\sim\text{constant}$ for $v_\perp t>L$, where $v_\perp$ is the transverse relative velocity between the FRB source and the environment, and $L$ is the outer scale of the inhomogeneous medium. During the observing time $t$, the relative RM variation is $|\delta {\rm RM}/{\rm RM}|\sim(v_\perp t/L)^{-(\alpha+2)/2}$ for $v_\perp t<L$ and $|\delta {\rm RM}/{\rm RM}|\sim1$ for $v_\perp t>L$. The measured RM structure functions of some FRB repeaters reveal $D_{\rm RM}(t)\propto t^{0.2-0.4}$ \citep{Mckinven22}, leading to $\alpha\sim-(2.2-2.4)$, which implies that the power spectrum of the RM density fluctuations is shallow and that the RM variations are mainly contributed by electron density fluctuations at small scales. Meanwhile, the large relative RM variations of FRB 121102 and FRB 190520B imply that the outer scale of the fluctuations is $L\lesssim v_\perp t\simeq10^{-4}~{\rm pc}(v_\perp/100~{\rm km~s^{-1}})(t/1~{\rm yr})$ for observations over a few years. On the other hand, secular RM evolution could be attributed to the expansion of a magnetized shell (e.g., an SNR, stellar flares, etc.) or the orbital motion of a binary system (e.g., an FRB source in the stellar winds of a companion in a binary system or in the outflows of a massive black hole).
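Two of the numbers quoted above follow from one-line conversions and can be checked numerically. A sketch with our own helper functions (the function names are ours, not from the paper):

```python
# 1) Spectral index from the measured structure-function slope:
#    D_RM(t) ∝ t^{-(alpha+2)} = t^s  =>  alpha = -(s + 2)
def alpha_from_sf_slope(s):
    return -(s + 2.0)

# 2) Outer-scale bound L <~ v_perp * t, converted to parsec.
def outer_scale_bound_pc(v_perp_kms, t_yr):
    cm_per_pc, s_per_yr = 3.086e18, 3.156e7
    return v_perp_kms * 1e5 * t_yr * s_per_yr / cm_per_pc

print(alpha_from_sf_slope(0.2), alpha_from_sf_slope(0.4))  # -2.2, -2.4
print(outer_scale_bound_pc(100.0, 1.0))                    # ~1e-4 pc
```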
The shell-expansion scenario predicts that the RM exhibits a long-term monotonic evolution, while the orbital-motion scenario suggests that a periodic RM evolution could be detected when the companion has a large-scale strong magnetic field, especially if the orbit is elliptical. The long-term evolution of the RM contributed by an SNR has been discussed in some previous papers under the assumption that the geometry of the magnetic fields along the line of sight remains unchanged \citep[e.g.,][]{Piro18,Zhao21b}. However, such a model cannot explain the non-monotonic, irregular RM evolution exhibited by some FRB repeaters. In this work, we consider that the medium in an SNR is inhomogeneous due to turbulence and instabilities, and that the irregular RM variation is due to the relative motion between the FRB source and the SNR. We find two possibilities for producing a large relative RM variation within a few years: 1) the SNR is young, with an age of a few hundred years; in this case, the SNR shell must be thin compared with its radius and significantly anisotropic locally. In addition, a young SNR would predict an observable secular evolution of RM and DM, which can be tested in future observations. 2) The SNR could be relatively older, and the size of the dense clouds around the SNR is extremely small. However, such small-scale clouds have not been detected due to the limited resolution of observations. Significant RM variations can also be contributed by the medium from the companion in a binary system. Indeed, some evidence suggests that the FRB source might be in a binary system (see the detailed discussion in Section \ref{stellarwind}). When an FRB repeater is in a binary system, the RM variation could be caused by the orbital motion of the binary system or by the dynamical evolution of the medium from the companion.
For a persistent stellar wind, we consider that the RM variation is due to the inhomogeneity arising from turbulence in the anisotropic distribution of the stellar wind. We find that in order to explain the large RMs of $\sim(10^4-10^5)~{\rm rad~m^{-2}}$ of some FRB repeaters, the mass loss rate of the companion wind is required to be $\dot M\sim 10^{-8}M_\odot~{\rm yr^{-1}}$ if the FRB source is at $r\sim1~{\rm AU}$. Such a large mass loss rate implies that the companion is a massive main sequence star or a giant star. Meanwhile, for a binary system with a mass $M_{\rm tot}\sim10M_\odot$, a separation $a\sim1~{\rm AU}$, and a companion wind with an anisotropic distribution angle $\theta_w\sim1~{\rm rad}$, the relative RM variation could be $|\delta {\rm RM}/{\rm RM}|\sim1$ and $D_{\rm RM}(t)\sim\text{constant}$ for observing times longer than a few weeks. In particular, if the binary orbit is elliptical, a large RM variation would occur near the periastron, and a periodic evolution of the RM variation is expected \citep{Wang22}. Different from the stellar wind case, stellar flares are catastrophic releases of magnetic energy and are accompanied by CMEs, which are more frequent in low-mass stars. Based on the observed empirical relations for stellar flares, we calculated the CME RM contribution in a binary system. We found that the RM is almost independent of the CME energy, but depends mainly on the companion's surface magnetic field and on the position at which the FRB crosses the CME. Although a large RM value can be generated if the companion is a low-mass star with a strong field and a small separation from the FRB source, a large short-term RM variation is expected because the FRB crosses the CME at different positions at different times. Current observations do not seem to support such a scenario unless the flares are very frequent.
In the above discussion, the RM is mainly considered to be contributed by a cold non-relativistic magneto-ionic (ion+electron) plasma. In some astrophysical scenarios, including pulsar winds, pulsar wind nebulae and magnetar flares, the plasma is composed of relativistic pairs. Due to the symmetry of positive and negative charges, the Faraday rotation effect is largely canceled, and only the net charges contribute to the RM (see Appendix \ref{RMpair} for details). In addition, the relativistic motion of the electrons significantly suppresses the RM due to the large kinetic mass. Therefore, pulsar winds, pulsar wind nebulae and magnetar flares cannot contribute significantly to the RM and its variations. This is consistent with observations of most Galactic pulsars and magnetars, in particular the observations of FRB 200428 and the radio pulses from SGR J1935+2154. Finally, we discussed the RM contributed by the plasma near a massive black hole. Extremely large RMs of ${\rm RM}\gtrsim 10^4~{\rm rad~m^{-2}}$ have been observed in the vicinity of massive black holes \citep{Bower03,Marrone07,Eatough13}, and such regions have been proposed as the environments of some FRB repeaters with extremely large RMs. In such a scenario, the random RM variation can be due to turbulence in the anisotropic distribution of the outflow from the massive black hole or to the interaction between the outflow and nearby clouds. Similar to the stellar wind scenario, if the orbit of the FRB source is elliptical, a large RM variation would occur near the periastron, and a periodic evolution of the RM variation is expected from long-term monitoring. It is worth noting that the mass of the massive black hole cannot be too large in this scenario, because many FRB repeaters have been localized at positions far from the centers of their host galaxies \citep{Chatterjee17,CHIME20b,Xu21}.
\section*{Acknowledgements} We acknowledge helpful discussions with Shi Dai, Yi Feng, Kejia Lee, Di Li, Dongzi Li, Qiao-Chu Li, Fa-Yin Wang, Wei-Yang Wang, Zhao-Yang Xia, and Yong-Kun Zhang. YPY is supported by National Natural Science Foundation of China grant No. 12003028 and the China Manned Space Project (CMS-CSST-2021-B11). \section*{Data Availability} This theoretical study did not generate any new data. \bibliographystyle{mnras} \bibliography{ms} \appendix \section{Structure function of RM fluctuations}\label{RMSF} In this appendix, we calculate the RM structure function following \cite{Lazarian16}. We define the position on the plane of the sky as $\overrightarrow{x}$ and the distance along the LOS as $s$. The Faraday RM can be written as \be {\rm RM}(\overrightarrow{x})=\kappa\int u(\overrightarrow{x},s) ds, \ee where $\kappa\equiv e^3/(2\pi m_e^2c^4)$, and $u(\overrightarrow{x},s)\equiv n_e(\overrightarrow{x},s)B_\parallel(\overrightarrow{x},s)$ is defined as the RM density. The RM density $u(\overrightarrow{x},s)$ can be written as the sum of its ensemble-average mean and zero-mean fluctuations, \be u(\overrightarrow{x},s)=u_0+\delta u(\overrightarrow{x},s)~~~\text{with}~\left<\delta u(\overrightarrow{x},s)\right>=0, \ee where the subscript ``0'' denotes the mean value, and $\left<...\right>$ denotes an ensemble average. The two-point correlation function $\xi_u(l,\Delta s)$ and the structure function $D_u(l,\Delta s)$ of the RM density (fluctuations) are given by \begin{align} \xi_u(l,\Delta s)&=\kappa^2\left<\delta u(\overrightarrow{x_1},s_1)\delta u(\overrightarrow{x_2},s_2)\right>,\\ D_u(l,\Delta s)&=\kappa^2\left<[u(\overrightarrow{x_1},s_1)-u(\overrightarrow{x_2},s_2)]^2\right>, \end{align} where the transverse separation is $l=|\overrightarrow{x_1}-\overrightarrow{x_2}|$, and $\Delta s=s_1-s_2$.
Here the statistical homogeneity of the medium is assumed, which is reflected in the fact that $\xi_u(l,\Delta s)$ and $D_u(l,\Delta s)$ depend only on the coordinate difference between the two positions. Following the statistical description presented in \cite{Lazarian16}, we adopt a power-law model for $\xi_u(l,\Delta s)$ and $D_u(l,\Delta s)$, \begin{align} \xi_u(l,\Delta s)&=\sigma_{\rm RM}^2\frac{l_{\rm RM}^m}{l_{\rm RM}^m+(l^2+\Delta s^2)^{m/2}},\label{xiu}\\ D_u(l,\Delta s)&=2\sigma_{\rm RM}^2\frac{(l^2+\Delta s^2)^{m/2}}{l_{\rm RM}^m+(l^2+\Delta s^2)^{m/2}},\label{Du} \end{align} where $m$ is the scaling slope, $l_{\rm RM}$ is the correlation length of the RM density, and $\sigma_{\rm RM}^2=\kappa^2\left<\delta u^2\right>$ is the variance of the fluctuations. According to the above equations, the correlation scale $l_{\rm RM}$ can also be defined via $\xi_u(l_{\rm RM},0)=\kappa^2\left<\delta u(\overrightarrow{x}+\overrightarrow{l_{\rm RM}})\delta u(\overrightarrow{x})\right>=\sigma_{\rm RM}^2/2$. We consider that the power spectrum of the RM density fluctuations satisfies $P(k)\propto k^\alpha$ for $L^{-1}<k<l_0^{-1}$, where $k=2\pi/l$ is the spatial wavenumber, and $L$ and $l_0$ are the outer and inner scales, respectively. Power spectra with $\alpha<-3$ and $\alpha>-3$ are called steep (e.g., $\alpha=-11/3$ for the Kolmogorov scaling) and shallow, respectively. For a steep spectrum, the fluctuations are dominated by the large scales $\sim L$, which correspond to the energy injection scale of the turbulence, and the correlation scale is $l_{\rm RM}\sim L$. For a shallow spectrum, the fluctuations are dominated by the small scales $\sim l_0$, which is the energy dissipation scale of the turbulence, and the correlation scale is $l_{\rm RM}\sim l_0$ \citep{Lazarian16,Xu16}.
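The statistics above can also be estimated directly from data: replacing the spatial separation with a time lag (via $l=v_\perp t$, as in the main text), the un-halved structure function of a uniformly sampled RM series has a simple discrete estimator. A minimal sketch; the variable names are ours:

```python
# Discrete estimator of the (standard, un-halved) structure function
# D_RM(lag) = <[RM(t+lag) - RM(t)]^2> for a uniformly sampled series.
def structure_function(rm, lag):
    diffs = [(rm[i + lag] - rm[i]) ** 2 for i in range(len(rm) - lag)]
    return sum(diffs) / len(diffs)

# Sanity check: for a linear drift RM(t) = c*t the estimator returns (c*lag)^2.
rm_series = [2.0 * t for t in range(100)]
print(structure_function(rm_series, 3))  # 36.0
```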
The relation between the scaling slope $m$ and the spectral index $\alpha$ depends on whether the power spectrum is steep or shallow \citep{Lazarian06,Xu16}, \begin{align} m&=-(\alpha+3),~~~\alpha<-3,\label{m1}\\ m&=\alpha+3,~~~\alpha>-3.\label{m2} \end{align} Next, we define the structure function of the RM as \begin{align} D_{\rm RM}(\overrightarrow{l})&\equiv\left<[{\rm RM}(\overrightarrow{x}+\overrightarrow{l})-{\rm RM}(\overrightarrow{x})]^2\right>\nonumber\\ &=\left<[\delta{\rm RM}(\overrightarrow{x}+\overrightarrow{l})-\delta{\rm RM}(\overrightarrow{x})]^2\right>. \end{align} Notice that the non-standard factor $1/2$ in the definition of $D_{\rm RM}$ in \cite{Lazarian16} has been removed here, because we adopt the standard definition of the structure function. According to \cite{Lazarian16}, for a Faraday screen with thickness $\Delta R$, the RM structure function can be calculated as \begin{align} D_{\rm RM}(l)&=4\int_0^{\Delta R}(\Delta R-\Delta s)[\xi_u(0,\Delta s)-\xi_u(l,\Delta s)]d\Delta s\nonumber\\ &=4\sigma_{\rm RM}^2\int_0^{\Delta R}(\Delta R-\Delta s)\nonumber\\ &\times\left[\frac{l_{\rm RM}^m}{l_{\rm RM}^m+\Delta s^m}-\frac{l_{\rm RM}^m}{l_{\rm RM}^m+(l^2+\Delta s^2)^{m/2}}\right]d\Delta s. \end{align} In order to obtain an analytical solution of the above integral, we adopt the following approximations: 1) we are mainly interested in the case of $l\lesssim \min (L,\Delta R)$, because RMs separated by $l\gtrsim \min (L,\Delta R)$ should be independent, leading to a structure function $D_{\rm RM}(l)\sim\text{constant}$ for $l\gtrsim \min (L,\Delta R)$; 2) since $\xi_u(0,\Delta s)-\xi_u(l,\Delta s)\sim0$ for $\Delta s\gg l$, the upper limit of the above integral can be changed to $\Delta R\rightarrow l$; 3) over the integration range from $0$ to $l$, the integrand factor $\Delta R-\Delta s$ is approximately $\Delta R-\Delta s\sim\Delta R$.
Therefore, the above equation can be approximately written as \be D_{\rm RM}(l)\simeq4\sigma_{\rm RM}^2\Delta R\int_0^{l}\left[\frac{l_{\rm RM}^m}{l_{\rm RM}^m+\Delta s^m}-\frac{l_{\rm RM}^m}{l_{\rm RM}^m+(l^2+\Delta s^2)^{m/2}}\right]d\Delta s. \ee (1) For $l<l_{\rm RM}$, one has \begin{align} D_{\rm RM}(l)&\simeq4\sigma_{\rm RM}^2\Delta R\int_0^{l}\left\{\left[1-\fraction{\Delta s}{l_{\rm RM}}{m}\right]\right.\nonumber\\ &\left.-\left[1-\left(\fraction{l}{l_{\rm RM}}{2}+\fraction{\Delta s}{l_{\rm RM}}{2}\right)^{m/2}\right]\right\}d\Delta s\nonumber\\ &\sim\sigma_{\rm RM}^2\Delta R\int_0^{l}\fraction{l}{l_{\rm RM}}{m}d\Delta s\sim\sigma_{\rm RM}^2\Delta Rl\fraction{l}{l_{\rm RM}}{m}. \end{align} (2) For $l>l_{\rm RM}$, one has \begin{align} D_{\rm RM}(l)&\simeq4\sigma_{\rm RM}^2\Delta R\int_0^{l}\left(\frac{l_{\rm RM}^m}{l_{\rm RM}^m+\Delta s^m}\right)d\Delta s\nonumber\\ &\sim \sigma_{\rm RM}^2\Delta R\left[\int_0^{l_{\rm RM}}d\Delta s+\int_{l_{\rm RM}}^l\fraction{\Delta s}{l_{\rm RM}}{-m}d\Delta s\right]\nonumber\\ &\sim\sigma_{\rm RM}^2\Delta R l_{\rm RM}\left[1+\fraction{l}{l_{\rm RM}}{1-m}\right], \end{align} where $D_{\rm RM}(l)\sim \sigma_{\rm RM}^2\Delta R l_{\rm RM}$ for $m>1$ and $D_{\rm RM}(l)\sim \sigma_{\rm RM}^2\Delta R l_{\rm RM}(l/l_{\rm RM})^{1-m}$ for $m<1$. Note that in the above calculation, the power-law model given by Eq.(\ref{xiu}) and Eq.(\ref{Du}) has been used over the full integration range. However, the power-law model reflects the robust scaling in the inertial range, and is satisfied only for $l_0<l<l_{\rm RM}\sim L$ for a steep spectrum ($\alpha<-3$), and for $l_0\sim l_{\rm RM}<l< L$ for a shallow spectrum ($\alpha>-3$).
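Combining the two limiting cases above with the $m$--$\alpha$ relations (Eqs. \ref{m1} and \ref{m2}), the inertial-range log-slope of $D_{\rm RM}(l)$ reduces to $-(\alpha+2)$ in both regimes. A sketch that makes this explicit (our own helper, not from the paper):

```python
# Inertial-range log-slope of D_RM(l) implied by the two cases above.
def d_rm_slope(alpha):
    if alpha < -3:            # steep: m = -(alpha+3); case (1), l < l_RM, D_RM ∝ l^(1+m)
        m = -(alpha + 3.0)
        return 1.0 + m
    else:                     # shallow (-3 < alpha < -2): m = alpha+3 < 1; case (2), D_RM ∝ l^(1-m)
        m = alpha + 3.0
        return 1.0 - m

# Both branches reduce to D_RM(l) ∝ l^{-(alpha+2)}:
print(d_rm_slope(-11.0 / 3.0))  # 5/3 for Kolmogorov scaling
print(d_rm_slope(-2.2))         # 0.2, matching the measured D_RM(t) ∝ t^{0.2-0.4}
```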
For a thick Faraday screen with thickness $\Delta R>l_{\rm RM}$, which is the case of interest here, the RM structure function is finally given by \begin{align} D_{\rm RM}(l)\sim \left\{ \begin{aligned} &\sigma_{\rm RM}^2\Delta R l\fraction{l}{l_{\rm RM}}{-(\alpha+3)},&&l_0<l<l_{\rm RM}\sim L,\\ &\sigma_{\rm RM}^2\Delta R l_{\rm RM},&&l>l_{\rm RM}\sim L, \end{aligned} \right. \end{align} for a steep spectrum ($\alpha<-3$), and \begin{align} D_{\rm RM}(l)\sim \left\{ \begin{aligned} &\sigma_{\rm RM}^2\Delta R l\fraction{l}{l_{\rm RM}}{-(\alpha+3)},&&l_0\sim l_{\rm RM}<l<L\sim\Delta R,\\ &\sigma_{\rm RM}^2\Delta R^2\fraction{\Delta R}{l_{\rm RM}}{-(\alpha+3)},&&l\gtrsim L\sim\Delta R, \end{aligned} \right. \end{align} for a shallow spectrum ($-3<\alpha<-2$) and $L\sim\Delta R$. Notice that in the above equations the condition $D_{\rm RM}(l)\sim\text{constant}$ for $l\gtrsim \min (L,\Delta R)$ has been used, because RMs separated by $l\gtrsim\min (L,\Delta R)$ should be independent. \section{Faraday rotation measure from a relativistic pair plasma}\label{RMpair} In this appendix, we discuss the RM contributed by a pulsar wind. We consider the general dispersion relation of a pair plasma that does not satisfy charge neutrality.
For electromagnetic waves with wavevector $k$ and angular frequency $\omega$, the dispersion relations of right and left circularly polarized waves are \citep[e.g.,][]{Stix92} \begin{align} \frac{c^2k^2}{\omega^2}&=1-\sum_s\frac{\omega_{ps}^2}{\omega(\omega+\omega_{Bs})}\simeq1-\frac{\omega_p^2}{\omega^2}+\frac{\omega_p^2\omega_B}{\mathcal{M}\omega^3}~~~{\rm for~R~mode},\nonumber\\ \frac{c^2k^2}{\omega^2}&=1-\sum_s\frac{\omega_{ps}^2}{\omega(\omega-\omega_{Bs})}\simeq1-\frac{\omega_p^2}{\omega^2}-\frac{\omega_p^2\omega_B}{\mathcal{M}\omega^3}~~~{\rm for~L~mode}, \end{align} where $\mathcal{M}$ is the pair multiplicity, $\omega_p^2=\omega_{pe^+}^2+\omega_{pe^-}^2$ is the total plasma frequency, $\omega_{pe^+}$ and $\omega_{pe^-}$ are the plasma frequencies of positrons and electrons, respectively, $\omega_B=\omega_{Be^+}=-\omega_{Be^-}=eB/m_ec$ is the electron cyclotron frequency, and here we assume that $n_{e^-}<n_{e^+}$ and $\omega\gg\omega_B$. We define the laboratory frame as $K$ and the pulsar wind comoving frame as $K'$. For an electromagnetic wave with frequency $\omega$ and wavevector $k$, the Lorentz transformations of the frequency and wavevector between the two frames are \be \omega'=\gamma\left(\omega-k_\parallel c\beta\right),~~~~~~~~k_\parallel'=\gamma\left(k_\parallel-\frac{\omega\beta}{c}\right)~~~~~{\rm and}~~~~~k_\perp'=k_\perp, \ee where $\beta=v/c$ is the dimensionless velocity. If the wavevector is almost along the line of sight, one has \be n'=\frac{n-\beta}{1-n\beta},~~~~~~n=\frac{n'+\beta}{1+n'\beta}. \ee Approximately, one has \be n^2=\left(\frac{n'+\beta}{1+n'\beta}\right)^2 =1-\frac{1}{\gamma^2(1+n'\beta)^2}(1-n'^2) \simeq1-\frac{1}{4\gamma^2}(1-n'^2), \ee for $n'\sim1$ and $\beta\sim1$, the case of interest here. In the $K'$ frame, one has \be n_e'\sim \frac{n_e}{\gamma},~~~~~B_\parallel'\sim B_\parallel,~~~~~\omega'\sim\frac{\omega}{2\gamma}.
\ee We write the dispersion relation of the right/left circularly polarized wave in the $K'$ frame as \be \frac{c^2k'^2}{\omega'^2}\simeq1-\frac{\omega_p'^2}{\omega'^2} \pm \frac{\omega_B'\omega_p'^2}{\mathcal{M}\omega'^3}, \ee for $\omega'\gg\omega_B'$. Note that the condition $\omega'\gg\omega_B'$ is necessary if one observes Faraday rotation with a polarization angle satisfying $\Delta\phi\propto\nu^{-2}$. We define $r_c$ via \be \omega_B'(r_c)\sim\omega', \ee and the magnetic field at $r_c$ is \be B(r_c)\sim \frac{2\pi m_ec}{e}\left(\frac{\nu}{\gamma}\right)\simeq3.6~{\rm G}\gamma_2^{-1}\nu_9. \ee The classical Faraday rotation applies only for $r>r_c$. The dispersion relation in the laboratory frame is \be \frac{c^2k^2}{\omega^2}\simeq1-\frac{1}{4\gamma^2}\left(\frac{4\gamma\omega_p^2}{\omega^2} \mp \frac{8\gamma^2\omega_B\omega_p^2}{\mathcal{M}\omega^3}\right)\simeq1-\frac{\omega_p^2}{\gamma\omega^2} \pm \frac{2\omega_B\omega_p^2}{\mathcal{M}\omega^3}. \ee After propagating a distance $d$ from $r_c$, the frequency-dependent polarization position angle is \begin{align} \psi&\simeq\frac{1}{2}\int_{r_c}^d\left|k_R-k_L\right|ds\simeq\frac{1}{2c\omega^2}\int_{r_c}^d\frac{2\omega_B\omega_p^2}{\mathcal{M}}ds\nonumber\\ &=\left(\frac{e^3}{\pi m_e^2c^2}\frac{1}{\mathcal{M}}\int_{r_c}^dn_eB_\parallel ds\right)\nu^{-2}, \end{align} where $\omega\gg\omega_p/\sqrt{\gamma}$ is required for the approximation, which is easily satisfied in the pulsar wind. As shown by the above result, only the net charges contribute to Faraday rotation, and the dependence on the bulk Lorentz factor cancels. According to $\psi={\rm RM}\lambda^2$, the effective RM can be written as \be {\rm RM}=\frac{e^3}{\pi m_e^2c^4}\frac{1}{\mathcal{M}}\int_{r_c}^dn_eB_\parallel ds, \ee which is suppressed by a factor of $\mathcal{M}/2$ compared with the classical result. The above result assumes that the plasma is cold in the $K'$ frame.
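The two numbers appearing in this derivation can be checked with a short cgs calculation. A sketch with our own helper functions; the fiducial multiplicity below is illustrative, not a value from the paper:

```python
import math

# cgs constants
m_e, c_light, e_ch = 9.109e-28, 2.998e10, 4.803e-10

def B_rc_gauss(nu_hz, gamma):
    # Field at the radius r_c where the comoving wave frequency equals the
    # cyclotron frequency: B(r_c) = 2*pi*m_e*c*nu / (e*gamma).
    return 2.0 * math.pi * m_e * c_light * nu_hz / (e_ch * gamma)

def rm_suppression_factor(multiplicity):
    # Cold pair plasma: RM is smaller than the classical magneto-ionic
    # result by M/2, with M the pair multiplicity.
    return multiplicity / 2.0

print(B_rc_gauss(1e9, 100.0))      # ~3.6 G for nu = 1 GHz, gamma = 100
print(rm_suppression_factor(1e4))  # 5e3
```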
If the plasma is relativistically hot, with a typical Lorentz factor $\gamma_{\rm th}$ in the $K'$ frame, the RM contribution is further suppressed by a factor of $\gamma_{\rm th}^2$ due to the relativistic mass $m_e\rightarrow\gamma_{\rm th} m_e$. One finally obtains \be {\rm RM}=\frac{e^3}{\pi m_e^2c^4}\frac{1}{\gamma_{\rm th}^2\mathcal{M}}\int_{r_c}^dn_eB_\parallel ds. \ee Therefore, compared with the RM contributed by a non-relativistic magneto-ionic (ions+electrons) cold plasma, the RM from a relativistic pair plasma is suppressed by a factor of $\gamma_{\rm th}^2\mathcal{M}/2$. \bsp \label{lastpage}
Title: Exoplanet weather and climate regimes with clouds and thermal ionospheres: A model grid study in support of large-scale observational campaigns
Abstract: With observational efforts moving from the discovery into the characterisation mode, systematic campaigns that cover large ranges of global stellar and planetary parameters will be needed. We aim to uncover cloud formation trends and globally changing chemical regimes due to the host star's effect on the thermodynamic structure of their atmospheres. We aim to provide input for exoplanet missions like JWST, PLATO, and Ariel, as well as potential UV missions ARAGO, PolStar or POLLUX. Pre-calculated 3D GCMs for M, K, G, F host stars are the input for our kinetic cloud model. Gaseous exoplanets fall broadly into three classes: i) cool planets with homogeneous cloud coverage, ii) intermediate temperature planets with asymmetric dayside cloud coverage, and iii) ultra-hot planets without clouds on the dayside. In class ii), the dayside cloud patterns are shaped by the wind flow and irradiation. Surface gravity and planetary rotation have little effect. Extended atmosphere profiles suggest the formation of mineral haze in the form of metal-oxide clusters (e.g. (TiO2)_N). The dayside cloud coverage is the tell-tale sign for the different planetary regimes and their resulting weather and climate appearance. Class (i) is representative of planets with a very homogeneous cloud particle size and material compositions across the globe (e.g., HATS-6b, NGTS-1b), classes (ii, e.g., WASP-43b, HD\,209458b) and (iii, e.g., WASP-121b, WP0137b) have a large day/night divergence of the cloud properties. The C/O ratio is, hence, homogeneously affected in class (i), but asymmetrically in class (ii) and (iii). The atmospheres of class (i) and (ii) planets are little affected by thermal ionisation, but class (iii) planets exhibit a deep ionosphere on the dayside. Magnetic coupling will therefore affect different planets differently and will be more efficient on the more extended, cloud-free dayside.
https://export.arxiv.org/pdf/2208.05562
\title{Exoplanet weather and climate regimes with clouds and thermal ionospheres} \subtitle{A model grid study in support of large-scale observational campaigns} \author{Christiane Helling \inst{1,2,3} \and Dominic Samra \inst{1,2} \and David Lewis \inst{1,2} \and Robb Calder \inst{2} \and Georgina Hirst \inst{2} \and Peter Woitke \inst{1,2} \and Robin Baeyens \inst{4} \and Ludmila Carone \inst{2} \and Oliver Herbort \inst{1,2,5} \and Katy L. Chubb \inst{2}} \institute{ Space Research Institute, Austrian Academy of Sciences, Schmiedlstrasse 6, A-8042 Graz, Austria\\ \email{Christiane.Helling@oeaw.ac.at} \and Centre for Exoplanet Science, School of Physics \& Astronomy, University of St Andrews, North Haugh, St Andrews, KY169SS, UK \and Institute for Theoretical Physics and Computational Physics, Graz University of Technology, Petersgasse 16, 8010 Graz \and Institute of Astronomy, KU Leuven, Celestijnenlaan 200D, 3001, Leuven, Belgium } \date{Received September 15, 1996; accepted March 16, 1997} \abstract {Gaseous exoplanets are the targets that enable us to fundamentally test our understanding of planetary physics and chemistry. With observational efforts moving from the discovery into the characterisation mode, systematic campaigns that cover large ranges of global stellar and planetary parameters will be needed to disentangle the diversity of exoplanets and their atmospheres, all of which are affected by their formation and evolutionary paths. Ideally, the spectral range includes the high-energy (ionisation) and the low-energy (phase-transition) processes, as they carry complementary information about the same object.} {We aim to uncover cloud formation trends and globally changing chemical regimes into which gas-giant exoplanets may fall due to the host star's effect on the thermodynamic structure of their atmospheres. We aim to examine the emergence of an ionosphere as an indicator of potentially asymmetric magnetic field effects on these atmospheres.
We aim to provide input for exoplanet missions like JWST, PLATO, and Ariel, as well as potential UV missions ARAGO, PolStar or POLLUX on LUVOIR.} {Pre-calculated 3D GCMs for M, K, G, F host stars are the input for our kinetic cloud model for the formation of nucleation seeds, the growth to macroscopic cloud particles and their evaporation, gravitational settling, element conservation and gas chemistry.} {Gaseous exoplanets fall broadly into three classes: i) cool planets with homogeneous cloud coverage, ii) intermediate temperature planets with asymmetric dayside cloud coverage, and iii) ultra-hot planets without clouds on the dayside. { In class ii),} the dayside cloud patterns are shaped by the wind flow and irradiation. Surface gravity and planetary rotation have little effect. For a given effective temperature, planets around K dwarfs rotate faster than those around G dwarfs, leading to larger cloud inhomogeneities in the fast-rotating case. Extended atmosphere profiles suggest the formation of mineral haze in the form of metal-oxide clusters (e.g. (TiO$_2$)$_{\rm N}$). } {The dayside cloud coverage is the tell-tale sign for the different planetary regimes and their resulting weather and climate appearance. Class (i) is representative of planets with a very homogeneous cloud particle size and material composition across the globe (e.g., HATS-6b, NGTS-1b), while classes (ii, e.g., WASP-43b, HD\,209458b) and (iii, e.g., WASP-121b, WP0137b) have a large day/night divergence of the cloud properties. The C/O ratio is, hence, homogeneously affected in class (i), but asymmetrically in classes (ii) and (iii). The atmospheres of class (i) and (ii) planets are little affected by thermal ionisation, but class (iii) planets exhibit a deep ionosphere on the dayside. Magnetic coupling will therefore affect different planets differently and will be more efficient on the more extended, cloud-free dayside.
How the ionosphere connects atmospheric mass loss at the top of the atmosphere with deep atmospheric layers needs to be investigated to coherently interpret high-resolution observations of ultra-hot planets. } \keywords{ Planets and satellites: gaseous planets -- Planets and satellites: atmospheres -- Planets and satellites: composition -- (Stars:) brown dwarfs } \section{Introduction} \label{section:Intro} The diversity of exoplanets known so far\footnote{\url{http://exoplanet.eu}} calls for concerted modelling efforts in order to optimally access the information content of the observational data. Mission concepts for deciphering rocky exoplanets and, in the ideal case, an exo-Earth (\citealt{2020arXiv200106683G,2021arXiv210404824T,quanz2021atmospheric}) are being developed, and a fleet of exoplanet missions is under development at ESA\footnote{\url{https://sci.esa.int/web/exoplanets}, \url{https://exoplanets.nasa.gov/discovery/exoplanet-catalog/}} and at NASA\footnote{\url{https://exoplanets.nasa.gov/exep/about/missions-instruments/}}. The planets that can be studied with today's observational facilities in considerable detail (i.e. atmosphere characterisation) are planets that orbit close to their host star, like most of the known gas giants outside the solar system. The number of directly imaged planets, i.e.~those orbiting at a substantial distance from their host star, is increasing due to massive observational efforts, for example with SPHERE at the VLT (\citealt{2021A&A...651A..71L}). Unless in discovery mode, observations will need to focus on specific objects to maximise the scientific return of their instruments, for example for JWST, or future missions like PLATO and in the UV (e.g. PolStar \cite{scowen2021polstar}; ARAGO \cite{2019BAAS...51c.219N}; POLLUX at LUVOIR~\citep{18BoNeGo}).
Ariel, however, will make it possible to observe a large ensemble of gas planets, moving beyond single observations towards the study of 1000 transiting exoplanets and thus towards comparative planetology for exoplanets. With JWST, comparative observations of several gas-dominated exoplanets will be possible, for example, comparisons between hot Jupiters of different equilibrium temperatures and planetary rotations orbiting K and G dwarf stars. Modelling efforts make it possible to span ranges of global parameters (\citealt{2016ApJ...828...22P,2021MNRAS.501...78P,2021MNRAS.tmp.1277B}) that are wider than what instrument target lists can afford, and are therefore valuable tools to put individually observed objects into context, as demonstrated in Fig.~\ref{fig:global_properties}. They are necessary to provide context for future ensemble studies. Questions to explore include the effect of stellar evolution on planetary companions, but also in-depth studies of the effect of the host star's radiation field on cloud formation and on the formation of a thermal ionosphere (i.e. a region of a sufficiently high number of charges { that enables plasma behaviour, for example by coupling to the ambient magnetic field (\citealt{2015MNRAS.454.3977R,2021A&A...648A..80H}})), as well as the effect of different planetary rotations on the wind flow and thus on cloud formation. The study of cloud formation is not only key to understanding the atmospheric chemistry that will be observed with space observatories like JWST, PLATO, Ariel, and LUVOIR (e.g. \citealt{2020A&A...642A..28M}) but also with ground-based telescopes like the VLT and the ELTs. Understanding cloud formation has gained further momentum due to the role that cloud particles may play in aerial biospheres (\citealt{2017ApJ...836..184Y,2021Univ....7..172S}).
We address the question of how cloud formation is affected by global parameters like the planetary effective temperature and rotation period by utilising a grid of 48 3D General Circulation Models (GCMs) that includes M, K, G and F stars as planetary host stars. Key properties, like nucleation rate and particle sizes, are selected to study how cloud formation and the resulting global distribution of clouds change with changing global parameters of the star-planet system (global temperatures, type of host star; Sect.~\ref{section:Cloud_regimes}). We supplement this part of our grid study with a catalogue that contains the complete set of cloud (nucleation rate, mean particle sizes, material properties, dust-to-gas ratios) and derived gas (C/O ratio, degree of ionisation, mean molecular weight) properties for all the models included in this study (\citealt{Lewis2022}). We follow this up by presenting integrated properties that help to discern correlated cloud property trends (Sect.~\ref{section:Cloud_regimes}). Sect.~\ref{sec:3cases} studies three selected cases that are representative of cloud formation regimes as exoplanet tell-tale signs for weather and possible climate regimes. This specific study is followed up by addressing the effect of the outer boundary of our computational domain on our results (Sect.~\ref{section:extrapolation_gas_and_cloud_results}), which leads us to suggest the formation of mineral hazes in the form of metal-oxide clusters in the atmospheric region of local pressure $< 10^{-8}$ bar. Sect.~\ref{section:observational_implications} presents observational implications in terms of the Transmission Spectroscopy Metric TSM, the $p(\tau(\lambda)=1)$-levels, and the wavelength-dependent albedo. This paper presents all results for a planetary $\log_{10}(g\,{\rm [cgs]})=3.0$. In this first cloud-grid study we focus on the interplay between the 3D wind flow and temperatures for different planetary rotations and how they affect equilibrium chemistry and cloud formation.
Disequilibrium chemistry only becomes important for effective temperatures lower than 1400~K and high radiation fields. Here we address the formation of mineral cloud particles from collision-dominated gases, where equilibrium chemistry holds well. The C/N/O/H non-equilibrium will have little effect on our results \citep{helling2020mineral}. \section{Approach} \label{section:Approach} \subsection{3D GCM input} \label{section:3D_GCM} We utilise the grid of 3D GCMs for tidally-locked irradiated planetary atmospheres from \cite{2021MNRAS.tmp.1277B}. The grid spans host stars of spectral types M5, K5, G5 and F5, which have T$_{\rm eff} = 3100, 4250, 5650, 6500~{\rm K}$, respectively, and planetary effective temperatures of T$_{\rm eff, P} = 400\, ...\, 2600$~K (in 200~K spacing). While the grid of \cite{2021MNRAS.tmp.1277B} also varies the gravity, the present study addresses the $\log_{10}(g\,{\rm [cgs]})$ = 3 models only. All model planets are assumed to have the same radius of 1.35~R$_{\rm Jup}$, and a constant mean molecular weight of $\mu=2.3$ is assumed for the atmospheric modelling. The atmospheric circulation in the 3D climate models utilised here is driven using parametrized radiative transfer (Newtonian relaxation) towards radiative-convective equilibrium \citep{Carone2020, 2021MNRAS.tmp.1277B}. As the equilibrium state, chemical equilibrium abundances with solar metallicity and solar C/O ratio have been assumed. This Newtonian-relaxation approach results in a computationally efficient 3D model of the atmospheric circulation, which qualitatively agrees with those produced by self-consistent GCMs. As such, its use enables large grid studies of the 3D climate. For an in-depth comparison between a Newtonian-relaxed model and a GCM with self-consistent radiative coupling, see \cite{2022arXiv220209183S}. The planetary rotation is determined by the orbital period under the assumption of synchronous rotation (P$_{\rm rot}$ = P$_{\rm orb}$) and ranges from $P=0.03$ to 256~days.
For all except the longest orbital periods, the assumption of synchronous rotation is valid based on timescale arguments \citep{2021MNRAS.tmp.1277B}. For a given planetary equilibrium temperature, M and K dwarf planets orbit closer to their host stars than planets around G and F stars. As such, planets around cooler stars, e.g.~NGTS-10b (\citealt{2020MNRAS.493..126M}), HATS-6b (\citealt{2015AJ....149..166H}), or WASP-43b, are fast rotators, which may impact the planet's climate. One result of fast rotation is an increased day-night temperature contrast (\citealt{Carone2020,2021MNRAS.tmp.1277B}), which sets the GCMs for these planets apart from models for more typical hot Jupiters (e.g.~\citealt{Parmentier2018,Drummond2018a,mendonca2018b}). Large gas planets around M dwarfs like HATS-6b are rare and challenge the present understanding of planet formation \citep[e.g.][]{Kennedy2008,Morales2019,Bayliss2018}. Thus, such planets pose interesting targets for future characterization with JWST. The very short period corner of the parameter space may also be used to represent gas planets that orbit white dwarfs, for example WD 1856+534 b with an orbital period of 1.4 days, or brown dwarf -- white dwarf pairs (e.g., WD 0137-349B with P= 0.0803 days, \citealt{2020MNRAS.496.4674L}). For the grid study conducted in this paper, we extract 48 1D (T$_{\rm gas}$, p$_{\rm gas}$, $\vec{v}(x,y,z)$) profiles per 3D GCM atmospheric model. The sampled latitudes are $\theta = 0^{\degree}$ (the equator) and $\theta = 45^{\degree}$, and the sampled longitudes span $\phi = -165^{\degree} {~\rm to~} 180^{\degree}$ in $15^{\degree}$ spacing. The morning and evening terminators are given by $\phi = 270^{\degree}$ and $\phi = 90^{\degree}$, respectively. In a small fraction of vertical temperature profiles extracted from the GCMs, spurious unphysical variations have been found, which are likely related to numerical artifacts, possibly due to the parameterized radiative forcing.
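The profile-sampling geometry described above (two latitudes, 24 longitudes in $15^{\degree}$ steps, giving 48 columns per GCM) can be sketched as follows; the array layout is purely illustrative, not the actual extraction code:

```python
import numpy as np

# Sketch of the 1D-profile sampling geometry described in the text:
# latitudes theta = 0 deg (equator) and 45 deg, longitudes from -165 deg
# to 180 deg in 15-degree steps -> 2 x 24 = 48 (T, p, v) columns per model.
latitudes = [0.0, 45.0]                               # [deg]
longitudes = np.arange(-165.0, 180.0 + 1.0e-6, 15.0)  # [deg], 24 values

sample_points = [(theta, phi) for theta in latitudes for phi in longitudes]
```

With this spacing the evening terminator ($\phi = 90^{\degree}$) is sampled directly, while the morning terminator at $\phi = 270^{\degree}$ corresponds to $\phi = -90^{\degree}$ in this longitude range.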
Such profiles have been omitted from our cloud analysis. In the end, we included 85\% of the G star profiles and 86\% of the M star profiles in our investigations, as well as 100\% of the 1D atmosphere profiles for the K and the F star planets. We further note that 3D GCMs exhibit notoriously slow temperature evolution in the deep atmosphere \citep{Carone2020, Wang2020, 2022arXiv220209183S}. This has largely been mitigated in the grid of \citet{2021MNRAS.tmp.1277B} by starting from a hot initial adiabatic temperature profile, but some 3D GCMs are still not fully converged for p$_{\rm gas}>100$~bar, leading to locally oscillating (T$_{\rm gas}$, p$_{\rm gas}$) structures in these innermost atmospheric regions. The main results in this paper will not be altered by these uncertainties, since the high gas pressures stabilise the cloud particles over a substantial temperature range in these deep atmospheric layers (see, for example, Fig.~14 in \citealt{2021A&A...649A..44H}), which sit deep in the optically thick part of the atmosphere. WASP-43b is one example where cloud formation reaches deep into these inner atmospheric regimes (\citealt{2021A&A...649A..44H}). We include these unconverged regions nevertheless in order to explore how deep inside the atmosphere clouds could form for the different global parameters covered in our grid. \paragraph{Corners of the grid's global parameter space:} The model grid spans a large range of global parameters that is not yet completely populated with known exoplanet equivalents. We therefore point out two corners of the grid's parameter space that may appear unrealistic at first glance: 1) Planets with $P<35$\,d can safely be assumed to be tidally locked. Tidally locked planets with longer orbital periods may nevertheless occur in older systems; Mercury, for example, has a period of 90\,d.
One should, however, expect a spin evolution for exoplanets during their migration through the planet-forming disks. For example, brown dwarfs undergo a spin-down evolution which may align them with planets (\citealt{2018ApJ...859..153S}). 2) Giant gas planets with log(g) = 2 [cgs] = 100 cm\,s$^{-2}$ have not been discovered so far. Nevertheless, there is a very small sub-class of objects, so-called super-puffs, with very small bulk densities: for example, the Kepler-51 planets with orbital periods of 45, 85 and 130 days and with densities below 0.1~g\,cm$^{-3}$ (\citealt{2014ApJ...783...53M,2020AJ....159...57L}). While 3D GCMs for very low surface gravity exist, they are not included in the present cloud study. \subsection{Cloud formation and gas-phase modelling} \label{section:CF_and_gas_phase} Cloud formation has become an important piece of physico-chemistry for exoplanet research, and various groups have worked on understanding cloud formation in the context of atmospheric environments. Model approaches have been compared in \cite{2008MNRAS.391.1854H} and summarised within the atmosphere-modelling context in \cite{2018ApJ...854..172C} and \cite{2020RAA....20...99Z}. A discussion of the cloud formation models in the exoplanet community, as well as of the approach applied in the planet-forming disk community compared to the modelling approach used here, can be found in the recent paper of \cite{2022A&A...663A..47S}. The local gas-phase abundances are calculated assuming chemical equilibrium by applying {\sc GGChem}, which is part of our cloud formation code. The gas phase is assumed to be in chemical equilibrium throughout the simulation. This, however, does not imply phase equilibrium for the condensate species considered for cloud formation. We addressed the small differences that may be imposed by kinetic gas-phase effects on the gas composition in the cloud forming regions in \cite{2020A&A...635A..31M}. Out of the total set of elements considered for the gas phase, only 11 elements (Mg, Si, Ti, O, Fe, Al, Ca, S, K, Cl, Na) participate in the bulk growth of the cloud particles, and only 6 (Ti, Si, O, K, Cl, Na) participate in the formation of cloud condensation nuclei. Within our kinetic cloud formation approach, we treat the formation of 4 nucleation species (TiO$_2$, SiO, KCl, NaCl) that form the cloud condensation nuclei and determine the total nucleation rate ($J_*$ [cm$^{-3}$s$^{-1}$]). We use modified classical nucleation theory (e.g., see \citealt{helling2013RSPTA,2018A&A...614A.126L}), the results of which we compare for TiO$_2$ to a Monte Carlo approach treating individual cluster collisions (\citealt{2021A&A...654A.120K}). The total nucleation rate determines the number of cloud particles that form locally and grow to macroscopic sizes by the condensation of 16 materials through 132 gas-surface growth reactions. The cloud particle sizes are expressed as local mean cloud particle radii $\langle a\rangle$ [$\mu$m] (\citealt{Woitke2003,Woitke2004,2004A&A...423..657H,Helling2006,2008A&A...485..547H}). For a recent review, see \cite{2022arXiv220500454H}. We present our results in terms of surface-averaged cloud particle radii, which are more representative of the particles' effect on the local opacity (see Sect.~\ref{amean}). In total, we solve 31 ODEs in order to model the formation of cloud particles as a sequence of nucleation, surface growth/evaporation, gravitational settling, element replenishment and element conservation. The undepleted gas is assumed to be of solar composition. We have also undertaken an update of our evaporation routine. The updated modelling of the vertical mixing is described below.
\subsection{Treatment of vertical mixing} \label{section:vertical_mixing} Atmospheric transport processes remain challenging in combination with cloud formation modelling. Within a hydrodynamic framework, there is advective but also diffusive transport. Both components, gas and cloud particles, will move with the same velocity if the cloud particles are frictionally coupled to the gas phase. Cloud particles move with different velocities than the gas if an additional force, like gravity, causes a frictional decoupling. Gravitational decoupling is treated as a consistent part of our kinetic cloud model (\citealt{Woitke2003}). Hydrodynamic transport processes that cause a vertical transport, however, are either parameterized \citep[e.g.][]{parmentier20133d,2019A&A...631A..79H,2021A&A...649A..44H,2021MNRAS.504.2783S} or derived from the hydrodynamic velocity field as described in Appendix~\ref{s:diffmix}. In this paper, we apply two different approaches for two different domains: a) Within the 3D GCM computational domain ($p_\textrm{gas} > 10^{-4}$~bar): The cloud modelling within the computational domain of the whole grid of 3D models applies a different treatment of the vertical mixing source term than in previous papers (\citealt{2019A&A...631A..79H,2021A&A...649A..44H}), calculating the standard deviation based on adjacent grid cells. The details are outlined in Appendix~\ref{s:diffmix}. The respective mixing time scale is calculated from Eq.~\ref{final}. b) Above the 3D GCM computational domain ($p_\textrm{gas} < 10^{-4}$~bar; Sect.~\ref{section:extrapolation_gas_and_cloud_results}): No information about the local velocity fields is available outside the computational domain of the 3D GCMs. Hence, we adopt the last value of the vertical velocity from the original, non-extrapolated profile throughout the extrapolated regime, such that the velocity is constant there.
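As a rough plausibility check for treating the gas hydrodynamically at low pressures, one can compare the molecular mean free path with the pressure scale height. The sketch below uses assumed illustrative numbers (H$_2$ cross-section, temperature, gravity) and is not the criterion derived in the appendix:

```python
import numpy as np

# Back-of-the-envelope Knudsen number Kn = (mean free path) / (scale height)
# for a neutral H2 gas; all parameter values are assumed illustrative numbers.
K_B = 1.380649e-16        # Boltzmann constant [erg K^-1] (cgs)
M_U = 1.66053907e-24      # atomic mass unit [g]
SIGMA_H2 = 2.7e-15        # assumed H2-H2 collision cross-section [cm^2]
MU = 2.3                  # mean molecular weight
G = 1.0e3                 # surface gravity [cm s^-2], i.e. log g = 3
T = 1000.0                # assumed gas temperature [K]

def knudsen(p_bar):
    p = p_bar * 1.0e6                                # 1 bar = 1e6 dyn cm^-2
    mfp = K_B * T / (np.sqrt(2.0) * SIGMA_H2 * p)    # mean free path [cm]
    H = K_B * T / (MU * M_U * G)                     # pressure scale height [cm]
    return mfp / H

# Kn scales as 1/p: the gas is strongly collisional at 1 bar, and the fluid
# picture only becomes questionable at very low pressures.
```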
We demonstrate the validity of the hydrodynamic assumptions within the extrapolated atmosphere profiles in Appendix~\ref{section:hydrodynamics_validity}. The hydrodynamic assumption would break down at p$_{\rm gas}\approx 10^{-9}$~bar if the collisional processes within the atmospheric gas were only due to neutral molecules (here: H$_2$). This threshold moves to pressures as low as p$_{\rm gas}\approx 10^{-15}$~bar if the atmospheric gas is ionised (see also \citealt{Debrecht2020}). The increased degree of ionisation in exoplanet (and brown dwarf) atmospheres has been demonstrated in \cite{Barth2021} to be due to photochemical effects as well as to Lyman continuum ionisation by the interstellar radiation field (\citealt{2018A&A...618A.107R}). \section{Results} \label{section:Results} The leading aim of this study is to identify global cloud formation trends (Sect.~\ref{section:Cloud_regimes}) and globally changing chemical regimes (Sect.~\ref{section:chemical_regimes}) depending on the global parameters of the star-planet system, with a view to upcoming space missions like JWST, PLATO and Ariel, but also to potential missions in the UV. The respective mission host stars are covered by the model grid that is utilised here. We concentrate on the stellar effective temperature and the orbital period as global system parameters. The orbital period determines the planetary effective (or global) temperature, and the stellar spectral type is represented by the stellar effective temperature. An overview of the parameter range can be found in Figs.~\ref{fig:global_properties}-\ref{fig:global_slice_plots2}. A secondary aim is to provide a first insight regarding the potential of magnetic coupling by investigating the general trend of thermal ionisation with global parameters in comparison to the cloud location (Sect.~\ref{section:degree_of_ionisation}).
We hope to stimulate follow-up studies of magnetic coupling effects beyond the assumption of an ideal MHD flow that assumes a constant coupling for changing thermodynamic conditions. Our 3D grid study supports the transition found between hot and ultra-hot Jupiters for $T>1800$~K \citep[e.g.][]{2020A&A...639A..36B, 2019NatAs...3.1092K,Showman2020Review,Zhang2020Review,Parmentier2021}: The daysides of ultra-hot planets are cloud-free (Fig.~\ref{fig:global_slice_plots2}), which leads to a low Bond albedo (and thus efficient dayside irradiation). Further, the day-to-nightside heat circulation is very inefficient in the 3D GCMs \citep[see also][]{Perna2012,2017ApJ...835..198K}, leading to strong horizontal temperature gradients (Fig.~\ref{fig:global_slice_plots1}). A low Bond albedo on the dayside and inefficient heat circulation in combination lead to particularly large day-to-nightside emission differences. We note, however, that faster rotators, that is, the M and K dwarf planets, are prone to have an even more pronounced day-to-nightside dichotomy in temperatures and cloudiness than slower rotators, that is, the G and F planets. We note here that we do not include cloud feedback on the temperatures, which may lead to even less heat circulation in ultra-hot Jupiters \citep{Parmentier2021}. However, \citet{Parmentier2021} used a simpler cloud model than the one used in this study. A future study that incorporates cloud feedback will show if this effect can also be reproduced with the microphysical cloud model. Further assumptions may alter the exact temperature threshold between hot and ultra-hot Jupiters, since the 3D atmosphere modelling required additional assumptions to enable the simulations. One such assumption is the mean molecular weight, which we address in Sect.~\ref{section:mean_molecular_weight}.
\subsection{Host star trends of planetary (T$_{\rm gas}$, p$_{\rm gas}$)-structures} \label{section:Tp-struc_trends} A summary of the change in the 3D (T$_{\rm gas}$, p$_{\rm gas}$)-structures demonstrates first trends that will translate into trends in the global cloud structure of these planetary atmospheres. A detailed analysis is presented in \cite{2021MNRAS.tmp.1277B}, and we focus on global trends only. Figures~\ref{fig:global_slice_plots1} and \ref{fig:global_slice_plots2} show the thermal structure of the equatorial plane ($\theta$=0$^{\circ}$) for all the log(g)=3 [cgs] models, and how the local atmosphere temperature changes when the planet orbits closer to its host star (increasing T$_{\rm eff, P}$). All models with T$_{\rm eff, P}\leq800{\rm K}$ have a horizontally/zonally/longitudinally homogeneous temperature distribution. The day-night asymmetry emerges at T$_{\rm eff, P}=800{\rm K}$ and is more pronounced for the faster rotating stellar types. For hotter planets ($T_{\rm eff, P} \geq 1600$~K), advection of cooler air from the nightside onto the dayside can be seen, extending across the evening terminator at 1~bar. This structure is more extended for the F and the G star models, and it does not appear at all for the M star planets. The JWST targets NGTS-10b, HD~209458b (pink square), HD~189733b (brown square) and WASP-63b have equilibrium temperatures between 1200\,K and 1600\,K and orbit G and K stars; hence they represent the transitional regime from homogeneous temperature distributions to pronounced day/night temperature differences (e.g. HAT-P-7b (purple filled circle)) in the 3D GCM grid utilised here. The super-Saturn HATS-6b (yellow triangle) that orbits an M1V host star (d=148.4 ($\pm$ 3.3) pc) at a distance of a=0.03623 AU with a period of P=3.325 d (M$_{\rm P}$=0.319 $M_{\rm Jup}$, R$_{\rm P}$=0.998 $R_{\rm Jup}$) would be represented by the model with T$_{\rm eff, P}$=600\,K, $g=10$\,m\,s$^{-2}$ and an M-type host star.
A homogeneous temperature structure can, hence, be expected for HATS-6b. \subsection{Cloud regimes with changing global system parameters} \label{section:Cloud_regimes} The formation of clouds is triggered by a gas-phase transition leading to the formation of new cloud condensation nuclei (nucleation), unless meteoritic dust re-condenses or the planet under consideration has a rocky surface from which sand particles are transported into the atmosphere. The chemical processes that drive the nucleation process are only partially known (e.g., \citealt{2021A&A...654A.120K} and references therein), and extensive studies are ongoing. The nucleation process is key to where clouds can form, and it is determined by the local thermodynamic conditions. It is therefore important to be able to determine where in the atmosphere nucleation occurs, and with which efficiency, as this is already indicative of changing cloud regimes with changing global stellar-planetary parameters (Sect.~\ref{ss:nuc}). The nucleation rate determines the number of cloud particles that eventually make up the whole cloud (and their distribution); hence, it also influences the size of the cloud particles. Due to element conservation (mass conservation), it is reasonable to generally expect large cloud particles in regions of low nucleation efficiency (Sect.~\ref{amean}). \subsubsection{The formation of cloud condensation nuclei}\label{ss:nuc} The total nucleation rate indicates the efficiency at which nucleation seeds form spontaneously from the gas phase, and is therefore used to identify the cloud formation regime. We consider here the nucleation of four species: TiO$_2$, SiO, KCl, NaCl. We calculate the total nucleation rate as $J_{\rm *, tot}=\sum J_{\rm *, i}$ (i=TiO$_2$, SiO, KCl, NaCl). The equatorial distribution of the total nucleation rate is presented for all models with log(g)=3 [cgs] in Figs.~\ref{fig:global_slice_plots_logg3_nucleation_1} and \ref{fig:global_slice_plots_logg3_nucleation_2}.
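The total rate defined above is simply the sum over the four seed species; as a minimal sketch (the individual rate values are placeholders, not model output):

```python
# Minimal sketch of J_*,tot = sum_i J_*,i for the four nucleation species.
# The individual rates below are placeholder numbers, not model output.
J_star = {
    "TiO2": 1.0e2,   # [cm^-3 s^-1]
    "SiO":  5.0e2,
    "KCl":  1.0e-3,
    "NaCl": 2.0e-3,
}
J_tot = sum(J_star.values())   # total nucleation rate [cm^-3 s^-1]
```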
Broadly, there are two nucleation regimes: one where nucleation occurs throughout the whole atmosphere across the whole globe (T$_{\rm eff, P}\leq 1200$ K), the globally homogeneous nucleation regime, and one where nucleation occurs intermittently or asymmetrically distributed in the atmosphere (T$_{\rm eff, P} >1200$ K), the partial nucleation regime. The lowest global temperature corner of the globally homogeneous nucleation regime is characterised by an extremely efficient nucleation process. In the upper atmosphere of the T$_{\rm eff, P } = 400$ K model, the total nucleation rate has an initial value of approximately $10^{-3} ~{\rm cm^{-3} s^{-1}}$, which rises to a peak of $10^{3} ~{\rm cm^{-3} s^{-1}}$ by $p_{\rm gas} = 10^{-2}$ bar. This is due to the delayed onset of SiO nucleation, which begins at 10$^{-3}$ bar and dominates significantly over the TiO$_{2}$ nucleation between $\sim$10$^{-2.5}$ bar and $\sim$10$^{-0.5}$ bar. In the deep atmosphere, at pressures greater than $p_{\rm gas} = 10^{-0.5}$ bar, the efficiency of the SiO nucleation decreases such that TiO$_{2}$ becomes the dominant nucleation species. Nucleation occurs across almost the entire equatorial plane for the models with T$_{\rm eff, P} \leq 1200$ K, with the dayside nucleation generally reduced compared to the nightside, and a strong day-night asymmetry emerges with increasing planetary temperature. This asymmetry becomes most apparent in the models with T$_{\rm eff, P} \geq 1400$ K, where regions of the dayside atmosphere do not exhibit nucleation. The extent of the dayside atmosphere where nucleation is not possible increases with the global planetary temperature T$_{\rm eff, P}$, starting east of the substellar point, and varies in size between the four stellar types M, K, G and F. The size of the reduced-nucleation region on the dayside is larger for the faster rotating planets at a given temperature, i.e. largest for the M star and smallest for the F star.
The location of this dayside reduction in nucleation coincides with the temperature hot spot, which is offset from the substellar point due to the equatorial jet. Nucleation still occurs on the dayside of these hotter models, but only where cool air has been carried across the morning terminator by the jet. In hotter planetary atmospheres, the radiative response time becomes shorter and cold air is efficiently heated as it reaches the dayside. For T$_{\rm eff, P} \geq 2000$ K there is essentially no nucleation on the dayside of the M star orbiting planets, and the extension of the nucleation across the morning terminator is similar for the K, G and F stars. The nucleation rate is affected not only by the global planetary temperature but also by the gravity, which determines the density structure of the atmosphere. Higher gravity will shift the nucleation emergence towards higher temperature due to an increased thermal stability, as a result of a higher collision rate at increased gas densities. To enable the comparison of the nucleation efficiency across the whole grid of global parameters, column integrated values are considered. We note that the integration column does vary for different planetary atmospheres due to the varying cloud extension as a result of the local thermodynamic conditions (e.g. Fig.~\ref{fig:extrapolated_clouds_2.0}). Figure~\ref{fig:nuc_integrate_scatter} shows the column integrated total nucleation rate for each of the model planets for each stellar type to allow for a comparison of nucleation activity across the whole 3D GCM grid. The range of values for the column integrated nucleation rates is very narrow for both the T$_{\rm eff, P}$= 400 K ($\sim 10^{8}-10^{10} ~{\rm cm^{-2} ~s^{-1}}$) and T$_{\rm eff, P}$= 600 K ($\sim 10^{6}-10^{10} ~{\rm cm^{-2} ~s^{-1}}$) models for all stellar types. For warmer models, the range in values widens and the divide between dayside and nightside becomes more apparent.
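The column integration used for this comparison can be sketched as follows, with a synthetic placeholder profile for the total nucleation rate (a Gaussian in altitude), not actual grid output:

```python
import numpy as np

# Column-integrated total nucleation rate, J_col = integral of J_tot(z) dz
# [cm^-2 s^-1], evaluated with the trapezoidal rule on a uniform grid.
z = np.linspace(0.0, 1.0e8, 2000)                     # altitude [cm]
J_tot = 1.0e3 * np.exp(-((z - 5.0e7) / 1.0e7) ** 2)   # placeholder [cm^-3 s^-1]

dz = z[1] - z[0]
J_col = np.sum(0.5 * (J_tot[1:] + J_tot[:-1])) * dz   # [cm^-2 s^-1]
```

For this Gaussian placeholder the result approaches the analytic value $10^{3}\times10^{7}\sqrt{\pi}\approx 1.8\times10^{10}$\,cm$^{-2}$\,s$^{-1}$, within the range of column-integrated rates quoted above.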
The range of values is larger for the M and K stars ($\sim 10^{3}-10^{10} ~{\rm cm^{-2} ~s^{-1}}$) compared to the G and F stars ($\sim 10^{5}-10^{10} ~{\rm cm^{-2} ~s^{-1}}$) for planetary effective temperatures T$_{\rm eff, P} = 800 - 1200$ K. The small spread of values is representative of the globally homogeneous nucleation regime, where the formation of cloud condensation nuclei is most efficient and possible across the globe. For the T$_{\rm eff, P} = 1400, 1600, 1800$ K models, that range increases slightly, the separation between dayside and nightside again becomes clearer, and nucleation is apparently less efficient at the higher latitude $\theta = 45^{\circ}$ than at the equator. Such a spread in values is representative of the partial nucleation regime; the largest spread represents the atmospheres with the largest cloud formation asymmetry. The largest thermodynamic, and hence nucleation, asymmetry occurs for T$_{\rm eff, P}\geq 2000$ K. For all models, the spread of values for the column integrated total nucleation rate is smaller on the nightside than on the dayside. This reflects the homogeneous local gas temperature of the nightside, whereas there is more variation in the temperatures on the dayside. The range of values on the dayside is reasonably consistent at $\sim 10^{-13}-10^{5} ~{\rm cm^{-2} ~s^{-1}}$ for the T$_{\rm eff, P} = 1400, 1600$ K models, and there is still significant overlap between the dayside and nightside values. A `cone' of diverging integrated nucleation rates emerges as a function of the stellar effective temperature, where the upper limit of the integrated nucleation rate remains roughly similar ($10^{10}$cm$^{-2}$s$^{-1}$). A `bifurcation' occurs at the hotter end, T$_{\rm eff, P} > 1400$K, most clearly for the M5 host star, where there is a clear separation between the dayside and nightside nucleation efficiency.
We conclude that no single value of the nucleation rate is sufficient to describe the first step of cloud formation in exoplanet atmospheres. Only for the coolest atmospheres might the nucleation rate be reasonably described by one value. Here, the \cite{2001ApJ...556..872A} model may well be suited to speed up GCM efficiency. \subsubsection{Mean particle size and dust-to-gas ratio}\label{amean} Cloud particle sizes are essential to calculate the cloud opacity and are often seen as an observationally accessible test of cloud properties and of cloud models, for example \cite{1998Sci...282.2063G,2002ApJ...568..335M,2006ApJ...648..614C, LeeHengIrwin2013,2016ApJ...830...96H,Benneke2019,Lacy2020}. We reiterate previous results showing that exoplanet and brown dwarf clouds cannot be characterised by one particle size only (\citealt{Helling2006,2017A&A...603A.123H,2019AREPS..47..583H}). Here, however, the focus is on potential trends that might serve as input for automated retrieval efforts. We therefore choose to represent the mean cloud particle size in terms of the surface-averaged mean particle size, $\langle a \rangle_{A}$. The dust-to-gas ratio is a helpful property to locate the cloud mass load and to compare to other astrophysical objects where condensation processes take place (e.g. AGB stars, Wolf-Rayet stars, supernovae). The surface-averaged mean particle size, $\langle a \rangle_{A}$ [cm], is \begin{equation} \centering \langle a\rangle_{\rm A} = \sqrt[3]{\frac{3}{4\pi}}\, \frac{L_3}{L_2}, \label{eq:surf_size} \end{equation} with $L_{2}$ and $L_{3}$ the second and third dust moments \citep[Eq.~A.1 in][]{helling2020mineral}. Further discussion of the mean particle size and the differing definitions can be found in Appendix A of \citet{helling2020mineral}.
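Equation~(\ref{eq:surf_size}), together with the column-integrated, number-density weighted average defined in Eq.~(\ref{eq:aa}) below, can be sketched numerically as follows; the dust-moment and density profiles are synthetic placeholders, not model output:

```python
import numpy as np

def trapz(y, x):
    # simple trapezoidal rule (kept local so the sketch is self-contained)
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

def a_mean_A(L2, L3):
    """Surface-averaged mean radius <a>_A = (3/(4 pi))^(1/3) * L3/L2  [cm]."""
    return (3.0 / (4.0 * np.pi)) ** (1.0 / 3.0) * L3 / L2

def column_mean_size(z, rho, L3, aA):
    """<<a>_A>: column-integrated, number-density weighted mean size."""
    n_d = rho * L3 / (4.0 * np.pi * aA**3 / 3.0)   # cloud particle number density
    return trapz(n_d * aA, z) / trapz(n_d, z)

# synthetic placeholder profiles
z = np.linspace(0.0, 1.0e7, 100)        # altitude [cm]
rho = 1.0e-6 * np.exp(-z / 5.0e6)       # gas density [g cm^-3]
L2 = np.full_like(z, 1.0e-12)           # second dust moment (placeholder)
L3 = np.full_like(z, 1.0e-16)           # third dust moment (placeholder)

aA = a_mean_A(L2, L3)                   # here constant, ~0.6 micron
```

With constant moment profiles the weighted column average simply recovers $\langle a \rangle_{\rm A}$ itself; with height-dependent moments, the weighting by $n_{\rm d}(z)$ matters.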
The column integrated, number density weighted, surface averaged mean particle size is \begin{equation} \label{eq:aa} \langle \langle a \rangle_{\rm A} \rangle = \frac{\int_{z_{min}}^{z_{max}} n_{d}(z)\langle a \rangle_{\rm A}(z) dz } { \int_{z_{min}}^{z_{max}} n_{d}(z) dz} \quad \mbox{with}\quad n_{\rm d}(z) = \frac{\rho(z) L_3(z)}{4\pi \langle a(z)\rangle_{\rm A}^3/3}. \end{equation} The column-integrated properties are used to compare the cloud particle size within the grid of 3D GCM model atmospheres. All detailed results for $\langle a \rangle_{A}$, as well as for the dust-to-gas ratio, $\rho_{\rm d}/\rho_{\rm gas}$, are provided in the supplementary catalogue (\citealt{Lewis2022}). A summary is given here as a link for the understanding of the column integrated plots, as well as for the comparison with the degree of thermal ionisation in Sect.~\ref{section:degree_of_ionisation}. The surface-averaged mean particle sizes, $\langle a \rangle_{A}$, range from $10^{-2}\mu$m to $\approx 10^{4}\mu$m. The largest particle sizes correlate with either low nucleation rates (in hot planetary atmospheres) or very high local densities (deep inside the planetary atmospheres). The local cloud particle distribution covers a larger volume of the planetary atmosphere than the nucleation rate due to transport processes like gravitational settling. For all models with T$_{\rm eff, P} \geq 800$K, cloud particles on the dayside, where they exist, are on average larger than those on the nightside by 2-3 orders of magnitude. Cloud particles are generally not found on the dayside for models with T$_{\rm eff, P} \geq 2000$~K, except where the deep equatorial jet permits some cloud formation at 10$^{-2}$ bar. Figure \ref{fig:mean_particle_size_integrate_scatter} shows the global distribution of the average particle sizes in terms of their column integrated, number density weighted values. The results complement the nucleation rate results in Fig.
\ref{fig:nuc_integrate_scatter}: In regions of low nucleation efficiency the average particle sizes are large. The range of integrated average particle sizes spans several orders of magnitude, $\langle \langle a \rangle_{\rm A} \rangle \sim 10^{-3}-10^{5}\mu$m. The distribution follows the same `cone'-like divergence structure of the average dayside particle size compared to the nightside values, which are consistently in the range $10^{-3}-10^{-1} {\rm \mu m}$. For the colder models, the dust-to-gas mass ratio (Figs. 7 and 8 in the supplementary catalogue in \citealt{Lewis2022}) is lower across the dayside, especially near the morning terminator, hence demonstrating the lower cloud formation efficiency in these atmospheric regions. For models with T$_{\rm eff, P} \geq 1600$K, the dayside cloud formation is limited to the morning terminator regions on the dayside. This layer of cloud formation is more extended in pressure for the slower rotators. As the planetary effective temperature increases, this structure reduces in size until there is only limited dayside cloud formation near the evening terminator for the slower rotators and none whatsoever for the M star model. There are also inversions in the dust-to-gas mass ratio for the M star models with T$_{\rm eff, P}$=1600\,K-2000\,K. \subsubsection{Material composition of cloud particles} \label{section:cloud_material_compostion} Another property used to characterise cloud particles is their material composition. It has been stated elsewhere that cloud particles change their material composition throughout their lifetime if they get transported into atmospheric regions of different thermodynamic conditions (for example, \citealt{Helling2006,2017A&A...603A.123H,2019AREPS..47..583H}). The global distribution of the individual materials for the whole 3D grid that we address here is included in the supplementary catalogue (\citealt{Lewis2022}).
Here we explore the material composition of the cloud particles which form in the atmosphere in terms of material groups. The bulk growth of 16 condensate species is considered here, and for the purpose of extracting trends in types of material condensate, similarly to \cite{2021A&A...649A..44H}, we split the condensates into four groups: high temperature condensates, metal oxides, silicates, and salts. The condensate species included and the group they are assigned to are shown in Table~\ref{tab:vol_frac_type_table}. We choose to focus here on a comparison between the M and G stellar type models, with further discussion on the K and F stellar types and the equatorial distribution of material composition in \citet{Lewis2022}. \begin{table}[h] \centering \begin{tabular}{p{3cm}|p{4cm}} \hline Condensate Group & Species Included\\ \hline\hline Metal Oxides & SiO[s], SiO$_{2}$[s], MgO[s], FeO[s], Fe$_{2}$O$_{3}$[s] \\ Silicates & MgSiO$_{3}$[s], Mg$_{2}$SiO$_{4}$[s], CaSiO$_{3}$[s], Fe$_{2}$SiO$_{4}$[s]\\ High Temperature\newline Condensates & TiO$_{2}$[s], Fe[s], Al$_{2}$O$_{3}$[s], CaTiO$_{3}$[s], FeS[s]\\ Salts & KCl[s], NaCl[s]\\ \hline \end{tabular} \caption{The 16 bulk materials considered in our model, grouped into four categories.
[s] indicates condensate materials.} \label{tab:vol_frac_type_table} \end{table} Figure~\ref{fig:norm_col_vol_frac_w_grainsize_all_M_G} shows the normalised column integrated volume fractions, \begin{equation} \langle V \rangle_{\rm norm} = \frac{\int_{z_{min}}^{z_{max}} \frac{V_{s}(z)}{V_{tot}(z)} dz}{\sum_{i} \int_{z_{min}}^{z_{max}} \frac{V_{i}(z)}{V_{tot}(z)} dz}, \label{eqn:V_norm} \end{equation} where $s$ is a particular condensate species group and $i$ runs through all the condensate species groups (Table \ref{tab:vol_frac_type_table}), together with the integrated, number density weighted, surface averaged mean particle size (Eq.~\ref{eq:aa}) for the substellar point, the antistellar point, and the equatorial morning and evening terminators of the models with M and G stellar type host stars. The antistellar points for both the M and G star are similar, with small cloud particles ($\langle \langle a \rangle_{\rm A} \rangle \sim 10^{-2}-10^{-1} {\rm \mu m}$) dominated in composition by silicates ($\sim40-50\%$) forming for all model planets. The remaining volume is approximately equally distributed between the high temperature condensates and metal oxides (each $\sim20-25\%$). This is similarly the case for the morning terminator, except for the highest temperature models (T$_{\rm eff, P} \geq 1800$ K) of planets orbiting the M5V star, where the silicates comprise $\sim55-60\%$ of the total volume and the high temperature condensates dominate over the metal oxides by more than $10\%$. The highest temperature model where clouds form for the M star, T$_{\rm eff, P} = 2400$ K, has high temperature condensates as the largest contributor to the cloud particle load by volume at $\sim 60\%$, with the remaining volume comprised of silicates ($\sim 35\%$) and metal oxides ($\sim 5\%$). The average value for $\langle \langle a \rangle_{\rm A} \rangle$ increases gradually with increasing planet temperature for the M star planets, compared to an almost consistently small particle size for the G star.
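Equation~(\ref{eqn:V_norm}) amounts to the following sketch; the per-group volume-fraction profiles $V_s/V_{\rm tot}$ are assumed placeholders that sum to unity at every height, not model output:

```python
import numpy as np

def trapz(y, x):
    # simple trapezoidal rule (kept local so the sketch is self-contained)
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

# placeholder volume-fraction profiles V_s/V_tot for the four groups
z = np.linspace(0.0, 1.0e7, 50)                     # altitude [cm]
frac = {
    "silicates":          np.full_like(z, 0.45),
    "metal oxides":       np.full_like(z, 0.25),
    "high-T condensates": np.full_like(z, 0.25),
    "salts":              np.full_like(z, 0.05),
}

integrals = {s: trapz(f, z) for s, f in frac.items()}   # numerators of Eq. (V_norm)
total = sum(integrals.values())                         # denominator of Eq. (V_norm)
V_norm = {s: v / total for s, v in integrals.items()}
```

By construction, the normalised fractions sum to one, so the bar-chart-style comparison between condensate groups is independent of the column depth.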
At the substellar point the coolest planets exhibit a similar pattern of material composition to that found at the antistellar and morning terminator points; however, the range of values for the integrated, number density weighted, surface averaged mean particle size is larger here, $\langle \langle a \rangle_{\rm A} \rangle \sim 10^{-2}-10^{0} {\rm \mu m}$. For the hottest temperature planets (M: T$_{\rm eff} \geq 1800$ K, G: T$_{\rm eff} = 1800$ K) there are temperature inversions between $p_{\rm gas} = 10^{-1}\ldots10^{0.5\ldots1}$ bar which are sufficiently large to reduce the temperature such that TiO nucleation can occur and the supersaturation ratios of certain material condensates become large, $S \gg 1$. Of the species which can condense in these regions to form the bulk composition of the cloud particles, Fe[s], Al$_{2}$O$_{3}$[s], CaTiO$_{3}$[s], Mg$_{2}$SiO$_{4}$[s], CaSiO$_{3}$[s] and SiO[s] are the most dominant. The nucleation is particularly inefficient for the daysides of the hottest models ($\log_{10}\left(J_{*,{\rm tot}}\right) \leq -13\,\ldots-15$ cm$^{-3}$\,s$^{-1}$), resulting in larger average particle sizes ($\langle\langle a \rangle_A\rangle \sim 10^{3.7}\ldots10^{5}\,\mu$m). Hence, clouds forming in the deeper atmosphere at the substellar point are comprised of large particles made of high temperature condensates and silicates. For models with T$_{\rm eff, P} > 1800$ K, no cloud particles exist at the substellar point for the G star orbiting planets as the atmospheres are too warm to permit any cloud formation. The planetary atmosphere clouds differ most significantly at the equatorial evening terminator. For the cooler planets the material composition is similar to that of the other three points previously discussed, with silicates dominating and the high temperature condensates and metal oxides contributing equally to the remaining volume.
With increasing planetary global temperature, T$_{\rm eff, P} \geq 1800$ K, there is a significant drop in the fraction of metal oxide condensates in favour of high temperature condensates. For the G star orbiting planets, silicates remain the dominant contributing species, excepting the hottest T$_{\rm eff, P} = 2600$ K model where high temperature condensates dominate. For the M star orbiting planets, however, the fraction of high temperature condensates increases and the fraction of silicates decreases with increasing model temperature. The average particle size in the atmospheric clouds is similar between both stellar types. The salts (KCl[s], NaCl[s]) do not contribute significantly to the average cloud particle volume composition at any point for any of the stellar types and planetary temperatures. We note that carbonaceous materials have not been considered in the cloud formation model applied here because the undepleted element abundance was considered solar, hence, oxygen rich. The effect of element depletion/enrichment by cloud formation will be addressed in Sect.~\ref{section:chemical_regimes}. Carbon-rich materials in the form of condensates (not hazes) have been discussed in the literature to study the effect of non-solar elemental abundances on the cloud structure and composition (\citealt{2017A&A...603A.123H,2017MNRAS.472..447M,2019AREPS..47..583H,2021arXiv211114144H}). The formation of hydrocarbon hazes (which are not condensates) may be a potential explanation for radii that may be 1.5-2$\times$ larger in the UV than in the optical (\citealt{2021AJ....162..287C}). This, however, requires freeing the necessary carbon, which is chemically blocked by CO and/or CH$_4$ in an oxygen-rich gas. While photochemically triggered hydrocarbon hazes can form in the upper atmosphere, their optical depth may not be large enough compared to mineral cloud particles (\citealt{2020A&A...641A.178H}).
\medskip Our systematic study has demonstrated that atmospheric thermodynamic structures lead to the formation of clouds that differ in particle sizes and numbers and, to a lesser extent, in their material composition. This finding is based on a cloud formation model, in contrast to \cite{2021MNRAS.501...78P} who use particle sizes as parameters for their study of individual cloud materials. The suggestion of nightside clouds being similar for different planets promoted in \citet{Gao2021} cannot be supported by our simulations, nor `the simple explanation that hot Jupiters all have the same species of nightside clouds' (\citealt{2019NatAs...3.1092K}). \subsection{Changing C/O regimes with changing global system parameters} \label{section:chemical_regimes} Cloud formation affects the local atmospheric chemistry in two ways: first, through the opacity feedback onto the temperature structure, and second, through the depletion of all elements that participate. Since cloud radiative feedback is not included in the GCMs that represent the input state for our cloud models, we focus here on the latter effect. Specifically, we only refer to the effect on the oxygen abundance (which affects the carbon-to-oxygen ratio) and the mean molecular weight; the major effects are summarised below. The detailed spatial results can be found in the accompanying catalogue (\citealt{Lewis2022}). Extensive studies regarding other elements have been presented in previous papers of our group. \subsubsection{The carbon-to-oxygen ratio} \label{section:C_to_O_ratio} The carbon-to-oxygen ratio (C/O) can be used as a fingerprint for the two nucleation regimes of exoplanet atmospheres introduced in Sect.~\ref{ss:nuc}: the exoplanets that are covered in clouds homogeneously globally (like HATS-6b) and those with a partial cloud coverage, some of which feature strong day/night asymmetries (like WASP-18b).
These characteristics are: \begin{itemize} \item the undepleted (e.g., solar) C/O on cloud-free hot daysides, \item an increased C/O ratio close to 0.75-0.8 where clouds form in the cool upper nightside atmosphere, \item a decreased C/O with increasing pressure as cloud particles evaporate and replenish the gas phase with oxygen that was trapped in silicates and metal oxides. \end{itemize} \noindent The C/O threshold of 0.8 matches the results of \citet{2021A&A...649A..44H} and confirms the finding of \citet{2020A&A...639A..36B} that hot and ultra-hot Jupiters have C/O $<$ 0.8. C/O changes are a direct consequence of the depletion of oxygen by nucleation and surface growth. The carbon abundance is not affected, as all planetary atmospheres considered here are assumed to be oxygen rich; hence, all carbon will be locked in CO (or CH$_4$). Mixing processes can only affect C/O (or any local element abundance) if the respective processes are faster than the chemical processes involved in cloud formation. \subsubsection{Mean molecular weight} \label{section:mean_molecular_weight} A constant mean molecular weight, $\mu$, is typically assumed when running 3D GCMs to reduce the computational demands of the simulations \citep{Drummond2018a}. A constant mean molecular weight of 2.3 is likewise adopted in the GCMs underlying this study, in agreement with an atmospheric composition dominated by H$_2$ and He. However, previous work has shown that this assumption may not be valid for all planets, nor generally for the whole atmosphere. We show in \citet{2021A&A...649A..44H} that in the case of ultra-hot Jupiters (e.g. HAT-P-7b, WASP-121b), where the day and nightside temperatures can differ by more than 1500 K, there are substantial differences in the ionisation state of the atmospheric gas phase. The dayside is dominated by atomic and ionised species compared to a nightside dominated by molecular species.
Different classes of exoplanet atmosphere can therefore be expected to be characterised by different mean molecular weight regimes. In this paper, these classes are determined by the irradiation the planet receives, which affects the local thermal ionisation. For the grid models with T$_{\rm eff, P} \leq 1600$ K the entire planet maintains a constant value of $\mu$ = 2.35, which is consistent with a molecular hydrogen dominated atmosphere. The nightside of all model planets has a constant $\mu = 2.35$. The hotter models (T$_{\rm eff, P}\geq1800$ K) show a decrease in $\mu$ initially only on the dayside, at the hottest region, offset from the substellar point. With increasing model temperature, $\mu$ is decreased across more of the upper atmosphere on the dayside. The lowest value of the mean molecular weight achieved is $\mu = 1.8$ and is centred around the substellar point in the upper atmosphere of the hottest T$_{\rm eff, P} = 2400, 2600$ K models which orbit the M and K stars. For the G and F stars, $\mu \gtrsim 1.95$ across the dayside. \smallskip We conclude that it is reasonable to assume a constant mean molecular weight consistent with a H$_2$-dominated atmosphere for planets with T$_{\rm eff, P} \leq 1600$ K, such as warm Saturn or some hot Jupiter class planets, but not for hotter planets. The implication of the changing mean molecular weight due to the dissociation of H$_2$ is demonstrated in \cite{2021MNRAS.505.4515R}.
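The drop of $\mu$ from $\sim$2.3 to 1.8 follows directly from partial H$_2$ dissociation. A minimal sketch, assuming a roughly solar He/H nuclei ratio of 0.085 (an assumed illustrative value, not taken from the GCM input):

```python
def mean_molecular_weight(x_diss, y_he=0.085):
    """Mean molecular weight (amu) of an H2/H/He gas in which a fraction
    x_diss of the hydrogen nuclei are in atomic form; y_he is the He/H
    nuclei ratio (an assumed, roughly solar, value)."""
    mass = 1.0 + 4.0 * y_he                        # amu per H nucleus
    particles = (1.0 - x_diss) / 2.0 + x_diss + y_he
    return mass / particles

mu_molecular = mean_molecular_weight(0.0)    # ~2.3: H2-dominated nightside
mu_hot = mean_molecular_weight(0.32)         # ~1.8: partially dissociated dayside
```

Dissociating only about a third of the hydrogen already brings $\mu$ down to the value of 1.8 found at the substellar point of the hottest models.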
\subsection{Thermally driven ionospheres and the emergence of exoplanetary global electric circuits} \label{section:degree_of_ionisation} The degree of thermal ionisation ($f_{\rm e} = p_{\rm e}/\left(p_{\rm gas}+p_{\rm e}\right)$, where $p_e$ is the electron pressure) may be used to indicate plasma-like behaviour of the atmosphere, and by extension the potential for a thermally driven ionosphere to exist. \citet{2015MNRAS.454.3977R} propose that values of f$_{e}\geq10^{-7}$ mark the transition from gas to plasma behaviour, which is relevant for discussing the magnetic coupling of exoplanet atmospheres (e.g., \citealt{2022AJ....163...35B}). Here we are also interested in comparing the extension and location of the ionised part of the atmosphere to the location of the clouds. A net electron flux may be established that causes the cloud particles to gain a net electrical charge in an ionised atmosphere and, more generally, a global electric circuit can be established if sufficient global background ionisation is available (\citealt{2019JPhCS1322a2028H}). Figures~\ref{fig:global_slice_plots_logg3_ionisation_rate_1} and \ref{fig:global_slice_plots_logg3_ionisation_rate_2} present the thermal degree of ionisation in comparison to the global distribution of the exoplanet clouds. We utilize the mean particle sizes in 2D equatorial slices for our comparison to regions of high ionisation based on thermal ionisation. The f$_{e}\geq10^{-7}$ threshold is shown as a solid contour line and the location of f$_{e}\geq10^{-6}$ (dashed contour line) indicates the region where the thermal ionisation increases. The degree of ionisation resulting from thermal processes can be of the same magnitude as the degree of ionisation resulting from other processes like cosmic rays and UV radiation if the temperature is high enough (\citealt{Barth2021}).
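As an order-of-magnitude illustration of how $f_{\rm e}$ crosses the $10^{-7}$ threshold with temperature, one can solve the Saha equation for a single easily ionised species such as potassium. The abundance $x_{\rm K}=10^{-7}$ and the single-species assumption are simplifications for this sketch, not the paper's full gas-phase chemistry:

```python
import math

K_B = 1.380649e-23      # Boltzmann constant, J/K
M_E = 9.1093837e-31     # electron mass, kg
H = 6.62607015e-34      # Planck constant, J s
EV = 1.602176634e-19    # 1 eV in J

def saha_rhs(T, chi_ev, g_ratio=1.0):
    """RHS of the Saha equation, 2 g_ratio (2 pi m_e k T / h^2)^{3/2}
    exp(-chi/kT), in m^-3 (statistical weight ratio set to 1 here)."""
    return 2.0 * g_ratio * (2.0 * math.pi * M_E * K_B * T / H**2) ** 1.5 \
        * math.exp(-chi_ev * EV / (K_B * T))

def degree_of_ionisation(T, p_gas_bar, x_K=1e-7, chi_ev=4.34):
    """f_e = p_e/(p_gas + p_e), assuming all electrons come from potassium
    (illustrative single-species estimate; x_K is an assumed K abundance)."""
    n_gas = p_gas_bar * 1e5 / (K_B * T)       # ideal gas number density, m^-3
    n_K = x_K * n_gas
    S = saha_rhs(T, chi_ev)
    # n_e^2 / (n_K - n_e) = S  ->  quadratic in the electron density n_e
    n_e = 0.5 * (-S + math.sqrt(S * S + 4.0 * S * n_K))
    p_e = n_e * K_B * T
    return p_e / (p_gas_bar * 1e5 + p_e)

f_cool = degree_of_ionisation(1000.0, 1e-2)   # many orders below 1e-7
f_hot = degree_of_ionisation(3000.0, 1e-2)    # of order 1e-7 (K fully ionised)
```

The sketch reproduces the qualitative behaviour discussed in the text: thermal ionisation is negligible for cool atmospheres and only approaches the plasma threshold once potassium is essentially fully ionised at high temperatures.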
The Lyman-continuum irradiation in star forming regions may be considerably higher than thermal effects in the very rarefied gases of the outer atmospheres of planets (\citealt{2018A&A...618A.107R}). The degree of thermal ionisation never exceeds $f_{\rm e} = 10^{-7}$ in the upper atmosphere for any of the model planets where T$_{\rm eff, P} \leq 1600$ K. For T$_{\rm eff, P}=1800{\rm K}$, a dayside ionosphere emerges for all models, though it is more extended for the slower rotators. For the F, G and K star models, there is some overlap between the dayside cloud layers and the edge of the ionosphere, with the K star models having the most overlap. The most overlap for all stellar types occurs at T$_{\rm eff, P}$=2400\,K; beyond this planetary effective temperature the overlap decreases due to the reduction in the size of the dayside cloud layers. \medskip While the day/night asymmetry in $f_{\rm e}$ begins to appear at T$_{\rm eff, P}$=1000\,K, the thermal degree of ionisation does not reach $\sim$10$^{-7}$ in the outer atmosphere until T$_{\rm eff, P}$=1400\,K, and then only for the M and the F stars. Larger cloud particles ($\langle a \rangle_{\rm A}\geq10^{2.2}\mu$m) form at pressures below this level of ionisation ($\leq10^{-2}$ bar). Such cool exoplanets are therefore unlikely to have a thermally driven ionosphere. At T$_{\rm eff, P}$=1600\,K and T$_{\rm eff, P}$=1800\,K, the ionisation level exceeds 10$^{-7}$ above 10$^{-1}$ bar across the dayside for the M and the K star exoplanets. However, there is no cloud formation in these regions in the M and the K star exoplanet atmospheres. The G and the F star exoplanets with T$_{\rm eff, P}$=1600\,K form medium sized cloud particles in their atmospheres ($\langle a \rangle_{A}\geq10^{0.8}\mu m$) in regions (near the evening terminator) where f$_{\rm e}\geq10^{-7}$. An electron flux induced cloud particle charging may occur in these atmospheres.
Above T$_{\rm eff, P}$=1800\,K, the few clouds that remain form in regions where f$_{\rm e}$<10$^{-7}$. It is not yet clear what effect the dayside ionization has on the wind flow and thus the 3D thermodynamics of ultra-hot Jupiters. \citet{Tan2019} propose that the reduced mean molecular weight on the dayside, which we neglect in the 3D GCM here, leads to larger wind speeds. Further, the thermal effect of hydrogen dissociation on the dayside and re-combination on the nightside reduces the horizontal temperature gradient: the former was proposed to lead to dayside cooling and the latter to nightside warming. However, \citet{Tan2019} did not consider cloud formation and their results seem to contradict \citet{2020A&A...639A..36B}. The latter study finds a particularly large dayside emission, which the authors attribute to a low dayside albedo and inefficient heat circulation, as we also reproduce in this study. \citet{Beltz2022} suggest that coupling between the ionized wind and magnetic fields of 3 G can disrupt wind jets on the dayside completely, in particular in the upper atmosphere ($p<0.01$~bar), leading instead to equatorial-to-polar flow and greatly diminishing heat circulation between nightside and dayside. It is not yet clear how such a flow regime may affect vertical transport and cloud formation on the nightside. In any case, it is quite clear that the daysides of ultra-hot Jupiters are fundamentally different from those of colder planets and that further work is needed here to study wind flow and cloud formation in the context of dayside ionization and magnetic field coupling. Further implications of atmospheric ionisation are discussed in Sect.~\ref{ss:asymion}. \smallskip We conclude that all gaseous exoplanets can be expected to have a thermally ionised inner atmosphere for pressures $\gtrsim 10$ bar where, therefore, magnetic coupling of the atmosphere can occur.
Exoplanet atmospheres with T$_{\rm eff, P}>2000{\rm K}$ can be expected to have deep thermally driven ionospheres on their dayside. This ionosphere reaches further into the terminator regions the hotter the planetary atmosphere is. We therefore conclude that these exoplanets have a) a geometrically more extended, cloud-free dayside compared to their nightside, which b) can undergo magnetic coupling to a global planetary magnetic field should it exist, and c) that a global electric circuit may determine the charge distribution within the atmosphere longitudinally, i.e. east-west-ward. Both the global and local extension of the ionosphere and the global electric circuit will increase if additional ionisation processes occur. We further conclude that the two regimes of exoplanet atmospheres, those with a globally homogeneous cloud coverage and those with an intermittent or asymmetric cloud coverage, will undergo different degrees of magnetic coupling inside their atmospheres and, hence, may exhibit different magnetic field geometries. \section{Exoplanet weather/climate regimes: the cool, the transient and the hot exoplanet atmospheres} \label{sec:3cases} The previous sections have presented a global picture of cloud formation in gaseous extrasolar planets that orbit M, K, G and F host stars at different distances. Most of the details for individual models, like the individual nucleation rates or material compositions of the cloud particles, have been left to a supplementary catalogue. Now we provide a comparison of three groups of exoplanets of the same planetary effective temperature that orbit different host stars at different distances.
A closer inspection of the 3D GCM model grid shows that the planetary model atmospheres fall into three cloud formation regimes, which we call classes, separated by planetary temperature: class i) \textit{the cool planets} (T$_{\rm eff, P} \leq 1200$ K), class ii) \textit{the transition planets} (T$_{\rm eff, P} = 1400 - 1800$ K), and class iii) \textit{the hot planets} (T$_{\rm eff, P} \geq 2000$ K). The temperature thresholds should be considered as approximate and may shift by $\pm 200$ K due to modelling uncertainties and additional parameter dependencies (e.g., log(g)). These cloud forming regimes are representative of weather regimes on short timescales, but also of climate regimes on longer timescales. Example objects for class (i) include HATS-6b and NGTS-1b, for class (ii) WASP\,43b, NGTS-10b and HD\,209458b, and for class (iii) WASP-18b, WASP-121b, WASP-103b and also brown dwarfs in close orbits around white dwarfs like WD\,0137b and EPIC\,2122B. We note here that, young objects excepted, brown dwarfs may have a stronger internal heating than giant gas planets of a similar age. The overall trend, however, can be expected to be very similar with respect to the large day/night difference in cloud presence, the far stronger ionisation on the dayside, as well as the lower mean molecular weight on the dayside. This hypothesis is supported by \cite{2021MNRAS.tmp.1277B}, who show that the mentioned BD-WD pairs are in general agreement in their circulation regimes with \cite{2020MNRAS.496.4674L, 2021MNRAS.502.2198T}. To provide an overview of each of these three regimes we select one model temperature representative of each whole regime: T$_{\rm eff, P} = 800, 1600, 2400$ K, respectively. The transition and hot planet temperatures line up with the approximate temperature space occupied by the hot and ultra-hot Jupiter class planets \citep[Table 1 in][]{2021A&A...649A..44H}.
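The class boundaries above can be written down as a simple threshold classification (the class labels and thresholds are the paper's; the descriptions in the returned strings paraphrase the text):

```python
def cloud_formation_class(t_eff_p):
    """Assign a model planet to one of the three cloud formation classes by
    its global effective temperature in K; the thresholds are approximate
    and may shift by ~ +-200 K."""
    if t_eff_p <= 1200.0:
        return "i: cool (nearly homogeneous cloud coverage)"
    if t_eff_p <= 1800.0:
        return "ii: transition (partial/transient cloud coverage)"
    return "iii: hot (cloud-free, ionised dayside)"

# the representative temperatures used in the text
representatives = {T: cloud_formation_class(T) for T in (800.0, 1600.0, 2400.0)}
```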
The cool planets represent the homogeneous regime from the previous sections, while the transition and the hot planets represent the regime of intermittent cloud formation. In the following, we will confirm that the intermittent regime is the best suited for looking for morning/evening terminator differences (\citealt{2021MNRAS.tmp.1277B}), which are greatly amplified by the cloud distribution. \subsection{Asymmetric cloud coverage} The complex interplay between dynamics and irradiation is reflected by the asymmetric cloud coverage. Further, for a given global (equilibrium or effective) temperature, planets around K dwarfs are substantially faster rotators than G dwarf planets. Thus, K dwarf planets tend to have a more asymmetric and smaller dayside cloud coverage compared to G dwarf planets of the same global temperature. This is particularly apparent for $T_{\rm eff, P}= 1200\,\ldots\, 1600$ K (classes i) and ii)), of which several planets will be observed by JWST to investigate differences in evening/morning clouds (e.g., WASP-63b and HD 189733b). We note that the cloud layers will be able to extend into deeper atmospheric layers for planets with higher masses, where the increasing pressure increases the thermal stability of the cloud particles despite the increasing local temperature. This effect can be manipulated for low-mass planets by the choice of the inner boundary for the computational domain in 3D GCMs (see Sect. 6 in \citealt{2021A&A...649A..44H}). \paragraph{Nightside cloud coverage:} In all three classes (i, ii, iii), clouds do form on the nightside. This is an essential result because the greenhouse feedback of the nightside clouds will reduce the heat redistribution in the atmosphere. This affects the atmospheric temperature structure of the whole planet and the observed phase curves \citep{Parmentier2021}. We conclude that the nightside clouds play a particularly large role for exoplanets in the intermediate temperature regime (class ii).
In the case of an increasing nightside temperature due to radiative transfer effects (e.g., \citealt{2022arXiv220209183S}), the location of thermally stable materials will change somewhat. Other effects need a more detailed consideration: a decrease of local density may not affect the cloud's optical depth due to an increased geometrical extension of the cloud, for example. \paragraph{Dayside cloud coverage:} The dayside cloud coverage distinguishes the cool, the transition and the hot planets in their global weather and climate appearance. A nearly homogeneous cloud coverage emerges for the coolest planets (class i), a partial or transient cloud coverage for class (ii), and a cloud-free dayside with considerable ionisation for the hot planets of class (iii). The details of these cloud layers have been discussed in Sects.~\ref{ss:nuc}--\ref{section:cloud_material_compostion}. Associated with this changing cloud coverage are effects on the local chemistry, which we presented in terms of the local C/O and the mean molecular weight (Sect.~\ref{section:chemical_regimes}). The dayside of the cool planetary regime will, hence, show a strongly depleted set of element abundances and a C/O that differs from the undepleted, pristine value. Planets in the transient regime will show such depleted elements in regions where cloud formation occurs, which will be near the morning terminator. Hence, the dayside of these planets will be determined by a mix of depleted and undepleted areas. The dayside of planets in the hot regime shows a mainly undepleted set of element abundances and therefore a solar (or pristine) C/O, in combination with a decreased mean molecular weight due to the thermal instability of H$_2$ in such hot atmospheres. Large day/nightside temperature differences further result in a different geometrical extension of the dayside as well as the terminator regions.
Hence, the cold planets in our grid (class i) are likely to have no large geometrical asymmetries between the day- and the nightside, while the hot class (iii) planets may have a substantial geometrical asymmetry already within the atmosphere at p$_{\rm gas}<10^{-8}$ bar as the footpoint of a planetary mass loss. \smallskip The dayside cloud coverage for planets of similar global temperature may well be different for different host stars. This is the case for the class (ii) planets WASP-43b/NGTS-10b (Fig.~\ref{fig:global_slice_plots_main_4250}), which orbit a K dwarf, and HD\,209458b (Fig.~\ref{fig:global_slice_plots_main_5650}), which orbits a G star. WASP-43b and NGTS-10b have a 10 times greater surface gravity compared to HD\,209458b which, in addition, leads to clouds extending deeper into the atmosphere. We note that recent WFC3/UVIS-HST observations in scattered light between 346-822 nm of WASP-43b are interpreted as seeing a (very dark) cloud-free dayside for pressures >1 bar \citep{Fraine2021}. This may be supported by the fast rotation models presented here. Conversely, observations of HD\,209458b that canonically concluded it to be very cloudy would only pick up the very cloudy half of the dayside and not the cloud-free regions that occur towards the evening terminator, as suggested in Fig.~\ref{fig:global_slice_plots_main_5650}. Extensive modelling studies have been conducted for HD\,209458b (e.g., \citealt{2018A&A...615A..97L,2022arXiv220209183S}) which may enable a detailed modelling comparison. Class (ii) further enables the study of the effects of rotation on the cloud patterns that characterise the weather and climate of these gaseous exoplanets.
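The connection between host star type and rotation period at a fixed planetary temperature follows from simple irradiation balance plus Kepler's third law for a tidally locked planet. A rough sketch, assuming zero albedo, full heat redistribution, and illustrative stellar radii and masses (these parameter values are assumptions for this sketch, not the grid's exact inputs):

```python
import math

G_GRAV = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30        # kg
R_SUN = 6.957e8         # m

def orbital_period_days(T_eq, T_star, R_star_rsun, M_star_msun):
    """Orbital (= rotation, if tidally locked) period in days of a planet
    whose zero-albedo, full-redistribution equilibrium temperature is T_eq:
    T_eq = T_star * sqrt(R_star / 2a)  ->  a, then Kepler's third law."""
    a = 0.5 * R_star_rsun * R_SUN * (T_star / T_eq) ** 2
    return 2.0 * math.pi * math.sqrt(a**3 / (G_GRAV * M_star_msun * M_SUN)) / 86400.0

# a ~1600 K planet around an M dwarf versus a G star (rough stellar parameters)
P_M = orbital_period_days(1600.0, 3100.0, 0.30, 0.30)
P_G = orbital_period_days(1600.0, 5650.0, 1.00, 1.00)
```

This recovers periods of order 0.1 day for the M dwarf versus of order a day or two for the G star, the same ordering as the grid values.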
For example, planets with T$_{\rm eff, P}=1600$ K that orbit a cooler star (M dwarf; Fig.~\ref{fig:global_slice_plots_main_3100}) need to orbit their host star at a smaller orbital separation, with a period of 0.13 days, compared to a planet of the same effective temperature orbiting a G star with a period of 1.55 days (Fig.~\ref{fig:global_slice_plots_main_5650}). Our simulations therefore support our hypothesis that WASP-43b and NGTS-10b around K dwarfs may indeed show rotational deviations compared to G dwarf planets like HD~209458b of the same temperature but with much slower rotation. We note that the transition class (ii) of cloud climates arises because of a tight interplay between the thermal stability and the thermal background, which itself depends on the heat redistribution and wind field. Direct horizontal advection of cloud species is not included in our models, and could thus smear out the boundaries of the partial cloud coverage. This smearing will be limited by the thermal stability of the cloud particles, in particular towards high-temperature regions like the dayside. GCMs with 3D cloud-coupling \citep[e.g.][]{2018MNRAS.481..194L} may be used to refine the boundaries of the classes defined in this work. \subsection{The asymmetry of the thermal ionosphere}\label{ss:asymion} How does the deep ionosphere affect the vertical transport of elements observed with high resolution observations? Can we readily compare observed abundances in the upper atmosphere of (partly) ionised planets with those of un-ionised planets? While these questions are outwith the scope of this paper, we briefly review them with respect to the introduced cloud formation regimes that are characteristic for potentially distinct weather and climate scenarios.
Weather and climate scenarios are determined by the thermodynamic and hydrodynamic behaviour of the atmosphere, which determines the global and local gas phase and cloud characteristics, but also secondary processes like ionisation and the emergence of global electric circuits (GEC) (\citealt{2016SGeo...37..705H}). The conditions for the emergence of an exoplanet GEC (eGEC) include a sufficient ionisation of the atmosphere and the presence of clouds that may produce lightning (\citealt{2019JPhCS1322a2028H}). Little to no eGEC effects are expected on the atmosphere structure for the solar system planets (\citealt{2020SSRv..216...26A}), a conclusion that most likely can be extrapolated to extrasolar planets. The ionisation processes that drive the eGEC, however, do affect the local chemistry and may support the formation of cloud condensation nuclei, in particular in the photo-dominated uppermost atmosphere layers, in analogy to processes on Earth (\citealt{2017NatCo...8.2199S,2018ACP....18.5921T}). We therefore seek to build our understanding of where in the atmosphere which ionisation processes act and how this may help to understand the global weather and climate on exoplanets. Based on the modelling framework of this paper, we concentrate on the thermal ionisation in what follows. Figure~\ref{fig:global_properties} contextualises the model grid that we explore here with respect to a selection of possible candidates (dark golden dots with error bars; candidate data from Tables~\ref{t:UV1} and \ref{t:UV2}) for future UV missions, for example ARAGO (\citealt{2019BAAS...51c.219N}), PolStar (\citealt{scowen2021polstar}) or POLLUX on LUVOIR (\citealt{18BoNeGo}). It was suggested by \citet{Tan2019}, \citet{2021A&A...648A..80H} and \citet{Beltz2022} that the degree of ionization on the dayside of ultra-hot Jupiters would promote very efficient coupling between the ionized wind flow and the planetary magnetic field if it is of the order of a few Gauss.
For example, \cite{2018A&A...618A.107R} demonstrate that the magnetic flux threshold above which the cyclotron frequency exceeds the local gas collisional frequencies decreases to well below 1 G in the upper, low-density atmospheric layers. For the high-density atmosphere at $>1$ bar, a local magnetic flux of $>100$ G may be required. Consequently, the efficient magnetic field coupling of the atmospheric gas could lead to a very sharp transition from efficient to very inefficient day-to-nightside heat transport as soon as a sufficient dayside ionization occurs. Thus, the transition between the intermediate and ultra-hot temperature regimes would be affected by the degree of ionization. We find, however, that while the ionization can penetrate deep into the planet's atmosphere, the degree of ionization may not be sufficient to allow for efficient coupling between the ionized gas and magnetic fields. In that case, the transition between the intermediate and ultra-hot regimes could occur at different temperatures depending on the rotational period, where faster rotators would exhibit a transition at cooler global temperatures than slower rotators. That is, planets orbiting a K dwarf would transition at lower global temperatures than planets around F dwarf stars. {Observationally, phase curves of exoplanets around the 1800-2000 K temperature transition between cases (ii) and (iii) with different orbital periods could be compared to determine if they exhibit differences in heat circulation and cloud distribution. Transiting planets orbiting bright stars with global temperatures around the case (ii) and (iii) transition between intermediate and ultra-hot Jupiters are, e.g., K2-31b ($P=1.26$~days, G type, \citealt{Grziwa2016}\footnote{Its grazing transit will make this planet, however, difficult to characterize.}), WASP-14b ($P=2.2$~days, F type, \citealt{Raetz2015,Southwood2012,Wong2015}), and for $T_{\rm P}=2000$~K WASP-19b ($P=0.78$~days, G type, see e.g.
\citealt{Hebb2010,Maxted2013}).} A reduced hot spot shift with rotation period for both intermediate and hot Jupiters has been verified very recently by a systematic study of Spitzer data \citep{May2022}. \subsection{Concluding discussion} The grid study presented here supports and complements the ensemble of observational studies of gas planets across different temperatures and rotation periods in preparation for the PLATO and Ariel space missions. Such complex models are needed to move on from single-case models for specific planets, which require considerable adjustment of various physical and numerical parameters to fit the observational data (e.g., inner and outer boundary of the computational domain, viscous damping, ...). \citet{Roman2021} have also performed a grid study with a 3D GCM and clouds. These authors, however, started at 1400~K, thus missing HD~189733b (with 1200 K). Both \citet{Roman2021} and \citet{Parmentier2021} utilise simplified cloud models in contrast to our kinetic, multi-process approach, and they focus on only one rotational period. These authors conclude that, when radiative feedback of clouds is included, the dayside-to-nightside temperature differences increase and the eastward hot spot offset decreases. For $T_{\rm gas}$=2000 - 3500 K, both studies see an apparent westward shift for phase curves in the optical wavelength range ($<1$~micron). This apparent westward shift in the optical phase curves is associated with a pile-up of clouds on the morning terminator and the enhanced reflection of stellar light over these regions. This ``westward shift'' is thus mainly a radiative effect and only visible in optical wavelength ranges. \citet{Roman2021} excluded planetary rotation as a modifying factor in cloud feedback for phase curves on the basis that irradiation on the planet is not modified by planetary rotation. \citet{Parmentier2021} did not investigate the impact of planetary rotation.
However, \citet{May2022} have shown that the planetary rotation period definitely plays a role in moderating the eastward shift and even allows the onset of a westward shift, as observed in the IR Spitzer data. Thermal westward phase curve shifts, in contrast to optical westward phase shifts, can only be brought about by dynamical effects, that is, changes in the wind jet from eastward to westward flow. \citet{Carone2020} predicted that a westward flow tendency on an intermediate Jupiter could appear for $P_{\rm orb}<1.5$~days, in agreement with the observations reported by \citet{May2022}. For hotter planets, the westward shift already appears for $P_{\rm orb}<2$~days according to \citet{May2022}. Thus, radiative effects alone are not sufficient when discussing cloud feedback and its effects on planetary phase curves. In this study, westward flow on the dayside is also included in the GCMs used here for intermediate to hot planets around M and K dwarf stars \citep[see][Fig.2]{2021MNRAS.tmp.1277B}, but here we have not yet included the full radiative feedback. Similar to \cite{Roman2021} and \citet{Parmentier2021}, our results suggest a morning cloud pile-up for ultra-hot Jupiters for G, F and K star planets, but not for M dwarf planets. Thus, while the cloud pile-up effect found here is not sufficient to explain all aspects of phase curves for intermediate to hot Jupiters, it probably amplifies the underlying westward flow tendency due to dynamical effects in the optical phase curves. The closest analogues to ultra-hot Jupiters around M dwarfs in our simulations will also be important to consider to understand cloudy exoplanet phase curves. While ultra-hot Jupiters around M dwarfs do not exist, ultra-hot brown dwarfs in close orbit around a white dwarf star have been detected: WD 0137B (2400 K, P=0.0803 days, \citealt{2020MNRAS.496.4674L}) and EPIC 2122B (4000 K, P=0.047~days). Observations by \citet{Zhou2022} indicate no asymmetries across the limbs.
In addition, the 3D GCM of \citet{2020MNRAS.496.4674L} is very close to our ultra-hot Jupiters around M dwarf simulations in its strong day/night temperature dichotomy. Interestingly, for WD 0137B water absorption was observed on the nightside. This may indicate that the nightside clouds lie deeper in the atmosphere in these two highly irradiated brown dwarfs due to the higher surface gravity of brown dwarfs compared to Jupiter-mass planets, as demonstrated, { for example, by the 1D {\sc Drift-Phoenix} models that solve cloud formation consistently as part of the whole atmosphere simulation (Fig. 2 in \citealt{2009A&A...506.1367W}).} Recent results show that cloud particles and either magnetic drag or modified dynamics are needed to explain the phase curves of these objects \citep{Lee2022}. Large modelling studies like this work and those of \citet{Roman2021,Parmentier2021} that cover similar global parameters but consider different physical effects are thus highly timely and vital to interpret detailed observational studies with JWST and Ariel. { Fully consistent radiation transfer solutions are required and progress is being made (e.g., \citealt{2022arXiv220209183S,2022ApJ...929..180L}). }Only then can we identify which factors shape the phase curves of intermediate to hot Jupiters: radiative, dynamical, and magnetic effects, and how clouds modify these. \section{Clouds beyond:\\ The formation of mineral hazes } \label{section:extrapolation_gas_and_cloud_results} The gas pressure domain over which 3D GCMs are simulated differs between authors. For example, \cite{Parmentier18} simulate the gas pressure ranging from 200 bar to 2 $\mu$bar, and \citet{2021A&A...649A..44H} from 0.1 mbar to 700 bar.
We began to explore the impact of the inner pressure boundary of the GCM on the formation of clouds for the specific case of the hot Jupiter WASP-43 b (\citealt{2021A&A...649A..44H}), showing that the increased thermal stability associated with higher pressures permits cloud formation to occur deeper in the atmosphere towards the hotter inner boundary. Here, we address the upper, low-pressure boundary of the GCM simulations. We extrapolate a log-equidistant pressure grid and calculate corresponding temperatures using a parameterisation based on \citet{2009ApJ...707...24M}. We chose to extrapolate four profiles (substellar and antistellar point, equatorial morning and evening terminators) for one selected exoplanet atmosphere configuration (host star: G5V, T$_{\rm eff,~p} = 1600$ K, $\rm \log_{10}(g)$ = 3 [cgs]) to form the basis of the discussion on the potential for cloud formation outside of the commonly used computational domain. The final temperature approached by the temperature parameterisation, T$_{\rm gas, outer}$, of the substellar extrapolated profiles is 10000\,K { following works by \cite{2007P&SS...55.1426G,2004Icar..170..167Y}}. The antistellar extrapolated profile has a fixed T$_{\rm gas, outer}$ = 100 K. { We note that the exact values of the outermost temperatures do not affect the result presented here since cloud formation stops at lower pressures in both cases because of too high temperatures or too low collision rates.} The terminator points are assumed to have an isothermal temperature structure for p$_{\rm gas} \lesssim 10^{-3}$ bar where the temperature is fixed to the final value of the original 1D profile. The lowest pressure considered in this extrapolation is $10^{-12}$ bar. The gas can safely be considered as a neutral hydrodynamic fluid to gas pressures as low as $10^{-8}$ bar (Appendix~\ref{section:hydrodynamics_validity}).
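The extrapolation step described above can be sketched numerically. The linear-in-$\log p$ relaxation below is an illustrative stand-in, not the parameterisation of Madhusudhan \& Seager (2009) used in the paper, and all numerical values (boundary temperatures, grid size) are assumptions:

```python
import numpy as np

# Sketch: extend a 1D (T_gas, p_gas) profile above the 3D GCM's upper
# boundary on a log-equidistant pressure grid. Illustrative values only.
p_top_gcm = 1e-3    # bar, approximate upper boundary of the GCM domain
p_min     = 1e-12   # bar, lowest pressure considered in the extrapolation
T_top     = 800.0   # K, gas temperature at the GCM upper boundary (assumed)
T_outer   = 100.0   # K, asymptotic outer temperature, e.g. antistellar case

# log-equidistant pressure grid from the GCM boundary outwards (decreasing p)
p_ext = np.logspace(np.log10(p_top_gcm), np.log10(p_min), 50)

# fractional position in log-pressure space: 0 at the boundary, 1 at p_min
x = np.log10(p_top_gcm / p_ext) / np.log10(p_top_gcm / p_min)
T_ext = T_top + (T_outer - T_top) * x   # relax linearly towards T_outer
```

A substellar profile would instead relax towards a hot asymptotic value (10000 K in the text); only `T_outer` changes.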
If the gas can be assumed to be sufficiently ionised (see Sect.~\ref{section:degree_of_ionisation}) such that the collisional cross section increases accordingly, the validity limit shifts to lower pressures of $10^{-15}$ bar. Such low pressures are still not without challenge since they imply very low particle growth rates as well as very little frictional interaction between the cloud particles and the gas. The transition from the original 3D GCM ($T_{\rm gas}, p_{\rm gas}$) domain into the extrapolated pressure domain is depicted by the transition from dark grey to light grey solid line in Fig.~\ref{fig:SatCurves} and occurs at $p_{\rm gas}\approx 10^{-3}$~bar. The extended 1D profiles are shown in Figure~\ref{fig:SatCurves}, and despite the possible crudity of our first-order extrapolation, it becomes clear that the trends of the inner atmosphere { with respect to their cloudiness} will continue into the upper atmosphere. Figure~\ref{fig:SatCurves} demonstrates that the atmospheric range where cloud formation is triggered by the formation of condensation seeds (TiO$_2$, SiO, KCl, NaCl) extends into the very low pressure range for the terminators and the nightside profile. TiO$_2$ (dark blue dashed) and SiO (brown dashed) remain the dominating nucleation species. Figure~\ref{fig:SatCurves} further re-emphasises that nucleation only occurs if the local gas temperature drops below the temperature where thermal stability (i.e. supersaturation ratio S=1) occurs. This explains the difference in cloud extension between the two terminators, with the evening terminator ($\phi=90^o$) being somewhat hotter than the morning terminator. Furthermore, the nightside ($\phi=180^o$) and the morning terminator ($\phi=270^o$) profiles probed here could have mineral clouds extending even into higher atmospheric regions where $p_{\rm gas}<10^{-12}$ bar.
Figure~\ref{fig:extrapolated_clouds_2.0} presents the combined information about the clouds that form beyond the 3D GCM computational domain. The significant increase in the extension of clouds into the low-pressure, upper atmosphere ($p_{\rm gas}< 10^{-4}$ bar) is apparent for the two terminators and the antistellar point. Figure~\ref{fig:SatCurves} shows that the efficient nucleation in the low-pressure, extrapolated atmosphere enables the formation of a layer of mineral hazes { in the form of metal oxide clusters and cloud condensation nuclei}. The correspondingly low local densities do not enable efficient bulk growth until $p_{\rm gas}\approx 10^{-8}$ bar (Fig.~\ref{fig:extrapolated_clouds_2.0}). This suggests that the observation of these very upper atmospheric regions with $p_{\rm gas}< 10^{-8}$ bar, which are accessible through high resolution transmission spectroscopy, might allow the nucleation process to be studied in more detail by searching for the spectroscopic signatures of (TiO$_2$)$_{\rm N}$ and (SiO)$_{\rm N}$ clusters as proposed in \cite{2021A&A...654A.120K}. This mineral haze layer would be expected to extend down to $p_{\rm gas}\approx 10^{-8}$ bar, below which the first mixed-material particles occur. A clear difference in cloud particle sizes may occur between the evening and morning terminator due to different nucleation efficiencies. The nightside temperature is the lowest such that the nucleation efficiency is the highest; hence, the cloud particles remain the smallest in these extrapolated, low-pressure atmospheres. \subsection{Limitations} The computational effort of 3D GCMs requires sensible approximations and the assessment of those. Within the hierarchical approach that this paper and \cite{2021A&A...649A..44H} have followed, it can be concluded that the extension of the derived cloud layers is affected by the location of the inner and the outer boundary of the computational domain.
An inner boundary at higher pressures will stabilise the cloud particles such that the cloud's backwarming can more strongly affect the temperature in the atmosphere-core transition region. An extended upper boundary allows mineral cloud formation to contribute in the domain that has so far been understood as photochemically driven to form hydrocarbon hazes. This conclusion, however, is based on simulations that did not include the formation of metal-oxide clusters. A first comparison of their different efficiencies was presented in \cite{2020A&A...641A.178H}. The limitation of our approach is that the simulations are not consistent; instead, the radiation hydrodynamics includes neither the cloud formation effects nor the kinetic gas-phase effects. These are clearly topics for future work. It is, however, reasonable to expect that photochemical effects will not only occur for the C/N/O/H/S chemistry but also for the metal-oxide chemistry. \cite{2021Univ....7..243G} demonstrate, however, that the ionisation energies of (TiO$_2$)$_{\rm N}$ clusters (N being the number of TiO$_2$ monomers forming the cluster) exceed the atomic ionisation energy of Ti. (SiO)$_{\rm N}$ is suggested as a better candidate for cluster ionisation, but one can only reasonably expect the UV part of the stellar radiation field to affect the ionisation state of these clusters, which contribute to the formation of cloud condensation nuclei. We further note that the stability of these clusters may be affected by their fall speed within these low-pressure regimes. The gravitational settling speed is rather high as little frictional interaction occurs. Hence, once such an interaction does occur, the kinetic energy of such interactions will be high. \begin{table}[] \centering \caption{J band magnitudes for stellar types M5V, K5V, G5V and F5V.
The absolute magnitudes are taken from \cite{PecautandMamajek2013} and the apparent magnitudes are calculated ($m_{\rm J} = 5\log_{10}\left( d/10\,{\rm pc} \right) + M_{\rm J}$) for three distances d = 50, 100, 200 pc. }\begin{tabular}{c||ccc} \hline\hline Stellar & Absolute & Distance & Apparent \\ Type & Magnitude & & Magnitude\\ & ($M_{\rm J}$) & (pc) & ($m_{\rm J}$)\\ \hline \multirow{3}{*}{M5V} & \multirow{3}{*}{9.09} & 50 & 12.58 \\ & & 100 & 14.09 \\ & & 200 & 15.6 \\ [1ex] \multirow{3}{*}{K5V} & \multirow{3}{*}{5.10} & 50 & 8.59 \\ & & 100 & 10.1 \\ & & 200 & 11.61 \\ [1ex] \multirow{3}{*}{G5V} & \multirow{3}{*}{3.73} & 50 & 7.22 \\ & & 100 & 8.73 \\ & & 200 & 10.24 \\ [1ex] \multirow{3}{*}{F5V} & \multirow{3}{*}{2.52} & 50 & 6.01 \\ & & 100 & 7.52 \\ & & 200 & 9.03 \\ [1ex] \hline \end{tabular} \label{tab:TSM_J_magnitudes} \end{table} \section{Observational implications} \label{section:observational_implications} \subsection{Transmission Spectroscopy Metric} \label{section:transmission_spectroscopy_metric} \medskip We calculate a Transmission Spectroscopy Metric (TSM) \citep{2018PASP..130k4401K} for each of the grid model planets to give an indication of how amenable the planets would be to transmission spectroscopy observations. The TSM is calculated as \begin{equation} {\rm TSM} = {\rm SF} \cdot \frac{R_{\rm P}^{3} T_{\rm eq}}{M_{\rm P} R_{*}^2} \cdot 10^{-\frac{\rm m_{J}}{5}} \end{equation} where $R_{\rm P}$ and $M_{\rm P}$ are the radius and mass of the planet in Earth units, T$_{\rm eq}$ is the planetary equilibrium temperature in Kelvin, $R_{*}$ is the radius of the host star, $m_{J}$ is the apparent magnitude of the host star in the J-band, and SF is a scaling factor. Whilst the radius of the model planets (1.35 $R_{\rm J}$) falls outside of the radius bins of the planets analysed by \cite{2018PASP..130k4401K}, we opt to use the scale factor SF $= 1.15$ calculated for the $4.0 < R_{\rm P} < 10 R_{\rm E}$ range.
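The magnitude conversion of Table~\ref{tab:TSM_J_magnitudes} and the TSM formula above can be sketched as follows; the planet mass, equilibrium temperature and stellar radius passed to `tsm` are assumed illustrative values, not grid results from this paper:

```python
import math

# Sketch of the TSM (Kempton et al. 2018) for an illustrative grid planet
# (R_P = 1.35 R_Jup, SF = 1.15). Other inputs below are assumptions.
R_EARTH_PER_R_JUP = 11.21   # Earth radii per Jupiter radius
M_EARTH_PER_M_JUP = 317.8   # Earth masses per Jupiter mass

def apparent_mag_J(M_J, d_pc):
    """m_J = 5 log10(d / 10 pc) + M_J, as used for the J-band table."""
    return 5.0 * math.log10(d_pc / 10.0) + M_J

def tsm(R_p_RE, M_p_ME, T_eq, R_star_Rsun, m_J, SF=1.15):
    """TSM = SF * R_P^3 T_eq / (M_P R_*^2) * 10^(-m_J/5) (Earth/solar units)."""
    return SF * R_p_RE**3 * T_eq / (M_p_ME * R_star_Rsun**2) * 10.0**(-m_J / 5.0)

m_J = apparent_mag_J(3.73, 100.0)       # G5V host at 100 pc -> 8.73 (Table)
value = tsm(R_p_RE=1.35 * R_EARTH_PER_R_JUP,
            M_p_ME=1.0 * M_EARTH_PER_M_JUP,  # assumed 1 M_Jup
            T_eq=1600.0,                     # assumed equilibrium temperature
            R_star_Rsun=0.9,                 # assumed G5V stellar radius
            m_J=m_J)
```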
A J-band (central wavelength $\sim$1.2 ${\rm \mu m}$) apparent magnitude is used as this wavelength is closest to the centre of the NIRISS bandpass. We calculate the J-band apparent magnitudes of each of the host stars using the absolute magnitudes of \cite{PecautandMamajek2013} for three distances of $d = 50, 100, 200$~pc. Both the absolute magnitudes of \cite{PecautandMamajek2013} and the calculated apparent magnitudes are listed in Table~\ref{tab:TSM_J_magnitudes}. Figure \ref{fig:transmission_metric_plot} compares the 3D GCM grid TSM with those for various gas-giants and ultra-hot Jupiters. Since the transmission spectroscopy metric $\propto H_{\rm p} \propto \frac{T}{g}$, planets with high T$_{\rm eq}$ and low log(g) like HD~189733b and HD~209458b, but maybe also HATS-6b, WASP-121b and HAT-P-7b, are deemed to be more easily observable through transmission spectroscopy. \subsection{Where the clouds get optically thick: the p($\tau=1$) levels} \label{section:opacity_levels} Figure~\ref{fig:Opt_Depth_tstar6500_test} shows the atmospheric gas pressure levels where the optical depth reaches one for $0.1\,\ldots\,100\mu$m, $p(\tau(\lambda)=1)$ [bar]. The atmosphere will be blocked by the clouds for higher pressures, equivalent to lower altitudes. For the calculation, we follow the same approach as in \cite{2021A&A...649A..44H} (Sect.~7). Here, we focus on how the mineral absorption features change for planets within the cool (800\,K), transition (1600\,K) and the hot (2400\,K) exoplanet atmosphere regime for F-type host stars in Figure~\ref{fig:Opt_Depth_tstar6500_test}, which represents the cases shown in Fig.~\ref{fig:global_slice_plots_main_6500}. The results for the M, G and K stars are presented in the Appendix (Fig.~\ref{fig:Opt_Depth_for_multiple_temps}) for those models shown in Figs.~\ref{fig:global_slice_plots_main_4250} - ~\ref{fig:global_slice_plots_main_6500}. The comparison of the $p(\tau(\lambda)=1)$ for different host stars is shown in Fig.
33 in the supplementary catalogue (\citealt{Lewis2022}). All figures depict the sub-stellar (dayside: $\phi=0.0^o$), the anti-stellar (nightside: $\phi=180.0^o$) and the two terminator profiles (evening: $\phi=90.0^o$; morning: $\phi=270.0^o$) at the equator ($\theta=0.0^o$). The most suitable wavelength region for cloud investigation is $\lambda > 1\mu$m. In this region, the silicate features appear and the differentiation between compact and more agglomerate-like cloud particles can be made. At shorter wavelengths, the atmosphere becomes optically thick already at the cloud top such that the cloud provides a grey background opacity in the optical spectral region. Figure~\ref{fig:Opt_Depth_tstar6500_test} suggests that the nightside would be almost indistinguishable for all three exoplanet regimes, but considerable differences in the spectral range of the mineral features emerge on the dayside as well as in the terminator regions. The evening terminator, where the hot dayside gas flows towards the nightside, appears particularly amenable to distinguishing the three regimes. For the terminators in the mid-infrared, the cloud optical depth for all planets is dominated by the silicate spectral features and therefore the profiles are largely indistinguishable for different planetary effective temperatures. However, for the hot exoplanet regime ($T_{\rm eff,p}=2400\,{\rm K}$), irregularly shaped particles, modelled through a Distribution of Hollow Spheres (DHS, see \citealt{Min2005}, \citealt{Samra20}), appear to produce a flat, higher-lying optically thick cloud pressure level compared with the compact case. Including a DHS has the effect of increasing the optical depth of clouds in general, although for all other wavelengths and planetary effective temperatures the difference is relatively minor. It is a different story in the near-infrared, the region covered by JWST NIRSpec and the Ariel infra-red spectrograph (AIRS).
In the near-IR there are substantial (at greatest an order of magnitude in pressure) differences in the optically thick pressure level of the clouds for different planetary effective temperatures. Phase curve observations have been used to infer cloud properties \citep[e.g.][]{2016NatAs...1E...4A,Oreschenko16_phase-curve,2017AJ....153...68S,2021ApJ...915...45C}. This provides a good incentive to investigate near-IR phase curves for a wide variety of planetary effective temperatures, as such observations could tease out details of cloud formation as affected by stellar insolation and wind flow. Such a survey of phase curves is proposed for Ariel for $\sim 50$ exoplanets (\citealt{CharnayARIEL2021}). Furthermore, the differences in the optical depth between the near-IR region and the optical may also allow for better constraints on the pressure-temperature structure of these planets, especially for hot gas-giants and transition temperature gas-giants. At such wavelengths, deeper pressure levels are observable, potentially providing information about the local gas temperatures which is unavailable to visible and UV observations. The near-infrared provides a `window' through the clouds to the deeper atmosphere below the observable cloud deck at other wavelengths. The cloud's optical depth at the substellar point ($\phi=0^o$) differs dramatically for planets of different global temperatures because hot gas-giants have no clouds at $\phi=0^o$ (case iii) while cool gas-giants exhibit a cloud deck (case i). This strongly affects the dayside albedo. Reflected light observations of WASP-43b have suggested a dark dayside \citep{Fraine2021}, and reflected light in visible wavelengths for Kepler-7b has been found to also potentially discriminate between material compositions of the clouds \citep{Webber2015}. Finally, the stellar spectral type also impacts these conclusions.
For the evening terminator ($\phi=90^o$) of M-type host star planets (Fig.~\ref{fig:Opt_Depth_for_multiple_temps}), the cloud optically thick pressure level is dramatically different for the three effective temperatures in the mid-IR. The silicate features are absent for planets in the hot planet regime (case iii). Figure~\ref{fig:GCM_start_impact_ptau1} tests to what extent the limit of the computational domain may affect our conclusions regarding the mineral spectral features: $p(\tau(\lambda)=1)$ is compared for the original profile (black, similar to Fig.~\ref{fig:Opt_Depth_tstar6500_test}) to results for the extended atmosphere profile (blue) as discussed in Sect.~\ref{section:extrapolation_gas_and_cloud_results}. The horizontal thin, red line indicates the upper boundary in pressure space of the 3D GCM computational domain. It can be concluded that the $p(\tau(\lambda)=1)$-level moves higher into the atmosphere due to cloud particles being able to form higher in the atmosphere and that the feature depth increases around $\lambda\approx 8\mu$m. The cloud's optical depth is also affected in the optical and UV by the shifted upper boundary of the computational domain. The $p(\tau(\lambda)=1)$ increases more steeply with increasing wavelength until $\lambda\approx 0.6\mu$m for the terminators and the antistellar point due to the cloud decks extending to higher altitudes (lower pressures), but also due to the smaller average size of the mineral haze at these pressures.
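The $p(\tau(\lambda)=1)$ level discussed above can be sketched as a top-down integration of the per-layer optical depth; the opacity profile here is a toy power law for illustration, not the model's actual cloud opacities:

```python
import numpy as np

# Toy sketch: locate p(tau=1), the pressure at which the cumulative cloud
# optical depth, integrated from the top of the atmosphere downwards,
# reaches unity. Opacity profile is illustrative only.
p = np.logspace(-12, 2, 2000)       # bar, ordered top -> bottom
dtau = 0.1 * np.gradient(p)         # toy per-layer optical depth ~ 0.1 dp
tau = np.cumsum(dtau)               # cumulative optical depth, tau ~ 0.1 p
p_tau1 = np.interp(1.0, tau, p)     # pressure level where tau = 1 (~10 bar)
```

Repeating this per wavelength, with the wavelength-dependent cloud opacity, yields curves like those in Figure~\ref{fig:Opt_Depth_tstar6500_test}.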
\section{Conclusions} \label{section:conclusions} We propose to characterise the weather and climate on exoplanets by three classes that exhibit characteristic cloud and gas-phase chemistry with clear implications for atmospheric asymmetries: \begin{itemize} \item class i) \textit{the cool planets} (T$_{\rm eff, P} \leq 1200$ K; e.g., HATS-6b, NGTS-1b) are characterised by:\\ -- globally homogeneous nucleation, hence, a globally homogeneous cloud coverage,\\ -- globally depleted element abundances, hence, increasing C/O in cloud forming layers,\\ -- homogeneous mean molecular weight,\\ -- globally low thermal ionisation,\\ -- metal-oxide clusters form a homogeneous haze layer.\\ \item class ii) \textit{the transition planets} (T$_{\rm eff, P} = 1400 - 1800$ K; e.g., WASP\,43b, NGTS-10b, HD\,209458b) are characterised by:\\ -- intermittent nucleation, hence, intermittent cloud coverage,\\ -- intermittent element depletion and, hence, intermittent C/O across the observable planet disk,\\ -- cloud and gas chemistry emphasise day/night terminator differences,\\ -- homogeneous mean molecular weight,\\ -- intermittent increases in thermal ionisation, \\ -- metal-oxide clusters may form mineral hazes on the nightside and on the morning terminator.\\ \item class iii) \textit{the hot planets} (T$_{\rm eff, P} \geq 2000$ K; e.g., WASP-18b, WASP-121b, WASP-103b, brown dwarfs like WD\,0137b and EPIC\,2122B) are characterised by:\\ -- nightside-confined nucleation,\\ -- cloud-free dayside with undepleted element abundances,\\ -- differences in day/night mean molecular weight imply a larger geometrical extension of the dayside atmosphere, hence, a strong geometrical day/night asymmetry,\\ -- the dayside exhibits an ionosphere that extends into the high-pressure, inner atmosphere, suggesting a highly asymmetric magnetic coupling of these atmospheres,\\ -- metal-oxide clusters form mineral hazes on the nightside.
\end{itemize} We, hence, conclude that for the cool planets (case i), 1D simulations suffice for the atmosphere up to 10$^{-5}$ bar. The homogeneity of the cloud cover suggests that inferences of the C/O ratio based on observations of molecules over one planetary location are representative for the whole atmosphere. Combined with the evidence that atmospheric mixing processes homogenise the chemical composition of cool planets \citep{2021MNRAS.tmp.1277B}, this further means that the non-detections of methane in cool gas planets as inferred from observations of WASP-107b \citep{Kreidberg2018b} (800 K, G type), WASP-117b (800 K, F type) \citep{Carone2021} and HD 102195 b (800 K, K type) \citep{Gandhi2020} indeed represent the atmosphere composition and are indicative of methane quenching. Consequently, the presence of multiple carbon- and nitrogen-bearing species as inferred for HD~209458b \citep{Giacobbe2021}, which lies in the intermediate regime (1400\,K, G type), should not be interpreted in the 1D framework to represent the whole planet due to the complex interplay between 3D dynamics, chemistry and cloud formation. Planets in the intermediate temperature regime, not just the hot extrasolar planets, may need substantial efforts to treat particularly the cloud distribution in three dimensions, or at the very least with two profiles for asymmetric terminators in transmission retrieval efforts. \begin{acknowledgements} Ch.H. and P.W. acknowledge funding from the European Union H2020-MSCA-ITN-2019 under Grant Agreement no. 860470 (CHAMELEON). D.L. and G.H. acknowledge the School of Physics \& Astronomy at the University of St Andrews for financial support of the summer project. R.C. acknowledges the Laidlaw Foundation. D.S. acknowledges financial support from the Science and Technology Facilities Council (STFC), UK, for his PhD studentship (project reference 2093954). O.H. acknowledges PhD funding from the St Andrews Center for Exoplanet Science. D.S. and O.H.
acknowledge financial support from the \"Osterreichische Akademie der Wissenschaften. R.B.~acknowledges support from the KU Leuven IDN/19/028 grant ESCHER. L.C. acknowledges the Royal Society University Fellowship URF R1 211718 hosted by the University of St Andrews. K.L.C. acknowledges STFC funding under project number ST/V000861/1. \end{acknowledgements} \bibliographystyle{aa} \bibliography{reference.bib} \begin{appendix} \section{Testing validity of hydrodynamics regime}\label{section:hydrodynamics_validity} Here we determine over which pressure range the utilised model atmospheres and their extrapolations are collision dominated, i.e. the fluid assumption is valid. The validity of the hydrodynamic assumption is assessed via the Knudsen number $Kn=\lambda/L$, where $\lambda$ is the mean free path and $L$ is the characteristic length scale. For the hydrodynamic assumption to be valid, $Kn<1$. For this model, the characteristic length scale is taken as the scale height. The mean free path can be calculated via \begin{equation} \lambda = \frac{1}{\sqrt{2}n\pi d^2} \end{equation} where $d$ is the covalent radius of hydrogen, $d_{H}=2.25\times10^{-12}$\,m \citep{Slater1964}, and $n$ is the number density of the local gas phase [cm$^{-3}$]. The scale height is derived from the assumption of hydrostatic equilibrium, \begin{equation} H_{S} = \frac{k_{B}T}{\mu m_{H}g} \end{equation} where $T$ is the temperature of the local gas phase, $\mu$=2.35 [amu] is the mean molecular weight, and $g$ is the gravitational acceleration. \citet{Debrecht2020} point out that the upper atmospheres will be affected by ionisation such that the mean free paths of the gas phase do change. Irradiation by the planet's host star is the most likely cause, but the interstellar radiation may already suffice to ionise the uppermost atmospheric layers (\citealt{2018A&A...618A.107R}).
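The neutral-gas formulae above combine into a simple Knudsen-number estimate; the layer values ($n$, $T$, $g$) in this sketch are assumed for illustration:

```python
import math

# Sketch of the neutral-gas Knudsen number from the appendix formulae:
# lambda = 1 / (sqrt(2) n pi d^2), H = k_B T / (mu m_H g);
# hydrodynamics holds where Kn = lambda / H < 1. SI units throughout.
k_B = 1.380649e-23        # J/K, Boltzmann constant
m_H = 1.6735575e-27       # kg, hydrogen mass
d_H = 2.25e-12            # m, covalent radius value adopted in the appendix

def knudsen(n_m3, T, g, mu=2.35):
    lam = 1.0 / (math.sqrt(2.0) * n_m3 * math.pi * d_H**2)  # mean free path [m]
    H = k_B * T / (mu * m_H * g)                            # scale height [m]
    return lam / H

# an illustrative layer: n = 1e18 m^-3, T = 1600 K, g = 10 m/s^2
Kn = knudsen(1e18, 1600.0, 10.0)   # < 1 here, so the fluid picture holds
```

Lowering the number density (higher altitude) increases $\lambda$ and eventually pushes $Kn$ above 1, which is where the hydrodynamic description breaks down.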
To determine the Knudsen number limit for the ionised atmosphere, the mean free path is calculated as \begin{equation} \lambda = \frac{1}{\sigma n} \end{equation} where $n$ is the number density of the local gas phase [m$^{-3}$] and $\sigma$ is the cross-sectional area [m$^2$], with $\sigma=10^{-11}/T^2$ \citep{Debrecht2020}. Figure \ref{fig:Height_vs_Pressure_Knudsen} shows the height of the atmosphere as a function of pressure, with vertical lines denoting the pressure at which the Knudsen number exceeds 1 for the molecular and ionised gas. We also plot the minimum cross-sectional area of the gas particles for which the gas is collision dominated throughout the entire pressure regime (i.e. the upper pressure limit at which the gas is collision dominated is $3.58\times10^{-15}$~bar) in Fig.~\ref{fig:cross_sectional_area_plots}. Furthermore, we include the collision cross-sectional area ($\sigma=10^{-11}/T^2$) as a function of pressure in Fig.~\ref{fig:cross_sectional_vs_pressure_plots}. \section{Diffusive mixing}\label{s:diffmix} This appendix motivates a new approach to measuring the mass exchange timescale $\tau_{\rm mix}(z)$ from the vertical component of a given velocity field $v_z(\vec{r},t)$, where $\vec{r}$ is the 3D position and $t$ the time. This approach has been used in all {\sc StaticWeather} models presented in this paper. \subsection{Mixing and diffusion} Let's assume we have two identical boxes of length $\Delta z$ with cross section $A$ touching each other, see Fig.~\ref{3boxes}. The total number of a certain kind of molecule in one of those boxes is \begin{equation} N = n\,A\,\Delta z \end{equation} where $n\rm\ [cm^{-3}]$ is the molecular particle density.
From the 3D hydro model, we observe that matter moves up and down with some average (e.g.\ root-mean-square) velocity $\rm[cm/s]$ as \begin{equation} v = v_{z,\rm rms} = \sqrt{\langle v_z^2 \rangle_t} = \sqrt{\langle v_z^2 \rangle_{\rm vol}} \label{eq:vmix} \end{equation} which either involves a long-term average over a suitably long time $t$ or a spatial average over a suitably large volume. Since we are interested in the stochastic part of the velocity field, we assume that there is no bulk motion here, i.e. $\langle v_z \rangle_t = \langle v_z \rangle_{\rm vol} = 0$. In a real application to a given hydrodynamic structure, this means that we first need to subtract the bulk motion before we can apply Eq.\,(\ref{eq:vmix}). Because of the random mixing motions, molecules will go from box~1 to box~2 and vice versa. The associated mean particle fluxes $\rm[cm^{-2}s^{-1}]$ through the contact area $A$ are \begin{eqnarray} j_1 = n_1\,v &\quad\quad& \mbox{rightwards,}\\ j_2 = n_2\,v &\quad\quad& \mbox{leftwards.} \end{eqnarray} The change of the total number of molecules $N_1$ in the left box is \begin{align} dN_1 =& -j_1\,A\,dt + j_2\,A\,dt = (n_2-n_1)\,v\,A\,dt \label{eq:dN1}\\ \Rightarrow \frac{dn_1}{dt} =& \frac{n_2-n_1}{\Delta z}\,v \;\;\to\;\; -\frac{\partial n}{\partial z}\,v \end{align} \paragraph{Diffusion with rate equations:} The problem can be re-formulated with rate constants $R=v/\Delta z \rm\;[1/s]$, as a chemist would do: \begin{eqnarray} \frac{dn_1}{dt} &=& -n_1 R + n_2 R \label{dn1}\\ \frac{dn_2}{dt} &=& -n_2 R + n_1 R \end{eqnarray} \paragraph{The mixing timescale:} Let's assume box~1 is full, and box~2 initially has none of those molecules. How long would it take to empty box~1?
From Eq.~(\ref{dn1}), with $n_2\to 0$, we find $n_1(t) = n_1(0) \exp(-t/\tau_{\rm mix})$ where \begin{equation} \tau_{\rm mix} = \frac{1}{R} = \frac{\Delta z}{v} \label{tmix1} \end{equation} The same result is obtained when considering $dN_2$ in Eq.\,(\ref{eq:dN1}) for the right box \begin{equation} \frac{dn_2}{dt} = \frac{n_1-n_2}{\tau_{\rm mix}} \label{ansatz} \end{equation} where now index 1 refers to the ``full'' box, which ultimately provides the supply of fresh condensible material at some distance. In fact, solving the mixing ansatz Eq.\,(\ref{ansatz}) for the mixing timescale results in \begin{equation} {\tau_{\rm mix}} = \frac{n_1-n_2}{\frac{dn_2}{dt}} = \frac{n_1-n_2}{-n_2 R + n_1 R} = \frac{1}{R} \end{equation} for any $n_1$ and $n_2$. \subsection{A linear chain of boxes} \noindent Let us now repeat the same thought experiment for 3 boxes in a row as sketched in Fig.~\ref{3boxes}. The rate equations in this case, with $R_1=v_1/\Delta z$ and $R_2=v_2/\Delta z$ are \begin{eqnarray} \frac{dn_1}{dt} &=& -n_1 R_1 + n_2 R_1 \\ \frac{dn_2}{dt} &=& n_1 R_1 - n_2 (R_1+R_2) + n_3 R_2\label{dn2}\\ \frac{dn_3}{dt} &=& -n_3 R_2 + n_2 R_2 \end{eqnarray} Closer inspection of Eq.~(\ref{dn2}) shows the analogy to Fick's laws \begin{eqnarray} \frac{dn_2}{dt} &=& \frac{n_1-n_2}{\Delta z}\,v_1 + \frac{n_3-n_2}{\Delta z}\,v_2\\ &\to& \frac{\partial}{\partial z} \left(\frac{\partial n}{\partial z}\,v\right)\Delta z \;=\; \frac{\partial}{\partial z} \left(D\,\frac{\partial n}{\partial z}\right) \ , \end{eqnarray} where we find the diffusion constant $\rm[cm^2/s]$ (velocity $\times$ length) to be \begin{equation} D = v\,\Delta z \label{D} \end{equation} The meaning of $\Delta z$ is a bit special in Eq.~(\ref{D}). The mixing motions typically have a certain intrinsic range $\ell$, before the incoming particles actually have an effect on the concentration in the box. 
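The box rate equations above can be checked numerically. The following sketch integrates the three-box system with a supply that keeps box 1 full and a sink that keeps box 3 empty; the rate constants are illustrative, not taken from any model in the paper:

```python
# Numerical check of the three-box mixing timescale: holding n1 = 1 and
# draining n3 to 0, the quasi-stationary flux into box 3 gives
# tau_mix = 1/R1 + 1/R2. Rate constants R = v / dz are illustrative.
R1, R2 = 2.0, 0.5      # [1/s]
n1, n2 = 1.0, 0.0      # box 1 held full; box 2 evolves freely
dt = 1e-4              # explicit Euler step, dt * (R1 + R2) << 1
for _ in range(200_000):                      # ~20 s >> 1/(R1+R2) = 0.4 s
    n2 += dt * (n1 * R1 - n2 * (R1 + R2))     # n3 = 0 (drained each step)
flux_into_3 = n2 * R2                         # particles/s entering box 3
tau_mix = n1 / flux_into_3                    # -> 1/R1 + 1/R2 = 2.5 s
```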
When $\Delta z\ll\ell$, those particles simply rush through; there is no time to mix with the ambient gas in the box. In the opposite case, when $\Delta z\gg\ell$, those particles only enrich the regions close to the surface $A$, but not the entire box; the concentration gradient across the box is substantial, and that local enhancement at the surface should actually be taken into account when we determine the flux backwards to the originating cell, which we do not. Therefore, the only box thickness for which the local Eq.~(\ref{D}) actually works fine is \begin{equation} \mbox{Diffusion:}\quad\quad \Delta z \approx \ell \ . \end{equation} Indeed, when determining diffusion constants, we must always make some assumption about $\ell$, for example that $\ell$ is the mean free path for gas-kinetic diffusion, or $\ell$ is the scale height $H_p$ for convective mixing. To find the mixing timescale $\tau_{\rm mix}$ for the 3-box experiment, we assume that the concentration $n_2$ in the sandwich box 2 adjusts quickly to $n_1$ and $n_3$. Setting the time derivative in Eq.~(\ref{dn2}) to zero, we find \begin{equation} n_2 = \frac{n_1 R_1 + n_3 R_2}{R_1+R_2} \label{n_2} \end{equation} Generalising the derivation of $\tau_{\rm mix}$ from the 2-box experiment, \begin{equation} \tau_{\rm mix}^{-1} = -\frac{1}{n_1}\frac{dn_1}{dt} = -\frac{1}{n_1}\Big(-n_1 R_1 + n_2 R_1\Big) \end{equation} and using Eq.~(\ref{n_2}) we find \begin{equation} \tau_{\rm mix} = \frac{R_1 + R_2}{R_1 R_2} \;\frac{1}{1-n_3/n_1} \ , \end{equation} which, in the limiting case of $n_3\to 0$, results in \begin{equation} \tau_{\rm mix} \;\to\; \frac{1}{R_1} + \frac{1}{R_2} \;=\; \frac{\Delta z}{v_1} + \frac{\Delta z}{v_2} \ . \label{tmix2} \end{equation} Again, the same result is obtained when considering the right box (index 3) and using Eq.~(\ref{n_2}) \begin{equation} {\tau_{\rm mix}} = \frac{n_1-n_3}{\frac{dn_3}{dt}} = \frac{n_1-n_3}{-n_3 R_2 + n_2 R_2} = \frac{\Delta z}{v_1} + \frac{\Delta z}{v_2} \ .
\end{equation} This thought experiment can be extended to a linear chain of boxes of arbitrary length $K$. For each chain length, we consider the boundary particle densities $n_1$ and $n_K$ to be given and assume that $n_2\,...\,n_{K-1}$ can be calculated in their stationary limits. This is similar to the Maxwell demon in nucleation theory, who would always collect the large clusters, break them up into monomers, and return them this way back to the gas phase. Here, we need a demon who makes sure that $n_K$ stays small, and $n_1$ stays large. That demon would quickly transport the molecules arriving in the right box back to the left box, to create a stationary problem with constant diffusive fluxes through all interface areas. In our case, the demon is dust formation and settling, causing a stationary situation. Assuming $n_K \to 0$, the result is \begin{equation} \tau_{\rm mix} \;=\; \frac{1}{R_1} + \frac{1}{R_2} + ... + \frac{1}{R_{K-1}} ~=~ \frac{\Delta z}{v_1} + \frac{\Delta z}{v_2} + ... + \frac{\Delta z}{v_{K-1}} \label{tmix3} \end{equation} The same result is obtained for the mixing timescale of the right box \begin{equation} {\tau_{\rm mix}} = \frac{n_1-n_K}{\frac{dn_K}{dt}} = \ldots = \frac{\Delta z}{v_1} + \frac{\Delta z}{v_2} + ... + \frac{\Delta z}{v_{K-1}} \ . \end{equation} In the limiting case $\Delta z\to 0$, the final result is \begin{equation} \tau_{\rm mix}(z) \;=\; \int_0^z \frac{1}{v(z')}\;dz' \label{final} \end{equation} which shows that the result is independent of the choice of $\Delta z$ (disregarding here the uncertainties in the actual numerical computation of that integral). To summarise: \begin{itemize} \setlength\itemsep{0.2mm} \item Equation (\ref{final}) gives an expression for the replenishment timescale $\tau_{\rm mix}$ in consideration of a distant supply. \item The replenishment timescale is monotonically increasing with $z$, i.e.\ it always takes longer to replenish an atmospheric layer which is higher above the ground.
\item There can be a bottleneck. If there is a layer between 0 and $z$ where $v(z')$ is particularly slow, all regions above that layer should indeed receive very little mixing supply. \item If $v=\rm const$, Eq.~(\ref{tmix3}) agrees with the 2-box result (Eq.~\ref{tmix1}) and the 3-box result (Eq.~\ref{tmix2}), namely $\tau_{\rm mix}(z)=z/v$, which had been used previously in {\sc StaticWeather} (case $\beta=1$). \end{itemize} \section{Miscellaneous Figures} For completeness, we provide the optical depth plots for all host star classes in Figure~\ref{fig:Opt_Depth_for_multiple_temps}. Tables~\ref{t:UV1} and \ref{t:UV2} list a selection of potentially favourable targets for a UV mission. The targets are gas giant exoplanets whose host stars have effective temperatures close to or hotter than that of the Sun. The grid models of this work with host stars of F5 (T$_{\rm eff}$~=~6500~K) or G5 (T$_{\rm eff}$~=~5650~K) type are thus most applicable to such potential future UV missions. \begin{table*}[] \tiny \caption{Physical and orbital parameters of potentially favourable exoplanet targets for a UV mission. The density, surface gravity and $\log_{10}(g)$ were calculated (alongside their respective errors) for each planet based on the mass and radius. All planets marked with * have $M\sin(i)$ instead of $M$. Planets marked with ** had their T$_{\rm eff, P}$ calculated from Eq.~12 of \citet{2021MNRAS.tmp.1277B}, with $A_B$=0 and $f$=2. 
Table~\ref{t:UV2} lists the corresponding host star parameters.} \renewcommand{\arraystretch}{1.5} \begin{tabular}{llllllll} \hline Planet & a [AU] & P [days] & T$_{\rm eff, P}$ [K] & M$_{\rm P}$ [M$_{\rm Jup}$] & R$_{\rm P}$ [R$_{\rm Jup}$] & $\rho_{\rm P, bulk}$ [g cm$^{-3}$] & log$_{10}$(g) [cm s$^{-2}$] \\ \hline
ups And b & 0.05922166$\pm$0.00000020 & 4.617033$\pm$0.000023 & 1837$^{+46}_{-63}$ ** & 0.6876$\pm$0.0044* & & & \\
tau Boo A b & 0.049$\pm$0.003 & 3.3124568$\pm$0.0000069 & 1997$^{+186}_{-189}$ ** & 4.32$\pm$0.04* & & & \\
61 Vir b & 0.050201$\pm$0.000005 & 4.2150$\pm$0.0006 & 1397$\pm$44 ** & 0.016$\pm$0.002* & & & \\
51 Peg b & 0.0527$\pm$0.0030 & 4.230785$\pm$0.000036 & 1557$^{+149}_{-213}$ ** & 0.472$\pm$0.039* & & & \\
HD 179949 b & 0.0443$\pm$0.0026 & 3.092514$\pm$0.000032 & & 0.916$\pm$0.076* & & & \\
HD 75289 b & 0.050$\pm$0.000* & 3.509270$\pm$0.000064 & 1260 & 0.49$\pm$0.03* & & & \\
KELT-20 b & 0.057$\pm$0.006 & 3.474119$^{+0.000005}_{-0.000006}$ & 2260$\pm$50 & 17 & 1.83$\pm$0.07 & & \\
HD 209458 b & 0.04707$^{+0.00045}_{-0.00047}$ & 3.52474859$\pm$0.00000038 & 1484$\pm$18 & 0.682$^{+0.014}_{-0.015}$ & 1.359$^{+0.016}_{-0.019}$ & 0.3603$^{+0.0104}_{-0.0118}$ & 2.9807$^{+0.0264}_{-0.0296}$ \\
HD 212301 A b & 0.030$\pm$0.000 & 2.24571$\pm$0.00028 & 2195$^{+217}_{-235}$ ** & 0.51$\pm$0.04* & 1.07 & & \\
HD 149143 b & 0.0530$\pm$0.0029 & 4.07182$\pm$0.00001 & 1756$\pm$285 ** & 1.33$\pm$0.15* & & & \\
HAT-P-7b & 0.03813$\pm$0.00036 & 2.204737$\pm$0.000017 & 2733$\pm$21 & 1.806$\pm$0.036 & 1.510$\pm$0.020 & 0.6956$\pm$0.0211 & 3.3121$\pm$0.0274 \\
WASP-18b & 0.02087$\pm$0.00068 & 0.9414526$^{+0.0000016}_{-0.0000015}$ & 2413$\pm$44 & 10.4 & 1.191$\pm$0.038 & 8.163 & 4.279 \\
WASP-103b & 0.01985$\pm$0.00021 & 0.9255456$\pm$0.0000013 & 2489$^{+66}_{-65}$ & 1.455$^{+0.090}_{-0.091}$ & 1.528$^{+0.073}_{-0.047}$ & 0.5408$^{+0.0559}_{-0.0444}$ & 3.2079$^{+0.0916}_{-0.0762}$ \\
WASP-121b & 0.02544$^{+0.00049}_{-0.00050}$ & 1.27492550$^{+0.00000020}_{-0.00000025}$ & 2720$\pm$8 & 1.183$^{+0.064}_{-0.062}$ & 1.865$\pm$0.044 & 0.2418$^{+0.0164}_{-0.0161}$ & 2.945$^{+0.0636}_{-0.0621}$ 
\end{tabular} {\small \textbf{References:} \textit{ups And b:} \citet{Curiel2011}, \citet{Stassun2019}, \citet{Fuhrmann1998}; \textit{tau Boo A b:} \citet{Butler1997}, \citet{Stassun2019}, \citet{Borsa2015}; \textit{61 Vir b:} \citet{Vogt2010}; \textit{51 Peg b:} \citet{Butler2006}, \citet{Keenan1989}, \citet{Rosenthal2021}; \textit{HD 179949 b:} \citet{Butler2006}, \citet{Rosenthal2021}; \textit{HD 75289 b:} \citet{Stassun2017}, \citet{Udry2000}; \textit{KELT-20 b:} \citet{Talens2018}; \textit{HD 209458 b:} \citet{Bonomo2017}, \citet{Evans2015}, \citet{Stassun2017}; \textit{HD 212301 A b:} \citet{Stassun2017}; \textit{HD 149143 b:} \citet{Ment2018}; \textit{HAT-P-7 b:} \citet{Bonomo2017}, \citet{Stassun2017}, \citet{Berger2018}, \citet{Morris2013}; \textit{WASP-18b:} \citet{Shporer2019}, \citet{Salz2015}, \citet{Southworth2012}; \textit{WASP-103b:} \citet{Gillon14}, \citet{Bonomo2017}, \citet{Southworth2016}; \textit{WASP-121b:} \citet{Delrez2016}, \citet{MikalEvans2019}.} \label{t:UV1} \end{table*} \begin{table*}[] \tiny \caption{Physical parameters of the exoplanet host stars of potentially favourable targets for a UV mission.} \renewcommand{\arraystretch}{1.5} \begin{tabular}{lllllll} \hline Star & T$_{\rm eff}$ [K] & M$_*$ [M$_{\odot}$] & R$_*$ [R$_{\odot}$] & Spectral Type & {[}Fe/H] & Planet \\ \hline
HD 9826 & 6105.510$^{+127.253}_{-151.085}$ & 1.150000$^{+0.164999}_{-0.144399}$ & 1.6364900$^{+0.1059680}_{-0.0580015}$ & F8 V & 0.09$\pm$0.06 & ups And b \\
HD 120136 & 6466.2700$^{+115.2650}_{-96.8038}$ & 1.320000$^{+0.243739}_{-0.184934}$ & 1.4258800$^{+0.0642849}_{-0.0504688}$ & F7 V & 0.2642300$\pm$0.0199902 & tau Boo A b \\
HD 115617 & 5577$\pm$33 & 0.942$^{+0.034}_{-0.029}$ & 0.963$\pm$0.011 & G5 V & -0.01 & 61 Vir b \\
HD 217014 & 5758.000$^{+101.623}_{-119.624}$ & 1.0300000$^{+0.1666990}_{-0.0854185}$ & 1.1756100$^{+0.0673608}_{-0.0353276}$ & G2IV & 0.2057$\pm$0.0598 & 51 Peg b \\
HD 179949 & 6168 & 1.21 & 1.2202$\pm$0.0375 & F8 V & 0.137 & HD 179949 b \\
HD 75289 & 6117$\pm$16 
& 1.29$\pm$0.10 & 1.23$\pm$0.02 & G0 & 0.26 & HD 75289 b \\
HD 185603 & 8980$^{+90}_{-130}$ & 1.89$^{+0.06}_{-0.05}$ & 1.60$\pm$0.06 & A2 V & -0.02$\pm$0.07 & KELT-20 b \\
HD 209458 & 6065$\pm$50 & 1.119$\pm$0.033 & 1.155$^{+0.014}_{-0.016}$ & G0 V & 0.01 & HD 209458 b \\
HD 212301 A & 6239$\pm$24 & 1.55$\pm$0.16 & 1.16$\pm$0.02 & F8V & 0.18 & HD 212301 A b \\
HD 149143 & 5856 & 1.20$\pm$0.20 & 1.44$\pm$0.08 & G0 & 0.29 & HD 149143 b \\
HAT-P-7 & 6449$\pm$129 & 1.510$^{+0.040}_{-0.050}$ & 1.991$^{+0.084}_{-0.080}$ & F8 & 0.260$\pm$0.080 & HAT-P-7b \\
WASP-18 & 6431$\pm$48 & 1.46$\pm$0.29 & 1.26$\pm$0.04 & F6 IV-V & 0.11$\pm$0.08 & WASP-18b \\
WASP-103 & 6110$\pm$160 & 1.220$^{+0.039}_{-0.036}$ & 1.436$^{+0.052}_{-0.031}$ & F8 V & 4.22$^{+0.12}_{-0.05}$ & WASP-103b \\
WASP-121 & 6459$\pm$140 & 1.353$^{+0.080}_{-0.079}$ & 1.458$\pm$0.030 & F6 V & 0.13$\pm$0.09 & WASP-121b \end{tabular} \newline {\small \textbf{References:} \textit{ups And b:} \citet{Curiel2011}, \citet{Stassun2019}, \citet{Fuhrmann1998}; \textit{tau Boo A b:} \citet{Butler1997}, \citet{Stassun2019}, \citet{Borsa2015}; \textit{61 Vir b:} \citet{Vogt2010}; \textit{51 Peg b:} \citet{Butler2006}, \citet{Keenan1989}, \citet{Rosenthal2021}, \citet{Stassun2019}; \textit{HD 179949 b:} \citet{Butler2006}, \citet{Rosenthal2021}; \textit{HD 75289 b:} \citet{Stassun2017}, \citet{Udry2000}; \textit{KELT-20 b:} \citet{Talens2018}, \citet{Lund2017}; \textit{HD 209458 b:} \citet{Bonomo2017}, \citet{Evans2015}, \citet{Stassun2017}, \citet{Stassun2019}; \textit{HD 212301 A b:} \citet{Stassun2017}, \citet{Locurto2006}; \textit{HD 149143 b:} \citet{Ment2018}; \textit{HAT-P-7 b:} \citet{Bonomo2017}, \citet{Stassun2017}, \citet{Berger2018}, \citet{Morris2013}, \citet{Stassun2019}; \textit{WASP-18b:} \citet{Shporer2019}, \citet{Salz2015}, \citet{Southworth2012}, \citet{Stassun2019}; \textit{WASP-103b:} \citet{Gillon14}, \citet{Bonomo2017}, \citet{Southworth2016}; \textit{WASP-121b:} \citet{Delrez2016}, \citet{MikalEvans2019}, \citet{Stassun2019}.} \label{t:UV2} \end{table*} 
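The continuous replenishment timescale derived above, $\tau_{\rm mix}(z)=\int_0^z dz'/v(z')$, and its reduction to the constant-velocity 2-box limit $\tau_{\rm mix}=z/v$ can be checked numerically. The sketch below is illustrative only; the velocity profile is made up and not taken from any model in this work.

```python
import numpy as np

# Midpoint-rule evaluation of tau_mix(z) = Int_0^z dz'/v(z'),
# the Delta z -> 0 limit of the chain-of-boxes sum.
# The velocity profile is purely illustrative (not from this work).

def tau_mix(z_grid, v):
    """Cumulative replenishment timescale on z_grid for mixing velocity v(z)."""
    dz = np.diff(z_grid)
    z_mid = 0.5 * (z_grid[1:] + z_grid[:-1])
    return np.concatenate([[0.0], np.cumsum(dz / v(z_mid))])

z = np.linspace(0.0, 1.0, 10001)

# A layer of slow mixing acts as a bottleneck for everything above it.
v_slow = lambda zz: 1.0 / (1.0 + 5.0 * zz)
tau = tau_mix(z, v_slow)            # analytic: Int_0^1 (1 + 5 z') dz' = 3.5

# With v = const the result reduces to the 2-box limit tau = z / v.
tau_const = tau_mix(z, lambda zz: 2.0)

print(tau[-1], tau_const[-1])
```

Because the integrand of the first case is linear in $z'$, the midpoint rule reproduces the analytic value 3.5 essentially exactly; the constant-velocity case gives $1/2$, and the timescale is monotonically increasing in both cases.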
\end{appendix}
Title: The In-Flight Noise Performance of the JWST/NIRSpec Detector System
Abstract: The Near-Infrared Spectrograph (NIRSpec) is one of the four focal plane instruments on the James Webb Space Telescope (JWST), which was launched on December 25, 2021. We present the in-flight status and performance of NIRSpec's detector system as derived from the instrument commissioning data. The instrument features two 2048 x 2048 HAWAII-2RG sensor chip assemblies (SCAs) that are operated at a temperature of about 42.8 K and are read out via a pair of SIDECAR ASICs. NIRSpec supports the "Improved Reference Sampling and Subtraction" (IRS2) readout mode, which was designed to meet NIRSpec's stringent noise requirements and to reduce 1/f and correlated noise. In addition, NIRSpec features subarrays optimized for bright object time series observations, e.g. for the observation of exoplanet transits around bright host stars. We focus on the dark signal as well as the read and total noise performance of the detectors.
https://export.arxiv.org/pdf/2208.12686
\keywords{JWST, NIRSpec, infrared detectors, noise} \section{INTRODUCTION} \label{sec:intro} The Near-Infrared Spectrograph (NIRSpec)\cite{Jakobsen2022} is one of the science instruments aboard the James Webb Space Telescope (JWST)\cite{gardner2006}, launched on 25 December 2021. NIRSpec features an almost all-reflective design with an optical bench manufactured out of silicon carbide\cite{Salvignol2008}. It offers four different observing modes: 1) integral field spectroscopy (IFS)\cite{Boeker2022} via an image slicer\cite{Lobb2008}, 2) multi-object spectroscopy (MOS)\cite{Ferruit2022} via a micro-shutter array that employs about 250,000 individually addressable shutters\cite{Kutyrev2004}, 3) fixed slit (FS) spectroscopy via several long slits, and 4) bright object time series (BOTS)\cite{Birkmann2022} observations via a dedicated square aperture. In all observing modes the same selection of seven dispersers is available: one prism and six gratings selected via a grating wheel assembly\cite{Weidlich2006}, offering a wavelength coverage from $\sim$0.6 to 5.3$\;\mu m$ with varying spectral resolution. Light is detected by a pair of closely spaced HgCdTe HAWAII-2RG sensor chip assemblies (SCAs, see Fig.~\ref{fig:FPA})\cite{Rauscher_2014} with a nominal cut-off wavelength of $\sim$5.3$\;\mu m$. The detectors are operated at $T=42.8\;K$ and are read out by a pair of SIDECAR ASICs\cite{Loose2005}. Temperature drifts of the focal plane assembly (FPA) are kept to a minimum by means of active temperature control. \subsection{NIRSpec Readout Modes} \label{sec:readout} The NIRSpec detectors are read non-destructively ``up-the-ramp'' and offer two fundamentally different readout modes: the so-called traditional readout mode (TRAD) and the improved reference sampling and subtraction (IRS$^2$) readout mode. 
The latter is only supported for full-frame readout and offers superior noise performance by sampling reference pixels regularly, interleaved with the science pixels, allowing for better reference subtraction and thus lower correlated and 1/f noise\cite{Rauscher_2017}. Subarrays of different sizes are supported in traditional readout mode, using a slightly higher conversion gain compared to full frame mode. The higher conversion gain (e-/DN) is used in order to make the full physical well depth of the pixels accessible and to support brighter targets before saturation, which is particularly important for BOTS observations. For full-frame readout modes all pixels in the detector are read with a cadence or frame time $t_f$ of approximately 10.7 seconds (TRAD) or 14.8 seconds (IRS$^2$). The frame time for subarrays depends on the subarray size and for NIRSpec ranges from approximately 5.5 seconds for the largest subarray (ALLSLITS, covering all NIRSpec fixed slit apertures) down to $\sim$15\,ms for the smallest subarray (SUB32, used for target acquisition only). Readout modes that use all frames in an integration are called RAPID modes in the Astronomers Proposal Tool (APT), i.e.\ NRSRAPID for traditional readout mode and NRSIRS2RAPID for IRS$^2$ readout mode. In order to reduce the amount of memory needed in the on-board solid state recorder (SSR) for detector data, the NIRSpec readout modes are also offered in a frame-averaged version, where multiple frames are averaged into groups on-board. For the traditional readout mode four frames are averaged into one group (NRS) and for IRS$^2$ five frames can be averaged into one group (NRSIRS2). \section{NIRSpec Commissioning and Acquired Data} The NIRSpec commissioning campaign started shortly after the successful launch of JWST on an Ariane 5 rocket with the power-on and initialization of the micro-shutter control electronics. 
After functional checkouts of the various NIRSpec sub-systems and the detector operating temperature had been reached, data acquisition including dark exposures started. A much more detailed description of the NIRSpec commissioning campaign and results is given at this conference\cite{BoekerSPIE, LuetzgendorfSPIE, GiardinoSPIE, AlvesSPIE, RawleSPIE}. The analysis presented in this paper is based on the following set of dark exposures that were obtained between 04 March 2022 and 05 April 2022, after the NIRSpec FPA had reached its operating temperature of $T \sim 42.82\,$K and active temperature control had been enabled: \begin{itemize} \item 54 traditional full frame dark exposures with 160 frames/groups each obtained in NRSRAPID mode \item 80 IRS$^2$ full frame dark exposures with 245 frames/groups each obtained in NRSIRS2RAPID mode \item 48 dark integrations with the ALLSLITS subarray with 265 frames/groups each in NRSRAPID mode \end{itemize} With the frame times listed in Sec.~\ref{sec:readout} it follows that the integration time for each of the traditional full frame darks was approximately 30 minutes, almost 60 minutes for the IRS$^2$ darks, and approximately 25 minutes for the ALLSLITS subarray darks. \section{Data Reduction} The ``ramps-to-slopes'' data processing was performed using the pre-processing pipeline developed for ground test and commissioning purposes by the ESA NIRSpec Science Operations Team. This pipeline is also used to implement and test algorithms that are then used to inform the development of the official JWST data processing pipeline run by STScI. 
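As a quick consistency check, the quoted integration times follow from the frame times in Sec.~2.1 and the group counts of the dark exposures; a minimal sketch, assuming the approximate frame times from the text and the effective integration time $t_{\rm eff} = (n_g - 1)\,t_f$:

```python
# Rough consistency check of the quoted dark integration times, using the
# approximate frame times from the text and t_eff = (n_g - 1) * t_f.
# Values are approximate; the exact flight frame times differ slightly.

frame_time_s = {"TRAD": 10.7, "IRS2": 14.8, "ALLSLITS": 5.5}
n_groups = {"TRAD": 160, "IRS2": 245, "ALLSLITS": 265}

t_eff_min = {
    mode: (n_groups[mode] - 1) * frame_time_s[mode] / 60.0
    for mode in frame_time_s
}
for mode, t in t_eff_min.items():
    print(f"{mode}: {t:.1f} min")
# ~28 min, ~60 min, and ~24 min, matching the "approximately 30 minutes",
# "almost 60 minutes", and "approximately 25 minutes" quoted in the text.
```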
The following processing steps were performed: \begin{itemize} \item saturation detection and flagging \item master bias subtraction \item reference pixel subtraction / correction, in the case of IRS$^2$ data including the improved data processing using the additionally available reference pixels and reference output \item non-linearity correction \item slope estimation using optimum weights\cite{Fixsen2000}, with jump detection (cosmic ray mitigation) using 2-point differences\cite{Anderson2011}, and multiplication by the conversion gain in order to go from DN/s to e$^-$/s \end{itemize} The above yielded a 2D electron rate map for each integration / exposure. The per-pixel median of these maps was computed to derive the dark signal maps for the different readout modes (see Sec.~\ref{sec:dark}). We also computed the correlated double sample (CDS) difference for all frame pairs in all integrations / exposures. The CDS maps were then averaged using sigma clipping (3 iterations with a clipping threshold of 3 sigma) and the resulting standard deviation (for each pixel) is the CDS noise, which is $\sqrt{2}$ times the single-frame read noise. CDS noise results are presented in Sec.~\ref{sec:cds}. Finally, we repeated the slope estimation for all input data, limiting the used number of groups $n_g$ in the ramp to 5, 10, 15, and so on, up to the maximum available number of groups for that readout mode. For each of these sets of electron rate maps we computed the mean and standard deviation $\sigma(i, j, n_g)$ for each pixel ($i, j$) as a function of the used number of groups $n_g$ using sigma clipping (3 iterations with a clipping threshold of 3 sigma). 
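The per-pixel sigma-clipping step described above can be sketched as follows. This is a simplified stand-in for the ESA pre-processing pipeline, not its actual implementation; the stack shape and noise values are invented for illustration.

```python
import numpy as np

# Per-pixel CDS noise: sigma-clipped standard deviation of the stack of
# CDS differences (3 iterations, 3-sigma threshold), as described in the
# text. Simplified stand-in for the actual pipeline code.

def cds_noise(cds_stack, n_iter=3, threshold=3.0):
    """cds_stack: CDS differences with shape (n_pairs, ny, nx)."""
    data = np.asarray(cds_stack, dtype=float)
    mask = np.ones(data.shape, dtype=bool)
    for _ in range(n_iter):
        kept = np.where(mask, data, np.nan)
        mu = np.nanmean(kept, axis=0)
        sigma = np.nanstd(kept, axis=0)
        mask &= np.abs(data - mu) <= threshold * sigma
    return np.nanstd(np.where(mask, data, np.nan), axis=0)

# Synthetic stack: ~13 DN Gaussian CDS noise plus one cosmic-ray-like jump.
rng = np.random.default_rng(0)
stack = rng.normal(0.0, 13.0, size=(200, 16, 16))
stack[0, 0, 0] += 500.0
noise = cds_noise(stack)
print(noise.mean())   # close to 13 DN; the jump at pixel (0, 0) is clipped out
```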
For the RAPID readout modes ($n_f = 1$ and thus $t_g = t_f$) the total noise $\sigma_{total}(i, j, n_g)$ is then: \begin{equation} \sigma_{total}(i, j, n_g) = \sigma(i, j, n_g) \times t_{eff}(n_g) = \sigma(i, j, n_g) \times (n_g - 1) \times t_f, \end{equation} where $t_{eff}(n_g)$ is the effective integration time and $t_f$ is the frame time. The total noise as a function of effective integration time / number of groups is reported in Sec.~\ref{sec:noise}. \section{Results} \label{sec:results} The commissioning results for dark signal, CDS and total noise are summarized in the following sections. \subsection{Dark Signal} \label{sec:dark} The median dark signal of the two NIRSpec SCAs for the different readout modes is presented in Table~\ref{tab:dark}. Figure~\ref{fig:dark} shows the dark signal maps for IRS$^2$ readout mode for both detectors. \begin{table}[h] \centering \begin{tabular}{c|c|c|c} & \multicolumn{3}{c}{Readout mode}\\ Detector & TRAD & IRS2 & ALLSLITS\\\hline NRS1 & 0.0090 (0.0077) & 0.0082 (0.0071) & 0.0223 (0.0136)\\ NRS2 & 0.0069 (0.0051) & 0.0048 (0.0040) & 0.0153 (0.0137)\\ \end{tabular} \caption{Median dark signal in e$^-$/s for the NIRSpec detectors for traditional full frame (TRAD), IRS$^2$ (IRS2), and ALLSLITS subarray (ALLSLITS) readout modes as measured during commissioning. Comparison numbers from the last ground test are in brackets.} \label{tab:dark} \end{table} As known from ground testing, the dark signal for NRS1 is higher than for NRS2. The region of lower dark signal in the bottom center of NRS1 is due to a void in the epoxy back-filling between the detector and its attached readout integrated circuit (ROIC). The dark signal is higher for readout modes with shorter frame times, as most of the dark signal is not due to dark current, but rather multiplexer glow that occurs during readout\cite{Regan2000}. 
Other differences are due to the different detector tuning (supply voltages and currents) that are unique to each detector and readout mode. However, the dark signal is higher than measured during the last cryogenic ground test in 2017\cite{Kimble2018} for both detectors, with a pronounced increase towards the edges of the arrays. This increase is likely related to the cosmic ray environment at L2, see discussion in Sec.~\ref{sec:cr}. Even with the small increase compared to ground, the dark signal is still very low for most pixels and not a driver for the total noise. \subsection{CDS Noise} \label{sec:cds} The correlated double sample noise measured during commissioning is presented in Table~\ref{tab:cds}. It is in line with the numbers derived during the last cryogenic ground test, also indicating that the detector tuning is unchanged and as expected. \begin{table}[h] \centering \begin{tabular}{c|c|c|c} & \multicolumn{3}{c}{Readout mode}\\ Detector & TRAD & IRS2 & ALLSLITS\\\hline NRS1 & 13.0 (12.9) & 9.59 (9.61) & 11.2 (11.4) \\ NRS2 & 13.1 (13.1) & 11.6 (11.6) & 10.7 (11.0) \\ \end{tabular} \caption{Median CDS noise in data numbers (DN) for the NIRSpec detectors for traditional full frame (TRAD), IRS$^2$ (IRS2), and ALLSLITS subarray (ALLSLITS) readout modes as measured during commissioning. Comparison numbers from the last ground test are in brackets.} \label{tab:cds} \end{table} As the reported CDS noise numbers are in DN they appear lower for the ALLSLITS subarray than the traditional full frame, because of the difference in conversion gain between the two readout modes (conversion gain in e$^-$/DN is about a factor 1.4 higher for subarrays than that used for full frame). The numbers reported in Table~\ref{tab:cds} are the median for the full detector. There are small differences between the outputs of the detectors, as is illustrated by the histograms in Fig.~\ref{fig:cds}. 
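The gain effect described above can be made explicit with a short sketch. The absolute gain value below is a placeholder (the text only gives the $\sim$1.4 subarray-to-full-frame gain ratio), so only the relative comparison is meaningful: the CDS noise in DN is divided by $\sqrt{2}$ to obtain single-frame read noise and multiplied by the gain to convert to electrons.

```python
import math

# Convert median CDS noise (DN, NRS1 values from the table above) to
# single-frame read noise via read_noise = cds_dn / sqrt(2) * gain.
# gain_full is a placeholder; only the ~1.4x gain ratio is from the text.

gain_full = 1.0                 # e-/DN, placeholder value
gain_sub = 1.4 * gain_full      # subarray gain is ~1.4x higher (per text)

cds_dn = {"TRAD/NRS1": 13.0, "ALLSLITS/NRS1": 11.2}
read_noise = {
    "TRAD/NRS1": cds_dn["TRAD/NRS1"] / math.sqrt(2) * gain_full,
    "ALLSLITS/NRS1": cds_dn["ALLSLITS/NRS1"] / math.sqrt(2) * gain_sub,
}
for key, rn in read_noise.items():
    print(key, round(rn, 2))
# In these relative units the subarray read noise comes out slightly
# *higher* than full frame, even though its CDS noise in DN is lower.
```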
\subsection{Total Noise} \label{sec:noise} The total noise as a function of effective integration time / ramp length is shown in Figures~\ref{fig:noise:trad} through \ref{fig:noise:sub} for the traditional full frame, IRS$^2$, and ALLSLITS subarray readout modes, respectively, and is summarized in Table~\ref{tab:noise} below. \begin{table}[h] \centering \begin{tabular}{c|c|c|c} & \multicolumn{3}{c}{Total noise [e$^-$]} \\ Readout / Detector & T$_{eff}\sim$950\,s & $\sim$1700\,s & $\sim$3560\,s \\\hline\hline TRAD / NRS1 & 6.9 & 7.4 & N/A \\ TRAD / NRS2 & 7.3 & 7.7 & N/A \\\hline IRS2 / NRS1 & 5.9 & 6.6 & 8.5 \\ IRS2 / NRS2 & 7.2 & 7.6 & 9.2 \\\hline SUB / NRS1 & 7.0 & 7.8 & N/A \\ SUB / NRS2 & 7.0 & 7.5 & N/A \\ \end{tabular} \caption{Total noise for the two NIRSpec detectors for different readout modes and effective integration times as measured during commissioning.} \label{tab:noise} \end{table} The total noise is higher than measured during ground testing\cite{Birkmann2018}. This is expected and can be attributed to the cosmic ray environment at L2. Cosmic rays have the following effects on the data that result in an increase of the total noise: \begin{itemize} \item Detected cosmic ray hits are flagged and the ramp is broken into multiple segments, resulting in a higher noise for this integration compared to an undisturbed one. \item Some cosmic ray hits are strong enough to saturate one or more pixels, resulting in the loss of data after the hit occurred, shortening the effective integration time and increasing the noise. \item Residuals of undetected cosmic rays (jumps below the detection threshold). \item Other secondary effects, like the charge trapping (``inverse persistence'') that will occur after a significant jump in a detector pixel. 
\end{itemize} As is evident from the plots, the average total noise is in line with the predictions / noise model used by the JWST exposure time calculator (ETC)\cite{Pickering2016} for the two full frame readout modes, traditional and IRS$^2$. The ETC underestimates the total noise for the ALLSLITS subarray, which is likely due to limitations in the internal noise model, where the same readout noise is assumed for traditional and subarray readout mode. For all readout modes, the total noise decreases with increasing integration time up to approximately 500 seconds, where it levels out and then increases with longer integration times. For detector-noise-limited observations of faint sources it is still beneficial to use long integration times for optimum signal-to-noise ratios. However, due to potential early saturation after a strong cosmic ray hit, it is advisable to have multiple integrations per observation, ideally in the form of dithered exposures. \subsection{Cosmic rays} \label{sec:cr} As discussed above, the slight increase in measured total noise can be attributed to the impacts of cosmic rays. The cosmic ray rate is in line with pre-flight predictions\cite{Giardino2019}, with approximately 60\% of all pixels being affected by one or more cosmic ray events in a one-hour exposure. In our one-hour-long darks, only 2 to 3\% of pixels saturated prematurely due to strong cosmic ray events. Due to inter-pixel capacitive (IPC) coupling\cite{Moore2004}, typical cosmic rays affect at least 5 pixels, i.e.\ one cosmic ray leads to a small cluster of pixels experiencing a jump. One surprise during commissioning was the ubiquitous appearance of so-called ``snowballs'': up-the-ramp jump events with mostly (but not exclusively) a spherical region of heavily saturated pixels (the core, typical radius 2 to 5 detector pixels), plus a more extended region of elevated signal that steeply drops with increasing distance from the center of the saturated region. 
Many snowballs are also associated with a shower of more compact or ``classical'' cosmic ray events in the same CDS difference. A few examples of snowballs and their radial profiles are shown in Fig.~\ref{fig:snowballs}. As of this writing, the origin of the snowballs is not understood, but given the number of fully saturated pixels and the extended halo, it is clear that high energies of many keV or even MeV must be involved. \section{Conclusion} As demonstrated during commissioning and presented in this paper, the NIRSpec detector system is operating and performing according to predictions. Its read noise is consistent with the last on-ground measurements, indicating that the detector tuning has not changed. The total noise is higher than measured on-ground due to the impact of cosmic rays, but it is in line with the predictions and noise model used by the ETC. Combining this with the excellent throughput of the instrument\cite{GiardinoSPIE}, the sensitivity requirements for NIRSpec are expected to be met with margin. \appendix \acknowledgments We would like to acknowledge the hard work and dedication of the NIRSpec commissioning team, the NIRSpec science readiness team, and all people involved in the commissioning of JWST and NIRSpec. This work would not have been possible without them. \bibliography{report} \bibliographystyle{spiebib}
Title: The Sun's Mean Line-of-Sight Field
Abstract: We regard the Sun-as-a-star magnetic field (i.e. the mean field) as a filter for the spherical harmonic components of the photospheric field, and calculate the transmission coefficients of this filter. The coefficients for each harmonic, $Y_{l}^{m}$, are listed in three tables according to their dependence on $B_{0}$, the observer's latitude in the star's polar coordinate system. These coefficients are used to interpret the 46-yr sequence of daily mean-field measurements at the Wilcox Solar Observatory. We find that the non-axisymmetric part of the field originates in the $Y_{1}^{1}$, $Y_{2}^{2}$, and a combination of the $Y_{3}^{3}$ and $Y_{3}^{1}$ harmonic components. The axisymmetric part of the field originates in $Y_{2}^{0}$ plus a $B_{0}$-dependent combination of the $Y_{1}^{0}$ and $Y_{3}^{0}$ components. The power spectrum of the field has peaks at frequencies corresponding to the ~27-day synodic equatorial rotation period and its second and third harmonics. Each of these peaks has fine structure on its low-frequency side, indicating magnetic patterns that rotate slowly under the influence of differential rotation and meridional flow. The sidebands of the fundamental mode resolve into peaks corresponding to periods of ~28.5 and ~30 days, which tend to occur at the start of sunspot maximum, whereas the ~27-day period tends to occur toward the end of sunspot maximum. We expect similar rotational sidebands to occur in magnetic observations of other Sun-like stars and to be a useful complement to asteroseismology studies of convection and magnetic fields in those stars.
https://export.arxiv.org/pdf/2208.03216
\title{The Sun's Mean Line-of-Sight Field} \author[0000-0002-6612-3498]{Neil R. Sheeley, Jr.} \affiliation{Visiting Research Scientist\\ Lunar and Planetary Laboratory, University of Arizona \\ Tucson, AZ 85721, USA} \keywords{Solar magnetic fields (1503)---Solar rotation (1524)---Solar cycle (1487)---Stellar magnetic fields (1610)} \section{Introduction} \label{sec:intro} The Sun's mean line-of-sight field is obtained by averaging the line-of-sight component of the photospheric magnetic field over the (flat) solar disk. The measurement is obtained from Earth, sometimes in integrated sunlight, and is often called the `Sun-as-a-star' magnetic field, as if the observation were obtained from the even greater distance of another star. In the early 1970s, John Wilcox proposed to build a new solar telescope in the hills south of the Stanford University campus. The telescope would have a relatively coarse spatial resolution of ${\sim}1$ arcmin and the capability of measuring the Sun's mean line-of-sight field. The idea was not to compete with the telescopes at Mount Wilson and Kitt Peak, which were already obtaining daily observations of the solar disk with much higher spatial resolution, but instead to concentrate on factors like sensitivity and zero-point stability to produce a long-term sequence of relatively precise, global observations that could be used to study the Sun's large-scale field. Wilcox and Ness had discovered the interplanetary sector structure using spacecraft data from the Interplanetary Monitoring Platform (IMP) \citep{1965ICRC....1..302W,1965JGR....70.5793W,1965Sci...148.1592N}. Also, by comparing those IMP spacecraft data with photospheric magnetograms obtained at the Mount Wilson Observatory (MWO), Wilcox and Robert Howard had shown that the sector structure originated in long-lived, unipolar magnetic regions on the Sun \citep{1968SoPh....5..564W}. 
Consequently, Wilcox thought that mean field observations would be important for solar-terrestrial studies and, in particular, would help to improve the ${\sim}$4.5-day timing between the central meridian passage of a sector boundary at the Sun and at the Earth. (See the discussion following Douglas Jones's talk at the Second Solar Wind Conference \citep{1972NASSP.308..122J}.) As pointed out by \cite{1977SoPh...52D...6S}, Kotov and Severny had already begun daily observations of the mean field at the Crimean Observatory in 1968, and Robert Howard began them at the Mount Wilson Observatory in 1970. So Wilcox's interest in what we now call `space weather' was the motivation for building the Stanford Solar Observatory\footnote{The observatory was renamed the Wilcox Solar Observatory (WSO) in 1983 when John Wilcox died while swimming in Mexico}. Wilcox's proposal was accepted and daily measurements of the Sun-as-a-star field began on May 16, 1975. These measurements were obtained in `integrated sunlight' using a 2.7 m focal length objective lens that creates a 2.5 cm solar image located 3.8 m above the entrance slit of the spectrograph \citep{1977SoPh...52D...6S}. The new observations quickly confirmed that the strength of the mean field is correlated with the central-meridian-passage time of low-latitude coronal holes and with the associated pattern of interplanetary sectors \citep{1976BAAS....8Q.370S,1977SoPh...54..353S,1976SoPh...49..271S}. In addition, 27-day Bartels displays of the WSO mean-field measurements matched the corresponding displays of mean field calculated using the flux-transport model. This provided one of the first verifications of the transport model and showed that the mean field originated in flux that spread out from its sources in active regions \citep{1985SoPh...98..219S, 1986SoPh..103..203S,1986SoPh..104..425S}. 
Many years later, we learned that the mean-field correlates with the occurrence of coronal inflows seen with white-light coronagraphs on the Solar and Heliospheric Observatory (SOHO) and Solar Terrestrial Relations Observatory (STEREO) spacecraft \citep{2015ApJ...809..113S}. The reason for these correlations is that the mean field is an approximate measure of the Sun's non-axisymmetric field, and, in particular, of its horizontal dipole and quadrupole components, $Y_{1}^{1}$ and $Y_{2}^{2}$. These non-axisymmetric fields are strengthened by the emergence of flux in active regions, whose bright coronal extensions provide backgrounds for seeing the much fainter inflows that rain downward from reconnection sites in the outer corona \citep{ 2017ApJ...835L...7S,2018ApJ...859..135W}. The outward components of these reconnections are sometimes observed as streamer blobs moving out through the 30$R_{\odot}$ field of view like `leaves in the wind' and gradually swept up by high speed streams to form regions of high density \citep{2008ApJ...674L.109S, 2008ApJ...675..853S,2010ApJ...715..300S}. The purpose of this paper is to analyze an idealized Sun-as-a-star field into its spherical harmonic components to determine the ones that ought to contribute (both for the Sun and for a distant star), and then to decompose (or demodulate) the observed WSO mean field to find out what those contributions have been since the observations began in 1975. Although unknown to Wilcox in the early 1970s, the extension of these techniques to other Sun-like stars may complement asteroseismology and exoplanet studies. \section{Theoretical Analysis of the Mean Field}\label{sec:harm_components} \subsubsection{Definition of the Field} Let's begin with a definition for the mean line-of-sight field, $B_{m}$. In general, it is just the average of the line-of-sight field over the flat solar disk of radius, R: \begin{equation} B_{m}~=~\int{B_{los}}dA_{disk}/{\pi}R^2. 
\end{equation} However, we would like to convert to an integral of the radial field, $B_{r}$, over the surface area $A_{surf}$ of the Sun. In that case, we need two factors of $\sin{\theta}\cos{\phi}$ -- one factor to convert $B_{los}$ to $B_{r}$, and the other factor to convert $dA_{disk}$ to $dA_{surf}$. Also, we note that ${\theta}$ and ${\phi}$ are the usual polar and azimuthal angles in a spherical coordinate system with the $x$-axis pointing toward Earth. Therefore, Eq(1) becomes \begin{equation} B_{m}~=~\int{B_{los}}dA_{disk}/{\pi}R^2~=~\int{B_{r}}(\sin{\theta}\cos{\phi})^{2}dA_{surf}/{\pi}R^2 ~=~(1/{\pi})\int_{-{\pi}/2}^{{\pi}/2}\int_{0}^{\pi}{B_{r}}(\sin{\theta}\cos{\phi})^{2}\sin{\theta}d{\theta}d{\phi}, \end{equation} where ${\theta}$ runs from 0 to ${\pi}$, and ${\phi}$ runs from $-{\pi}/2$ to $+{\pi}/2$. It is interesting to note that $\sin{\theta}\cos{\phi}$ is the axisymmetric quantity that is usually called ${\mu}$ in theories of line formation. So ${\mu}$ is 1 at disk center, 0 at the solar limb, and $B_{r}$ is heavily weighted toward disk center as \begin{equation} B_{m}~=~\int{B_{r}}{\mu}^{2}dA_{surf}/{\pi}R^2. \end{equation} As discussed by \cite{1977SoPh...52D...6S}, the weighting toward disk center is even greater for the WSO observations due to solar limb darkening and diffraction from the entrance slit of the spectrograph. In the Appendix of this paper, we consider the limb darkening in detail and find that the same harmonic components contribute to the mean field for a so-called gray atmosphere in the Eddington approximation as would occur in the absence of limb darkening. However, the limb darkening reduces the strengths of the $l=1$ and $l=2$ components by 12\% and 3.75\%, respectively, and increases the strength of the much weaker $l=3$ components by 22\%. Up to this point, I have ignored the $7.25^{\circ}$ tilt of the Sun's axis away from the normal to the ecliptic plane. 
We can include this effect by replacing $\sin{\theta}\cos{\phi}$ with $\sin{\theta}\cos{\phi}\cos{B_{0}}+\cos{\theta}\sin{B_{0}}$, where $B_{0}$ is Earth's heliolatitude and varies annually from $-7.25^{\circ}$ in February-March to $+7.25^{\circ}$ in August-September. In that case, $B_{m}$ becomes \begin{equation} B_{m}~=~(1/{\pi})\int_{-{\pi}/2}^{{\pi}/2}\int_{0}^{\pi}{B_{r}}(\sin{\theta}\cos{\phi}\cos{B_{0}}+\cos{\theta}\sin{B_{0}})^{2}\sin{\theta}d{\theta}d{\phi}, \end{equation} where $B_{r}$ depends on $({\theta},{\phi})$, but $B_{0}$ does not. Next, our objective is to expand the binomial factor, $(\sin{\theta}\cos{\phi}\cos{B_{0}}+\cos{\theta}\sin{B_{0}})^2$, and express $B_{m}$ as the sum of three parts -- one proportional to $\cos^{2}{B_{0}}$, another proportional to $(\sin{2{B_{0}}})/2$, and the third proportional to $\sin^{2}{B_{0}}$. \begin{equation} B_{m}~=~(\cos^{2}{B_{0}})~\frac{1}{{\pi}}\int {B_{r}}(\sin{\theta}\cos{\phi})^{2}d{\Omega}~+~ (\frac{\sin{2{B_{0}}}} {2}) \frac{1}{{\pi}}\int{B_{r}}\sin{2{\theta}}\cos{\phi}~d{\Omega}~+~ (\sin^{2}{B_{0}})~\frac{1}{{\pi}}\int{B_{r}}\cos^{2}{\theta}~d{\Omega}, \end{equation} where $d{\Omega}=\sin{\theta}d{\theta}d{\phi}$ and the integral sign refers to the double integral over ${\theta}$ and ${\phi}$, as indicated in Eq(4). For the Sun, $|B_{0}|~{\leq}~7.25^{\circ}~{\approx}~0.126$ radians so that the $\sin^{2}{B_{0}}$-factor is ${\leq}0.016$, and can be neglected. Likewise, the $\cos^{2}{B_{0}}$-factor is ${\geq}0.984$ and can be replaced by 1. Finally, the $(\sin{2{B_{0}}})/2$-factor is approximately $B_{0}$, which will vary between $-0.126$ and $+0.126$ during the year. Of course, for a star whose tilt angle is large, we cannot make these approximations, and we may need to keep all three terms. \subsection{Spherical Harmonic Components} Next, we consider the form of $B_{r}$. There are several ways that we could represent this field. 
One way would be to express $B_{r}$ as a linear combination of the `barber pole' eigenfunctions of the flux-transport equation, as \cite{1987SoPh..112...17D} did in his theoretical analysis of the Sun's large-scale field. This approach might help us to interpret the power spectrum of the mean field in terms of the rigidly rotating patterns that are caused by the latitudinal transport of flux \citep{1987ApJ...319..481S,1994ApJ...430..399W,1998ASPC..154..131W}. Another way would be to represent the field in terms of sectors of the form $B_{r}=f({\theta})\cos{m}\{{\phi-{\omega}({\theta})t}\}$ (where ${\omega}({\theta})$ is the angular rotation profile), as \cite{1986SoPh..103..203S} did in their analysis of the decay of the Sun's mean field. In this paper, I will try a third approach, representing $B_{r}$ as a linear combination of spherical harmonic components, $Y^{m}_{l}({\theta},{\phi})$, which are the familiar eigenfunctions of the angular part of the Laplacian, ${\nabla}^{2}$, on the surface of a sphere. (See Eqs (9) and (10) below.) With this understanding, \begin{equation} B_{r}({\theta},{\phi},t)~=~\sum_{l=0}^{\infty}\sum_{m=-l}^{m=l}{{\rho}_{lm}}(t)e^{i{\delta}_{lm}(t)} Y_{l}^{m}({\theta},{\phi}), \end{equation} where $i=\sqrt{-1}$. Also, ${\rho}_{lm}$ and ${\delta}_{lm}$ are the amplitude and phase of each harmonic component, $Y_{l}^{m}$, and are defined so that $B_{r}$ is real. 
Consequently, \begin{equation} B_{m}~=~(\cos^{2}{B_{0}})\sum_{l=0}^{\infty}\sum_{m=-l}^{m=l}{\rho}_{lm}e^{i{\delta}_{lm}} I_{lm} ~+~(\frac{\sin{2{B_{0}}}} {2})\sum_{l=0}^{\infty}\sum_{m=-l}^{m=l}{\rho}_{lm}e^{i{\delta}_{lm}}J_{lm}~+~ (\sin^{2}{B_{0}})\sum_{l=0}^{\infty}\sum_{m=-l}^{m=l}{\rho}_{lm}e^{i{\delta}_{lm}}K_{lm}, \end{equation} where the coefficients $I_{lm}$, $J_{lm}$, and $K_{lm}$ are given by \begin{subequations} \begin{align} I_{lm}~=~\frac{1}{{\pi}}\int Y_{l}^{m}({\theta},{\phi}) (\sin{\theta}\cos{\phi})^{2}d{\Omega},\\ J_{lm}~=~\frac{1}{{\pi}}\int Y_{l}^{m}({\theta},{\phi})\sin{2{\theta}}\cos{\phi}~d{\Omega},\\ K_{lm}~=~\frac{1}{{\pi}}\int Y_{l}^{m}({\theta},{\phi})\cos^{2}{\theta}~d{\Omega}, \end{align} \end{subequations} and the integral sign refers to the double integral in Eq(4). Thus, $I_{lm}$, $J_{lm}$, and $K_{lm}$ (weighted by the respective $B_{0}$-dependent factor) are real, and indicate the amounts by which the mean-field `filter' reduces the amplitude ${\rho}_{lm}$ of the field. The spherical harmonic functions, $Y_{l}^{m}({\theta},{\phi})$, are defined in terms of the Associated Legendre functions $P_{l}^{m}(\cos{\theta})$, given by \cite{JE_45}, and the normalization factor, $N_{lm}$, as follows \begin{equation} Y_{l}^{m}({\theta},{\phi})~=~N_{lm}P_{l}^{m}(\cos{\theta})e^{im{\phi}} \end{equation} \begin{equation} N_{lm}~=~\sqrt {\frac{2l+1}{4{\pi}} \frac{(l-m)!}{(l+m)!}}. \end{equation} After some algebra, we can rewrite Eqs(8a)-(8c) as \begin{subequations} \begin{align} I_{lm}~=~\frac{N_{lm}}{{\pi}} \int_{-1}^{1}P_{l}^{m}(x)(1-x^2)dx \left [ \frac{-4 \sin{(m{\pi}/2)}}{m(m+2)(m-2)} \right ],\\ J_{lm}~=~2\frac{N_{lm}}{{\pi}} \int_{-1}^{1}P_{l}^{m}(x)x(1-x^2)^{1/2}dx \left [ \frac{-2 \cos{(m{\pi}/2)}}{(m+1)(m-1)} \right ],\\ K_{lm}~=~\frac{N_{lm}}{{\pi}} \int_{-1}^{1}P_{l}^{m}(x)x^{2}dx \left [ \frac{2 \sin{(m{\pi}/2)}}{m} \right ]. 
\end{align} \end{subequations} I used standard Mathematica software \citep{10.5555/320042} to evaluate these expressions and entered the results in Tables 1-3. However, Mathematica defines the Associated Legendre functions using $P_{l}^{m}(x) = (-1)^{m}(1-x^{2})^{m/2}d^{m}P_{l}(x)/dx^{m}$, which differ by a factor of $(-1)^{m}$ from the \cite{JE_45} values. Therefore, I changed the signs of the odd-$m$ entries in Tables 1-3 to be consistent with the \cite{JE_45} convention. In retrospect, I could have done this automatically by including an extra factor of $(-1)^{m}$ in each integrand. Tables 1 and 3 give non-zero contributions to the $Y_{0}^{0}$ magnetic-monopole component. Ignoring this term, the main contributions to the $\cos^{2}{B_{0}}$-part of the mean field come from the $Y_{1}^{1}$, $Y_{2}^{2}$, and $Y_{2}^{0}$ components with smaller additional contributions from $Y_{3}^{3}$ and $Y_{3}^{1}$. There are no contributions for $l=4$, and the higher-order terms are less than 1\%. Also, for the Sun, $|B_{0}|~{\leq}~0.126$ and $\cos^{2}B_{0}~{\approx}~1$, so that the contributions of these harmonics are not weakened appreciably by the $B_{0}$-dependence. However, in Table 2, the main contributions to the $(\sin{2B_{0}})/2$-part of the mean field are from the $Y_{1}^{0}$ and $Y_{2}^{1}$ components, which are antisymmetric across the equator. These contributions are +0.244 and +0.206, respectively. For small $B_{0}$, $(\sin{2B_{0}})/2~{\approx}~B_{0}$, which varies annually from -0.126 to +0.126. Consequently, the expected mean-field contributions of the $Y_{1}^{0}$ and $Y_{2}^{1}$ components vary annually and have peak amplitudes of $\pm$3.1\% and $\pm$2.6\%, respectively. These contributions are comparable to the relatively small, but finite, 3.5\% and 2.7\% contributions of the $Y_{3}^{3}$ and $Y_{3}^{1}$ components in Table 1. Thus, they ought to be noticeable, especially during sunspot cycles when the polar fields are strong. 
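The tabulated factors can be checked independently by direct numerical quadrature of Eq(8a). The sketch below (Python with NumPy; an illustration, not the code used to build Tables 1-3) constructs the associated Legendre functions directly in the Ferrers (Jahnke \& Emde) convention by differentiating the Legendre series, so no $(-1)^{m}$ sign correction is needed, and reproduces several entries of Table 1.

```python
import numpy as np
from math import factorial, pi, sqrt

def plm(l, m, x):
    # Associated Legendre P_l^m in the Ferrers convention (no (-1)^m phase),
    # built by differentiating the Legendre series of P_l m times.
    c = np.zeros(l + 1)
    c[l] = 1.0
    d = np.polynomial.legendre.legder(c, m) if m else c
    return (1.0 - x**2)**(m / 2.0) * np.polynomial.legendre.legval(x, d)

def trap(y, x):
    # simple trapezoidal rule
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def I_lm(l, m, n=40001):
    # Eq (8a): (1/pi) * Int Y_l^m (sin(theta) cos(phi))^2 dOmega,
    # with x = cos(theta) and phi restricted to the visible hemisphere
    Nlm = sqrt((2 * l + 1) / (4 * pi) * factorial(l - m) / factorial(l + m))
    x = np.linspace(-1.0, 1.0, n)
    theta_part = trap(plm(l, m, x) * (1.0 - x**2), x)
    p = np.linspace(-pi / 2, pi / 2, n)
    phi_part = trap(np.cos(m * p) * np.cos(p)**2, p)
    return Nlm * theta_part * phi_part / pi

# a few Table 1 entries
print(round(I_lm(1, 1), 3), round(I_lm(2, 2), 3), round(I_lm(2, 0), 3))
# 0.173 0.103 -0.084
```

The same quadratures, with the $\sin{2\theta}\cos{\phi}$ or $\cos^{2}{\theta}$ weighting, reproduce Tables 2 and 3.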
The other contributions from Table 2 are comparable to the contributions of the higher-order terms in Table 1, which we have already chosen to neglect. In Table 3, $K_{20}=+0.168$ and $K_{31}=+0.121$. However, these relatively large values can be ignored because the $\sin^{2}{B_{0}}$ factor ($~{\sim}~0.016$) reduces their net mean-field contributions to less than 1\%. Although these entries in Table 3 are unimportant for the Sun, they might contribute appreciably to the mean-field of other stars whose rotation axes may be directed closer to the line of sight. (Of course, for those distant stars, the annual variations induced by Earth's motion around the Sun would be negligible.) \begin{table}[h!] \caption{Elements of $I_{lm}$ \{for the $\cos^{2}B_{0}$-term\}} \begin{center} \begin{tabular}{c c c c c c c c c} \hline\hline $l/m$ & 0&1 & 2 & 3 & 4 & 5 & 6 & 7 \\[0.5ex] \hline\ 0 &+0.188 & \\[1.5ex] 1 & 0 & +0.173 \\[1.5ex] 2 & -0.084 & 0 & +0.103 \\[1.5ex] 3 & 0 & -0.027 & 0 & +0.035 \\[1.5ex] 4 & 0 & 0 & 0 & 0 & ~~0~~ \\[1.5ex] 5 & 0 & -0.003 & 0 & +0.004 &0 & -0.005 \\[1.5ex] 6 & 0 & 0 & 0 & 0 & 0 & 0 & ~~0~~ \\[1.5ex] 7 & 0 & -0.001 & 0 & +0.001 & 0 & -0.001 & 0 & +0.002 \\[1.5ex] \hline \end{tabular} \end{center} \end{table} \begin{table}[h!] \caption{Elements of $J_{lm}$ \{for the $(\sin{2B_{0}})/2$-term\}} \begin{center} \begin{tabular}{c c c c c c c c c} \hline\hline $l/m$ & 0&1 & 2 & 3 & 4 & 5 & 6 & 7 \\[0.5ex] \hline\ 0 & 0 & \\[1.5ex] 1 & +0.244 & 0 \\[1.5ex] 2 & 0 & +0.206 & 0 \\[1.5ex] 3 & -0.093 & 0 & +0.085 & ~~0~~ \\[1.5ex] 4 & 0 & 0 & 0 & 0 & 0 \\[1.5ex] 5 & -0.018 & 0 & +0.018 & 0 & -0.015 & ~~0~~ \\[1.5ex] 6 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\[1.5ex] 7 & -0.007 & 0 & +0.007 & 0 & -0.007 & 0 & +0.006 & ~~0~~ \\[1.5ex] \hline \end{tabular} \end{center} \end{table} \begin{table}[h!] 
\caption{Elements of $K_{lm}$ \{for the $\sin^{2}B_{0}$-term\}} \begin{center} \begin{tabular}{c c c c c c c c c} \hline\hline $l/m$ & 0&1 & 2 & 3 & 4 & 5 & 6 & 7 \\[0.5ex] \hline 0 & +0.188 & \\[1.5ex] 1 & 0 & +0.086 \\[1.5ex] 2 & +0.168 & 0 & ~~0~~ \\[1.5ex] 3 & 0 & +0.121 & 0 & -0.017 \\[1.5ex] 4 & 0 & 0 & 0 & 0 & ~~0~~ \\[1.5ex] 5 & 0 & +0.045 & 0 & -0.034 & 0 & +0.007 \\[1.5ex] 6 & 0 & 0 & 0 & 0 & 0 & 0 & ~~0~~ \\[1.5ex] 7 & 0 & +0.026 & 0 & -0.023 & 0 & +0.017 & 0 & -0.004 \\[1.5ex] \hline \end{tabular} \end{center} \end{table} In summary, the Sun-as-a-star field is dominated by the $Y_{1}^{1}$, $Y_{2}^{2}$, and $Y_{2}^{0}$ components of the Sun's field. Also, the mean field has much smaller contributions from the $Y_{3}^{3}$ and $Y_{3}^{1}$ components, and from the $Y_{1}^{0}$ and $Y_{2}^{1}$ components whose strengths are modulated by the annual variation of $B_{0}$ as Earth orbits the Sun. Finally, it is important to realize that we have been describing the `transmission factors' of the mean-field filter and that the real mean field also depends on the amplitude, ${\rho}_{lm}$, and phase, ${\delta}_{lm}$, of the radial field, $B_{r}$, that is being filtered. Next, we will use these results to interpret mean-field observations obtained daily at the Wilcox Solar Observatory during the 46-yr interval from 16 May 1975 to the present. \clearpage \section{Sun-as-a-Star Magnetic Field Measurements from WSO} \subsection{27-day rotational modulation} Figure~1 shows daily measurements of the Sun's mean field obtained at WSO since 16 May 1975. Approximately 17,000 points over 46 years give a blurred distribution with peaks and valleys around sunspot maximum and minimum in each of four sunspot cycles. This figure is essentially the same as the one that is shown on the WSO web site (http://wso.stanford.edu), and leaves us with the question of how to extract information from these data. 
A big clue is contained in a plot of the data obtained during the first year of observations, as shown in Figure~2. The field oscillates with a period of about 27 days (${\sim}$0.074 yr), before degrading toward the end of the year, as the corresponding low-latitude coronal hole gradually died \citep{1976SoPh...49..271S}. What we would like to know is how much power is contained in this 27-day modulation and how that power varies with time during the 46-yr interval. In effect, we are asking for the envelope of this mean-field time series. There are several approximately equivalent ways to produce this envelope. One way is to divide the time base into 27-day segments and to compute the maximum-minimum difference of the mean-field values on each segment. By plotting the absolute values of these differences, we would obtain a display of the mean field variation similar to the one that \cite{2015ApJ...809..113S} obtained in their paper describing the rejuvenation of the Sun's large-scale magnetic field. A similar result is obtained by computing the standard deviation of the mean field on each 27-day segment (allowing for data gaps when taking the averages), and then plotting that value as a function of time. Another, nearly equivalent procedure is to set the mean-field data gaps equal to zero before computing the standard deviations, and then to perform a 27-day moving average of those standard deviations. This is the approach that I shall use in the remainder of this paper. Figure~3 shows this 27-day moving average, compared with the monthly averaged sunspot number (divided by 2000) during cycles 21-24. As described previously using the `max-min' display \citep{2015ApJ...809..113S}, the mean field originates in episodic bursts whose amplitudes often tend to be large as the sunspot cycle enters its declining phase. 
Also, one can see the decrease of the mean field strength during 1976 as the sunspot number reached its 11-year minimum and the low-latitude coronal hole died (\textit{cf.} Figure~2). Finally, note that the amplitudes of the peaks in Figure~3 are smaller than those in Figure~1 by a factor of about 1.4. This is close to the ${\sqrt{2}}$ that one might expect for the ratio between the envelope of a curve and its standard deviation. (For example, if $f(t)=A(t)\cos{\omega}t$, the envelope is ${\pm}A(t)$ and the standard deviation is $A(t)/\sqrt{2}$.) \subsection{Using the Fourier transform approach} Next, we return to the WSO mean-field measurements that we displayed as a function of time in Figure~1. After setting the missing field strengths equal to 0, we take the discrete Fourier transform defined by \begin{equation} f_{s}~=~\frac{1}{{\sqrt{N}}}\sum_{k=1}^{N}{B_{k}}e^{2{\pi}i(k-1)(s-1)/N} \end{equation} where $B_{k}$ refers to the individual mean-field measurements whose index, $k$, runs from 1 to $N=16,987$, corresponding to the most recent measurement on 16 November 2021. In this case, the frequency, ${\omega}$, in rad $\textrm{day}^{-1}$ is given by \begin{equation} {\omega}=2{\pi}(s-1)/N. \end{equation} Because $f_{s}$ is a complex number, one typically plots the power, $P({\omega})$, defined as the positive number, $f_{s}^{*}f_{s}$, versus ${\omega}$. However, to keep the units in Gauss, I plot the positive square root of this power. Also, it is not necessary to include the full range $(0,2{\pi})$ because the spectrum is symmetric about the point ${\omega}={\pi}$. Moreover, we do not even need to include all of the half range $(0,{\pi})$ because the lines disappear after the $m=3$ peak around ${\omega}=0.7$ rad $\textrm{day}^{-1}$. This is consistent with our expectations from Table 1, which gives 0 for $m=4$, and less than 1\% for $m=5$. 
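Both the $\sqrt{2}$ envelope relation and the location of the 27-day spectral peak are easy to verify on synthetic data. A minimal sketch (Python with NumPy; the envelope amplitude and gap pattern are hypothetical, not the WSO series):

```python
import numpy as np

N = 16987                       # one sample per day, as in the WSO series
t = np.arange(N)
A = 2.0                         # hypothetical envelope amplitude (G)
B = A * np.cos(2 * np.pi * t / 27.0)
B[::15] = 0.0                   # mimic data gaps that were set to zero

# Eq (12): discrete Fourier transform (here with 0-based indices)
f = np.fft.fft(B) / np.sqrt(N)
omega = 2 * np.pi * np.arange(N) / N          # rad/day
half = slice(1, N // 2)
peak = omega[half][np.argmax(np.abs(f)[half])]
print(round(peak, 3))                         # ~0.233 rad/day, a 27-day period

# the 27-day moving standard deviation recovers the envelope divided by sqrt(2)
k = 27
c1 = np.cumsum(np.insert(B, 0, 0.0))
c2 = np.cumsum(np.insert(B**2, 0, 0.0))
mean = (c1[k:] - c1[:-k]) / k
var = (c2[k:] - c2[:-k]) / k - mean**2
sigma = np.sqrt(np.maximum(var, 0.0))
print(round(float(sigma.mean()), 2))          # ~A/sqrt(2), lowered slightly by the gaps
```

The gaps reduce the recovered standard deviation by roughly the fraction of zeroed samples, which is why the moving averages must allow for data gaps when high accuracy is needed.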
The power spectrum in Figure~4 shows three main peaks at frequencies of approximately ${\omega}=0.231$, 0.460, and 0.702 $\textrm{rad~day}^{-1}$. These frequencies are in the ratio of approximately 1:2:3, corresponding to a fundamental rotation rate with $m=1$ and its first two harmonics with $m=2$ and $m=3$. The associated periods are approximately 27.2, 13.6, and 9.0 days, respectively. Evidently, we are seeing rigidly rotating recurrence patterns of two-sector, four-sector, and six-sector fields (\textit{i.e.} the dipole, quadrupole, and hexapole fields). These regions are shown separately in the three panels of Figure~5. Each spectrum is displayed with the same 12-unit smoothing, which is 12 times the $2{\pi}/N$ resolution for $N{\sim}$ 16,987 points, and corresponds to 0.0044 $\textrm{rad~day}^{-1}$. On the low-frequency side of the fundamental peak at ${\omega}=0.231~ \textrm{rad~day}^{-1}$, there are peaks at 0.219 and 0.208 $\textrm{rad~day}^{-1}$, corresponding to periods of approximately 28.7 and 30.2 days, respectively. The 28.7-day period is comparable to the ${\sim}$28.5-day recurrence period of the slanted patterns seen around sunspot maximum in Bartels displays of the interplanetary magnetic field observed by in-ecliptic spacecraft and inferred from Earth-based magnetometers \citep{1975SoPh...41..461S, 1984PhDT.........5H,1985SoPh...98..219S,1986SoPh..104..425S,1994JGR....99.6597W}. However, the 30.2-day peak has no in-ecliptic counterpart, and therefore probably originates in rigidly rotating structures at latitudes that are beyond the reach of the in-ecliptic measurements, as discussed in Section 3.2.1 below. Although the frequencies of the main peaks of the $m=1$, $m=2$, and $m=3$ distributions occur in the ratio of 1:2:3, the frequencies of the sidebands do not occur in this ratio. In particular, the $m=2$ and $m=3$ sidebands are not `blurred out' harmonics of the peaks at 0.208 and 0.219 $\textrm{rad~day}^{-1}$. 
Not only do the $m=2$ structures occur at different frequencies than we would expect for second harmonics, but also these structures are accompanied by additional features for which there is no corresponding peak in the sidebands of $m=1$. For $m=3$, there are even more fluctuations, crowding into a broad slope of nearly continuous intensity. Finally, we note in Figure~4 that there is a noisy `ledge' of strength ${\sim}$0.6 G at ${\omega}~{\sim}~0.017~\textrm{rad~day}^{-1}$. This structure corresponds to a weak annual variation associated with the motion of the Earth around the Sun. As discussed in Section 2, this annual variation is introduced through $B_{0}$ - Earth's latitude in the Sun's polar coordinate system. At even lower frequencies, the spectrum rises steeply, and the expected peak at ${\omega}~{\approx}~0.0016~\textrm{rad~day}^{-1}$ (corresponding to the 11-yr sunspot cycle) is not visible in this $0.0044~\textrm{rad~day}^{-1}$ smoothed plot. \cite{2019Ap&SS.364...45K} has used mean-field observations since 1968 to study this annual variation in greater detail. \subsubsection{The temporal origin of the peaks in the power spectrum} The next problem is to find the temporal origin of these spectral peaks. We do this by selecting the frequency range of interest and then taking the inverse Fourier transform through that spectral window. We use the inverse transform \begin{equation} B_{k}~=~\frac{1}{{\sqrt{N}}}\sum_{s=1}^{N}{f_{s}}e^{-2{\pi}i(k-1)(s-1)/N}, \end{equation} but with $f_{s}$ multiplied by a function of $s$ (or equivalently ${\omega}$) that is 1 on the interval of interest and 0 elsewhere. Note that $f_{s}$ is the original complex Fourier transform given by Eq(12), and not the absolute value that was used in Figure~4. 
In general, the inverse Fourier transform is also a complex number, so we calculate the standard deviation, ${\sigma}$, from the relation ${\sigma}^2 =<|B_{k}|^2>-~|<B_{k}>|^2$, where in the second term, we compute the 27-day average of $B_{k}$ before we take its absolute value and square it. In this case, it is easy to show that ${\sigma}^2={\sigma}_{r}^2+{\sigma}_{i}^2$ (\textit{i.e.} the standard deviation of $B_{k}$ is the square root of the sum of the squares of the standard deviations of its real and imaginary parts). Figure~6 was created by selecting a disjoint interval consisting of the three principal peaks of the power spectrum - specifically, (0.20-0.25), (0.42-0.50), and (0.65-0.73) $\textrm{rad~day}^{-1}$ - and then by displaying the root-mean-square power in the inverse transform. This plot is essentially the same as Figure~3 without the noise. Because we have not included power at the frequency of the annual variation (0.0172 $\textrm{rad day}^{-1}$), we have removed potential contributions from Table 2, so that the only contributions come from modes in Table 1. This leaves only the $Y_{1}^{1}$ (and possibly the $Y_{3}^{1}$) components as likely contributors from the $m=1$ sector, and only the $Y_{2}^{2}$ and $Y_{3}^{3}$ components as contributors from the $m=2$ and $m=3$ sectors, respectively. So, the red curve in Figure~6 indicates contributions from the horizontal dipole, quadrupole, and hexapole components ($Y_{1}^{1}$, $Y_{2}^{2}$, and $Y_{3}^{3}$), and possibly a small contribution from $Y_{3}^{1}$. Next, we ask how this non-axisymmetric power is distributed among the $m=1$, $m=2$, and $m=3$ sectoral modes. For this purpose, we use the individual frequency ranges (0.20-0.25), (0.42-0.50), and (0.65-0.73) $\textrm{rad~day}^{-1}$, which correspond to the fundamental, second, and third harmonics shown in Figures~4 and 5. The 27-day running averages are shown in Figure~7. 
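The window-and-invert procedure, together with the identity ${\sigma}^2={\sigma}_{r}^2+{\sigma}_{i}^2$, can be sketched as follows (Python with NumPy; the two-tone test signal and window edges are illustrative only, and for simplicity the averages here are taken over the full series rather than 27-day windows):

```python
import numpy as np

N = 16987
t = np.arange(N)
rng = np.random.default_rng(1)
# hypothetical two-sector (27.2 d) and four-sector (13.6 d) signals plus noise
B = (1.5 * np.cos(2 * np.pi * t / 27.2)
     + 0.8 * np.cos(2 * np.pi * t / 13.6 + 1.0)
     + 0.3 * rng.standard_normal(N))

f = np.fft.fft(B) / np.sqrt(N)                # Eq (12)
omega = 2 * np.pi * np.arange(N) / N          # rad/day

def bandpass(lo, hi):
    # Eq (14), with f_s multiplied by a window that is 1 on (lo, hi), 0 elsewhere
    mask = (omega >= lo) & (omega <= hi)      # positive-frequency window only
    return np.fft.ifft(f * mask) * np.sqrt(N) # complex in general

Bk = bandpass(0.20, 0.25)                     # around the m=1 fundamental
sigma2 = np.mean(np.abs(Bk)**2) - np.abs(np.mean(Bk))**2
# sigma^2 equals the sum of the variances of the real and imaginary parts
assert abs(sigma2 - (np.var(Bk.real) + np.var(Bk.imag))) < 1e-9
print(round(float(np.sqrt(sigma2)), 2))       # ~0.75, the positive-frequency half of the 1.5 G tone
```

Because only the positive-frequency half of the tone survives the window, the recovered rms is half the tone's amplitude rather than the familiar $1.5/\sqrt{2}$.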
In general, the $m=1$ component contributes the most and the $m=3$ component contributes the least. With a few exceptions, the mean field is dominated by the $m=1$ and $m=2$ sectoral modes. The $m=1$ component has large peaks in 1982, 1991, 2003, and a small one in 2015, which we recall from a nearly identical plot of the equatorial dipole derived from spatially resolved WSO observations \citep{2015ApJ...809..113S}. Also, the $m=2$ component has moderately large, but narrow, peaks in 1981, 2000, and 2012. The $m=3$ component has a large, narrow peak in 1991 when the $m=1$ and $m=2$ values are temporarily low. Also, in the relatively weak sunspot cycle 24, the three components have nearly coincident peaks of approximately equal strength, which combine to give a stronger peak of total mean-field power, as seen in Figure~6. Based on the factors in Tables 1 and 2, we expect that the $m=1$ power originates primarily from the $Y_{1}^{1}$ horizontal dipole component of the field and secondarily from the $Y_{3}^{1}$ component. As mentioned above, by filtering out annual variations, we have excluded contributions from the $Y_{2}^{1}$ component, which would otherwise occur through the $B_{0}$ factor. Likewise, the $Y_{3}^{2}$ component is excluded from the $m=2$ sector, which indicates power in the $Y_{2}^{2}$ component alone. Finally, we have the impression in Figure~7 that the temporal fluctuations become systematically finer as $m$ increases from 1 to 3. This is consistent with the increased coarseness of the power spectra in Figures~4 and 5 as $m$ increases: For $m=1$, the peaks were often resolvable; for $m=2$, they formed coarser structures; and for $m=3$, they merged into a `bumpy' continuum. In a previous analytical study of the mean field, we noted that the field depended on the product $mt$, not on $m$ and $t$ separately. This meant that the mean field would decay as $1/m$ in the absence of meridional flow \citep{1986SoPh..103..203S}. 
In particular, a 4-sector field would decay twice as fast as a 2-sector field, and a 6-sector field would decay three times as fast. When flow was present, this analytical simplification did not occur. However, the apparent trend in Figure~7 indicates that a monotonic relation may still be present. Next, we look for the origins of the three $m=1$ peaks at ${\omega}=0.231$, 0.219, and~0.208 $\textrm{rad day}^{-1}$, shown in the left panel of Figure~5. To do this, we select relatively narrow spectral windows surrounding these peaks, and then invert the Fourier transform and plot the running 27-day rms averages. For the intervals ${\omega} = 0.226-0.237$, ${\omega} = 0.215-0.225$, and ${\omega} = 0.198-0.212~ \textrm{rad day}^{-1}$, we obtain the blue curves in Figure~8. Here, the red curve indicates the total $m=1$ power from the top panel of Figure~7. So we are comparing the temporal origin of the individual $m=1$ peaks with the temporal origin of the combined $m=1$ power. Like Fourier transforms of continuous functions, these Fourier transforms of discrete functions have peaks whose widths, ${\Delta}{\omega}$, are inversely related to the lifetimes, ${\Delta}t$, of the corresponding temporal structures. In fact, for Gaussian profiles, ${\Delta}{\omega}{\Delta}t~{\sim}~8$ for full widths at $e^{-1}$ maximum and ${\Delta}{\omega}{\Delta}t~{\sim}~8\ln 2~{\approx}~5.54$ for full widths at half maximum. Consequently, the narrow $m=1$ peaks with ${\Delta}{\omega}~{\sim}~0.01~ \textrm{rad day}^{-1}$ in the left panel of Figure~5 correspond to long-lived features with ${\Delta}t~{\sim}~1-2~ \textrm{yrs}$. In the top panel of Figure~8, the blue curve refers to the power in the spectral `line' at ${\omega} = 0.231~ \textrm{rad day}^{-1}$ (corresponding to a period of approximately 27.2 days). This blue curve tends to follow the more rapidly fluctuating red curve, with appreciable contributions during each of the four sunspot cycles. 
Thus, most of the two-sector power originates in long-lived features that recur with a period of 27.2 days, and presumably corresponds to quasi-vertical patterns in the 27.27-day Carrington stackplots of mean-field observations. In the middle panel, the blue curve refers to power in the spectral line at ${\omega}=0.219~ \textrm{rad day}^{-1}$ (28.7 days). Most of the 28.7-day power originates in 1979-1985 and 1989-1993, coincident with large peaks of 27-day power. This overlapping trend did not continue into sunspot cycles 23 and 24 when the 28.7-day power was much smaller. This suggests that ${\sim}$28.5-day stackplot patterns may have been weaker or less frequent in cycles 23 and 24 than in cycles 21 and 22. In the bottom panel, the blue curve indicates power in the spectral line at ${\omega}=0.208~ \textrm{rad day}^{-1}$ (30.2 days). Most of this $\sim$30-day power occurred in 1989-1990, with lesser amounts during 1980 and 1982-1983, and only trace amounts in sunspot cycles 23 and 24. To summarize the results of Figure~8, the power depends on the rotation period with 27-day power coming from all four sunspot cycles (but with a relatively small contribution from the weakest sunspot cycle 24). The 28.5-day power originates mainly in sunspot cycles 21 and 22 with very small contributions from cycles 23 and 24. The 30-day power comes mainly from the year 1989 in cycle 22 and secondarily from small peaks in cycle 21. The lack of substantial 30-day power after 1990 provides another way to isolate the power during 1989-1990. We simply move the starting point of the Fourier transform backwards in time through the year 1989 and watch the height of the 30-day peak increase. Figure~9 shows a sample of the power spectra obtained by moving the starting time backward from 02 February 1990 (CR1826) in steps of 3 Carrington rotations (approximately 82 days) to 09 January 1989 (CR1811). 
During this sequence, the 30-day ($0.208~ \textrm{rad day}^{-1}$) peak emerges from a continuum level at about 0.5 G to a maximum height of about 1.0 G. Although not shown here, a movie with 1-rotation time resolution indicated that the 30-day peak emerged from the continuum at CR1824 (29 December 1989) and strengthened until it reached about 1.0 G at CR1811 (09 January 1989), corresponding to a lifetime of 13 rotations (about 355 days, just under 1 yr). Thus, the 30-day oscillation spanned the 1-year interval from 1989 to 1990. We can continue this approach by moving the starting point of the Fourier transform forward in time to successively exclude major contributions to the 30-day, 28.5-day, and 27-day periods. Referring to Figure~8, we select the first starting time on 09 January 1989 when substantial power remained in all three rotational periods. This is shown in the upper-left panel of Figure~10. Then, we move farther in time to 01 June 1991, which is well after the peak of 30-day power, but still includes power at 28.5 days and 27 days, as shown in the upper-right panel of Figure~10. Next, we choose 17 January 1996, which is after the large peak of 28.5-day power. However, 17 January 1996 is still before the occurrence of the large peak of 27-day power in 2003-2004, and this contribution is shown in the lower-left panel of Figure~10. Finally, we select 02 January 2005 to remove this large peak of 27-day power. The remaining 27-day power comes from a small peak in 2015-2016, as shown in the lower-right panel of Figure~10. This contribution seemed very important when it rejuvenated the large-scale field in sunspot cycle 24 \citep{2015ApJ...809..113S}. In the next section, we shall compare these observations with spatially resolved magnetograms and find that the peaks of 27-day power tend to occur toward the end of sunspot maximum when the sunspot belts are closer together and allow large unipolar magnetic regions to form at the equator. 
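The time-bandwidth products quoted above (${\Delta}{\omega}{\Delta}t~{\sim}~8$ at the $e^{-1}$ points and $8\ln 2$ at half maximum) are those of a Gaussian wave packet, and can be verified numerically. A sketch (Python with NumPy; the lifetime scale is hypothetical, and daily sampling is assumed as for the mean-field series):

```python
import numpy as np

tau = 100.0                                   # hypothetical lifetime scale (days)
t = np.arange(-32768, 32768, dtype=float)     # daily sampling
g = np.exp(-(t / tau)**2)                     # Gaussian envelope

# |FFT| of a sampled Gaussian closely follows exp(-omega^2 tau^2 / 4)
G = np.abs(np.fft.fftshift(np.fft.fft(g)))
omega = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(t.size, d=1.0))

def full_width(x, y, level):
    # full width of y at level * max(y), measured on the sampling grid
    idx = np.where(y >= level * y.max())[0]
    return x[idx[-1]] - x[idx[0]]

dt_e, dw_e = full_width(t, g, np.exp(-1)), full_width(omega, G, np.exp(-1))
dt_h, dw_h = full_width(t, g, 0.5), full_width(omega, G, 0.5)
print(round(dt_e * dw_e, 1))                  # ~8
print(round(dt_h * dw_h, 1))                  # ~5.5 (= 8 ln 2)
```

With ${\Delta}{\omega}~{\sim}~0.01~\textrm{rad day}^{-1}$, the relation gives ${\Delta}t~{\sim}~800$ days, consistent with the 1-2 yr lifetimes quoted for the narrow $m=1$ peaks.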
\subsubsection{Corresponding solar images} Next, we look for these magnetic patterns in spatially resolved solar observations. We begin with Figure~11, which shows Carrington maps of the photospheric magnetic field obtained at the National Solar Observatory (NSO)\footnote{https://nispdata.nso.edu/ftp/kpvt/synoptic/mag}. At NSO, each of these maps was divided by ${\mu}=\sin{\theta}\cos{\phi}$ to convert the observed line-of-sight component to a radial component, assuming that the fields are radial at the photosphere where they are measured. Thus, we regard these maps as displays of the radial component of photospheric field. This figure contains images from the start of sunspot maximum (left column) and the end of sunspot maximum (right column) in sunspot cycles 21 (top row), 22 (middle row), and 23 (bottom row). In the left panels, each sunspot cycle has progressed far enough that several large active regions have emerged to form activity belts with flux streaming poleward and eastward (leftward) from those belts. However, the sunspot cycles have not progressed far enough for flux to spread equatorward to fill in the wide gaps between the two belts, as has happened at the times of the images in the right panels. In fact, by the end of the sunspot-maximum era, the two activity belts have reached sufficiently low latitudes and narrow separations that the equatorward diffusion of flux dominates the poleward convection by meridional flow, causing flux to accumulate around the equator. We have encountered these phenomena before. First, in numerical simulations of the mean field, \cite{1986SoPh..104..425S} found that the 28-29-day recurrent patterns originated in flux that was migrating poleward from its sources in the sunspot belts. The dramatic eastward drift of these patterns is well known to many of us from viewing time-lapse movies of Carrington maps like the ones in Figure~11. 
Second, \cite{2015ApJ...809..113S} found that a juxtaposition of northern-hemisphere and southern-hemisphere active regions during the second half of 2014 created a large region of positive-polarity flux at the equator, which produced a major rejuvenation of the Sun's large-scale field. As shown in the top panel of Figure~8, the 27-day power in the two-sector component of the mean field reached a peak at this time. Also, \cite{2015ApJ...809..113S} noted that this rejuvenation of the large-scale field was not an isolated characteristic of sunspot cycle 24, and that similar enhancements of the equatorial dipole field in 1982, 1991, and 2003 have marked the end of the sunspot maximum era (or the start of the declining phase) of cycles 21, 22, and 23. Another reason for selecting the images in Figure~11 was to relate these flux distributions to the profiles of spectral power, especially for the two-sector ($m=1$) plots in Figure~8. Thus, the maps in the top row of Figure~11 are from 1979 and 1982, when there were peaks in the spectra for 27 days, 28.5 days and, to a lesser extent, 30 days. The maps in the middle row are from 1989, when the 30-day power reached its maximum, and from 1991, when the 27-day and 28.5-day power reached their maximum values. The map in the bottom-right panel was chosen because it is from 2003, when the 27-day power dominated the two-sector spectrum. As one can see, it shows a large two-sector pattern of equatorial flux with positive-polarity flux left of center and negative-polarity flux right of center. In choosing these images, I looked for strong poleward streams in each sunspot cycle. However, I did not always find them. Whereas the maps in the upper-left and middle-left panels show major streams, the map in the lower-left panel (CR1955, 11 October 1999) shows relatively weak streams, despite the fact that it was at the same phase of the sunspot cycle. 
Those 1999 streams are only slightly more impressive than the weak streams in 1982, 1991, and 2003 in the right column. This favoring of sunspot cycles 21 and 22 over cycle 23 is consistent with the power spectra in Figure~8, which show more 28.5-day and 30-day power during cycles 21 and 22 than during cycle 23. If the 28.5-day power originates in poleward migrating flux from large active regions, as previously reported \citep{1986SoPh..104..425S}, then it seems plausible that the much rarer 30-day power in 1989 is a statistical fluctuation caused by the emergence of an especially large, high-latitude active region at that time. The strong, northern-hemisphere stream in the middle-left panel of Figure~11 originated in such an active region. The right panel of Figure~12 shows the evolution of this region during CR1811-1818 (09 January 1989 - 19 July 1989) when sunspot cycle 22 reached the start of its 3-4 years of high sunspot activity\footnote{J.W. Harvey recently reminded me that during CR1813, this region (5395) was the source of many X-ray flares and coronal mass ejections (CMEs), two of which were responsible for the blackout of the Hydro-Qu\'ebec power system on 13 March 1989 \citep{2019SpWea..17.1427B}.}. Each image is the northern-hemisphere part of a Carrington map that has been cropped at the equator. Thus, longitude runs left to right from 0$^{\circ}$ to 360$^{\circ}$ and sine latitude runs bottom to top from 0 to +1. The faint, yellow lines provide a reference drift for a 30.2-day rotation, corresponding to the frequency ${\omega}$ = 0.208 $\textrm{rad~day}^{-1}$ that we obtained from the power spectra in Figures~4 and 5. For comparison, the left panel of Figure~12 shows the evolution of a much smaller region during CR1788-1795 (22 April - 30 October 1987) at the start of cycle 22. Both regions emerged at latitudes ${\sim}$35$^{\circ}$, and their fluxes evolved into similar patterns at comparable speeds. 
(An interesting, and probably coincidental, similarity is that both streams were replenished by flux from a second active region that emerged 3-4 rotations later toward the left side of each panel.) The major difference is that the 1989 source was larger and stronger than the 1987 source. Presumably, this difference in strength was responsible for the difference in spectral power shown in Figure~8. The 1989 stream coincided with a large peak of 30-day spectral power, whereas the 1987 stream did not. At the ${\sim}$35$^{\circ}$ latitude of these two active regions, the rotation period of small-scale magnetic tracers is about 29 days \citep{1983ApJ...270..288S}. However, within a few Carrington rotations, these streams of trailing, positive-polarity flux had drifted poleward into the 45$^{\circ}$ latitude range where the rotation period is 30 days. One can confirm this by measuring the vertical locations of the streams in these images, or by referring to the faint yellow lines, which provide a 30.2-day reference drift. These lines track the trailing streams of positive-polarity flux fairly well until the tails of the streams reach latitudes above 45$^{\circ}$ and also begin to merge with flux from other active regions. Multi-latitude stackplots of the WSO-derived source-surface field support the identification of the 1989 pattern as the source of the 30-day power \citep{1994JGR....99.6597W}. Calculated for 1977 - 1993, these stackplots show a rare, 30-day, two-sector recurrence pattern over a wide range of latitudes from 20$^{\circ}$N to 80$^{\circ}$N during 1989. It is the most prominent 30-day pattern during that 16-year interval. In addition, its positive-polarity sector moves from right to left across the Carrington frame during 1989, so that the phase of this two-sector pattern also agrees with that of the field in Figure~12. 
The `1989 pattern' continued its poleward and eastward migration well after the end of 1989 when the 30-day power ended, according to Figure~9. By CR1830 (11 June 1990), the trailing end of the stream extended to about 68$^{\circ}$ latitude where the \cite{1983ApJ...270..288S} rotation period is 35.3 days. However, the rotation period of the mean field did not increase beyond 30 days. Thus, for latitudes above 45$^{\circ}$, even this relatively strong field was too weak to overcome the ${\mu}^{2}$-dependence of the mean-field integral in Eq(3) and, as we shall see in the Appendix, an extra factor of $\mu$ due to limb darkening. In the next section, we shall see that this result provides a clue for understanding some puzzling associations between the axisymmetric component of the mean field and the Sun's polar magnetic fields. \subsubsection{Power in the Axisymmetric Component of the Sun's Mean Field} To display the non-axisymmetric part of the mean field, $B_{m}$, we used the 27-day running mean of the standard deviation, defined by $B_{m}^{rms}=(<B_{m}^2>-<B_{m}>^2)^{1/2}$, where the brackets refer to averages over a 27-day moving window. In this subsection, we are interested in $<B_{m}>$, the 27-day moving average of the mean field (\textit{i.e.} the axisymmetric component that we squared and subtracted from $<B_{m}^{2}>$ to get that non-axisymmetric component). Another way to compare the axisymmetric and non-axisymmetric components of $B_{m}$ is to express Eq(7) in terms of real variables and separate the $m=0$ and $m{\neq}0$ terms. To first order in $B_{0}$, we obtain \begin{equation} B_{m}~{\approx}~\left \{ \sum_{l=1}^{\infty} {\rho}_{l0} I_{l0}~+~2 \sum_{l=1}^{\infty} \sum_{m=1}^{l} {\rho}_{lm} I_{lm} \cos{\delta_{lm}} \right \} ~+~B_{0} \left \{\sum_{l=1}^{\infty} {\rho}_{l0} J_{l0}~+~2 \sum_{l=1}^{\infty} \sum_{m=1}^{l} {\rho}_{lm} J_{lm} \cos{\delta_{lm}} \right \}. 
\end{equation} Averaging over ${\delta}_{lm}$ (which we assume varies linearly with time, $t$, or more precisely with $mt$), we get \begin{equation} B_{m}^{rms}~{\approx}~\sqrt{2}~\sqrt{ {\rho^{2}_{11}} I_{11}^2~+~{\rho^{2}_{22}} I_{22}^2~+~{\rho^{2}_{33}} I_{33}^2~+~{\rho^{2}_{31}} I_{31}^2 } \end{equation} for the root-mean-square deviation of $B_{m}$ from its average value, and \begin{equation} <B_{m}>~{\approx}~{\rho}_{20}I_{20}~+~B_{0} \left \{{\rho}_{10} J_{10}~+~ {\rho}_{30}J_{30} \right \} \end{equation} for the 27-day moving average of $B_{m}$. In Eq(16), we have omitted $J_{lm}$-dependent terms because they occur with the second-order factor, $B_{0}^2$, and we have omitted terms with $I_{21}$ and $I_{32}$, which vanish, as indicated in Table~1. Likewise, in Eq(17), we have omitted terms with $I_{10}$, $I_{30}$, and $J_{20}$, which also vanish as indicated in Tables~1 and 2. Thus, $B_{m}^{rms}$ has non-axisymmetric ($m~{\neq}~0$) contributions from $Y_{1}^{1}$, $Y_{2}^{2}$, $Y_{3}^{3}$, and $Y_{3}^{1}$ for which $I_{11}=+0.173$, $I_{22}=+0.103$, $I_{33}=+0.035$, and $I_{31}=-0.027$, as obtained from Table~1. In that case, we expect the non-axisymmetric component of $B_{m}$ to be dominated by contributions from $Y_{1}^{1}$ and $Y_{2}^{2}$ with much smaller contributions from $Y_{3}^{3}$ and $Y_{3}^{1}$. The axisymmetric quantity, $<B_{m}>$, has contributions from $Y_{2}^{0}$ with $I_{20}=-0.084$, plus $B_{0}$-dependent contributions from $Y_{1}^{0}$ and $Y_{3}^{0}$, with the somewhat larger values of $J_{10}=+0.244$ and $J_{30}=-0.093$. However, the factor of $B_{0}$ reduces these contributions and modulates them with a 1-year period. When $B_{0}$ has its maximum value of $7.25^{\circ}$ (0.1265 rad), $B_{0}J_{10}$ is only 0.031 (37\% of $I_{20}$) and $B_{0}J_{30}$ has the even smaller value of -0.0118 (38\% of $B_{0}J_{10}$). 
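As an illustrative cross-check of Eq(16) (not part of the WSO analysis), one can synthesize a mean-field signal from the Table~1 transmissions with hypothetical amplitudes ${\rho}_{lm}$ and arbitrary phases ${\delta}_{lm}$, apply the 27-day running rms defined above, and compare the result with the $\sqrt{2}\,(\sum{\rho}^{2}I^{2})^{1/2}$ prediction. The small $Y_{3}^{1}$ term is omitted here because it shares the $m=1$ frequency with $Y_{1}^{1}$:

```python
import math

# Transmissions I_lm from Table 1; the amplitudes rho_lm and the phases are
# hypothetical, chosen only for illustration.
modes = [  # (rho_lm, I_lm, synodic period in days)
    (1.0, 0.173, 27.0),   # Y_1^1
    (0.8, 0.103, 13.5),   # Y_2^2  (m = 2 harmonic of the 27-day rotation)
    (0.5, 0.035, 9.0),    # Y_3^3  (m = 3 harmonic)
]
phases = [0.7, 2.1, 4.0]  # arbitrary fixed delta_lm

def b_m(t):
    """Non-axisymmetric part of the mean field, as in Eq(15)."""
    return sum(2.0 * rho * I * math.cos(2.0 * math.pi * t / P + d)
               for (rho, I, P), d in zip(modes, phases))

# 27-day running rms, B_m^rms = (<B_m^2> - <B_m>^2)^(1/2), over one window
window = [b_m(t) for t in range(27)]
mean = sum(window) / 27.0
rms = math.sqrt(sum((x - mean) ** 2 for x in window) / 27.0)

# Eq(16) prediction: sqrt(2) * sqrt(sum of rho^2 I^2)
rms_eq16 = math.sqrt(2.0) * math.sqrt(sum((rho * I) ** 2 for rho, I, _ in modes))
print(round(rms, 4), round(rms_eq16, 4))  # both ~0.2721
```

Because the three periods divide 27 days evenly, the agreement is exact in this toy case; for incommensurate periods and a sliding window, the two quantities agree after averaging over the phases, which is the assumption behind Eq(16).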
Thus, in rough terms, we can regard $Y_{2}^{0}$, $Y_{1}^{0}$, and $Y_{3}^{0}$ as making three monotonically decreasing contributions to the axisymmetric part of $B_{m}$, in which each contribution is about 40\% of the previous one. Now, let us see how well these terms fit the WSO mean-field measurements. The top panel of Figure~13 contains a plot of $<B_{m}>$ (red) and the monthly averaged sunspot number from the Royal Observatory of Belgium (SILSO) (blue) during sunspot cycles 21 - 24. The sunspot number has been divided by 2000 and shifted downward by 0.55 units on the vertical scale. At first glance, the red curve seems to be a meandering collection of relatively uninteresting noisy wiggles. However, closer inspection reveals a rough trend for the signal to be negative during the early years of sunspot maximum in sunspot cycles 22 and 24 and positive during the declining phase of those cycles. The polarities are reversed during the odd-numbered cycles 21 and 23. In addition, there are well-defined annual variations during the 1976, 1986, and 2019 sunspot minima, which are presumably the most visible indications of the $B_{0}$-dependence of the axisymmetric component of the mean field as Earth orbits the Sun during the year. These $B_{0}$-induced variations are removed in the bottom panel of Figure~13, whose purple curve is the 365-day running mean of the $<B_{m}>$ values associated with the red curve in the upper panel. We suppose that this purple curve indicates the $Y_{2}^{0}$ contribution given by ${\rho}_{20}I_{20}=-0.084{\rho}_{20}$ in Eq(17)\footnote{Note that $Y^{0}_{2}$ has the sign of its polar region, which is negative when its equatorial region is positive, according to the definition $P_{2}(\cos{\theta})=(1/2)(3\cos^{2}{\theta}-1)$. This accounts for the negative sign of $I_{20}$ in Table 1.}. 
Thus, the mean-field filter reduces the contribution of the $Y_{2}^{0}$ component by a factor of about 12 ($|I_{20}|~{\approx}~1/12$), and the resulting signal has a negative value corresponding to the sign of the equatorial part of a positively directed quadrupole. The temporal profile of the purple curve in Figure~13 is similar to the profile of the $Y_{2}^{0}$ component calculated from spatially resolved observations at both WSO and MWO (but not shown here). The main difference occurred during 1992-1996 when the spatially resolved measurements gave a much larger positive field than the mean field in Figure~13. This stronger field would have strengthened the rough, alternating-polarity rule noted from the red curve in the top panel of Figure~13. As pointed out previously by \cite{2011ApJ...736..136W} and \cite{2012ApJ...755..135R}, this polarity rule reflects the tendency of equatorial flux to originate from the leading parts of active regions together with the greater activity in the southern hemisphere than in the northern hemisphere during the declining phases of sunspot cycles 21-24. Likewise, the northern hemispheres were more active during the rising phases of those cycles. However, \cite{2011ApJ...736..136W} also found that this alternating-polarity rule broke down during the very high activity of sunspot cycle 19 when the northern hemisphere tended to be more active than the southern hemisphere throughout the cycle. (See also Figure~1 of \cite{1977ApJS...33..391W} who found no systematic variations during 1874-1971\footnote{The reader may have to go to the hardcopy edition of this paper because the figure was a large foldout that was not scanned into the online edition.}.) The annually varying part of the axisymmetric field is shown by the purple curve in Figure~14. 
\noindent We obtained this curve by subtracting the 1-year averaged field (purple curve in the bottom panel of Figure~13) from the total axisymmetric field (red curve in the top panel of Figure~13), and then taking a 75-day running average to remove the noise. Referring to Eq(17), we expect this $B_{0}$-induced field to be $B_{0}({\rho}_{10}J_{10}+{\rho}_{30}J_{30})$, where $B_{0}$ is given by \begin{equation} B_{0}(t)~{\approx}~0.126\sin \left \{ 2{\pi} \left ( \frac{t-157}{365} \right ) \right \} \end{equation} whose value vanishes on day-of-year 157 (June 6) and reaches its maximum of 0.126 rad (7.25$^{\circ}$) on day-of-year 249 (September 6), and where $J_{10}=+0.244$ and $J_{30}=-0.093$, as given in Table~2. The vertical meandering of the $Y_{2}^{0}$ component that was present in the top panel of Figure~13 is clearly gone. The annual variation is strongly visible around the 1976, 1986, and 2019 sunspot minima, but only weakly visible around the 1997 and 2009 minima. Stronger non-periodic bursts occur during the intervening sunspot-maximum intervals. Faint dotted lines have been drawn at the even-numbered years and the panels have been enlarged to help show the phase of the annual variation. For example, during 1976 and 1977, the positive peaks occurred in the fall and the negative peaks occurred in the spring, as expected for a positive axisymmetric field. In contrast, during 1985-1987, the sharp negative peaks occurred in the fall and the blunted positive peaks occurred in the spring, consistent with a negative axisymmetric field. This means that the $B_{0}$-induced axisymmetric component of the mean field changed its sign from plus to minus in going from sunspot cycle 21 to 22, in agreement with the signs of the axisymmetric dipole and the Sun's polar magnetic field. The amplitude of the $B_{0}$-induced component was much weaker during the 1997 and 2009 solar minima, but the alternation of signs was still detectable. 
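A minimal numerical sketch of Eq(18) (illustrative only; `day_of_year` is assumed to count from January 1, and $J_{10}$, $J_{30}$ are taken from Table~2):

```python
import math

def b0(day_of_year):
    """Heliographic latitude of the disk center in radians, per Eq(18)."""
    return 0.126 * math.sin(2.0 * math.pi * (day_of_year - 157) / 365.0)

print(b0(157))                           # 0.0 at the June 6 crossing
print(round(b0(157 + 365 / 4), 3))       # 0.126 rad maximum in early September
print(round(b0(157 + 3 * 365 / 4), 3))   # -0.126 rad minimum in early March

# Peak sizes of the two B0-induced terms in Eq(17), per unit rho_lm:
print(round(0.126 * 0.244, 4), round(0.126 * (-0.093), 4))  # ~0.0307 and ~-0.0117
```

These peak values are consistent with the 0.031 and -0.0118 quoted earlier in the text, which used the slightly larger amplitude 0.1265 rad.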
Then in 2019, the field was strong again, and its sign was positive as expected for the continued 11-yr alternation of polarity. Thus, during the past 5 sunspot minima from 1976 to 2019, the $B_{0}$-induced component of the mean field reversed its polarity in phase with the polarity of the Sun's polar magnetic field and the Sun's axisymmetric dipole component. Figures~15 and 16 provide more graphic displays of this phase alignment. In Figure~15, plots of the WSO polar field strengths are superimposed on the $B_{0}$-induced part of the axisymmetric mean field (shown in purple again). To obtain the best overall agreement with the envelope of the mean field, I reduced the north and south polar field strengths by factors of 20 and 30, respectively, before plotting them. Now, the envelopes agree fairly well in the years around sunspot minimum, but not around sunspot maximum when the polar fields were reversing and the mean field had several large bursts. A detailed inspection of these overlapping curves shows that the annual variations of the polar fields are in sync with the annual variations of the $B_{0}$-induced field. The north polar field and its mean-field ripple reached their greatest absolute magnitudes in the fall of each year, and the south polar field and its corresponding mean-field ripple reached their greatest magnitudes in the spring. Figure~16 provides another comparison between the $B_{0}$-induced mean field and the polar fields. In this case, the weighted values of the north and south polar fields are added to form a single oscillating blue curve similar to the overlapping purple plot of the mean field variations. Also, the weighting is changed slightly so that the contribution of the north pole is increased by a factor of $18/13~{\approx}~1.4$, instead of the factor of 30/20 = 1.50 that was used in Figure~15. 
The overall trend is essentially the same as we found in Figure~15 with consistent agreement in phase and fair agreement in magnitude, except in the years around sunspot maximum when the polar fields changed sign and the mean field had several large purple spikes. These graphical comparisons between the $B_{0}$-induced component of the mean field and the amplitude of the polar field clearly indicate that these two fields are in phase and that they reverse their polarities together from one sunspot cycle to the next. We might expect this phase synchronization because both of these axisymmetric fields depend strongly on $Y_{1}^{0}$, which reverses the polarity of its contribution from one sunspot cycle to the next. However, the $B_{0}$-induced field and the polar field have differences that might cause their amplitudes to differ. The $Y_{3}^{0}$ component contributes to both fields, but with opposite signs. We have seen in Eq(17), that $Y_{3}^{0}$ makes a negative contribution to the $B_{0}$-induced mean field because the coefficient, $J_{30}$, is negative. On the other hand, the topknot character of the polar field requires a weighted sum of the $Y_{1}^{0}$ and $Y_{3}^{0}$ contributions (and smaller contributions from $Y_{5}^{0}$ and $Y_{7}^{0}$) in order to strengthen the field at the poles and weaken the field around the equator \citep{1978SoPh...58..225S,1984SoPh...92....1D,1989SoPh..124....1S,1989SoPh..119..323S}. Also, the north and south polar fields often differ in strength due to symmetry-breaking contributions from harmonic components of even $l$, especially the $Y_{2}^{0}$ component. As shown in the second column of Table~2, these even-order components do not contribute to the $B_{0}$-induced axisymmetric component of the mean field. This explains why we had to equalize the strengths of the polar fields in order to bring their plots into better agreement with the envelope of the $B_{0}$-induced axisymmetric field in Figures~15 and 16. 
\section{Summary and Discussion} In this paper, I have regarded the Sun as an unresolved source of light like that from a distant star, and calculated the transmission that this `mean-field filter' would have for the spherical harmonic components, $Y_{l}^{m}$, of the field. This transmission depends on the mode numbers, $l$ and $m$, and $B_{0}$, the observer's latitude in the star's polar coordinate system. The transmissions fell into three separate classes, proportional to $\cos^{2}B_{0}$, $(\sin2B_{0})/2$, and $\sin^{2}B_{0}$, as given in Tables 1-3, respectively, and described in Eqs(5) and (7). For the Sun, $B_{0}$ varies between $-7.^{\circ}25$ and $+7.^{\circ}25$ during the year, so that $\sin^{2}B_{0}$ is small and Table 3 can be neglected. In that case, we expect the axisymmetric part of the mean field to originate from the $Y_{2}^{0}$ component and the first few harmonic components of odd $l$ (mainly $Y_{1}^{0}$ and $Y_{3}^{0}$), which are related to the polar fields and are coupled to the mean field \textit{via} $B_{0}$. On the other hand, we expect the non-axisymmetric part of the mean field to originate from the $Y_{1}^{1}$ and $Y_{2}^{2}$ components with occasional small contributions from the $Y_{3}^{3}$ and $Y_{3}^{1}$ components. For our observations of the Sun, $B_{0}$ is small and the contribution of the $Y_{2}^{0}$ component comes from its equatorial band, which is negative for a positively defined $Y_{2}^{0}$ component. This is why the sign of $I_{20}$ is negative in Table~1. On the other hand, for a distant star observed nearly head-on, $B_{0}$ would be ${\sim}90^{\circ}$, and the $Y_{2}^{0}$ contribution would come from one of its two polar regions of positive polarity. This is why the sign of $K_{20}$ is positive in Table~3. I applied these ideas to the 46-year series of WSO daily measurements of the Sun's line-of-sight mean field, first plotting the 27-day running average of the rms deviation of the mean field from its average value. 
This rms variation provided a measure of the power in the non-axisymmetric components of the field, and showed peaks already familiar to us from plots of the occurrence of coronal inflows, of the Sun's open flux and interplanetary magnetic field, of the Sun's equatorial dipole and quadrupole field, of the non-axisymmetric component of the Sun's source-surface magnetic field, and of the mean field represented by a running 27-day average of its maximum-minus-minimum values \citep{2014ApJ...797...10S, 2015ApJ...809..113S}. In retrospect, this similarity of plots should be no surprise because they all show peaks corresponding to the envelope of the non-axisymmetric component of the Sun's large-scale field as it is modulated by solar rotation, analogous to the audio component of a high-frequency radio wave. The second step was to take the Fourier transform of the mean-field measurements and display power as a function of frequency. The result was a series of peaks at frequencies corresponding to the 27-day synodic solar rotation period and its first two harmonics. In addition, each peak showed fine structure at slightly longer periods, corresponding to the Sun's rotation at higher latitudes. By inverting this Fourier transform using broad windows around these three peaks, I obtained a relatively noise-free version of the non-axisymmetric mean field. In addition, temporal plots of the $m=1$, $m=2$, and $m=3$ components of the non-axisymmetric field were obtained by selecting each of the three windows separately. By comparing these plots with the temporal plot of their sum, we learned that nearly all of the large peaks were provided by the combination of $m=1$ and $m=2$ plus a few residual contributions of $m=3$. In other words, the mean field is dominated by the $Y_{1}^{1}$ and $Y_{2}^{2}$ components of the field, consistent with our analysis of Eq(16). 
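The window-and-invert filtering described above can be illustrated with a toy series (synthetic data, not the WSO measurements; a slow pure-Python DFT stands in for the FFT used on real data, and the window limits are arbitrary choices):

```python
import cmath, math

def bandpass(x, dt, p_lo, p_hi):
    """Zero all Fourier components whose periods (in days) fall outside
    [p_lo, p_hi], then invert the transform.  O(n^2) DFT, for illustration."""
    n = len(x)
    X = [sum(x[k] * cmath.exp(-2j * math.pi * j * k / n) for k in range(n))
         for j in range(n)]
    for j in range(n):
        f = min(j, n - j) / (n * dt)              # two-sided frequency, cycles/day
        period = 1.0 / f if f > 0 else float("inf")
        if not (p_lo <= period <= p_hi):
            X[j] = 0.0
    return [sum(X[j] * cmath.exp(2j * math.pi * j * k / n) for j in range(n)).real / n
            for k in range(n)]

# A 27-day signal plus a 9-day (m = 3) harmonic; a 24-30 day window keeps
# only the 27-day component.
n, dt = 270, 1.0
x = [math.cos(2 * math.pi * k / 27) + 0.5 * math.cos(2 * math.pi * k / 9)
     for k in range(n)]
y = bandpass(x, dt, 24.0, 30.0)
err = max(abs(y[k] - math.cos(2 * math.pi * k / 27)) for k in range(n))
print(err < 1e-9)  # True: the 9-day harmonic has been removed
```

Selecting the three windows separately, as done for the $m=1$, $m=2$, and $m=3$ components above, amounts to calling such a band-pass once per window.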
The third step was to find the source of the fine structure in the power spectrum, and in particular of the peak whose frequency corresponded to a period of ${\sim}$30 days. This narrow peak was particularly interesting because it had no counterpart in the interplanetary sector structure inferred from Earth-based magnetometer measurements. Those in-ecliptic measurements often show 28.5-day recurrence patterns, but never 30-day patterns. By inverting the Fourier transform of the mean field through a narrow window around the 30-day peak, I found that most of the power originated in 1989-1990 when photospheric magnetograms showed elongated patterns of magnetic fields migrating to high latitudes in the northern hemisphere. This reminds us that the mean field is sampling large-scale magnetic patterns, which gradually rotate rigidly as supergranular diffusion and meridional flow carry their flux across latitudes \citep{1987ApJ...319..481S,1987SoPh..112...17D,1998ASPC..154..131W}. Thus, mean-field measurements give the pattern rate, which depends on the meridional transport parameters as well as the rate of differential rotation, and not just the rotation rate itself. We are still left with the interesting quantitative question of why the 1989-1990 fields gave a rotation period of ${\sim}$30 days, rather than a longer period associated with the high-latitude tail of the migrating stream or the shorter, ${\sim}$28.5-day period found for so many other migrating streams. If the answer lies in the strength of the active region that emerged at 35$^{\circ}$N latitude in 1989, then we might ask if similar (or even stronger) active regions may have emerged at high latitudes in previous sunspot cycles, like cycles 18 and 19, that were more active than cycles 21 - 24. 
A large northern-hemisphere pattern was visible in Ca II K-line maps and Fe I 5250 \AA~ magnetograms obtained at the Mount Wilson Observatory during Carrington rotations 1417 (8 August - 4 September, 1959) and 1419 (2-29 October 1959) \citep{2011ApJ...730...51S}. Another occurred in the southern hemisphere during rotations 1259 (21 October - 17 November 1947) and 1261 (14 December 1947 - 11 January 1948). Perhaps those migrating fields would have produced $m=1$ sidebands with rotation periods of at least 30 days in the power spectrum of the mean field. The fourth step was to look at the axisymmetric component of the mean field. This component consisted of the $Y_{2}^{0}$ component plus the strongest $B_{0}$-induced components of odd $l$, mainly $Y_{1}^{0}$ and $Y_{3}^{0}$. Because $B_{0}$ varies annually due to Earth's orbital motion around the Sun, it was possible to remove the $B_{0}$-induced term, and display the annually averaged $Y_{2}^{0}$ component separately. Although not shown here, its temporal profile was similar to those obtained from spatially resolved observations at both WSO and MWO, except for the interval 1992-1996 when the spatially resolved measurements gave a much larger positive field. Resolving this discrepancy will be a challenge for the future. Once the $Y_{2}^{0}$ component was found, it was then possible to extract the $B_{0}$-induced part of the axisymmetric field. Its annual variation was in phase with the annual variation of the polar magnetic fields. This means that the $B_{0}$-induced field was oriented in the same direction as the polar field, reversing its direction from one sunspot cycle to the next. A remaining puzzle is why the annual variations were strong around the 2019 sunspot minimum when the polar fields were weak, and why the annual variations were weak around the 1997 minimum when the polar fields were stronger. 
Finally, we note that this approach can be used to find spherical harmonic components of the fields in other stars. For example, the $B_{0}$-dependence of the strengths of these harmonic components may complement asteroseismology determinations of the orientations of the rotational axes of these stars \citep{2003ApJ...589.1009G}. Also, the appearance of low-frequency sidebands in the WSO spectra suggests that similar sidebands may occur in observations of other stars, providing information about differential rotation and meridional flow in those stars. This information may help to remove ambiguities in the inferences of large scale convection and magnetic cycles in asteroseismology studies of Sun-like stars. \begin{acknowledgments} I am grateful to Phil Scherrer (WSO/Stanford) and Todd Hoeksema (WSO/Stanford) for helpful comments about the WSO telescope and its data, and Yi-Ming Wang (NRL) for numerous discussions of the Sun's large-scale magnetic field. I am grateful to Jack Harvey (NSO/LPL/UA) for helping me find archived Carrington maps of observations obtained with the Kitt Peak Vacuum Telescope. This work evolved from a talk that I gave at an LPL/UA Heliophysics Group Zoom session where Joe Giacalone, Jack Harvey, and John Leibacher provided useful comments and ideas. Wilcox Solar Observatory data used in this study were obtained \textit{via} the web site http://wso.stanford.edu courtesy of J.T. Hoeksema. NSO data were acquired by SOLIS instruments operated by NISP/NSO/AURA/NSF. Sunspot numbers were obtained from WDC-SILSO, Royal Observatory of Belgium, Brussels. I am grateful to the referee for several helpful comments, including one about limb darkening which motivated the calculations in the Appendix. \end{acknowledgments} \appendix \section{Limb Darkening} In Section 2, we converted the integral of the line-of-sight field over the (flat) solar disk to a surface integral of the radial field over the visible hemisphere. 
As shown in Eq(3), this conversion introduced two factors of ${\mu}$ into the integrand, causing the surface integral of $B_{r}$ to be weighted toward the disk center by a factor of ${\mu}^2$. However, we did not include the natural weighting that is produced by the limb darkening of the Sun and stars in the visual region of the spectrum. This limb darkening occurs because light from disk center originates in deeper, hotter, and brighter layers than light from positions toward the limb. As described by \cite{1977SoPh...52D...6S}, the central weighting of the WSO measurements is mainly due to this natural limb darkening plus a contribution of diffraction from the entrance slit of the spectrograph. It is relatively easy to include limb darkening in our equations for the mean line-of-sight magnetic field. We simply insert the limb darkening intensity profile, $I({\mu})/I(0)$, next to the ${\mu}^2$ in the integrand of Eq (3). With the definition $F({\mu})=I({\mu})/I(0)$, Eq(3) becomes \begin{equation} B_{m}~=~\int{B_{r}}{\mu}^{2}F({\mu})dA_{surf}/{\pi}R^2. \end{equation} Then, we replace ${\mu}$ by $\sin{\theta}\cos{\phi}\cos{B_{0}}+\cos{\theta}\sin{B_{0}}$, and perform the integration over the variables ${\theta}$ and ${\phi}$ as indicated in Eqs(4) and (5). If $F({\mu})$ is a simple first- or second-order polynomial in ${\mu}$, one can do the integration analytically, but if the profile is more complicated, then it might be more convenient to integrate numerically. Let's begin by considering some of the limb darkening relations that are used to describe the Sun and stars. Classically, the natural limb darkening has been described by a linear function of ${\mu}$ of the form \begin{equation} \frac{I({\mu})}{I(0)}~=~1-{\gamma}(1-{\mu})=(1-{\gamma})+{\gamma}{\mu}. \end{equation} Here, ${\gamma}$ is a wavelength-dependent limb-darkening coefficient that indicates how dark the limb is relative to the disk center at that wavelength. 
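As an illustrative check (the quadrature size n is an arbitrary choice; this is not how the tabulated values were generated, which used Mathematica), one can evaluate Eq(A1) at $B_{0}=0$ for the $Y_{1}^{1}$ harmonic with the linear law of Eq(A2) by direct numerical integration over the visible hemisphere:

```python
import math

def transmission_y11(gamma=0.6, n=300):
    """Mean-field transmission of a unit-amplitude Y_1^1 harmonic at B0 = 0,
    per Eq(A1), with the linear limb-darkening law F(mu) = (1-gamma) + gamma*mu
    of Eq(A2).  Midpoint quadrature over theta in (0, pi), phi in (-pi/2, pi/2)."""
    n11 = math.sqrt(3.0 / (8.0 * math.pi))   # spherical-harmonic normalization
    total = 0.0
    for i in range(n):
        th = (i + 0.5) * math.pi / n
        s = math.sin(th)
        for j in range(n):
            ph = -math.pi / 2 + (j + 0.5) * math.pi / n
            mu = s * math.cos(ph)            # line-of-sight factor at B0 = 0
            y11 = n11 * s * math.cos(ph)     # sign per the Jackson convention
            total += y11 * mu * mu * ((1 - gamma) + gamma * mu) * s
    return total * (math.pi / n) ** 2 / math.pi

t_plain = transmission_y11(gamma=0.0)  # no darkening: the I_11 entry of Table 1
t_dark = transmission_y11()            # gray-atmosphere value, gamma = 0.6
print(round(t_plain, 3), round(t_dark, 3))  # 0.173 and 0.152
```

The darkened value, 0.152, is the $l=1$, $m=1$ entry of Table 4, and the ratio $0.152/0.173$ reproduces the ${\sim}$12\% reduction discussed at the end of this Appendix.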
For the so-called gray atmosphere in the Eddington approximation \citep{1990soas.book.....F,1906WisGo.195...41S}, ${\gamma}=0.6$ and $I({\mu})/I(0)=0.4+0.6{\mu}$. According to \cite{2018A&A...616A..39M}, \citeauthor{1950HarCi.454....1K}'s \citeyearpar{1950HarCi.454....1K} quadratic limb darkening relation of the form \begin{equation} \frac{I({\mu})}{I(0)}~=~1-c_{1}(1-{\mu})-c_{2}(1-{\mu})^2 \end{equation} is the most commonly used profile in modern exoplanet studies. Stellar limb darkening is important in these studies because it affects the light curve produced by the transiting exoplanet and therefore the accuracy with which the exoplanetary radius can be determined. Consequently, other profiles are sometimes used to match the high-precision light curves of transiting exoplanet systems \citep{2007ApJ...655..564K,2015MNRAS.450.1879E}. In particular, \cite{2018A&A...616A..39M} considered another two-parameter limb-darkening relation of the form \begin{equation} \frac{I({\mu})}{I(0)}~=~1-c(1-{\mu}^{\alpha}). \end{equation} Although any of these limb darkening relations can be substituted into Eq(A1) above, I will illustrate the procedure using the linear relation given by Eq(A2). In this case, $B_{m}$ can be obtained from proportionate parts of Eq(4) and of the modified version of Eq(4) when an extra factor of ${\mu}$ is included in its integrand. This latter quantity, which I call $B_{\mu}$ to distinguish it from $B_{m}$, is given by \begin{equation} B_{\mu}~=~(1/{\pi})\int_{-{\pi}/2}^{{\pi}/2}\int_{0}^{\pi}{B_{r}}(\sin{\theta}\cos{\phi}\cos{B_{0}}+\cos{\theta}\sin{B_{0}})^{3}\sin{\theta}d{\theta}d{\phi}. \end{equation} The third-power expansion of this binomial expression for ${\mu}$ gives four terms with factors of $\cos^{3}{B_{0}}$, $3\cos^{2}{B_{0}}\sin{B_{0}}$, $3\cos{B_{0}}\sin^{2}{B_{0}}$, and $\sin^{3}{B_{0}}$, respectively, instead of the three terms that we obtained from the second-power expansion of that binomial in Eq(4). 
It would be easy to evaluate all four integrals and put the results in tables as we did in the main text. However, to estimate the effect of limb darkening on the mean field of the Sun, it is sufficient to expand the $B_{0}$-dependent factors in powers of $B_{0}$ and retain only the zeroth-order and first-order terms. This means that we need to keep only the $\cos^{3}{B_{0}}$ and $3\cos^{2}{B_{0}}\sin{B_{0}}$ factors, which reduce to 1 and $3B_{0}$, respectively. Then, the limb-darkened mean field becomes \begin{equation} B_{m}~=~(1-{\gamma})(I_{lm}+B_{0}J_{lm})~+~{\gamma}(I_{lm}^{\mu}+B_{0}J_{lm}^{\mu}), \end{equation} where the terms that originate from Eq(A5) are given by \begin{subequations} \begin{align} I_{lm}^{\mu}~=~\frac{N_{lm}}{{\pi}} \int_{-1}^{1}P_{l}^{m}(x)(1-x^2)^{3/2}dx \left [ \frac{12 \cos{(m{\pi}/2)}}{(m+1)(m-1)(m+3)(m-3)} \right ],\\ J_{lm}^{\mu}~=~3\frac{N_{lm}}{{\pi}} \int_{-1}^{1}P_{l}^{m}(x)(1-x^2)xdx \left [ \frac{-4 \sin{(m{\pi}/2)}}{m(m+2)(m-2)} \right ]. \end{align} \end{subequations} Next, we rearrange Eq(A6) as a power series in $B_{0}$ to obtain \begin{equation} B_{m}~=~\{(1-{\gamma})I_{lm}+{\gamma}I_{lm}^{\mu}\}~+~B_{0} \{(1-{\gamma})J_{lm}+{\gamma}J_{lm}^{\mu}\}. \end{equation} I used Mathematica to evaluate $I_{lm}^{\mu}$ and $J_{lm}^{\mu}$ for $l$ and $m$ in the range $0-7$, again reversing the signs of the odd-m entries to be consistent with the \cite{JE_45}-convention. Then I set ${\gamma}=0.6$, and combined these results with the values of $I_{lm}$ and $J_{lm}$ given in Tables 1 and 2 to obtain the limb-darkened elements for the gray atmosphere in the Eddington approximation. The results are given in Tables 4 and 5. \begin{table}[h!] 
\caption{Limb-Darkened Elements of $B_{lm}$ \{zeroth-order in $B_{0}$\}} \begin{center} \begin{tabular}{c c c c c c c c c} \hline\hline $l/m$ & 0&1 & 2 & 3 & 4 & 5 & 6 & 7 \\[0.5ex] \hline\ 0 &+0.160 & \\[1.5ex] 1 & 0 & +0.152 \\[1.5ex] 2 & -0.081 & 0 & +0.099 \\[1.5ex] 3 & 0 & -0.033 & 0 & +0.043 \\[1.5ex] 4 & +0.006 & 0 & -0.006 & 0 & ~~+0.008~~ \\[1.5ex] 5 & 0 & -0.001 & 0 & +0.002 &0 & -0.002 \\[1.5ex] 6 & +0.001 & 0 & -0.001 & 0 & +0.001 & 0 & ~~-0.001~~ \\[1.5ex] 7 & 0 & -0.000 & 0 & +0.000 & 0 & -0.000 & 0 & +0.001 \\[1.5ex] \hline \end{tabular} \end{center} \end{table} \begin{table}[h!] \caption{Limb-Darkened Elements of $B_{lm}$ \{first-order in $B_{0}$\}} \begin{center} \begin{tabular}{c c c c c c c c c} \hline\hline $l/m$ & 0&1 & 2 & 3 & 4 & 5 & 6 & 7 \\[0.5ex] \hline\ 0 & 0 & \\[1.5ex] 1 & +0.215 & 0 \\[1.5ex] 2 & 0 & +0.198 & 0 \\[1.5ex] 3 & -0.114 & 0 & +0.104 & ~~0~~ \\[1.5ex] 4 & 0 & -0.026 & 0 & +0.023 & 0 \\[1.5ex] 5 & -0.007 & 0 & +0.007 & 0 & -0.006 & ~~0~~ \\[1.5ex] 6 & 0 & -0.004 & 0 & +0.004 & 0 & -0.003 & 0 \\[1.5ex] 7 & -0.003 & 0 & +0.003 & 0 & -0.003 & 0 & +0.002 & ~~0~~ \\[1.5ex] \hline \end{tabular} \end{center} \end{table} Comparing Table 1 and Table 4, we see that the elements have the same sign and nearly the same magnitude for $l<4$. For larger values of $l$, there are some sign differences, especially for $l=4$ and $l=6$, but the magnitudes of these higher-order harmonic components are less than 0.01 and can be neglected. A comparison between Tables 2 and 5 gives a similar result. This means that the same harmonic components contribute to the mean field with or without limb darkening, but that the strengths of these contributions differ slightly, depending on the values of ${\gamma}$ and $l$ (as we shall see next). We can gain further insight by computing the relative differences between these limb-darkened and non-limb-darkened elements. 
Doing this separately for the terms that are zero-order and first-order in $B_{0}$, we obtain \begin{equation} (\frac{{\Delta}B}{B})_{0}~=~\frac{\{(1-{\gamma})I_{lm}+{\gamma}I_{lm}^{\mu}\}-I_{lm}}{I_{lm}}~=~-{\gamma}(1-\frac{I_{lm}^{\mu}}{I_{lm}}) \end{equation} and \begin{equation} (\frac{{\Delta}B}{B})_{1}~=~\frac{\{(1-{\gamma})J_{lm}+{\gamma}J_{lm}^{\mu}\}-J_{lm}}{J_{lm}}~=~-{\gamma}(1-\frac{J_{lm}^{\mu}}{J_{lm}}). \end{equation} Thus, the zeroth-order change, $({\Delta}B/B)_{0}$, depends on the ratio $I_{lm}^{\mu}/I_{lm}$, and the first-order change, $({\Delta}B/B)_{1}$, depends on the ratio $J_{lm}^{\mu}/J_{lm}$. And both changes are proportional to the limb darkening coefficient, ${\gamma}$. Moreover, if we use Eqs(A7ab) and Eqs(11ab) to evaluate the ratios, $I_{lm}^{\mu}/I_{lm}$ and $J_{lm}^{\mu}/J_{lm}$, we obtain the remarkable result that the values of these ratios are the rational numbers 4/5, 15/16, and 48/35, for $l=1$, 2, and 3, respectively, independent of the values of $m$. (Of course, this applies only for the non-zero values of $I_{lm}$ and $J_{lm}$ in Tables 1 and 2, respectively.) In other words, for the $Y_{1}^{0}$ and $Y_{1}^{1}$ harmonics, the ratios, $J_{10}^{\mu}/J_{10}$ and $I_{11}^{\mu}/I_{11}$, are both equal to 4/5. Likewise, for the $Y_{2}^{0}$, $Y_{2}^{1}$, and $Y_{2}^{2}$ harmonics, the ratios are all equal to 15/16. And for the four harmonics with $l=3$, the ratios are 48/35. Subtracting these ratios from 1, we obtain 1/5, 1/16, and -13/35, as the fractional differences in Eqs(A9) and (A10) for $l=1$, 2, and 3, respectively, before the limb darkening factor, ${\gamma}$, is applied. If we let ${\gamma}=0.6$, we obtain changes of -12\%, -3.75\%, and +22.2\%, for $l=1$, 2, and 3, respectively, independent of the value of $m$. Thus, for an ideal gray atmosphere in the Eddington approximation, the amplitudes of the $Y_{1}^{0}$ and $Y_{1}^{1}$ components decrease by 12\%. 
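As a quick arithmetic check of these fractional changes, the relation $({\Delta}B/B) = -{\gamma}(1-r)$ with the ratios above can be evaluated with exact fractions (a short Python sketch; the variable names are illustrative, not from the original Mathematica computation):

```python
# Check the fractional changes Delta B/B = -gamma * (1 - r) implied by
# Eqs.(A9)-(A10), using the ratios r = 4/5, 15/16, 48/35 for l = 1, 2, 3
# and the Eddington limb-darkening coefficient gamma = 0.6.
from fractions import Fraction

gamma = Fraction(3, 5)                      # gamma = 0.6
ratios = {1: Fraction(4, 5), 2: Fraction(15, 16), 3: Fraction(48, 35)}

changes = {l: -gamma * (1 - r) for l, r in ratios.items()}
# l=1: -gamma/5 = -12%;  l=2: -gamma/16 = -3.75%;  l=3: +13*gamma/35 ~ +22.3%
```

Working with `Fraction` keeps the ratios exact, so the percentages fall out of the rational arithmetic with no floating-point rounding.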
The three components with $l=2$ decrease by only 3.75\% and the four relatively weak components with $l=3$ go in the opposite direction, increasing by 22.2\%. The nice aspect of Eqs(A9) and (A10) is that we can increase ${\gamma}$ to obtain the fractional changes for a greater amount of limb darkening in the violet part of the spectrum, or decrease ${\gamma}$ to obtain the smaller changes expected in the infrared. We simply return to the fractions 1/5, 1/16, and -13/35 and multiply them by $-{\gamma}$. I did this calculation to learn how solar limb darkening in the simple Eddington approximation might affect the harmonic components of the mean field. However, for more complex limb-darkening profiles and more inclined rotational axes, such as might occur in exoplanet or asteroseismic studies, one could relax the small-$B_{0}$ approximation used for the Sun and do the integration numerically for specific values of $B_{0}$. We might expect limb darkening to have a more complicated effect for a star whose rotational axis makes an oblique angle to the line of sight. \bibliography{mfield}{} \bibliographystyle{aasjournal}
Title: LightAMR format standard and lossless compression algorithms for adaptive mesh refinement grids: RAMSES use case
Abstract: The evolution of parallel I/O libraries, as well as new concepts such as 'in transit' and 'in situ' visualization and analysis, has been identified as a key technology to circumvent the I/O bottleneck in pre-exascale applications. Nevertheless, data structures and data formats can also be improved, both to reduce I/O volume and to improve data interoperability between data producers and data consumers. In this paper, we propose a very lightweight and purpose-specific post-processing data model for AMR meshes, called lightAMR. Based on this data model, we introduce a tree pruning algorithm that removes data redundancy from a fully threaded AMR octree. In addition, we present two lossless compression algorithms, one for the AMR grid structure description and one for AMR double/single precision physical quantity scalar fields. We then present performance benchmarks of this new lightAMR data model and of the pruning and compression algorithms on RAMSES simulation datasets. We show that our pruning algorithm can reduce the total number of cells in RAMSES AMR datasets by 10-40% without loss of information. Finally, we show that the RAMSES AMR grid structure can be compacted by ~ 3 orders of magnitude and that the float scalar fields can be compressed by a factor of ~ 1.2 for double precision and ~ 1.3 - 1.5 for single precision, with a compression speed of ~ 1 GB/s.
https://export.arxiv.org/pdf/2208.11958
\begin{frontmatter} \title{LightAMR format standard and lossless compression algorithms for adaptive mesh refinement grids: RAMSES use case} \author[AIM,IRFU]{L.Strafella} \ead{loic.strafella@cea.fr} \author[IRFU]{D.Chapon} \ead{damien.chapon@cea.fr} \address[AIM]{AIM, CEA, CNRS, Universit\'e Paris-Saclay, 91191 Gif-sur-Yvette, France} \address[IRFU]{IRFU, CEA, Universit\'e Paris-Saclay, 91191 Gif-sur-Yvette, France} \begin{keyword} computational astrophysics \sep Adaptive Mesh Refinement \sep lossless compression \sep data model \end{keyword} \end{frontmatter} \section{Introduction} RAMSES is a massively parallel hydrodynamical code for self-gravitating magnetized flows widely used in the computational astrophysics community \citep{Teyssier2002}, \firstReview{to follow the evolution of dark matter, stellar populations, and gas via gravity, hydrodynamics, radiative transfer, and non-equilibrium radiative cooling/heating. For hydrodynamics, it uses the HLLC Riemann solver \citep{Toro1994} and the MinMod slope limiter to construct gas variables at cell interfaces from their cell-centered values. The dynamics of collision-less dark matter and star particles are evolved with a multi-grid particle-mesh solver and cloud-in-cell interpolation \citep{Guillet2011}.} It implements an adaptive mesh refinement (AMR) technique to optimize the memory/precision ratio in numerical simulations \citep{Berger89, Berger84}. Its data structure is a cell-based fully threaded octree \citep{Khokhlov_1998}, \firstReview{which means that each subdomain describes a full octree up to its topmost root grid, which spans the whole simulation domain. In this distributed AMR tree approach, each subdomain defines its own local grid refinement.} In most use cases, the domain decomposition is based on a 1D Hilbert curve to split the calculation between MPI processes.
\firstReview{RAMSES was first designed for cosmological and galaxy simulations, and has been used for large simulations \citep{Teyssier2009, Ocvirk2020, Chabanier2020}.} However, RAMSES faces I/O scalability bottlenecks when used over a few thousand MPI processes, partly due to its "multiple file per process" I/O strategy. A typical cosmological simulation using 10,000 MPI processes will produce at least 40,000 files (and filesystem inodes) per output, with small file sizes ranging from tens of megabytes to a few hundred megabytes, which is far from optimal on modern Lustre filesystems and quickly reaches the user's quotas on supercomputers. \firstReview{What is more, since RAMSES implements only one output data format, for both checkpointing and post-processing purposes, it is optimized for neither use case. For instance, a simulation like Extreme-Horizon \citep{Chabanier2020} produced 3.6 terabytes of data with more than 50,000 files per snapshot (AMR files, hydrodynamical quantity files, particle files, etc). Considering that several snapshots are required for scientific analysis, that kind of simulation rapidly becomes challenging in terms of I/O, data management and post-processing.}\\ In a previous work, the integration of the Hercule library \citep{bressand} for parallel I/O and data management in RAMSES was the first step to improve the I/O performance, scalability and data management \citep{Strafella2020_Astronum}. \firstReview{In terms of data management, the integration of a parallel I/O library was also the occasion to add a post-processing-specific data-flow to RAMSES. This was an important step, which made it possible to significantly optimize the checkpoint dumps since the checkpoints' raw data are no longer used by post-processing tools for simulation data analysis.
With the original version of RAMSES, the size of a simulation is sometimes dictated not by CPU performance, memory consumption, or the pure scalability of the code, but simply by the user's available disk storage capacity. We expect this problem to occur more and more often in the coming years. The usual workarounds are to reduce the size of the simulation, to reduce its precision by tuning AMR parameters, or to reduce the number of produced snapshots. The question then becomes: is there a standard and compact data model that can be used to describe the AMR data from the new RAMSES post-processing data-flow?} \firstReview{The Hercule library's post-processing database strategy is to propose an XML dictionary containing the description of standardized data models for different kinds of meshing strategies and objects. Therefore, once a standardized data model is described in the dictionary, every tool using Hercule will be able to "understand" Hercule post-processing data. As a reminder, we show in table \ref{tab:ramses-io} a summary of the I/O data-flows now available in RAMSES.} \begin{table}[!h] \begin{tabular}{l|ccc} \hline I/O library & POSIX & Hercule & Hercule \\ \hline Purpose & C/R + P-P & C/R & P-P\\ Data Structure & Specific & Free & Standardized \\ Data Volume & Large & Large & Light \\ Compression & N/A & N/A & Efficient\\ \hline \end{tabular} \caption{Differences between the various RAMSES I/O formats: the legacy binary POSIX format and the new Hercule databases (HProt, HDep). C/R: Checkpoint/Restart, P-P: Post-processing.} \label{tab:ramses-io} \end{table} \firstReview{Until a few years ago, AMR data lacked a standardized format for post-processing and visualization purposes, partly due to the different flavors of AMR strategies: block-based / patch-based \citep{Gamer-code, Enzo-code, Flash-code}, cell-based (RAMSES), etc.
Thus, AMR data still required a data transformation step (e.\ g.\ down-casting to unstructured grids, resampling to fixed-resolution cartesian grids) to be manipulated by general purpose post-processing or visualization tools (e.\ g.\ Paraview \footnote{\href{https://www.paraview.org/}{Paraview.org}}, VisIt \footnote{\href{https://wci.llnl.gov/simulation/computer-codes/visit}{VisIt home page}}).} An AMR mesh described using unstructured grids can easily lead to memory issues and produce high-volume data files, \firstReview{thus each AMR code uses a custom data model to output the data. In the case of RAMSES, the internal 32-bit integer AMR linked-lists are dumped level by level.} Code-specific data-processing libraries were then implemented as solutions to read the RAMSES datafile format and handle its AMR grid structures (e.\ g.\ PyMSES\footnote{\href{http://irfu.cea.fr/Projets/PYMSES}{http://irfu.cea.fr/Projets/PYMSES}}, Osiris \footnote{\href{https://www.nbi.dk/~nvaytet/osiris/osiris.html}{https://www.nbi.dk/~nvaytet/osiris/osiris.html}}), but these libraries needed to be updated to follow the RAMSES data format evolutions \citep{Chapon2013}. \firstReview{A standardized and self-consistent data model aims at reducing the maintenance cost of post-processing tools and encourages shared development efforts.} \\ In this work, we introduce a new lightweight AMR data model that we applied to RAMSES AMR data. Using this data model, we show that the data volume of RAMSES outputs can be significantly reduced, \firstReview{without loss of information}. In addition, we present two lossless data compression algorithms that can be built upon this new data model standard to further reduce both the size of the AMR grid refinement description and the size of the associated physical quantity scalar field data (float 32/64 bits).
In the results section, we present \firstReview{lightAMR data volume reduction as well as tree pruning and} compression benchmarks on published RAMSES simulation datasets.
%
\section{Scientific analysis-specific AMR light tree data model} \label{sec:hdepformat} \subsection{Motivation} The RAMSES checkpoint/restart data format was designed, as in many other simulation codes used in astrophysics, to hold all the information necessary for the code to restart a simulation run and resume the dynamical evolution of the numerical model. Since our goal is to design a new output data format for the specific purpose of scientific analysis and post-processing, we are free to select, from all the information available in memory at runtime, what is strictly necessary to answer one's post-processing needs.\\ To fit one's post-processing needs, a user can choose to export a subset of physical quantities (e.\ g.\ store only the density and thermal pressure scalar fields defined on the AMR grid, and discard the gravitational potential/acceleration AMR fields, the velocity AMR vector field and the Lagrangian particle information), hence reducing the overall dataset volume on disk. In RAMSES, this user-defined data field subset selection can be configured at runtime using the input parameter namelist, without the need to recompile the code. What is more, for some data post-processing use cases, down-casting these quantities from double to single precision could even prove sufficient, further reducing the data volume by a factor of 2. Finally, since this new data format is not meant to be read by RAMSES itself upon restart but only by data post-processing tools, we can decide to adopt a new standardized AMR grid data structure and a custom encoding, able to answer any type of post-processing software requirement in an optimized and standardized way.
\subsection{Implementation} The Hercule I/O library HDep standardized data model for AMR trees aims to offer a light and self-consistent way to describe an AMR grid. In this work, we adopt an approach similar to the one proposed in the HDep format; the details of our approach are described here. We further reference this standardized data model for AMR grids as \textit{lightAMR}. Since that kind of mesh can be expressed in the form of a tree, it uses a boolean description of the AMR tree from top to bottom, level by level, as shown in figure \ref{fig:amr_grid}. With this breadth-first traversal description of an AMR tree, two main arrays are used: one for the grid refinement description and one for the cell ownership mask. The first array describes the refinement state of a cell: 1 for a refined cell and 0 for a leaf (unrefined) cell. All child cells of a refined cell must be described, as shown in figure \ref{fig:amr_grid}. This implies that the number of cells at a level $l$ must be equal to the number of refined cells at level $l-1$ times the refinement factor to the power of the dimension. In our 2D example, the refinement factor being 2 per dimension, level 2 must contain $3\times2^2$ boolean values since there are 3 refined cells at level 1. The second array, the ownership mask, describes for each cell whether it belongs to the current domain (according to the domain decomposition policy): 0 if it belongs to the current domain, 1 otherwise. In order to be self-consistent, a few other (lightweight) meta-information items, such as the size of the simulation box, the physical quantity units, physical model details or the numerical schemes used, need to be stored, using an explicit semantic (string key-value pairs). In addition, some useful meta-information, like the number of cells per level, the total number of grids and the number of levels, while somewhat redundant with the AMR grid description, can be conveniently stored to enhance the performance of post-processing tools.
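To make the breadth-first consistency rule concrete, the following minimal Python sketch (a hypothetical helper, not part of the Hercule/lightAMR code) validates a refinement array stored level by level:

```python
# Validate a lightAMR grid refinement description stored level by level:
# the number of cells at level l must equal the number of refined cells
# at level l-1 times the refinement factor to the power of the dimension.
def validate_refinement(levels, dim=2, ref_factor=2):
    """levels: one list of 0/1 refinement flags per AMR level."""
    children_per_cell = ref_factor ** dim
    for l in range(1, len(levels)):
        expected = sum(levels[l - 1]) * children_per_cell
        if len(levels[l]) != expected:
            return False
    return True

# 2D example from the text: 3 refined cells at level 1, so level 2
# must contain 3 * 2^2 = 12 boolean values.
levels = [[1],            # root cell, refined
          [1, 0, 1, 1],   # 4 children, 3 of them refined
          [0] * 12]       # 12 leaf cells
```

The same check extends to 3D octrees by setting `dim=3` (8 children per refined cell).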
AMR-bound physical quantities are described in scalar field data arrays that follow the same ordering as the grid refinement array, hence providing a straightforward mapping (which facilitates the level-of-detail (LOD) approach). Since RAMSES describes the AMR tree from the root node, which is the simulation box itself, on each domain, and stores scalar field values even for refined (coarse) cells, the construction of these arrays is straightforward when converting to the lightAMR format. Of course, this lightAMR tree data format can easily be extended to be compatible with other numerical codes that do not share RAMSES' fully-threaded tree data structure; refined (coarse) cell scalar field values only need to be computed on-the-fly upon data export.\\ Collision-less particles are treated in a Lagrangian way within RAMSES and are linked to the AMR grid. The particles and their associated properties are simply added to our new data output format in the form of one-dimensional arrays. Particle positions, for example, are stored as $x_{1},y_{1},z_{1},...,x_{n},y_{n},z_{n}$. We chose not to associate particles with their linked AMR cells, and consequently not to overload the data format. The particle-cell binding is left to the post-processing tools, to be performed on-the-fly if needed.\\ For simulation codes that do not use the fully-threaded approach and naturally describe collections of trees at a defined minimum level $l_0$ on simulation domains, the lightAMR data model can be trivially extended. One just needs to provide an additional AMR root description array detailing the I-J-K logical position indices of the root coarse cells of those trees in a structured grid at level $l_0$. While providing an array containing the logical positions in $(i_1,j_1,k_1, i_2,j_2, k_2, ...)$ format would be the natural approach, one may choose to provide an array of Morton indices of the logical positions at level $l_0$.
Indeed, using the Morton curve, 3D/2D logical coordinates can be mapped onto the 1D Morton curve by bit interleaving, hence reducing the size of the array by a factor 3 (resp. 2) in 3D (resp. 2D), but with a limitation on the maximum value of $l_0$ for 32-bit Morton indices: $l_{0,\textrm{max}} = 10$ in 3D ($l_{0,\textrm{max}} = 16$ in 2D). Within this extension, compared to their fully-threaded tree version, the grid refinement, ownership mask and scalar field data arrays should be trimmed up to level $l_0$ as a consequence. \section{Tree pruning to remove redundancy} Due to the fully threaded octree approach in RAMSES, some AMR cells may be described several times in the trees of different domains, leading to data redundancy. Even if those multiple AMR cell descriptions are required by a numerical solver like the Poisson equation multi-grid solver \citep{Guillet2011}, describing those cells only once is sufficient for post-processing and visualization purposes. Therefore, a tree pruning algorithm can be implemented to reduce the memory footprint of our file format. Once the tree is built according to the lightAMR format described above, most of the data redundancy in the AMR dataset can be removed. We developed a tree pruning algorithm, propagating from the bottom of the tree to its top, in order to remove the redundant cells. The unnecessary cells are removed by changing their refinement value (C.R.V. in figure \ref{fig:pruning}) and removing their described child cells on the next level. The C.R.V. is applied to a refined cell only if all its children belong to another domain (mask = 1), as shown in figure \ref{fig:pruning}. Those changes are then propagated to the associated arrays: the ownership mask and scalar field data arrays. Removing those refined cells and their children is thus the first step to reduce the total volume of data before applying any compression algorithm.
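The pruning rule can be sketched as follows (a simplified Python illustration with hypothetical names, not the paper's implementation; the mask is assumed to be the adapted temporary mask described in the text, so that a masked refined cell only has masked descendants):

```python
# Simplified sketch of the bottom-up tree pruning. refine/mask: one list
# of 0/1 flags per level, breadth-first; the k-th refined cell of level l
# owns the k-th block of 2^dim cells at level l+1.
def prune(refine, mask, dim=2):
    cpc = 2 ** dim                              # children per refined cell
    for l in range(len(refine) - 2, -1, -1):    # bottom-up traversal
        new_ref, keep = [], []
        k = 0                                   # running refined-cell index
        for r in refine[l]:
            if r == 1:
                block = slice(k * cpc, (k + 1) * cpc)
                if all(mask[l + 1][block]):     # all children masked:
                    r = 0                       # C.R.V.: turn into a leaf
                else:
                    keep.append(block)          # children survive
                k += 1
            new_ref.append(r)
        refine[l] = new_ref
        refine[l + 1] = [x for b in keep for x in refine[l + 1][b]]
        mask[l + 1] = [x for b in keep for x in mask[l + 1][b]]
    return refine, mask
```

Processing the levels bottom-up guarantees that, by the time a parent is examined, any fully masked refined child has already been turned into a leaf, so removing a whole child block never orphans deeper cells.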
Depending on the simulation, this step can prune a significant part of any RAMSES simulation dataset; see results on astrophysical simulation datasets in section \ref{subsec:results_pruning}. \firstReview{Prior to the tree pruning step, a temporary mask array needs to be adapted from the original ownership mask array to properly compute the C.R.V. This slight adaptation ensures that a refined cell is not flagged as masked if at least one of its children is not masked. This temporary mask is only used during the tree pruning step}.\\ At the end of this step, the AMR grid is properly described with one or several trees and most of the data redundancy has been removed. However, for a fully-threaded octree, the redundancy cannot be entirely removed unless the tree is split at the coarse level of refinement (a.k.a. $levelmin$ in RAMSES) and domain boundaries are set between sibling cells at that coarse level (and not deeper in the tree). The next stage in the lightAMR formatting is to apply data compression to both the grid refinement/ownership mask arrays and the single/double precision associated scalar field data arrays. \section{AMR data compression} To overcome the I/O bottleneck, in addition to the integration of parallel I/O \citep{Strafella2020_Astronum} and a data reduction strategy, data compression can reduce the volume of data to transfer or store on disk/object store, and further improve the overall I/O performance. Combining those different strategies can have a significant impact on I/O times, data storage volume and data transfer rates, whether for in-transit or in-situ data post-processing. In this section, we present two lossless compression schemes specific to octree AMR meshes that we developed for the lightAMR data model; one for the grid description boolean-valued arrays and one for the scalar field float (32/64 bits) data arrays.
\subsection{LightAMR grid boolean arrays lossless compression} \label{sec:compression_amr_struct} Once the lightAMR structure has been described with boolean-valued arrays, we developed a deterministic, lossless and performant algorithm to compress the information contained in the two main arrays (grid refinement and ownership mask arrays) based on \secondReview{a custom} run-length encoding, \firstReview{which is a well-known compression technique \citep{Robinson1967}}. The first step of the algorithm is to consider the boolean array as an ordered list of packs of identical bit values, either successive zeroes or successive ones, and count the lengths of these continuous packs. The second step is to digitize these ordered pack sizes with a special encoding and store the result in a single-byte integer array. Out of the 256 integer values available, our encoding only needs values between 1 and 62. To encode pack sizes, we use a base-52 decomposition and we also record the number of digits (only if greater than 1) in that decomposition. The uint8 values ranging from 2 to 7 are used to store the number of bytes used for the encoding of each pack size, and the 52 uint8 values between 11 and 62 (values 9 and 10 are used for specific purposes, see next paragraph) to store the digits of the base-52 decomposition (with an additional shift of $11$). For example, to encode a pack size of 2904, which decomposes as $1\cdot52^2 + 3\cdot52^1 + 44\cdot52^0$, the number of digits (3) is stored, followed by the digits $[1,3,44]$, encoded as $[12,14,55]$ ($[1+11, 3+11, 44+11]$), resulting in the following 4 bytes $[3,12,14,55]$ instead of the 363 bytes required to store 2904 zeroes or ones as bits. With that encoding, the greatest pack size that can be encoded is $52^7-1 \sim 10^{12}$ continuous 0s or 1s, which is more than enough, the maximum number of cells per domain in a typical RAMSES simulation run never exceeding a few $10^{7}$ due to memory limitations.
Since the encoded pack sizes describe identical bit values, alternating between contiguous zeroes and contiguous ones, only the first value of the array needs to define the bit value of the first pack (see the red value in figure \ref{fig:compression_amr1}); all subsequent bit values of the following packs can be determined by the decompression algorithm.\\ To allow data-processing tools to reduce their memory footprint, we included special values in this encoding to mark the boundaries of AMR levels, so that data-processing tools can perform on-the-fly decompression on a \textit{level-by-level} basis for a level-of-detail approach. We used the uint8 values 9 and 10 as level boundary markers, two different values being required to mark level boundaries depending on whether or not a bit flip occurs at the end of an AMR level (see orange values in figure \ref{fig:compression_amr2}). In addition, we developed a special algorithm which is able to directly process the compressed stream to get the value at a given offset in the uncompressed stream, without uncompressing the entire compressed AMR array. We further reference this lightAMR grid boolean arrays lossless compression as base-52 continuous pack size encoding (CPS52). \secondReview{While this CPS52 encoding may seem peculiar compared to the majority of other RLE schemes from the literature, it yields very good performance. In the process of lightAMR data format standardisation though, this custom RLE would need to be updated to match standard encoding schemes more closely, prior to the release of the lightAMR format standard.} Once the AMR grid description arrays are compressed, the next step to reduce the data volume is to compress the float (single/double precision) scalar field data arrays.
\subsection{LightAMR scalar field float data lossless compression} \label{sec:compression_amr_data} The development of our custom compression algorithm was motivated by the poor compression rates obtained with the deflate algorithm from the zlib\footnote{https://zlib.net/}, about 3 to 5\% with a compression speed of about 50 MB/s. Even if higher compression rates could be achieved using the LZMA algorithm, it would require far more memory for a lower compression speed; such a memory-hungry approach would not be viable in memory-bound massively parallel simulations, as are many RAMSES runs. In addition, compression of float-valued arrays using entropic encoding without prior knowledge of the data generally leads to poor performance, due to their intrinsically homogeneous byte distribution; this is unlike, for example, text, where the distribution of bytes (letters) is not homogeneous and entropic compression is therefore highly efficient.\\ In this section we present an adaptation of the delta compression algorithm for lossless float data \citep{Lindstrom2006}, \citep{Burtscher2009} that we specifically designed for lightAMR octree structures. Delta compression is a commonly used compression method based on prediction functions: a mathematical predictor function is chosen and the prediction error is encoded directly. The compression itself is performed by removing the leading zeroes in the encoding of the prediction error \citep{Lindstrom2006}. With this kind of approach, high compression ratios are reached when the predictor function is quite accurate, which is not easy to achieve when dealing with an AMR octree structure. Thus, we extended this delta compression algorithm to AMR grids by taking advantage of the fully threaded approach of the AMR strategy in RAMSES. \\ The only inputs our compression algorithm needs are the grid refinement array and a physical quantity field to compress (single or double precision float).
We propose to use an AMR-specific mathematical function to predict intensive-variable scalar field values, called the \firstReview{\textit{Parent-Child predictor} (PCP)} function. The value of the parent cell is used as the predicted value for its child cells, and so on from the top of the AMR tree down to the bottom, as represented in figure \ref{fig:fsp_compression}. The compression is then made octant by octant (in 3 dimensions). There are two main steps in the data compression process: first, the delta compression itself with the leading-zero removal, and second, the encoding of the compressed scalar field \firstReview{on a bit-field}.\\ In the delta compression stage, the single (or double) precision parent float value of the physical field is mapped onto an unsigned integer of the same size (an 8-byte unsigned integer for a double precision float and 4 bytes for single precision), as are the values of the child cells. Then we use an XOR operator to compute the differences between the mapped parent value and all the child ones, leading to a pack of prediction errors ($s_{0}, s_{1}, s_{2}, s_{3}$ in the 2D example shown in figure \ref{fig:fsp_encoding}). The next step is to compute \firstReview{an OR} operation between all the $s_{i}$ values in order to determine how many leading zeroes can at most be removed from each $s_{i}$ value, which corresponds to the stage $s_0 + s_1 + s_2 + s_3$ in figure \ref{fig:fsp_encoding}. In the example, we can remove a maximum of 5 leading zeroes from each $s_{i}$ value without losing information.
%
\firstReview{At the encoding stage, we need to encode the residues of the prediction errors of all child cell values as well as the number of removed leading zeroes.
In figure \ref{fig:exh280_histo_rho} we show the probability mass function (PMF) of the number of leading zeroes, on a cell basis and on a pack basis, for the double precision density scalar field of the Extreme-Horizon simulation (12800 density datasets). Since residue bits are close to being uniformly random, we cannot compress them well. If we compute the compression ratio in this example, without adding the encoding of the number of removed leading zeroes (nLZ), we obtain 1.303 on a cell-by-cell basis and 1.237 on a pack basis, which is expected since more bits can be removed in the first scenario. Nevertheless, the number of removed leading zeroes for each cell or each pack needs to be encoded for the decompression stage. The Shannon entropy of the nLZ field is 3.5 on a cell-by-cell basis and 3.2 on a pack basis, this smaller value being also an expected result since the nLZ value is smoothed within each pack. Nevertheless, the total number of nLZ values to encode in the cell-by-cell case is, in 3D, 8 times the total number of nLZ values to encode in the pack case. Therefore, even by adding another stage of compression for the nLZ values, we cannot hope to achieve a better compression ratio, even if more bits can be removed from the data. If we take into account the encoding of both the nLZ values and the residues, the compression ratio drops to 1.217 for the cell-by-cell case, while it only drops to 1.225 for the pack case. What is more, the cell-by-cell approach would require 8 times more computations of leading zeroes and an additional compression step to barely achieve a similar compression ratio. Thus, we chose to compute and encode the nLZ value once per pack of children, as shown at the encoding stage in figure \ref{fig:fsp_encoding}.\\ After computing the PMF of scalar field data, we notice that encoding the number of removed leading zeroes on 4 bits by default is a good balance between compression speed and compression ratio.
In the example of the Extreme-Horizon density field, increasing to 5 bits would only increase the compression ratio by 0.23\% but reduce the compression speed by almost 8\%. Indeed, the nLZ computation is a key point of the algorithm performance, and it requires more if-branches to compute a higher value. Finally, after the 4 bits used to encode the nLZ value, all residues $s^{'}_{i}$ are encoded, see figure \ref{fig:fsp_encoding}.} Given the number of bits used to encode the number of removed leading zeroes, the theoretical maximum compression ratio in dimension $d$ is given by ($E_{enc}$ is the number of bits, 64 for double precision and 32 for single precision): \begin{equation} C^{d}_{max} = \frac{E_{enc} \cdot 2^{d}}{(E_{enc}-15) \cdot 2^{d} + 4} \end{equation} Therefore, with the default encoding, the maximum compression ratio achievable is 1.829 for single precision data and 1.293 for double precision. \firstReview{Nevertheless, we could expect this method to yield poor performance in the case of a difference of sign between a parent cell value and one of its child cells.} Indeed, the very first bit of a negative value will be set to one, so no leading zeroes can be removed. This could be the case, for example, with a small velocity field component varying around zero. Finally, our delta compression algorithm works for intensive variables, but could be extended to extensive variables if the predicted values for the child cells were taken as the value of the parent cell divided by the number of child cells. If the number of child cells is a power of 2, the mantissa of the prediction error would not be modified (only the exponent field would change), which could still lead to a lossless compression algorithm.
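The XOR-and-leading-zeroes stage, together with the $C^{d}_{max}$ bound above, can be illustrated with the following Python sketch (hypothetical helper names; the bit-field encoding itself is omitted):

```python
# Sketch of the Parent-Child predictor (PCP) XOR stage for one pack of
# children, plus the theoretical maximum compression ratio C^d_max.
import struct

def f2u(x):   # map a double onto its 64-bit unsigned representation
    return struct.unpack('<Q', struct.pack('<d', x))[0]

def u2f(u):
    return struct.unpack('<d', struct.pack('<Q', u))[0]

def pcp_pack(parent, children):
    """XOR each child's bits with the parent's; nlz is the number of
    leading zero bits shared by all residues of the pack."""
    p = f2u(parent)
    residues = [f2u(c) ^ p for c in children]
    union = 0
    for s in residues:
        union |= s                      # OR of all prediction errors
    nlz = 64 - union.bit_length()
    return nlz, residues

def pcp_unpack(parent, residues):
    p = f2u(parent)
    return [u2f(s ^ p) for s in residues]   # XOR twice: lossless

def c_max(e_enc, d):
    """Maximum compression ratio with a 4-bit nlz field per pack."""
    return e_enc * 2 ** d / ((e_enc - 15) * 2 ** d + 4)
```

Only the predictor choice would change for the extensive-variable extension discussed above; the XOR stage itself would be unchanged.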
Since the RAMSES code uses only intensive-variable AMR scalar fields, this extension of the Parent-Child predictor function is beyond the scope of this work.\\ \firstReview{For the float data decompression stage, the AMR grid refinement array is used level-by-level to guide the delta decompression algorithm through the grid from top to bottom, since the scalar field values at level $l$ are necessary as predictor values to decompress the scalar field values at level $l+1$.}
\section{Results and discussion}
\label{sec:results}
In this section we present the results \firstReview{of the lightAMR format conversion} as well as tree pruning and compression results obtained on different real simulation \firstReview{snapshots}: FRIG\footnote{\href{http://www.galactica-simulations.eu/db/STAR_FORM/FRIG/\#Simu_FRIG6_ZOOM7}{http://www.galactica-simulations.eu/db/STAR\_FORM/FRIG}}, snapshot at 10.05 Myr and 1675 coarse steps \citep{Hennebelle2018}, composed of 4096 domains; ORION\footnote{\href{http://www.galactica-simulations.eu/db/STAR_FORM/ORION/}{http://www.galactica-simulations.eu/db/STAR\_FORM/ORION}}, snapshot at 1.26 Myr and 2125 coarse steps \citep{Ntormousi2019}, composed of 512 domains; and Extreme-Horizon\footnote{\href{http://www.galactica-simulations.eu/db/COSMOLOGY/EXTREME_HORIZONS}{http://www.galactica-simulations.eu/db/COSMOLOGY/EXTREME\_HORIZONS}}, snapshot at $z\sim1$ with 11866 coarse steps \citep{Chabanier2020}, composed of 12800 domains. \firstReview{Since post-processing tools handle RAMSES data on a per-domain basis and lightAMR is a self-consistent data model, also on a per-domain basis, we consider a dataset to be the data of one domain}. For the tree pruning algorithm we show the minimum, maximum and average values of different metrics, computed on a per-domain basis.
%
\subsection{Tree pruning results}
\label{subsec:results_pruning}
Table \ref{tab:pruning_rate_results} gives the results of the tree pruning algorithm applied to the different datasets.
The algorithm is 2.3 to 3.4 times more efficient on the FRIG and ORION data than on Extreme-Horizon. FRIG and ORION are zoom simulations \firstReview{(nested grids with increasing spatial resolution)} where most of the fine resolution grids are found in the center or mid-plane of the simulation box, whereas Extreme-Horizon is a cosmological simulation where the finest AMR levels are more homogeneously distributed in the box. These results are consistent with the fact that RAMSES is more memory efficient in cosmological runs than in zoom simulations. Nevertheless, the tree pruning gain applies to the whole simulation data; an 11.42\% reduction on Extreme-Horizon therefore means that the size is reduced by about 240 GB. Moreover, tree pruning is done in a memory friendly way within a negligible time (\firstReview{a few microseconds on the data of an MPI process}). This step therefore has a significant impact on data volume at almost no cost. In practice, the time spent doing tree pruning is correlated with the number of coarse cells and with the position of those coarse cells in the array.\\ \firstReview{The efficiency of the tree pruning algorithm is hard to predict in most cases (at least with RAMSES AMR grids). It depends on the maximum level of refinement, the number of domains, the geometrical convexity of the subdomains and their surface-to-volume ratio, and it is greatly impacted by the 2:1 balance constraint required in RAMSES by its internal (hydro) numerical schemes.
The tree pruning rates can span a wide range, within a factor of 3.25, even across different domains of the same simulation snapshot (see table \ref{tab:pruning_rate_results}).}
\begin{table}[!ht]
\centering
\renewcommand{\arraystretch}{1.1}
\begin{tabular}{l|p{0.07\textwidth}p{0.07\textwidth}p{0.07\textwidth}}
\hline
Simulation & Min & Max & Global \\
\hline
FRIG & 20.05 \% & 65.42 \% & 38.65 \% \\
ORION & 17.23 \% & 47.32 \% & 26.35 \% \\
ExH & ~~7.00 \% & 23.24 \% & 11.42 \% \\
\hline
\end{tabular}
\caption{Fraction of cells removed by the tree pruning algorithm on the FRIG, ORION and Extreme-Horizon collections of datasets. Minimum and maximum values are on a per-domain (dataset) basis and the global value is for the entire simulation snapshot.}
\label{tab:pruning_rate_results}
\end{table}
\subsection{LightAMR grid boolean arrays compression results}
\label{subsec:results_amr_comp}
We compared our compression speeds and ratios to commonly used open source libraries: Zlib\footnote{\href{https://zlib.net/}{Zlib website}}, LZ4\footnote{\href{https://lz4.github.io/lz4/}{LZ4 website}} and Snappy\footnote{\href{https://google.github.io/snappy/}{Snappy}}. The compression ratio is simply the ratio between the uncompressed and the compressed buffer sizes. We ran this compression benchmark on the Extreme-Horizon datasets and compared the compression speeds and ratios on the cumulated grid refinement and ownership mask arrays after tree pruning, see table \ref{tab:comp_benchs}. Domain datasets are processed sequentially; the uncompressed and compressed sizes are accumulated, as well as the compression times. The compression speeds and ratios are then computed and reported in table \ref{tab:comp_benchs}. While our single-threaded algorithm is slower than the LZ4 package, it achieves a higher compression ratio.
Zlib at level 9 (best compression ratio) achieves a much higher compression ratio on the grid refinement array, but at the cost of a significantly reduced speed. The compression speeds of all these algorithms are linked to the layout of the uncompressed array, i.e. to the consecutive packs of zeroes or ones. With the current RAMSES parallelism (one MPI process per core), we rarely have to compress arrays larger than a few million cells, even in the Extreme-Horizon use case. Therefore, a compression speed of about 374 MB/s is enough to compress the entire lightAMR arrays of the most populated MPI process of the Extreme-Horizon dataset in less than 8 ms (domain 427, with 3059513 cells after tree pruning).\\
\begin{table}[!h]
\centering
\renewcommand{\arraystretch}{1.1}
\begin{tabular}{l|p{0.07\textwidth}p{0.07\textwidth}|p{0.07\textwidth}p{0.07\textwidth}}
\hline
~ & \multicolumn{2}{c}{grid refinement} & \multicolumn{2}{c}{Ownership mask} \\
library & Ratio & Speed (MB/s) & Ratio & Speed (MB/s) \\
\hline
CPS52 & 23.95 & ~~373.58 & \textbf{11546.10} & ~~835.47 \\
LZ4 & 17.94 & \textbf{1255.15} & ~~~~249.49 & \textbf{6864.82} \\
Zlib 1 & 26.00 & ~~233.93 & ~~~~225.47 & ~~289.31 \\
Zlib 9 & \textbf{48.13} & ~~~~~~7.46 & ~~~~973.34 & ~~155.22 \\
Snappy & 12.00 & ~~918.88 & ~~~~~21.29 & 1350.23\\
\hline
\end{tabular}
\caption{Benchmark of the compression of the lightAMR grid refinement and ownership mask arrays of the entire Extreme-Horizon snapshot using different libraries: CPS52 (ours), Zlib version 1.2, the LZ4 package version 1.9.3 and Snappy 1.1.7.
Benchmark run on a machine with an Intel Xeon Gold 5118 CPU @ 2.30 GHz.}
\label{tab:comp_benchs}
\end{table}
\begin{table}[!h]
\centering
\renewcommand{\arraystretch}{1.1}
\begin{tabular}{l|p{0.07\textwidth}p{0.07\textwidth}|p{0.07\textwidth}p{0.07\textwidth}}
\hline
~ & \multicolumn{2}{c}{grid refinement} & \multicolumn{2}{c}{Ownership mask} \\
Simulation & Ratio & Speed (MB/s) & Ratio & Speed (MB/s) \\
\hline
FRIG & 39.36 & 1080.88 & ~~1627.19 & 1447.18 \\
ORION & 48.06 & 1216.33 & ~~4397.45 & 1750.51 \\
ExH & 23.95 & ~~375.58 & 11546.10 & ~~835.47 \\
\hline
\end{tabular}
\caption{Compression ratios and speeds of the lightAMR boolean arrays for the FRIG, ORION and Extreme-Horizon collections of datasets using the CPS52 algorithm. Benchmark run on a machine with an Intel Xeon Gold 5118 CPU @ 2.30 GHz.}
\label{tab:comp_cps52_results}
\end{table}
The compression ratios and speeds are much higher for the ownership mask array because it contains very large packs of successive zeroes (cells that belong to the domain). This is because cells belonging to other domains generally lie at the border of the domain, and also because the tree pruning algorithm removes ghost nodes. In addition, our algorithm performs much better at compressing such an array than the other tested libraries because encoding large packs in base 52 produces a very short pattern. One can notice that our encoding values are stored on 1 byte even though they only range from 1 to 63 and would therefore require only 6 bits, hence a potential gain of 25\% in compressed size. In addition, computing the average Shannon entropy over the 1-byte words of all the AMR description arrays of an entire simulation like Extreme-Horizon yields a value of 4, which means that, on average, only 4 bits would suffice to encode the information contained in one word. The reason is simply that not all numbers from 1 to 63 are necessarily used in the encoding.
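The per-byte Shannon entropy used in the estimate above can be computed generically; the following Python sketch is illustrative and independent of the CPS52 implementation:

```python
from collections import Counter
from math import log2

def byte_entropy(data: bytes) -> float:
    """Average Shannon entropy of a byte stream, in bits per 1-byte word."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * log2(c / n) for c in counts.values())

print(byte_entropy(bytes(range(256))))  # uniformly distributed bytes: 8.0
```

An entropy of 4 bits per word, as measured on the Extreme-Horizon AMR description arrays, therefore signals that roughly half of each stored byte is redundant.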
Actually, computing exactly the number of bits required to encode the information bit by bit leads to a potential average gain of 45\% on Extreme-Horizon, and would thus reduce the on-disk size of the AMR data by 45\%, to less than 420 MB. We did not implement such an optimization because it adds a step that increases the compression time for an AMR structure whose size is no longer significant compared to that of the physical fields, and because it would make it harder to rapidly retrieve AMR level seeds and to directly access cell states in the compressed data. Finally, in the case of the Extreme-Horizon datasets, the AMR part of the data now represents only 0.05\% of the total simulation snapshot volume. Table \ref{tab:comp_desc_results} compares the compressed lightAMR format to the legacy RAMSES AMR format.\\ One important remark is that most of the data arrays stored in a RAMSES checkpoint are no longer stored in this new lightAMR format, without any loss of information on the AMR grid structure itself. \firstReview{The large compression ratios presented in table \ref{tab:comp_desc_results} are mainly due to the switch to a boolean array description and to the omission of this redundant meta-information (e.g.\ neighbour cell indices, cell centers, parent cell indices), which can be easily and efficiently reconstructed on the fly by post-processing tools.
The tree pruning step and the CPS52 encoding further increase this AMR file size compression ratio, although less significantly.}
\begin{table}[!h]
\centering
\renewcommand{\arraystretch}{1.1}
\begin{tabular}{l|p{0.10\textwidth}p{0.12\textwidth}}
\hline
Simulation & \centering Ratio & File size (MB) \\
\hline
FRIG & \centering 1485.0 & ~~~~~~20 \\
ORION & \centering 1500.0 & ~~~~~~~~6 \\
ExH & \centering ~~583.3 & ~~~~840 \\
\hline
\end{tabular}
\caption{Chained compression ratio of the tree pruning and CPS52 algorithms on the RAMSES collections of datasets (left) and resulting lightAMR file size for the AMR grid description (right).}
\label{tab:comp_desc_results}
\end{table}
\subsection{LightAMR scalar field float data compression results}
\label{subsec:results_float_comp}
Most of the data volume of an AMR simulation dataset comes from scalar fields stored as floating point data. The lossless compression algorithm takes as input the grid refinement array and an associated scalar field float data array, both after being processed by the tree pruning algorithm. We used a maximum number of removed leading zeroes of 15 (4-bit encoding) and the compression is single-threaded. First, we compare in table \ref{tab:bench_comp_float} the compression performance of the different libraries on \firstReview{the whole density field (double precision, 12800 datasets) from the Extreme-Horizon snapshot}. We also include in the benchmark the Zfp library \cite{zfp}, which is specifically designed for floating point data compression. We show that our algorithm achieves a higher compression ratio on this dataset thanks to the use of the topology information from the AMR mesh. As expected, the dictionary-based compression approach used in LZ4, Snappy and Zlib cannot efficiently compress floating point data. Nevertheless, since Zlib also uses an entropy encoder, it achieves a few percent of compression, but at a very low speed compared to our algorithm.
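The benchmark protocol described above (process datasets sequentially, accumulate sizes and times, then derive the ratio and speed) can be sketched with Python's built-in zlib; the data below are synthetic placeholders, so the numbers will not match the table:

```python
import struct
import time
import zlib
from random import Random

def bench(buffers, level=1):
    """Accumulate uncompressed/compressed sizes and times over all buffers,
    then return (compression ratio, speed in MB/s)."""
    raw = comp = 0
    elapsed = 0.0
    for buf in buffers:
        t0 = time.perf_counter()
        out = zlib.compress(buf, level)
        elapsed += time.perf_counter() - t0
        raw += len(buf)
        comp += len(out)
    return raw / comp, raw / elapsed / 1e6

# Pseudo-random doubles stand in for a scalar field: their mantissa bits look
# random, so a dictionary coder barely compresses them (ratio close to 1).
rng = Random(0)
field = struct.pack("<%dd" % 100_000, *(rng.random() for _ in range(100_000)))
ratio, speed_mb_s = bench([field], level=1)
```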
\\
\begin{table}[!h]
\centering
\renewcommand{\arraystretch}{1.1}
\begin{tabular}{l|p{0.07\textwidth}p{0.12\textwidth}}
\hline
library & Ratio & Speed (MB/s) \\
\hline
PCP4 & \textbf{1.230} & 1085.58 \\
LZ4 & 0.996 & \textbf{2935.65} \\
Zlib 1 & 1.060 & ~~~~25.24 \\
Zlib 9 & 1.070 & ~~~~21.17 \\
Snappy & 1.000 & ~~990.65 \\
Zfp & 1.097 & ~~~~72.03 \\
\firstReview{Zstandard} & 1.061 & ~~585.85 \\
\hline
\end{tabular}
\caption{Benchmark of the compression of the entire density field (12800 datasets), expressed in the lightAMR format, of the Extreme-Horizon snapshot using different libraries: PCP4 (ours), Zlib version 1.2, the LZ4 package version 1.9.3, Snappy 1.1.7, Zfp 0.5.5 \firstReview{and Zstandard 1.5.1 \citep{Zstd}}. Benchmark run on a machine with an Intel Xeon Gold 5118 CPU @ 2.30 GHz.}
\label{tab:bench_comp_float}
\end{table}
On Extreme-Horizon, a single scalar quantity over one snapshot represents about 160 GB (after tree pruning), so a compression ratio of 1.23 on the density field means a gain of about 30 GB on disk. In table \ref{tab:data_compression_results} we show the results of the PCP4 algorithm on the density, x-axis velocity and thermal pressure fields for the different RAMSES datasets. We achieve a good compression ratio on all three tested scalar fields, from 1.17 in the worst case to 1.26 in the best case. The compression speed is quite regular and close to or above 1 GB/s; we noticed that it scales with the CPU frequency, so a higher CPU frequency generally yields a higher compression speed. For example, on an AMD Ryzen 7 3700X @ 3.6 GHz we were able to reach compression speeds from 2.5 GB/s up to 3 GB/s on the FRIG and ORION datasets. \firstReview{We expected a limitation of our algorithm for scalar fields with an average value of zero (e.g.\ a velocity component).
Indeed, if a parent value is positive and one of the child values is negative, then the very first bit of the integer representation (the sign bit) will differ, and therefore no delta compression can be achieved on that octant/quadrant. This limitation can easily be circumvented by shifting all values by a constant offset. However, the results in table \ref{tab:data_compression_results} show that the compression ratios for the velocity component are always at least as good as those of the density and pressure fields, highlighting the fact that the occurrence of opposite signs between the predicted value (the parent cell value) and the child cell value is rather uncommon in typical astrophysical use cases.}\\
The main reason for our high compression speed is that we do not need to evaluate a mathematical function to predict each value; we merely use the parent value, which is already available in memory. Moreover, this parent value is used as the predictor for all the child cells (4 to 8 cells depending on the number of dimensions). Furthermore, based on the compression ratio formula, removing the sign bit plus the entire exponent from the 64-bit integer representation of a double precision float for a pack of cells leads to a theoretical ratio of 1.219. According to the results in table \ref{tab:data_compression_results}, we generally achieve a ratio higher than this threshold, which means that almost every child cell value lies within a factor of two of the parent cell value and demonstrates the interest of our predictor function. Since the basic idea of AMR is to refine a cell to avoid large variations of the physical field between adjacent cells, it was expected that in most cases we achieve at least this ratio.
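To make the parent-as-predictor mechanism concrete, here is a minimal standalone sketch. It assumes an XOR residue between the raw 64-bit patterns, a common choice for floating-point delta coders; the actual PCP residue encoding is more elaborate, but the sign/exponent cancellation it exploits is the same:

```python
import struct

def bits(x: float) -> int:
    """Raw 64-bit pattern of a double."""
    return struct.unpack("<Q", struct.pack("<d", x))[0]

def unbits(b: int) -> float:
    return struct.unpack("<d", struct.pack("<Q", b))[0]

def encode(parent: float, child: float, max_nlz: int = 15):
    """Return (nLZ, residue): the residue's leading zeroes, capped at 15 so
    that nLZ fits in 4 bits, are the bits the coder can drop."""
    r = bits(parent) ^ bits(child)
    nlz = 64 if r == 0 else 64 - r.bit_length()
    return min(nlz, max_nlz), r

def decode(parent: float, residue: int) -> float:
    """XOR is involutive, so decoding is exact (lossless) in every case."""
    return unbits(bits(parent) ^ residue)

# Same sign and binade: the sign and 11 exponent bits cancel, so at least
# 12 leading zeroes appear in the residue (14 here for 1.5 vs 1.7).
nlz_same, res_same = encode(1.5, 1.7)

# Opposite signs: the sign bit differs, nLZ drops to 0, yet the round trip
# is still exact -- only the compression gain is lost.
nlz_opp, res_opp = encode(1.0, -0.5)
```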
Nevertheless, in the density field of FRIG this is not the case, mainly because many shocks (turbulent interstellar medium modelling) are present in the simulation; such a large number of steep density discontinuities can lead to a lower compression ratio.
\begin{table}[!h]
\centering
\renewcommand{\arraystretch}{1.1}
\begin{tabular}{l|c|p{0.07\textwidth}p{0.12\textwidth}}
\hline
Simulation & Quantity & Ratio & Speed (MB/s) \\
\hline
FRIG & $\rho$ & 1.191 & 1274.94 \\
FRIG & $v_x$ & 1.223 & 1079.23 \\
FRIG & $P_t$ & 1.212 & 1100.95 \\
\hline
ORION & $\rho$ & 1.226 & 1236.57 \\
ORION & $v_x$ & 1.219 & 1101.36 \\
ORION & $P_t$ & 1.224 & 1240.4 \\
\hline
Ex-H & $\rho$ & 1.225 & 1059.78 \\
Ex-H & $v_x$ & 1.258 & 1025.72 \\
Ex-H & $P_t$ & 1.169 & ~~960.54 \\
\end{tabular}
\caption{Compression ratios and speeds for \textbf{double precision} fields (density, velocity x-axis component, thermal pressure) of FRIG, ORION and Extreme-Horizon using the Parent-Child predictor algorithm with 4-bit encoding (15 removed zeroes at maximum); the maximum theoretical ratio is 1.293. Single-threaded on an Intel Xeon Gold 5118 CPU @ 2.30 GHz.}
\label{tab:data_compression_results}
\end{table}
Since the float data compression algorithm processes the AMR tree from top to bottom, from coarse cells to leaves, we cannot directly access the value of a cell at a level $l$ without first decompressing the tree from level 0 up to level $l$. Therefore, we give in table \ref{tab:decomp_speeds} the decompression speeds for double precision fields. The decompression speeds we obtain are on a par with the compression speeds, while the decompression remains memory friendly.
\begin{table}[!h]
\centering
\renewcommand{\arraystretch}{1.1}
\begin{tabular}{l|c|p{0.12\textwidth}}
\hline
Simulation & Quantity & Speed (MB/s) \\
\hline
FRIG & $\rho$ & ~~~~911.58 \\
FRIG & $v_x$ & ~~~~931.49 \\
FRIG & $P_t$ & ~~~~911.60 \\
\hline
ORION & $\rho$ & ~~~~927.62 \\
ORION & $v_x$ & ~~~~921.84 \\
ORION & $P_t$ & ~~~~928.15 \\
\hline
Ex-H & $\rho$ & ~~~~760.18 \\
Ex-H & $v_x$ & ~~~~720.01 \\
Ex-H & $P_t$ & ~~~~709.69 \\
\end{tabular}
\caption{Decompression speeds for double precision fields (density, velocity x-component, thermal pressure) of FRIG, ORION and Extreme-Horizon (full level decompression). Single-threaded on an Intel Xeon Gold 5118 CPU @ 2.30 GHz.}
\label{tab:decomp_speeds}
\end{table}
Since in RAMSES one can downcast some physical quantities to single precision in order to reduce the data volume by a factor of 2, we give in table \ref{tab:comp_res_simple} the compression speeds and ratios for single precision data. If the sign bit plus the entire exponent part (9 bits in total) of a single precision float (mapped onto a 32-bit unsigned integer) is removed during the delta compression step, the theoretical compression ratio is 1.362, while it was only 1.219 for a double precision float. As shown previously for double precision float compression, in most of the datasets we are still able to remove the entire sign-plus-exponent part of the bit representation in the single precision case. Since the sign and exponent represent a larger fraction of the 32-bit representation, this naturally leads to higher compression ratios. Therefore, down-casting double precision physical scalar fields within RAMSES not only yields a reduction factor of 2 (due to the down-casting itself) but also enables an even more efficient compression with our PCP algorithm. Consequently, down-casting from double to single precision can be of great interest to significantly reduce data volumes when double precision is not mandatory for the post-processing workflow.
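The 1.362 versus 1.219 gap follows from the same pack formula as the maximum-ratio equation given earlier, with the sign-plus-exponent width as the number of removed bits (an illustrative check, assuming one 4-bit nLZ field per pack of 8 cells):

```python
def pack_ratio(e_enc: int, removed: int, d: int = 3, n_lz_bits: int = 4) -> float:
    """Compression ratio when `removed` leading bits are dropped from every
    value of a 2**d pack, plus one n_lz_bits field stored per pack."""
    pack = 2 ** d
    return (e_enc * pack) / ((e_enc - removed) * pack + n_lz_bits)

print(round(pack_ratio(64, 1 + 11), 3))  # double, sign + 11-bit exponent: 1.219
print(round(pack_ratio(32, 1 + 8), 3))   # single, sign + 8-bit exponent: 1.362
```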
\begin{table}[!h]
\centering
\renewcommand{\arraystretch}{1.1}
\begin{tabular}{l|c|p{0.07\textwidth}p{0.12\textwidth}}
\hline
Simulation & Quantity & Ratio & Speed (MB/s) \\
\hline
FRIG & $\rho$ & 1.301 & 1259.76 \\
FRIG & $v_x$ & 1.393 & 1286.26 \\
FRIG & $P_t$ & 1.349 & 1299.11 \\
\hline
ORION & $\rho$ & 1.401 & 1292.74 \\
ORION & $v_x$ & 1.380 & 1306.45 \\
ORION & $P_t$ & 1.384 & 1331.21 \\
\hline
Ex-H & $\rho$ & 1.394 & 1209.06 \\
Ex-H & $v_x$ & 1.537 & 1217.75 \\
Ex-H & $P_t$ & 1.345 & 1232.63 \\
\end{tabular}
\caption{Compression ratios and speeds for fields down-casted to \textbf{single precision} (density, velocity x-component, thermal pressure) of FRIG, ORION and Extreme-Horizon using the PCP4 algorithm (15 removed zeroes at maximum); the maximum theoretical ratio is 1.829. Single-threaded on an Intel Xeon Gold 5118 CPU @ 2.30 GHz.}
\label{tab:comp_res_simple}
\end{table}
~\\
\firstReview{The PCP4 algorithm uses the parent cell value as the predictor value; it therefore cannot be used if coarse node values are not available. Nevertheless, the basic idea of the lightAMR data model is to be a post-processing dedicated data model. AMR post-processing tools heavily rely on the LOD approach, which requires node values in order to be efficient. Therefore, if no node values are provided, the whole dataset must be loaded to compute node values through up-sampling; such an operation would introduce a very large overhead even for simple tasks such as producing low-resolution images of the simulation box. The typical compression ratios that we obtain more than compensate for the 12.5\% overhead required to store node values (in a 3D cell-based AMR octree).}
\section{Conclusion}
Adaptive Mesh Refinement suffers from the lack of a standardized format to describe the grid structure; AMR codes therefore generally use their own format or fall back on an unstructured grid description of the mesh.
Unfortunately, the unstructured grid description is highly inefficient for AMR meshes and produces very large files even for relatively small meshes. In this paper, we presented an extremely compact format designed for AMR meshes, called \textit{lightAMR}, for which we developed a set of dedicated data reduction and compression algorithms. To minimize data redundancy in RAMSES AMR grids, we developed a tree pruning algorithm able to remove between 11\% and 38\% of the cells in various astrophysical datasets. The large spread of the obtained values is mainly due to the diversity of adaptive refinement layouts in the selected simulation boxes. On top of that, we implemented a lossless and memory friendly compression algorithm specific to the lightAMR format, called CPS52, that compresses the grid refinement/ownership mask arrays using base-52 continuous pack sizes. \firstReview{By switching to a boolean array description of the AMR grid in the lightAMR format, by omitting unnecessary meta-information and by} chaining the tree pruning and CPS52 algorithms on the tested RAMSES AMR datasets, we obtain AMR mesh file size compression ratios ranging from 583.3 to 1500.0, thus reducing the AMR grid description to a negligible part of the overall data volume.\\
In addition, a lossless, memory friendly and efficient floating point data compression algorithm, called PCP, designed for lightAMR physical scalar fields, achieves high compression ratios and speeds compared to commonly used open source libraries by taking advantage of the topology of the AMR structure. For double precision, we achieve compression speeds ranging from 960 MB/s to 1275 MB/s and ratios between 1.17 and 1.26 on the RAMSES datasets, with a sequential version of the algorithm on an Intel Xeon Gold 5118 @ 2.30 GHz.
While down-casting physical scalar fields to single precision naturally reduces the size by a factor of 2, it also increases the compression ratios, which then range from 1.30 to 1.54 with similar compression speeds.\\
Finally, for the tested astrophysical datasets, the use of the lightAMR format leads to overall data reduction percentages of 62.26\% on FRIG, 49.64\% on ORION and 37\% on Extreme-Horizon with no loss of information. \firstReview{If RAMSES users are ready to sacrifice the precision of the scalar quantity fields and let the lightAMR format downcast float data to single precision, they can expect even more significant data volume reductions (83.2\% on FRIG, 81.3\% on ORION, 75.8\% on Extreme-Horizon). In the context of the three astrophysical simulation projects tested in this work, far more ambitious data output policies could have been considered for the same disk storage capacity, with outputs written at higher frequencies (e.g. $\times$5 on ORION, $\times$6 on FRIG, $\times$4 on Extreme-Horizon).} The lightAMR format, as well as the compression algorithms, are compatible by design with the \textit{level-of-detail} (LOD) approach, a very important technique used by post-processing tools to improve performance in both visualization and data analysis.
\begin{table}[!h]
\centering
\renewcommand{\arraystretch}{1.1}
\begin{tabular}{l|ccc}
\hline
~ & FRIG & ORION & Ex-H\\
\hline
\small{RAMSES Ncells ($10^6$)} & 1288.15 & 445.28 & 23660.29 \\
\small{lightAMR Ncells ($10^6$)} & 790.28 & 327.93 & 20958.90 \\
\hline
\small{RAMSES AMR (GB)} & 29.7~~ & 8.70 & 490 \\
\small{lightAMR (GB)} & ~0.020 & 0.006 & ~~0.84 \\
\hline
\small{RAMSES HYDRO (GB)} & 113.4 & 46.9 & 2337.42 \\
\small{lightAMR HYDRO (GB)} & ~~54.0 & 28.0 & 1781~~~ \\
\hline
\small{RAMSES HYDRO SP (GB)} & 56.7 & 23.45 & 1168.71 \\
\small{lightAMR HYDRO SP (GB)} & 24.0 & 10.42 & ~~683.22 \\
\end{tabular}
\caption{Recap of data volume reduction and compression on the RAMSES simulation datasets.
LightAMR data take the tree pruning algorithm into account. The first two lines give the total number of cells before and after tree pruning. The AMR grid data volume is reduced using both the tree pruning and CPS52 compression algorithms, and the HYDRO data volume is reduced using the tree pruning and PCP4 compression algorithms. The last two lines show the data volume reduction when float data are down-casted to single precision (SP).}
\label{tab:recap_results}
\end{table}
\secondReview{Nevertheless, we must insist on the fact that the lightAMR data structure is designed to significantly reduce the data storage impact of RAMSES (or any other fully threaded octree code) simulations, as presented in this paper, and must be seen as a ``storage data structure''. For analysis purposes, a ``computational-friendly'' data structure \citep{Bangerth2011} should be constructed from the lightAMR data in order to achieve good performance. In our analysis workflow, for example, we use vectors or hashmaps of leaf cells only, allowing us to achieve high performance data analysis with a hybrid multiprocessing/multithreading parallelism, but this discussion is beyond the scope of this article.}
\section{Future work}
\secondReview{In the context of the standardisation process of the data format, the custom CPS52 run-length encoder will be updated in the near future to match more closely other mainstream run-length encoding schemes found in the literature, prior to the first release of the lightAMR format.} While the detailed PCP4 compression is lossless, we will explore in future work lossy compression as an alternative to down-casting physical quantities to single precision in order to further reduce the I/O volume. In order to further reduce the data volume for post-processing, and since the lightAMR data model is self-consistent and self-describing even over one domain, we will explore the concatenation of multiple lightAMR datasets from adjacent domains.
\firstReview{Finally, we plan to use this updated version of RAMSES, with the integrated lightAMR format and parallel I/O through the Hercule library, for a large and ambitious astrophysical simulation run on a pre-exascale architecture.}
\subsection*{Acknowledgements}
The authors are thankful to P. Hennebelle and F. Bournaud for their useful comments on this manuscript. The authors acknowledge financial support from the European Research Council (ERC) via the ERC Synergy Grant {\em ECOGAL} (grant 855130).
\bibliographystyle{elsarticle-num}
%
\bibliography{ms}
%
Title: Novel infrared-blocking aerogel scattering filters and their applications in astrophysical and planetary science
Abstract: Infrared-blocking scattering aerogel filters have a broad range of potential applications in astrophysics and planetary science observations in the far-infrared, sub-millimeter, and microwave regimes. Successful dielectric modeling of aerogel filters allowed the fabrication of samples to meet the mechanical and science instrument requirements for several experiments, including the Sub-millimeter Solar Observation Lunar Volatiles Experiment (SSOLVE), the Cosmology Large Angular Scale Surveyor (CLASS), and the Experiment for Cryogenic Large-Aperture Intensity Mapping (EXCLAIM). Thermal multi-physics simulations of the filters predict their performance when integrated into a cryogenic receiver. Prototype filters have survived cryogenic cycling to 4K with no degradation in mechanical properties.
https://export.arxiv.org/pdf/2208.03755
\keywords{aerogel, infrared, far-IR, sub-millimeter, microwave, metamaterial, filter}
\section{INTRODUCTION}
\label{sec:intro}
%
Cryogenic telescope receivers common in the far-infrared, sub-millimeter, and microwave regimes require strong out-of-band infrared rejection in order to maintain cryogenic performance. A variety of strategies have been employed to realize thermal blocking structures, including reflective mesh, absorptive, and scattering designs\cite{ULRICH196737, ULRICH196765, ade_filters, Bock:95, Halpern, Munson:17, Timusk:81, choi, Whitbourn:85}. These low-pass filters have been implemented using a variety of different materials. We report on the continued development of broadband, tunable, polymer aerogel-based infrared-blocking filters for millimeter, sub-millimeter, and far-infrared observations in planetary science and astrophysics, first reported in Ref.~\citenum{Essinger-Hileman:20}. The filters presented here are primarily polyimide aerogels\cite{guo, guo2, meador1} loaded with diamond scattering particles ranging from 1 to 60 microns, although different polymers for the host aerogel matrix have been explored. These filters have strong out-of-band rejection but also a low index of refraction (n $\simeq$ 1.15), eliminating the need for an anti-reflection (AR) coating. The filters function primarily by scattering away unwanted light, preventing it from reaching the coldest parts of the receiver.
\section{Optical Modeling and Filter Design}
Filter optical design relies primarily on a Mie scattering dielectric model\cite{Essinger-Hileman:20}. Mie scattering occurs because of the contrast between the higher index of refraction of the embedded particles and the lower index of the surrounding medium. Table \ref{tab:materials_prop} summarizes the material and optical properties of the substrates used to construct aerogel scattering filters.
While this publication focuses only on diamond-loaded polyimide aerogels, data for silica and silicon are provided for context. Even though bulk polyimide has an index of refraction of 1.7, due to the low volume filling fraction of an aerogel, an unloaded polyimide aerogel typically has an index of refraction of about 1.1. The diamond-loaded aerogel filters have an index of refraction typically less than 1.2.
\begin{table}[h]
\centering
\begin{tabular}{|c|c|c|}
\hline
\textbf{Material} & \textbf{Index of Refraction, n} & \textbf{Density [$\frac{g}{cc}$]} \\
\hline
Silica & 1.95 & 2.65 \\
\hline
Polyimide & 1.7 & 1.42 \\
\hline
Silicon & 3.4 & 2.33 \\
\hline
Diamond & 2.38 & 3.53 \\
\hline
\end{tabular}
\caption{Materials properties for bulk components of scattering aerogel filters. Early prototypes used a variety of materials, but the filters in this publication used only polyimide and diamond.}
\label{tab:materials_prop}
\end{table}
Early filter prototypes from Ref.~\citenum{Essinger-Hileman:20} were fabricated with silicon powder produced by grinding up float-zone silicon wafers in a ball mill. The resulting powder was sieved through meshes of various sizes, down to US 140 mesh size, or 106 microns. This produced a powder with a broad, non-uniform particle size distribution. Early modeling efforts assumed a uniform distribution in particle radius; however, later analysis showed a significant excess of smaller particles. The large dispersion in the particle size distribution produced by the ball milling and subsequent sieving process is responsible for the relatively soft cut-off frequency in the measurement and for the disagreement between measurement and model seen in the left plot of Figure \ref{fig:mie_model}. Figure \ref{fig:mie_model} compares transmission spectra between earlier modeling and fabrication efforts and more recent ones. Ultimately, commercial off-the-shelf industrial diamond particles were chosen as the scattering media.
We selected powders of 1-3, 3-6, 6-12, 10-20, 15-30, 20-40, and 40-60 micron distributions. All powders are from the PUREON Microdiamant MONO-ECO line of monocrystalline synthetic diamond. Per the manufacturer, particle size distributions are approximately Gaussian, with the end ranges representing the approximate three-standard-deviation limits for particle diameter. Table \ref{powders} shows a sub-sample of particle size distribution data. The particle size distributions are much more tightly controlled than that of the silicon powder made in-house. Diamond powder composition, particle sizes, and shapes were later analyzed with optical microscopy, particle size analysis, and Raman spectroscopy. Figure \ref{fig:diamond_particles} shows images of the diamond particles and the silicon particles. \begin{table}[h] \centering \begin{tabular}{ |c|c| } \hline \textbf{Powder Size Range} & \textbf{Median Size} \\ \hline 3-6\,$\mu$m & 4.2\,$\mu$m \\ \hline 6-12\,$\mu$m & 9\,$\mu$m \\ \hline 15-30\,$\mu$m & 22.5\,$\mu$m \\ \hline 40-60\,$\mu$m & 47\,$\mu$m \\ \hline \end{tabular} \caption{Diamond particle size distributions and median sizes. All diamond powders are from the PUREON Microdiamant MONO-ECO line of monocrystalline synthetic diamond. Median particle sizes all fall well within the stated specifications from the manufacturer. } \label{powders} \end{table} The diamond particle size and loading density are controlled to tune the filter to the desired cutoff frequency. Particle size has the strongest effect on cutoff frequency: larger particles have a lower cut-off frequency and smaller particles have a higher cut-off frequency. Polyimide aerogel filters have been fabricated with scatterer loading densities ranging from 20\,mg/cc up to 100\,mg/cc. For a given particle size distribution, increasing the loading density creates a filter with a lower cut-off frequency and vice versa. 
For a given loading density, smaller particles produce a stronger cutoff than larger particles because the scattering cross-section goes as the square of the particle radius (area $\propto r^2$) while the particle mass goes as the cube of the particle radius (volume $\propto r^3$). Figure \ref{fig:comparison} shows the effects of varying particle radius and density on the predicted filter cutoff frequency. Multiple particle size distributions and densities can be mixed together, e.g. primarily using a higher density of larger particles to set the primary cut-off frequency and mixing in lower densities of smaller particles to control high-frequency leakage. Since first reported, continued development of the Mie scattering model has yielded additional features that allow us to capture the behavior of multiple size distributions of particles as well as include the absorption and emission features of the base polymer chemistry in the transmission curve predictions. A more detailed discussion is presented in Section \ref{sec:chemistry}. Additionally, the tightly constrained diamond particle size distributions bring the predicted filter behavior into better agreement with the measured data than in Ref.~\citenum{Essinger-Hileman:20}. See Ref.~\citenum{Barlis} for details regarding the optical measurements of film samples. The Mie scattering modeling code is now also able to accommodate user-defined particle size distributions, beyond the basic uniform and Gaussian options. \subsection{Designing Filters for Specific Missions} With greater control of filter cut-off frequencies and in-band transmission, and knowledge of the base aerogel transmission spectrum, it is possible to design filters to meet specific mission requirements. During development and prototyping, filters for three different missions were designed, fabricated, and characterized. 
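The $r^2$ versus $r^3$ scaling argument above can be made concrete with a short numerical sketch in the geometric-optics limit (a deliberately simplified stand-in for the full Mie model; the 50\,mg/cc loading and the particle radii are illustrative values only):

```python
# Geometric-optics estimate: at fixed mass loading, the total scattering
# cross-section per unit filter volume scales as 1/r for spheres of radius r.
import math

RHO_DIAMOND = 3.53   # g/cc, bulk diamond density (Table 1)
LOADING = 0.050      # g/cc, illustrative 50 mg/cc scatterer loading

def cross_section_per_cc(radius_um):
    """Total geometric cross-section (cm^2 per cc of aerogel)
    for monodisperse spheres of the given radius in microns."""
    r_cm = radius_um * 1e-4
    particle_mass = RHO_DIAMOND * (4.0 / 3.0) * math.pi * r_cm**3  # grams
    n_per_cc = LOADING / particle_mass        # particle number density
    return n_per_cc * math.pi * r_cm**2       # sigma_total = n * pi r^2

# At fixed loading, halving the particle radius doubles the total cross-section.
ratio = cross_section_per_cc(5.0) / cross_section_per_cc(10.0)
```

This is only the geometric limit; near and below the cutoff wavelength the actual Mie efficiencies, not the geometric areas, set the filter response.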
The first experiment is the Sub-millimeter Solar Observation Lunar Volatiles Experiment (SSOLVE), which will observe lines from HDO, H${}_2$O, and OH molecules in the lunar atmosphere at 23\,GHz, 500\,GHz, and 2.5\,THz, respectively\cite{SSOLVE}. SSOLVE requires good rejection of higher-frequency THz light but greater than 90\% transmission at 2.5\,THz. Figure \ref{fig:prototypes} shows a comparison of different diamond-loaded polyimide aerogel filters designed to meet the SSOLVE requirements. Filters have also been designed and fabricated to meet the needs of experiments that observe in the microwave regime, like the Cosmology Large Angular Scale Surveyor (CLASS), a cosmic microwave background (CMB) polarimeter designed to observe from the ground at 33-234\,GHz\cite{CLASS}. Filters are also being prototyped for the upcoming Experiment for Cryogenic Large-Aperture Intensity Mapping (EXCLAIM), a balloon-borne spectrometer that will carry out a line intensity mapping survey at frequencies from 420-540\,GHz, targeting carbon monoxide and ionized carbon emission from galaxies in redshift slices covering $z$ = 0 - 3.5 \cite{EXCLAIM}. Figure \ref{fig:prototypes} shows several different prototype filters for these two missions. \section{Exploring Aerogel Polymer Chemistries} \label{sec:chemistry} Scattering aerogel filters have been manufactured with silica and polyimide base aerogels, as well as with silicon and diamond scattering particles\cite{Essinger-Hileman:20}. As demonstrated in Ref.~\citenum{guo2}, polyimide aerogels synthesized with different diamines produce backbone structures with varying mechanical properties. However, different backbone structures may also vary the in-band transmission for infrared-blocking applications. 
Initially, two formulations were identified for further investigation: \begin{itemize} \item 4,4'-Bis(4-aminophenoxy)biphenyl (BAPB) (25\,mol\%); with 2,2'-Dimethylbenzidine (DMBZ) (75\,mol\%); 3,3',4,4'-biphenyltetracarboxylic dianhydride (BPDA); and 1,3,5-triaminophenoxybenzene (TAB), aka the ``BAPB-based polyimide'' \item 1,12-dodecyldiamine (DADD) (40\,mol\%); with DMBZ (60\,mol\%); BPDA; and TAB, aka the ``DADD-based polyimide''. \end{itemize} As shown in Figure \ref{fig:backbone_comparison}, the two different polymers show similar absorption features around 10-20\,THz; however, the BAPB formulation shows measurably higher low-frequency transmission, so this formulation was used for the vast majority of the prototype filters. Measuring the optical performance of the different polyimide backbone formulations also enabled the incorporation of the base aerogel properties into the Mie scattering model. Including the dielectric properties of the base aerogel in the model creates more realistic predictions of filter cutoff frequency as well as in-band transmission. Figure \ref{fig:backbone_comparison} shows a model filter with the DADD-based and BAPB-based polymers incorporated. Other polyimide polymer formulations were also explored, including 7\,wt\% pyromellitic dianhydride (PMDA)+DMBZ/1,3,5-benzenetricarbonyl trichloride (BTC) and 9\,wt\% PMDA+DMBZ/BTC films. Preliminary results suggest that the 7\,wt\% PMDA+DMBZ/BTC polyimide formulation has better in-band transmission than the BAPB-based formula. During the continued development of the polyimide-based aerogels, efforts were made to identify different polymers that might provide better in-band transmission and reduced absorption. Cyclic olefin copolymer (COC) plastics have been identified as promising candidates for aerogel substrates. The COC polymers were chosen for their high in-band transmission and reduced absorption\cite{Topas} compared to the polyimide formulations. 
Prototype unloaded and diamond-loaded COC aerogel filters are being manufactured. \section{Thermal modeling} \label{sec:thermal} The performance of the aerogel filters in a cryogenic receiver will ultimately determine their utility in future missions. Filter equilibrium temperatures and the subsequent conductive and radiative thermal loading are key benchmarks. Thermal finite element models of the polyimide aerogel scattering filters were constructed with COMSOL Multiphysics software. The CLASS receiver filter geometry was used as a representative filter implementation. The filter stack consists of three vertically stacked filters over two filter stages, at 60\,K and 4\,K respectively. The model calculates the background infrared loading on the 4\,K stage after radiation from a 250\,K surface passes through the upper filters. The polyimide aerogel filter performance was compared with model alumina and polytetrafluoroethylene (PTFE) filters. The thermal conductivity of each material is listed in Table \ref{tab:filters}. \begin{table}[h] \centering \begin{tabular}{|c|c|} \hline \textbf{Material} & \textbf{Thermal Conductivity $[\frac{\text{W}}{\text{m\,K}}]$}\\ \hline Diamond-loaded polyimide aerogel & 0.024 \\ \hline Alumina & 27 \\ \hline PTFE & 0.24 \\ \hline \end{tabular} \caption{Summary of material properties used in COMSOL thermal finite element analysis models. Alumina and PTFE are commonly used materials for infrared rejection in cryogenic telescope receivers in the sub-millimeter and microwave.} \label{tab:filters} \end{table} The first two filters are mounted to the 60\,K stage, and the side wall of the optics tube around the top two filters is held at 60\,K. The third filter is mounted to the 4\,K stage, and the sidewall along the bottom filter as well as the wall between the last filter and the focal plane are held at 4\,K. 
The surface above the top of the first filter is held at 250\,K to simulate the radiation incoming from the stage that sits above the 60\,K stage in the CLASS receiver. The separation of filters within the 60\,K stage of the receiver can be varied to minimize the background radiation that reaches the 4\,K stage. The geometry of the filters from the COMSOL model is shown in Figure \ref{fig:filter_stack}. The COMSOL radiation module was used to estimate the total thermal load on the 4\,K stage. In addition to the thermal loading from electromagnetic radiation, the COMSOL model also includes the thermal loading due to conduction through the parts of the filter stack in physical contact with other components of the receiver. The emissivity of the aerogel, alumina, and PTFE filters was swept parametrically from $\epsilon$ = 0.05 to 0.9, in order to predict filter performance across a wide range of physical parameters. A total equilibrium thermal load on the third filter was calculated for each emissivity. Figure \ref{fig:emissivity_sweep} shows a comparison of the performance of the three different filter substrates. The two 60\,K filters are held at the nominal CLASS receiver spacing of 43\,mm. A stack of three alumina filters shows the most power rejection, by several orders of magnitude. This result is somewhat misleading, however, because most implementations of alumina filters do not use a stack of three, owing to their lower in-band transmission. Alumina filters also have increased cost and more complex AR-coating requirements, due to their high index of refraction, n$\simeq 3.1$. The performance of the aerogel filters depends strongly on the emissivity, but they meet the CLASS thermal loading requirement of less than 100\,mW as long as the emissivity is below 0.5. In contrast, alumina and PTFE filters are high emissivity by design, with expected emissivities of 0.75 and 0.97 respectively, set by reflectivity from the surface. 
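For intuition on why filter emissivity dominates the radiative loading, a gray-body estimate of the exchange between a warm stage and a cold filter can be sketched as follows (an idealized parallel-plate geometry with illustrative area and emissivity values, not the actual CLASS receiver; the COMSOL model in the text handles the full geometry and conduction):

```python
# Order-of-magnitude radiative load on a cold filter from a warm stage,
# treating both surfaces as infinite parallel gray plates.
SIGMA = 5.670e-8  # W m^-2 K^-4, Stefan-Boltzmann constant

def gray_exchange(T_hot, T_cold, eps_hot, eps_cold, area_m2):
    """Net radiative power (W) between two parallel gray plates."""
    eps_eff = 1.0 / (1.0 / eps_hot + 1.0 / eps_cold - 1.0)
    return eps_eff * SIGMA * area_m2 * (T_hot**4 - T_cold**4)

# A 250 K surface radiating onto a 60 K filter of ~0.15 m^2 (illustrative area)
load_low_eps = gray_exchange(250.0, 60.0, 0.9, 0.1, 0.15)   # low-emissivity filter
load_high_eps = gray_exchange(250.0, 60.0, 0.9, 0.9, 0.15)  # high-emissivity filter
```

Even this crude estimate shows the absorbed load growing steeply with filter emissivity, consistent with the parametric sweep described above.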
A second simulation of aerogel filters was run to determine the optimal filter spacing of the 60\,K stack. The total thermal load on the 4\,K filter was calculated for spacings ranging from 27\,mm to 120\,mm at emissivities of 0.1, 0.5, and 0.9. The current CLASS design has a spacing of 43\,mm. Maximal filter spacing minimizes thermal loading in all cases, by approximately a factor of 5. The data suggest that spacing the 60\,K filters as far apart as is reasonable is optimal. Cryogenic testing to verify the predicted thermal performance of the aerogel scattering filters is expected to begin in the fourth quarter of 2022. \section{Concluding Remarks} Diamond-loaded polyimide aerogel scattering filters are a promising emerging filter technology for use in far-infrared, sub-millimeter, and microwave astrophysics, cosmology, and planetary science missions. Prototype filters demonstrate excellent out-of-band rejection, high in-band transmission, and tunable cut-off frequencies. Filters are being manufactured in large diameters (larger than 40\,cm) and cryogenic testing is underway to verify their performance. In addition to polyimide aerogel filters, other polymeric aerogel filters are being explored and will potentially yield better in-band transmission with reduced in- and out-of-band absorption. \acknowledgments % The authors would like to thank Aerogel Technologies, LLC, for their assistance in manufacturing film prototypes while NASA facilities were closed due to the COVID-19 pandemic. The material is based upon work supported by NASA under award number 80GSFC21M0002. \bibliography{report} % \bibliographystyle{spiebib} %
Title: Focal-plane wavefront sensing with photonic lanterns I: theoretical framework
Abstract: The photonic lantern (PL) is a tapered waveguide that can efficiently couple light into multiple single-mode optical fibers. Such devices are currently being considered for a number of tasks, including the coupling of telescopes and high-resolution, fiber-fed spectrometers, coherent detection, nulling interferometry, and vortex-fiber nulling (VFN). In conjunction with these use cases, PLs can simultaneously perform low-order focal-plane wavefront sensing. In this work, we provide a mathematical framework for the analysis of the photonic lantern wavefront sensor (PLWFS), deriving linear and higher-order reconstruction models as well as metrics through which sensing performance -- both in the linear and nonlinear regimes -- can be quantified. This framework can be extended to account for additional optics such as beam-shaping optics and vortex masks, and is generalizable to other wavefront sensing architectures. Lastly, we provide initial numerical verification of our mathematical models, by simulating a 6-port PLWFS. In a companion paper, we provide a more comprehensive numerical characterization of few-port PLWFSs, and consider how the sensing properties of these devices can be controlled and optimized.
https://export.arxiv.org/pdf/2208.10563
\title{Focal-plane wavefront sensing with photonic lanterns I: theoretical framework} \author{Jonathan Lin,\authormark{1,*} Michael P. Fitzgerald,\authormark{1} Yinzi Xin,\authormark{2} Olivier Guyon,\authormark{3} Sergio Leon-Saval,\authormark{4} Barnaby Norris,\authormark{5} Nemanja Jovanovic\authormark{3}} \address{\authormark{1} Physics \& Astronomy Department, University of California, Los Angeles (UCLA), 475 Portola Plaza, Los Angeles 90095, USA\\ \email{\authormark{*}jon880@astro.ucla.edu} % \authormark{2} Department of Astronomy and Steward Observatory, The University of Arizona, 933 N. Cherry Ave., Tucson, AZ 85719, USA\\ \authormark{3} Department of Astronomy, California Institute of Technology, Pasadena, CA, 91125, USA\\ \authormark{4} Sydney Astrophotonic Instrumentation Laboratory, School of Physics, The University of Sydney, Sydney, NSW 2006, Australia \\ \authormark{5} Sydney Institute for Astronomy, School of Physics, Physics Road, The University of Sydney, NSW 2006, Australia} \section{Introduction} High-contrast imaging is becoming one of the primary tools for the direct detection and characterization of exoplanets. This class of techniques combines ground-based extreme adaptive optics (AO), which corrects for wavefront aberrations induced by passage of light through the atmosphere and the instrument, and coronagraphy, which suppresses on-axis starlight to reveal the circumstellar environment, as well as contrast-boosting post-processing techniques such as angular differential imaging \cite{Marois:06:ADI} and spectral differential imaging \cite{Sparks:02:SDI}. Together, these techniques enable contrasts down to $\sim 10^{-6}$ and angular separations down to 200 mas. So far, some 30 exoplanets have been detected through high-contrast imaging techniques \cite{Bowler:16}; however, almost all are widely separated gas giants with masses several times that of Jupiter. 
One of the main roadblocks to increasing current sensitivity is non-common-path aberrations (NCPAs): quasi-static aberrations evolving on the timescale of minutes to hours that occur due to instrument instabilities induced by humidity, temperature, and gravity vector changes \cite{Martinez:12:NCPA1,Martinez:13:NCPA2}. Because these aberrations appear downstream from the wavefront sensor, they cannot be removed via typical pupil-plane wavefront control systems. As a result, wavefront control must be improved before instruments can attain the necessary contrasts and angular separations typical for systems similar to the Sun and Earth: $\sim 10^{-10}$ and $\sim 100$ mas, at a distance of 10 pc, in visible light \cite{Traub:10}. One way forward is to sense wavefront aberrations in the final focal plane with the science camera, so that sensor and science light travel down the same optical path. This approach, known as focal-plane wavefront sensing (FPWFS), removes NCPAs. \\\\ In parallel, a number of new ideas and techniques are being proposed to further advance direct exoplanet characterization. One development is in short-exposure exoplanet imaging, which leverages statistical differences in planet and star speckle behavior at millisecond timescales to distinguish planet light from starlight \cite{Rodack:21,Galicher:19}. This technique is distinct from ADI and SDI. Coherent detection, which exploits the mutual incoherence of planet light and starlight, presents an alternative pathway for separating planet light from starlight. A related technique is nulling interferometry, an alternative to conventional coronagraphy that can achieve smaller inner working angles, and which works by destructively interfering starlight collected from different subapertures or telescopes. Other advances in direct characterization will need to be made not in the isolation of planet light, but in the spectral analysis of that light. 
The high-resolution spectral analysis of faint objects like exoplanets will require methods for both the efficient coupling of light into the science instrument, and stabilization of that same light, which will vary with time due to passage through the atmosphere and instrument. These two requirements are typically in tension \cite{Lin:21}, and thus hard to achieve simultaneously. \\\\ The photonic lantern (PL; \cite{Birks:15}) provides a capable platform for the above applications; other notable applications include OH line suppression through fiber Bragg gratings \cite{Trinh:13:GNOSIS,ellis:18:PRAXIS}, and spectroastrometry \cite{Gatkine:19}. As seen in Figure \ref{fig:PL}, the PL is a tapered waveguide that gradually transitions from a few-mode optical fiber (FMF) geometry to multiple widely-spaced single-mode cores, similar to a multi-core fiber (MCF), which can then be fanned out to an array of single-mode fibers (SMFs). When the FMF end is placed in the focal plane, the PL can efficiently couple multi-modal telescope light into multiple SMFs. While PLs come in a wide array of port counts and geometries, they can be largely classified into three groups. In what we call the ``standard'' PL, embedded cores are uniform in structure and refractive index. At the other extreme, ``mode-selective'' PLs use differing single-mode core radii or index contrasts, so that each fiber mode at the FMF-like lantern entrance routes to a distinct output port \cite{Leon-Saval:14}. Lastly, we term lanterns that operate between these two extremes ``hybrid lanterns.'' These lanterns have one core mismatched from the rest, thereby funnelling light from the fundamental fiber mode into a single output port while mixing the remaining light in the rest of the ports. This concept is similar to the ``mode-group selective'' lantern, introduced in \cite{Vel:18}. 
\\\\ Critically, in the process of coupling light into an array of SMFs, PLs map phase aberrations into intensity variations in a one-to-one manner, at least for small aberrations. This behavior enables the PL to additionally act as a 100\% duty cycle focal-plane WFS \cite{Corrigan:18,Norris:20,Wright:22}. Because PLs have a limited number of outputs (set by the manufacturing process, though PLs with up to 511 modes have been reported \cite{Birks:15}), these devices as of now can only give low-order wavefront information. Therefore, while PLs are well-suited to sense low-order aberrations like NCPAs \cite{Sauvage:07} and island modes \cite{NDiaye:18}, they are not a standalone WFS solution in extreme AO (XAO) systems, which correct upwards of 1000 modes. In such applications, PLs will likely need to work in tandem with pupil-plane sensors like the Shack-Hartmann or pyramid WFS. \\\\ We show an example of this phase-to-intensity mapping in Figure \ref{fig:astig}, which plots the non-degenerate intensity responses of a 6-port PL in the presence of positive and negative astigmatism. The focus of this work is to assess the performance of the photonic lantern wavefront sensor (PLWFS) in contexts like instrument coupling or coherent detection, where PLs are already being considered for use. In these scenarios, the utility of the PL is doubled, enabling both the aforementioned non-WFS applications as well as focal-plane wavefront sensing. We focus on two contexts: the first is fiber-fed, high-resolution spectrometry, mentioned above; the second is vortex-fiber nulling (VFN), a high-contrast imaging technique which exploits symmetries in optical fiber modes to separate star and planet light \cite{Ruane:19:VFN}. In turn, we restrict our analysis to the infrared, since this wavelength regime will be the staging ground for the next push in direct exoplanet spectrometry, with upcoming instruments such as HISPEC and MODHIS \cite{Mawet:19:HISPEC}. \\\\ Research in PL wavefront sensing is ongoing. 
For instance, \cite{Norris:20} recently combined a 19-port PL with a neural net to enable nonlinear wavefront reconstruction of the first 9 non-piston Zernike modes. In comparison, we take a broader, but less in-depth approach: our goal is to provide a general baseline overview of the capabilities of the PLWFS, as well as the methods through which the sensing properties of these devices might be controlled. We place added emphasis on the linear analysis of the PLWFS, in order to assess the limits of the PLWFS under more standard and simpler linear AO control schemes. In Section \S\ref{sec:analytic}, we establish the mathematics that will enable wavefront reconstruction with the PLWFS. To begin, we present power series expansions for the PLWFS intensity response to first and second order in phase (\S\ref{ssec:genmodel}-\S\ref{ssec:second}). We also consider methods through which these models can be inverted, thereby enabling wavefront sensing. Next, we expand our models to an arbitrary modal basis (\S\ref{ssec:modalbasis}): this both increases the computational efficiency of the reconstruction models and allows them to be expressed in terms of common phase aberration bases such as the Zernike polynomials. In Section \S\ref{sec:analytic2}, we apply our models to quantify the behavior of the PLWFS. This analysis includes deriving conditions for WFS linearity (\S\ref{ssec:modeselectivity}-\S\ref{ssec:lincond}), and estimating the maximum amount of wavefront error (WFE) that can be handled by these sensors (\S\ref{ssec:range}). \\\\ Finally, we combine our models with numerical simulations, to provide a first look at the wavefront-sensing abilities of a standard, hybrid, and mode-selective 6-port PL. Our aim in this work is to develop an initial understanding of the capabilities of the PLWFS, and in doing so we assume ``perfect'' lanterns and neglect noise (though we provide some reference to noise propagation in the linear regime in \S\ref{ssec:linearize}). 
We present an overview of our numerical method in \S\ref{sec:method}, and the corresponding results in \S\ref{sec:results}. In a companion paper \cite{paper2}, we extend these simulations to cover a range of PLWFS configurations beyond the 6-port geometries considered in this paper, in order to establish a rough baseline of the sensing abilities of PLWFSs. There, we also investigate potential strategies through which PLWFS performance can be further controlled and optimized. \section{Propagation analysis and phase reconstruction}\label{sec:analytic} \subsection{General model}\label{ssec:genmodel} Consider the following general setup for a backend device to an AO-equipped telescope. AO-corrected light passes into an instrument backend, which may contain components such as beam-shaping (PIAA) optics \cite{Guyon:03:PIAA} and additional phase and/or amplitude optics (e.g. a vortex fiber nuller mask). After light passes through some number of upstream components, it is focused onto the FMF end of a PL, ultimately propagating into the SMF ports at the PL output. These output ports may also optionally be interferometrically combined. Because optical propagation is linear in the complex electric field, the action of all backend optical components can be lumped into a single complex-valued transfer matrix, which we denote $A$. This matrix connects the input electric field ${\bm u}_{\rm in}$ and the output electric field ${\bm u}_{\rm out}$ of the backend device: \begin{equation}\label{eq:amplitude} {\bm u}_{\rm out} = A {\bm u}_{\rm in}. \end{equation} In the case of the PLWFS, the transfer matrix $A$ will contain a projection component, since an $N$-port lantern will support only $N$ complex-valued electric field modes, meaning that the vector ${\bm u}_{\rm out}$ is $N$-dimensional. 
Note that, unlike the modes of a standard optical fiber, the modes of a PL are three-dimensional, encompassing the full propagation of light from the FMF-like input to the MCF-like output of the lantern. Here, we have a choice of mode basis. The modes we use in this work, which we term ``lantern modes,'' look like individual SMF modes at the lantern exit, and complex linear combinations of fiber modes at the lantern entrance. These modes can be computed by illuminating a single output core at the lantern exit and numerically back-propagating light to the lantern entrance. Simulated cross-sections of lantern modes at the PL entrance, computed in this manner, are shown in Figure \ref{fig:lanternmodes} for a standard 6-port lantern. The $A$ matrix accounts for optical propagation through the telescope and any subsequent beam-shaping to the PL entrance, and then projects the focal plane electric field onto these lantern modes. Accordingly, $A$ has dimensions $N\times M$, for an $N$-port lantern and $M$ pupil samples. \\\\ Since we ultimately measure intensity, not complex amplitude, we recast equation \ref{eq:amplitude} in terms of the intensity response $\bm{p}_{\rm out}$: \begin{equation}\label{eq:intensity} \bm{p}_\text{out} = |A\bm{u}_\text{in}|^2. \end{equation} For phase-only aberrations, the goal of wavefront sensing is to invert equation \ref{eq:intensity} and recover the phase of $\bm{u}_{\rm in}$. We go over methods to do so in the following subsections. \subsection{Linearizing intensity response}\label{ssec:linearize} In this subsection, we provide a review of wavefront sensing in the linear regime. While optical propagation is linear in complex amplitude, it is nonlinear in intensity. However, for small changes in aberration amplitude, the intensity response will vary in a near-linear manner. Consider a phase-only aberration $\bm{\phi}$ in an electric field with assumed uniform intensity $I_{\rm in}=1$. 
We can expand the input electric field about some arbitrary reference phase $\bm{\phi}_0$ as \begin{equation}\label{eqn:linstart} {\bm u}_{\rm in} = \exp(i\bm{\phi}) \approx e^{i\bm{\phi}_0} \odot \left[\bm{1} + i (\bm{\phi}-\bm{\phi}_0)\right] \end{equation} where the vector $\bm{1}$ represents the electric field of a flat wavefront, and ($\odot$) represents element-wise (Hadamard) vector-vector multiplication. For clarity, we denote $\bm{\Delta \phi}\equiv \bm{\phi} - \bm{\phi}_0$, and modify the transfer matrix as $A_{ij} \rightarrow A_{ij}e^{i\phi_{0,j}}$; for a flat reference wavefront, $\phi_{0,j}=0$ and $A_{ij}$ is unchanged. The intensity resulting from the phase aberration $\bm{\phi}$ is \begin{equation} \begin{split} \bm{p}_\text{out} &= |A\bm{u}_\text{in}|^2 \\ &\approx \big|A \left[\bm{1} + i \bm{\Delta\phi}\right]\big|^2\\ &\approx |A\bm{1}|^2 + 2\,{\rm Im}\left[(A\bm{1})\odot (A^*\bm{\Delta \phi})\right]\\ \end{split} \end{equation} where the squaring and modulus ($|\cdot|$) operators are applied element-wise, and Im denotes taking the imaginary part. We can define the matrix $B$, having the same dimensions as $A$, as \begin{equation} B_{ij} \equiv2\,{\rm Im}\left[ A^*_{ij} \sum_kA_{ik}\right] \end{equation} and recover \begin{equation} \label{eq:linear} \bm{p}_\text{out} \approx |A \bm{1}|^2+B\bm{\Delta\phi}. \end{equation} The first term represents the bias intensity when there is no phase error, while the matrix $B$ (often called the ``interaction matrix'' in the context of adaptive optics) describes the linear response of the intensities to phase perturbations from the reference wave -- in other words, $B$ is the Jacobian of the PL's intensity response, evaluated at the reference wavefront determined by $\bm{\phi}_0$. Equation \ref{eq:linear} can be inverted (e.g. via the Moore-Penrose pseudo-inverse), enabling the reconstruction of phase errors from intensity responses. 
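The linear model and its pseudo-inverse reconstruction can be sketched numerically. Below is a minimal NumPy illustration in which a random complex matrix stands in for a real lantern's transfer matrix $A$ (the dimensions and matrix entries are illustrative, not a modeled PL):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 6, 12                                   # 6 lantern ports, 12 pupil samples
A = rng.normal(size=(N, M)) + 1j * rng.normal(size=(N, M))  # stand-in transfer matrix

A1 = A @ np.ones(M)                            # response to a flat wavefront
B = 2.0 * np.imag(np.conj(A) * A1[:, None])    # B_ij = 2 Im[A*_ij sum_k A_ik]

dphi = 1e-3 * rng.normal(size=M)               # small phase aberration (radians)
p_out = np.abs(A @ np.exp(1j * dphi))**2       # exact intensity response

# Least-squares phase estimate from the measured intensities
dphi_hat = np.linalg.pinv(B) @ (p_out - np.abs(A1)**2)
```

Since only $N$ intensities are measured, $B$ has rank at most $N$, and the pseudo-inverse recovers only the component of $\bm{\Delta\phi}$ lying in the row space of $B$.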
The phase aberration modes which this backend device can sense in the linear regime will be determined by the $B$ matrix; un-sensed aberration modes will lie in the null space of $B$. Alternatively, $A$ and $B$ can be used to compute gradients and Hessians for cost functions, enabling iterative nonlinear estimation of phase aberrations (see \cite{Frazin:18} for an example of this sort of analysis, with the pyramid WFS). \\\\ Finally, to understand how error and noise propagate through the linear reconstruction process, we follow the analysis of \cite{Frazin:18}. We write the intensity response of an $N$-port PLWFS as \begin{equation} \bm{p}_\text{out} = |A \bm{1}|^2 +B\bm{\Delta\phi} + \bm{n}(\bm{\Delta \phi}) +\bm{\nu} \end{equation} where $\bm{n}$ is an $N$-length vector of functions that accounts for the error caused by linearization, and $\bm{\nu}$ is the noise, assumed to be composed of $N$ independent zero-mean random processes. The least-squares estimate $\widetilde{\bm{\Delta\phi}}$ for the original phase aberration $\bm{\Delta\phi}$ is obtained through $B^+$, the pseudo-inverse of $B$, as follows: \begin{equation} \begin{split} \widetilde{\bm{\Delta\phi}} &= B^+ \left[\bm{p}_\text{out} - |A \bm{1}|^2\right]\\ &= B^+B\bm{\Delta \phi} + B^+\left[\bm{n}(\bm{\Delta \phi}) +\bm{\nu}\right]. \end{split} \end{equation} Small singular values of $B$ will amplify both the error incurred by linearization and random noise. Such amplification can be partially mitigated through regularization of the singular values. Future analysis of the reconstruction properties of the PLWFS, particularly for nonlinear reconstruction and closed-loop operation, will require more detailed considerations of noise and error propagation --- we leave this for later work. \subsection{Second-order analysis of intensity response}\label{ssec:second} Under perfect knowledge of the system transfer matrix $A$, we may obtain greater accuracy by expanding the WFS response to second order. 
Express the incident electric field as \begin{equation} \bm{u}_{\rm in} \approx e^{i\bm{\phi}_0}\left( 1 + i \bm{\Delta\phi} -\dfrac{1}{2}\bm{\Delta\phi}^2\right). \end{equation} Repeating the analysis of the previous subsection, again making the substitution $A_{ij} \rightarrow A_{ij}e^{i\phi_{0,j}}$, leads to the following: \begin{equation} \bm{p}_{\rm out} = |A \bm{1}|^2 + 2\,{\rm Im}\left[(A\bm{1})\odot (A^*\bm{\Delta \phi})\right] - {\rm Re} \, \left[(A \bm{1})\odot (A^*\bm{\Delta\phi}^2) \right] + |A \bm{\Delta\phi}|^2. \end{equation} We define the matrix $C$ as \begin{equation}\label{eq:defC} C_{ij} \equiv 2\,{\rm Re}\left( A^*_{ij} \sum_k A_{ik}\right) \end{equation} where Re denotes taking the real part. This yields the following formula for how phase errors up to second order affect intensity: \begin{equation}\label{eq:2ndorder} \bm{p}_{\rm out} \approx |A\bm{1}|^2 + B \bm{\Delta\phi} - \dfrac{1}{2}C \bm{\Delta\phi}^2 + |A\bm{\Delta\phi}|^2. \end{equation} Inversion of equation \ref{eq:2ndorder} can be accomplished using iterative techniques like Landweber iteration, the Levenberg-Marquardt algorithm, or gradient descent. Such methods often benefit from knowledge of the Jacobian, which can be derived from equation \ref{eq:2ndorder}: \begin{equation}\label{eq:2ndorderJac} J_{ij} = \frac{\partial p_{{\rm out},i}}{\partial\Delta\phi_j} = B_{ij} - C_{ij}\Delta\phi_j + 2\sum_{k} {\rm Re}\left[A_{ij}A^*_{ik}\right] \Delta\phi_k. \end{equation} Our reliance on numerical solving techniques raises the question: prior to inversion, why approximate the intensity response of the PLWFS at all? An alternative strategy is to numerically solve equation \ref{eq:intensity} directly. However, we note two benefits of making the initial approximation. First, doing so simplifies the inverse problem, which improves numerical stability and mitigates issues where the numerical solver becomes stuck in local minima (similar to the phenomenon observed by \cite{Frazin:18}, for the pyramid WFS). 
This issue is exacerbated as the nonlinearity of a PL increases. Second, as we will later see in Section \S\ref{ssec:modalbasis}, truncation of the power series enables the use of a modal basis for phase aberrations, which increases computational efficiency, especially for a low-order sensor like the PLWFS. As an added note, the preliminary analysis presented in this work may one day enable or accelerate non-iterative nonlinear reconstruction akin to neural net methods \cite{Norris:20}, as well as more detailed analytic or semi-analytic characterization of the PLWFS. \\\\ However, it is important to emphasize that inversion of quadratic and higher-order models is more complicated than inversion of their linear counterpart, primarily because nonlinear models can admit multiple solutions, at least for large WFE. This multiplicity may be a fundamental property of the WFS system, or an artifact due to truncation of the power series. We briefly discuss how some of these issues may be mitigated in \S\ref{ssec:quadrecon}. \\\\ A cubic expansion is presented in Appendix \ref{ap:cube}. \subsection{Modal basis}\label{ssec:modalbasis} The matrices $A$, $B$, and $C$ will each have $N$ by $M$ entries, where $N$ is the number of output ports and $M$ is the number of sample points in the pupil plane. This is computationally inefficient --- the number of sample points will almost always greatly exceed the number of lantern ports, making the above matrices unnecessarily large. It is more efficient to represent phase aberrations in terms of some modal basis (e.g. the Zernike modes, the Karhunen-Lo\`eve modes derived from second-order phase aberration statistics, or the singular vectors of the $A$ matrix, projected onto pupil phase).
To do so, write the phase aberration displacement vector $\bm{\Delta\phi}$ as \begin{equation}\label{eq:modalbasis} \bm{\Delta\phi} = R\bm{a}, \end{equation} where $\bm{a}$ is the real-valued vector of modal coefficients and $R$ is the change-of-basis matrix, whose columns correspond to the basis vectors. Defining $B'\equiv BR$, the linear model given by equation \ref{eq:linear} is easily extended to modal basis as follows: \begin{equation} \label{eq:linear_modal} \bm{p}_\text{out} \approx |A \bm{1}|^2+B'\bm{a}. \end{equation} Extension of the quadratic model to modal basis is more involved. Inserting equation \ref{eq:modalbasis} into equation \ref{eq:2ndorder} results in the following: \begin{equation} \label{eq:quad_modal} \bm{p}_{{\rm out},i} \approx |A \bm{1}|^2_i + \left(B'\bm{a}\right)_i - \dfrac{1}{2}\sum_{jk} C'_{ijk} a_j a_k + |A'\bm{a}|^2_i \end{equation} where the tensor $C'$ is defined as \begin{equation} C'_{imn} \equiv \sum_j C_{ij} R_{jm}R_{jn} \end{equation} and $A' \equiv AR$. Differentiating equation \ref{eq:quad_modal} yields the Jacobian, under the quadratic approximation, in terms of modal basis: \begin{equation}\label{eq:quadjac} J'_{ij} = B'_{ij} + \sum_k \left({\rm Re}\left[ A'_{ij}A'^*_{ik}\right] - \dfrac{1}{2}C'_{ijk}\right) a_k + \left( |A'_{ij}|^2 -\dfrac{1}{2}C'_{ijj} \right)a_j. \end{equation} \section{PLWFS properties}\label{sec:analytic2} In this section, we provide an initial analysis into the wavefront-sensing properties of the PLWFS. Denote $\bm{u}_{\rm in}$ and $\bm{u}_{\rm out}$ as the input electric field (located in the pupil-plane) and output electric field (located at the back end of the lantern), respectively, of the overall telescope-PLWFS system. The number of PL outputs is $N$. Following the analysis of the previous section, $\bm{u}_{\rm in}$ and $\bm{u}_{\rm out}$ are related by the complex-valued transfer matrix $A$. Additionally, assume that there is no flux loss during propagation through the PL.
We expand the $A$ matrix as a product of constituent matrices $U$, $P$, and $F$ such that \begin{equation} \bm{u}_{\rm out} = UPF\bm{u}_{\rm in}. \end{equation} Here, $F \propto -i \mathcal{F}$ is the Fraunhofer propagator, where $\mathcal{F}$ is the Fourier transform. The $P$ matrix determines how the electric field couples into the lantern entrance, which resembles an FMF. More specifically, $P$ projects the focal-plane field onto the basis of the first $N$ guided fiber modes for an FMF matching the geometry of the lantern entrance. For this work, our fiber modes are assumed to be the linearly polarized/LP modes, relevant for weakly guiding, circular, step-index optical fibers, which implies that $P$ is real-valued. Lastly, $U$ is the unitary matrix representing propagation through the lantern. In other words, $U$ transforms a focal-plane electric field, expressed in terms of LP mode amplitudes, into a set of complex-valued SMF amplitudes. Let us further assume phase-only aberrations. Expanding the complex exponential $e^{i\bm{\phi}}$ with Euler's identity yields \begin{equation} \bm{u}_{\rm in} = \bm{t}\odot \cos{\bm{\phi}} + i \bm{t}\odot \sin\bm{\phi} \end{equation} where $\bm{t}$ is the real-valued transmission mask of the pupil. We now derive some results. \subsection{Impact of perfect mode selectivity}\label{ssec:modeselectivity} In this section, we show that for an even pupil transmission $\bm{t}$, a ``perfect'' mode-selective lantern (i.e. one free of manufacturing imperfections, which can separate the LP modes with zero crosstalk) maps $\pm \bm{\phi}$ to the same intensity response. This symmetry in the intensity response makes wavefront reconstruction impossible, preventing mode-selective lanterns from performing effectively as wavefront sensors. First, note that for such a lantern, the propagation matrix $U$ is the identity matrix.
Therefore, the complex response of the system for a positive and negative phase aberration is \begin{equation} \begin{split} \bm{u}_{\rm out}(\pm\bm{\phi}) &= -iP\mathcal{F}\left[\bm{t}\odot \cos{\bm{\phi}} \pm i \bm{t}\odot\sin\bm{\phi}\right]\\ &= -iP\left[ \bm{a} \pm i \bm{b}\right] \end{split} \end{equation} where we have defined \begin{equation} \begin{split} \bm{a} &\equiv \mathcal{F} \left[ \bm{t} \odot \cos\bm{\phi}\right], \\ \bm{b} &\equiv \mathcal{F}\left[\bm{t}\odot\sin\bm{\phi}\right]. \end{split} \end{equation} We now make use of the following properties of the Fourier transform: \begin{enumerate} \item The Fourier transform of a real, even function is real and even. \item The Fourier transform of a real, odd function is imaginary and odd. \end{enumerate} First, consider $\bm{\phi}$ even. In this case, due to the Fourier transform properties, the fact that $\bm{\phi}$ is real, and the symmetry properties of composite functions, both $\bm{a}$ and $\bm{b}$ are real and even. Therefore, the intensity response is \begin{equation} \bm{p}_{\rm out}(\pm \bm{\phi}_{\rm even}) = |\bm{u}_{\rm out}(\pm\bm{\phi}_{\rm even})|^2 = (P\bm{a})^2 + (P\bm{b})^2. \end{equation} For even phase aberrations, the intensity response of a mode-selective PLWFS is even. Next, consider odd phase aberrations. Repeating a similar analysis, we now find that while $\bm{a}$ is still real and even, $\bm{b}$ is now odd and imaginary. Therefore, \begin{equation}\label{eq:odd_response} \bm{p}_{\rm out}(\pm \bm{\phi}_{\rm odd}) = |\bm{u}_{\rm out}(\pm\bm{\phi}_{\rm odd})|^2 = (P\bm{a})^2 + (iP\bm{b})^2 \pm 2(P\bm{a})\odot (iP\bm{b}). \end{equation} While an even phase aberration produces one real and one imaginary field component, an odd phase aberration produces two real field components that interfere with each other. Under certain circumstances, this interference can break sign ambiguity.
However, for the PLWFS, the vectors $\bm{a}$ and $\bm{b}$ are ultimately projected by $P$ onto the LP mode basis: a basis of real-valued, even and odd electric field distributions. As a result, the last term in equation \ref{eq:odd_response} is always 0. This is because $\bm{a}$ is even, and only has non-zero overlap with even modes, while $\bm{b}$ is odd, and only has non-zero overlap with odd modes. Finally, since any field can be decomposed into an even and odd component, the intensity response of the mode-selective PLWFS is even for all $\bm{\phi}$, at least in the vicinity of the origin. \\\\ As a corollary, the above implies that mode-selective lanterns have a linear response matrix $B=0$. \subsection{Non-mode-selectivity can break sign ambiguity}\label{ssec:nonselective} For a non-mode-selective lantern, the matrix $U$ is not the identity matrix; the rows of the matrix $UP$ are the (complex-conjugated) lantern modes. We repeat the analysis from the prior section. The intensity response is \begin{equation} \bm{p}_{\rm out}(\pm \bm{\phi}) = |\bm{u}_{\rm out}(\pm\bm{\phi})|^2 = |UP(\bm{a}\pm i\bm{b})|^2. \end{equation} From the above, we see sign ambiguity is broken. The matrix $U$ applies a ``rotation'' to the vector $P\bm{a}+iP\bm{b}$. While this rotation preserves the overall norm of the vector, it alters the modulus of the individual elements, and hence, the powers in the individual ports of the PLWFS. \\\\ In other words, switching the sign of a phase aberration is equivalent to conjugating the complex response of the telescope. If we immediately measure the focal plane electric field in the LP mode basis, this conjugation cannot be detected. However, if we apply a unitary transformation (e.g. a PL) after this conjugation, and then measure, the conjugation can be detected. \subsection{Conditions for linearity}\label{ssec:lincond} In this section we derive criteria that the PLWFS must meet to maximize linear sensitivity to a given mode.
We will restrict ourselves to the second-order expansion of intensity response for the PLWFS, equation \ref{eq:2ndorder}. \\\\ To maximize the linear response of the PLWFS for a particular aberration mode, denoted by unit vector $\hat{\bm{z}}_i$, we require that the linear term in equation \ref{eq:2ndorder} is maximized and the quadratic terms are minimized. We can encourage this behavior by demanding that the quantity \begin{equation} Q\equiv \left[(A\bm{1})\odot (A\hat{\bm{z}}_i)^*\right] \end{equation} is purely imaginary. Repeating the same expansion of $A$ from the prior subsections, we equivalently require that \begin{equation}\label{eq:lincond} Q\equiv \left[UP\mathcal{F}\bm{1}\odot (UP\mathcal{F}\hat{\bm{z}}_i)^* \right] \end{equation} is purely imaginary. To connect with the analysis of \S\ref{ssec:linearize}, note that $B\hat{\bm{z}}_i = 2 \, {\rm Im}\, Q$. Ultimately, linearity imposes a phase restriction on $Q$: linear response is maximized when $Q$ is purely imaginary, and minimized when $Q$ is purely real. Note that this maximization only enforces that the intensity response of the PLWFS is predominantly linear in the vicinity of the reference wavefront $e^{i\bm{\phi}_0}$; this is not a maximization of linear range, although it is likely the first step in an analytically-informed optimization of the latter. \\\\ Optimization for the above metric entails designing a PL such that its corresponding propagation matrix $U$ satisfies equation \ref{eq:lincond}. This is tricky, but can be simplified in certain cases. In Appendix \ref{ap:6port}, we simplify the above linearity condition for a standard 6-port lantern in the presence of defocus. \subsection{WFS limitations}\label{ssec:range} Even with a perfect nonlinear reconstruction model, wavefront sensing breaks down when two distinct phase aberrations can map to the same WFS response.
These ``degenerate'' aberrations are not a concern when the WFS is operating in the linear regime and the mapping of aberrations to sensor intensity responses is one-to-one, but become increasingly problematic as the amplitude of phase aberrations increases. A way to estimate when degenerate aberrations may become problematic is to find input phase aberrations for which a column of the Jacobian becomes zero-valued. This estimate may be conservative, as the response of the PL in this regime can still carry useful information about the input WFE for a subset of the sensed modes. Mathematically, we look for an aberration vector $\bm{a}_0$ (defined, for instance, in Zernike basis) such that \begin{equation} \label{eq:jac_extrema} \dfrac{\partial {\bm p}_{\rm out}}{\partial a_j}\bigg|_{\bm{a}_{0}} = 0. \end{equation} To motivate this criterion, suppose we find some aberration $\bm{a}_0$ where the above criterion is fulfilled. In turn, the WFS response about $\bm{a}_0$, in the $a_j$ direction, may behave quadratically: \begin{equation}\label{eq:degen} \bm{p}_{\rm out}(a_{0,j}+a_j) = \bm{p}_{\rm out}(a_{0,j}) + \dfrac{\bm{p}_{\rm out}''(a_{0,j})}{2} a_j^2 + O\left(a_j^3\right). \end{equation} Here, $a_{0,j}$ is the $j^{\rm th}$ element of $\bm{a}_0$. We immediately see that for small $a_j$, aberrations $a_{0,j}\pm a_j$ map to the same intensity response. More widely separated pairs of degenerate aberrations may also occur around $\bm{a}_0$, although they most likely will not be positioned symmetrically about $a_{0,j}$. For an alternative perspective, consider the modal-basis representation of the Jacobian, which has dimensions $N$ rows by $M$ columns for $N$ lantern ports and $M$ aberration modes, with $N\geq M$. The zeroing of a column makes the Jacobian rank-deficient, implying that locally about $\bm{a}_0$, the mapping of phase aberrations to PL intensity outputs can no longer be injective.
In other words, we are guaranteed scenarios where two or more distinct phase aberrations map to the same intensity response. \\\\ The norm (or total RMS WFE) of the smallest aberration vector $\bm{a}_0$ which satisfies equation \ref{eq:jac_extrema} sets the scale in phase aberration space beyond which degeneracy can occur. We term this scale the ``degenerate radius''. To actually compute the degenerate radius, we take a numerical approach: feeding a standard root-solving algorithm (e.g. Levenberg-Marquardt) a series of random initial guesses in the vicinity of the origin, repeatedly solving equation \ref{eq:jac_extrema}, and then taking the solution with the smallest norm from the returned set. In this approach, we require the full form of the Jacobian for the WFS, without any power series approximations. We derive the following form for the Jacobian: \begin{equation} \dfrac{\partial (p_{{\rm out},i}/p_{\rm in})}{\partial a_k} = - 2 \, {\rm{ Im}} \left[ \sum_{j} A_{ij}e^{i \phi_j} R_{jk} \times \sum_{j'} A^*_{ij'}e^{-i\phi_{j'}} \right]. \end{equation} Here, $\bm{\phi} \equiv R \bm{a}$, similar to section \S\ref{ssec:modalbasis}, with the exception that we are no longer expanding about some reference phase $\bm{\phi}_0$. A rougher but simpler approximation for the degenerate radius can be made by expanding wavefront response only to second order: essentially, we set equation \ref{eq:quadjac} equal to 0, for fixed aberration index $j$. This conveniently gives an ordinary matrix-vector equation which can be solved quickly and directly using the Moore-Penrose pseudo-inverse, giving exactly one solution $\bm{a}_{0}$ per aberration. However, this approach can be inaccurate if the WFS response contains little quadratic component. \\\\ Lastly, we consider the maximum number of modes that an $N$-port lantern can sense. In the linear model, it is clear that such a lantern can sense at most $N$ aberration modes. However, this limit holds for nonlinear models as well.
This is because our optical system, while nonlinear in intensity, is linear in complex amplitude. A lantern attempting to sense more aberration modes than it has ports is guaranteed to map two distinct phase aberrations to the same complex-valued lantern response, and in turn, the same real-valued intensity. Topological theorems, such as invariance of domain, lead to the same conclusion. \section{Simulations}\label{sec:method} In order to provide the initial steps for general characterization of the PLWFS, we simulate these devices using a numerical model in Python. This model has three primary components: a telescope model, which takes in an incident wavefront and returns a focal plane electric field; a PL propagator, which takes both a focal plane electric field and a lantern geometry, and returns the resulting power distribution of the output ports; and a wavefront reconstructor, based on the analysis in Section \S\ref{sec:analytic}. Sections \S\ref{ssec:tele}, \S \ref{ssec:lant}, and \S\ref{ssec:recon} expand upon these components, respectively. Finally, Section \S\ref{ssec:param} goes over the specific 6-port PL geometries which we simulate with our numerical model. \subsection{Telescope simulation} \label{ssec:tele} Propagation through telescope optics is handled using the HCIPy package \cite{hcipy}. Simulations are monochromatic, at a wavelength of 1.55 $\upmu$m. We additionally assume a 10 m circular, unobstructed aperture; the focal ratio of the system is optimized to ultimately maximize coupling of an unaberrated wavefront into the PL. Pupil-to-focal plane propagation is handled via HCIPy's Fraunhofer propagator. \subsection{Lantern propagation} \label{ssec:lant} After computing the focal-plane electric field distribution, the next step is to determine the corresponding electric field at the output of the lantern.
To do so, we multiply the electric field vector by the lantern's propagation matrix, $UP$, which can be computed in pixel basis by discretizing the input plane of the PL and repeatedly propagating single-pixel electric fields. Alternatively, we can compute the lantern modes for a given PL design by illuminating each single-mode port at the lantern output with its fundamental mode and backpropagating light to the lantern entrance; the complex conjugate of the lantern modes form the rows of the propagation matrix. When the number of PL outputs is less than the number of pixels in the input plane, the backpropagation approach is more efficient; in this work, we use the latter. Numerical propagations through PLs are handled with the Lightbeam Python package \cite{mybpm}. \subsection{Wavefront reconstruction}\label{ssec:recon} Given some PLWFS intensity response, we may now attempt to reconstruct the original phase aberration. Critically, to simplify our models, we neglect the impact of noise; the treatment of noise, and related analyses of PLWFS sensitivity and closed-loop performance, are left for future work. In the meantime, our noiseless model will still be useful for an initial characterization of PLWFS capabilities. We also set our reference wavefront to be flat (i.e. in equation \ref{eqn:linstart} we set $\bm{\phi}_0 = 0$). Our reconstruction model is as follows. \\\\ First, we expand phase aberrations in terms of the Zernike basis. To implement linear reconstruction, we compute the matrix $B'$ from equation \ref{eq:linear_modal}; this is done by numerically measuring the matrix of slopes $\partial I_i/\partial a_j$ about the origin. Here, $I_i$ denotes the intensity of the $i^{\rm th}$ output port and $a_j$ denotes the amplitude of the $j^{\rm th}$ Zernike mode, in radians RMS. We then calculate the Moore-Penrose pseudo-inverse of $B'$, which enables inversion of equation \ref{eq:linear_modal}.
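A minimal sketch of this linear reconstruction step, with a random toy transfer matrix and a random orthonormal modal basis standing in for the simulated telescope-lantern system and the Zernike modes (all values below are hypothetical):

```python
import numpy as np

# Sketch of the linear reconstruction step: finite-difference slopes about
# the origin give B', whose pseudo-inverse maps intensities to mode
# amplitudes. A random transfer matrix A and a random orthonormal basis R
# are hypothetical stand-ins for the simulated system and Zernike modes.
rng = np.random.default_rng(1)
N, M, K = 6, 12, 4                             # ports, pupil samples, modes
A = (rng.normal(size=(N, M)) + 1j * rng.normal(size=(N, M))) / np.sqrt(M)
R = np.linalg.qr(rng.normal(size=(M, K)))[0]

def intensity(a):
    """Exact forward model: port intensities for modal coefficients a."""
    return np.abs(A @ np.exp(1j * (R @ a)))**2

p0 = intensity(np.zeros(K))                    # flat reference wavefront
eps = 1e-6                                     # finite-difference step [rad]
Bp = np.column_stack([(intensity(eps * e) - intensity(-eps * e)) / (2 * eps)
                      for e in np.eye(K)])     # measured slope matrix B'
Bp_inv = np.linalg.pinv(Bp)                    # Moore-Penrose pseudo-inverse

a_true = 0.02 * rng.normal(size=K)             # small input aberration [rad]
a_rec = Bp_inv @ (intensity(a_true) - p0)      # linear reconstruction
err = np.linalg.norm(a_rec - a_true)
print(err)                                     # small in the linear regime
```

The central finite difference mirrors the numerical slope measurement described above; the residual error is dominated by the neglected quadratic terms.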
Note that this reconstruction method neglects any sort of flux normalization, which is unnecessary in the context of simulations but may be desirable in a practical implementation with real optics. \\\\ In contrast, quadratic reconstruction requires knowledge of the $A$ matrix, equation \ref{eq:2ndorder}, which in turn determines the modal-basis matrices $A'$ and $B'$, and the tensor $C'$. The $A$ matrix can be computed by probing the pupil-plane electric field (resolved into a 128 by 128 grid of samples) pixel-by-pixel, and measuring the complex-valued response of the PLWFS, or alternatively through a backpropagation technique as in Section \S\ref{ssec:lant}. This is straightforward in the case of simulations, since the complex-valued electric field is known. In contrast, experimental measurement of the $A$ matrix will likely require some phase-diversity method. Inversion of the quadratic model, equation \ref{eq:quad_modal}, is handled using the Levenberg-Marquardt root-finding algorithm, as implemented by the Python package SciPy. We set the starting point of the root-finding routine to the linearly-reconstructed wavefront aberration. \subsection{Simulated lanterns}\label{ssec:param} To demonstrate the validity of our mathematical analysis, we simulate wavefront reconstruction with two types of 6-port PL: standard and hybrid. Both PLs obey the following assumptions. Firstly, we assume that PLs taper uniformly and linearly so that cross sections of the cores and overall cladding of a PL remain perfectly circular throughout the transition zone. While this is an idealization, it remains a useful starting point for a first-order analysis of the PLWFS, especially since it is unclear whether PL imperfections (such as the non-circular claddings exhibited by PLs formed via the tapering of SMF bundles) will help or hurt sensing performance.
\\\\ Beyond the above idealization, we assume that all PLs taper by a factor of 8 from entrance to exit, with cores spaced in the cladding in such a way that is consistent with the geometries produced when constructing lanterns from a bundle of uniformly sized SMFs. Cladding index is set to 1.444, corresponding to fused silica at 1.55 $\upmu$m wavelength, while jacket-cladding contrast is set to $5.5\times10^{-3}$; these parameters are typical for lantern construction (private communication with S. Leon-Saval). Core index is set so that the mode field diameter is $\sim$7.5 $\upmu$m, matching OFS ClearLite 980 16 fiber. The main difference between our simulated standard and hybrid PLs is in lantern core diameter. In the standard non-selective variant, all SMF cores have the same diameter ($4.4$ $\upmu$m), while in the hybrid variant one SMF core is made 2 $\upmu$m larger in diameter to accept the LP$_{01}$ mode. In either case, entrance diameter (i.e. the diameter of the cladding at the input FMF end of the lantern) is set to 20 $\upmu$m. Additionally, both lanterns have their lengths set by an optimization routine that maximizes the linearity of the lantern's intensity response to the first five non-piston Zernike aberrations. For more details on this procedure, see our companion paper \cite{Lin:21}. \\\\ Lastly, as a sanity check, we also simulate a fully mode-selective variant of the 6-port lantern, to verify our result from \S\ref{ssec:modeselectivity} that such lanterns are insensitive to all aberration modes. For simplicity, we assume that the modes of this lantern are exactly the first 6 LP modes, bypassing the need for numerical beam propagation. \section{Results}\label{sec:results} In this section, we apply our numerical model to a standard, hybrid, and mode-selective 6-port lantern. In \S\ref{ssec:intensity}, we look at the intensity response of these PLs, in the presence of single aberrations.
These response curves can be thought of as 1D slices of the PLWFS response ``surface'' in the presence of many aberration modes. Subsections \S\ref{ssec:linrecon} and \S\ref{ssec:quadrecon} compare the performances of the linear and quadratic reconstruction models in the presence of the first five non-piston Zernike aberrations. This basis cannot fully describe ``realistic'' seeing conditions (and neglects any sort of cross-talk in the reconstruction process from higher-order aberration modes); we leave analysis of low-order wavefront reconstruction in the presence of higher-order error for future work. Nevertheless, because the spatial-frequency spectrum of real WFE is typically very bottom-heavy \cite{Sauvage:07,NDiaye:18}, and because PLs are primarily sensitive to low-order modes, we believe that our simplified analysis is still useful. \subsection{Intensity response}\label{ssec:intensity} Figure \ref{fig:intensity}a shows the intensity response of a standard 6-port lantern as a function of mode amplitude for the first 5 (non-piston) Zernike modes. Empirically, we find that this is the maximum number of modes a 6-port lantern can sense. In \cite{paper2}, we find the more general result that an $N$-port PL can sense at most $N-1$ Zernikes, without additional optics. Our heuristic explanation is that the complex-valued response of an $N$-port PL is sensitive to piston, which takes up one degree of freedom out of the $N$ total degrees in the system. This piston sensitivity is typically useless for wavefront sensing, and is lost in the conversion of complex amplitude to intensity. \\\\ We additionally mark the regions where the linear approximation holds. This ``linear range'' is defined as the interval in Zernike mode amplitude within which the linear model reconstructs the original phase aberration with less than 0.1 radians RMS of error.
Intensity responses to the tilt and astigmatism modes exhibit good linearity in the interval around $[-0.4,+0.4]$ radians, while defocus exhibits linearity over a larger but more asymmetric range: around $[-0.4,0.8]$ radians. Note that the large linear range for defocus is primarily due to the taper length optimization outlined in \S\ref{ssec:param}. Conversely, certain values of taper length can lead to a lantern that is almost completely insensitive to defocus. We consider this and similar effects in more detail in our companion paper \cite{Lin:21}. \\\\ Figure \ref{fig:intensity}b shows intensity responses for a 6-port hybrid lantern against the same modes. The introduction of a single, larger lantern core changes the lantern mode structure, both by replacing one of the modes with the LP$_{01}$ mode and by breaking the rotational symmetry of the lantern. We find that the 6-port hybrid lantern begins to behave nonlinearly more quickly than its non-selective counterpart. Additionally, as seen in Figure~\ref{fig:intensity}c, a fully mode-selective 6-port lantern has completely symmetric intensity responses, and therefore is not useful for wavefront sensing. This simulated result corroborates our analytic result from Section \S\ref{ssec:modeselectivity}. Finally, we find that our tested 6-port hybrid lantern outperformed its standard counterpart in terms of degenerate radius (1.3 vs. 0.86 radians). This suggests that hybrid lanterns may outperform standard lanterns when using nonlinear reconstruction methods. \\\\ Crucially, we emphasize that the above results are for a specific subset of 6-port lantern geometries. In \cite{paper2}, we extend these results to a wider range of PL designs.
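The symmetry underlying this mode-selective result can also be checked in a stripped-down, one-dimensional toy model, with Gaussian-type real even/odd functions standing in for the LP modes, an even pupil transmission, and the $-i$ prefactor of $F$ omitted (a global phase that does not affect intensity); all grids and values below are illustrative assumptions:

```python
import numpy as np

# Toy 1D check: with an even pupil and real even/odd "LP modes", a
# mode-selective system (U = identity) gives identical intensities for
# +phi and -phi, while a mixing (non-mode-selective) unitary breaks the tie.
rng = np.random.default_rng(2)
M = 64
x = np.linspace(-3, 3, M)                      # symmetric pupil grid
u = np.linspace(-3, 3, M)                      # symmetric focal grid
F = np.exp(-1j * np.outer(u, x))               # toy Fraunhofer propagator

t = np.exp(-x**2)                              # even pupil transmission
phi = 0.3 * np.sin(2 * x) + 0.2 * x**2 * np.cos(x)   # mixed-parity aberration

modes = np.array([np.exp(-u**2),               # real, even
                  u * np.exp(-u**2),           # real, odd
                  (u**2 - 1) * np.exp(-u**2)]) # real, even
P = modes / np.linalg.norm(modes, axis=1, keepdims=True)

def ports(phi, U):
    """Output port intensities |U P F (t e^{i phi})|^2."""
    return np.abs(U @ (P @ (F @ (t * np.exp(1j * phi)))))**2

U_selective = np.eye(3)
U_mixing, _ = np.linalg.qr(rng.normal(size=(3, 3))
                           + 1j * rng.normal(size=(3, 3)))

amb_selective = np.max(np.abs(ports(phi, U_selective) - ports(-phi, U_selective)))
amb_mixing = np.max(np.abs(ports(phi, U_mixing) - ports(-phi, U_mixing)))
print(amb_selective, amb_mixing)               # ~0 versus clearly nonzero
```

The mode-selective case is sign-ambiguous to machine precision, while the mixed case separates $\pm\bm{\phi}$, consistent with \S\ref{ssec:nonselective}.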
\subsection{Linear reconstruction}\label{ssec:linrecon} Given the intensity responses in Figure \ref{fig:intensity}, computed over a range of Zernike mode amplitudes, we now apply the linear model (equation \ref{eq:linear_modal}) in an attempt to reconstruct the original mode amplitude. Figure \ref{fig:lin_recon} plots reconstructed aberration mode amplitude against true mode amplitude for Zernike tilt, defocus, and astigmatism, for both a standard and a hybrid 6-port lantern. From the Figure, we see that in terms of reconstruction range, the hybrid lantern performs worse than the standard lantern in all modes, particularly in astigmatism. This is in line with results from \S\ref{ssec:intensity}. \\\\ In order to test reconstruction performance in the presence of multiple aberrations, we use a Monte-Carlo approach. We first randomly draw 10,000 aberrated wavefronts (each composed of a random linear combination of Zernike modes 2-6), then pass each wavefront through our PLWFS model to obtain the corresponding intensity response. Given the intensity response, we attempt linear reconstruction. The root-mean-square of the difference between the ``true'' wavefront and the reconstructed wavefront gives an estimate of the overall accuracy of the reconstruction scheme. Figures \ref{fig:recon_all_modes}a and b plot this accuracy against total aberration, for the standard and hybrid lantern, respectively. From the Figure, we see that reconstruction accuracy for the standard lantern remains under 0.1 radians for wavefront aberrations with up to $\sim 0.35$ radians of total RMS WFE; the hybrid lantern remains similarly accurate up to a lesser $\sim 0.25$ radians of total RMS WFE. While this result --- that hybrid lanterns behave more nonlinearly than standard lanterns --- is specific to 6-port PLs, we find in \cite{paper2} that it also applies to PLs of other sizes.
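The Monte-Carlo procedure described above can be sketched on a toy model as follows; a random transfer matrix and a random orthonormal basis replace the simulated lantern and Zernike modes, and the trial count is reduced for speed (all values hypothetical):

```python
import numpy as np

# Monte-Carlo sketch: draw random modal aberrations, reconstruct linearly,
# and record reconstruction error vs. input WFE. A random transfer matrix A
# and random orthonormal basis R are hypothetical stand-ins for the lantern.
rng = np.random.default_rng(3)
N, M, K = 6, 12, 5                             # ports, pupil samples, modes
A = (rng.normal(size=(N, M)) + 1j * rng.normal(size=(N, M))) / np.sqrt(M)
R = np.linalg.qr(rng.normal(size=(M, K)))[0]

def intensity(a):
    """Exact port intensities for modal aberration vector a."""
    return np.abs(A @ np.exp(1j * (R @ a)))**2

p0 = intensity(np.zeros(K))
eps = 1e-6
Bp = np.column_stack([(intensity(eps * e) - intensity(-eps * e)) / (2 * eps)
                      for e in np.eye(K)])     # slope matrix B'
Bp_inv = np.linalg.pinv(Bp)

wfe_in, wfe_err = [], []
for _ in range(500):                           # reduced trial count for speed
    a = rng.normal(size=K)
    a *= rng.uniform(0, 0.5) / np.linalg.norm(a)   # total WFE in [0, 0.5] rad
    a_rec = Bp_inv @ (intensity(a) - p0)       # linear reconstruction
    wfe_in.append(np.linalg.norm(a))
    wfe_err.append(np.linalg.norm(a_rec - a))

wfe_in, wfe_err = np.array(wfe_in), np.array(wfe_err)
small = wfe_in < 0.1
print(wfe_err[small].max(), wfe_err[~small].mean())
```

As in the panels of Figure \ref{fig:recon_all_modes}, reconstruction error stays small in the linear regime and grows with total input WFE.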
\subsection{Quadratic reconstruction}\label{ssec:quadrecon} In this subsection we present simulated results for the simplest nonlinear reconstruction method: quadratic reconstruction. This method is based on equation \ref{eq:quad_modal}, which we invert using the Levenberg-Marquardt root-finding algorithm as implemented by the Python package SciPy. For the initial guess required by the root-finder, we use the linearly reconstructed phase aberration vector. \\\\ We use the same Monte-Carlo approach outlined in the previous section to test the reconstruction performance of the quadratic model. Our results -- reconstruction accuracy against total RMS WFE for 10,000 randomly sampled aberrations -- are shown in Figure \ref{fig:recon_all_modes}d and e, for the standard and hybrid 6-port lanterns, respectively. Comparing with panels a and b, which were generated using the linear reconstruction model, we see that the quadratic model lowers the overall error in wavefront reconstruction, as expected. Specifically, for the standard lantern, quadratic reconstruction allows aberrations with up to $\sim 0.45$ radians of total RMS WFE to be reconstructed to an accuracy of 0.1 radians RMS. The hybrid lantern is similarly accurate up to $\sim 0.35$ radians of total RMS WFE. These results represent a $\sim 30-40$\% increase in reconstruction range over the linear model. Notably, the hybrid PL benefits more from quadratic reconstruction than the standard PL, which reinforces the notion that the hybrid PL behaves more nonlinearly. \\\\ The quadratic model has the potential to provide even greater gains in reconstruction range when applied to PLs that behave more nonlinearly than the 6-port lanterns tested in this work, whose lengths were specifically optimized to maximize linearity. To show this, we apply the linear and quadratic reconstruction models to a 6-port hybrid lantern without any linearity optimization.
Results are shown in Figures \ref{fig:recon_all_modes}c and f, respectively. The large spread and diverging pattern of points in panel c clearly show the highly nonlinear nature of this particular PL; nevertheless, when switching to the quadratic reconstruction model in panel f, the reconstruction error for most aberrations drops dramatically. In scenarios where linearity optimization is infeasible, quadratic reconstruction may provide an alternate path to improving WFS performance. \\\\ However, the quadratic model is not without downsides. The major issue is that quadratic reconstruction tends to become increasingly numerically unstable as total RMS WFE increases. We see this behavior reflected in Figure \ref{fig:recon_all_modes} particularly in panels e and f, where the scatter of points increases substantially with increasing RMS WFE. These instabilities can occur when the root-finder used to invert equation \ref{eq:quad_modal} gets stuck in a local minimum; a similar phenomenon was observed for the pyramid WFS in \cite{Frazin:18}. We discuss how this instability may be circumvented in Section \S\ref{sec:disc}. \section{Discussion}\label{sec:disc} In Section \S\ref{sec:analytic}, we laid out a general mathematical framework, in arbitrary modal basis, for the intensity response of a WFS to errors in phase. While we recover the usual linear model in our first-order expansion, we additionally derive a quadratic reconstruction model. This model can improve reconstruction accuracy, especially for PLs: the general nonlinearity of these devices often leads to quadratic-like intensity responses which are not well-fit by the linear model. However, the added accuracy of this scheme is offset by increased complexity: the inversion from intensity to aberration phase may require iterative methods that are slower than the linear model's single matrix multiplication.
The higher-order nature of this model also introduces degeneracy, allowing two distinct phase aberrations to map to the same intensity response (though this is often simply a reflection of the fundamentally degenerate behavior of the PLWFS at large enough WFE). This degeneracy makes inversion more numerically unstable, and also enables scenarios where the root-solver becomes stuck in a local minimum. It remains to be seen whether the increased accuracy afforded by the quadratic and higher-order models outweighs the penalties in numerical stability and computation time, and whether these techniques can be applied to closed-loop operation. We expect additional complications when moving to wavefront reconstruction with real PLWFSs. For one, we will have to contend with detector and photon noise, which will degrade both sensitivity and reconstruction range. Noise will likely be particularly problematic at the kHz refresh rates typically used for atmospheric compensation, but may be less of an issue when sensing slower NCPAs. An additional complication is that, in practice, the complex-valued $A$ matrix must be experimentally determined (e.g. through phase diversity methods), and hence will be prone to the effects of random and systematic uncertainties. While linear reconstruction, which requires only intensity knowledge, will be largely unaffected, uncertainties in $A$ may make nonlinear reconstruction even more numerically unstable. These uncertainties may be mitigated if we can constrain the $A$ matrix (for instance, through its modulus, or through $B$). \\\\ We imagine several potential next steps in our mathematical analysis. One interesting continuation is to extend our phase-only aberration analysis to amplitude aberrations as well.
Another is the expansion of the WFS intensity response to third order, which we begin in Appendix \ref{ap:cube}; see also Figure \ref{fig:recon_all_modes}g-i, which plots reconstruction heatmaps similar to those in panels a-f, but for a cubic reconstruction model. Cubic expansion is particularly interesting because many PL intensity response functions (e.g. for the 6-port standard lantern, Figure \ref{fig:intensity}a) appear predominantly cubic. Figure \ref{fig:recon_all_modes} confirms that this expansion can offer a significant boost in reconstruction accuracy, especially for the 6-port standard lantern. However, the drawbacks are similar to the quadratic model. Each increase in order is accompanied by an increase in the model degeneracy, as well as an increase in the rank of the tensors required by the model. \\\\ More advanced reconstruction models may overcome these drawbacks. For one, stochastic optimization algorithms like simulated annealing, while computationally expensive, are one potential way to avoid local minima in the inversion process. Another idea is to use wavelength diversity, leveraging the chromatic dependence of the PLWFS response: extra measurements at multiple wavelengths may make the reconstruction process for our nonlinear models significantly easier. These measurements can be made through spectral dispersion of the PLWFS outputs, as in the so-called photonic ``TIGER'' configuration \cite{Saval:12:TIGER}. Lastly, we emphasize that while going to higher order may amplify numerical instability, it does not amplify experimental uncertainties in the $A$ matrix; this is because intensity will always have a second-order dependence on complex amplitude. \\\\ Besides enabling wavefront reconstruction, mathematical models have a second, important use: they allow us to derive certain WFS properties and metrics through which the WFS can be optimized.
For instance, in \S\ref{ssec:modeselectivity}, we derived that a fully mode-selective lantern is insensitive to phase aberrations, for even pupil transmission. It remains to be shown whether or not this limitation can be practically overcome with pupil masks or other additional optics. In contrast, there are no such restrictions with standard and hybrid lanterns. As a corollary, we found that the linearity of the PLWFS, at least for small aberrations, depends on the phase of what we call the $Q$ metric (equation \ref{eq:lincond}). We also show how this linearity condition simplifies for certain cases, such as the 6-port standard PL in the presence of defocus (Appendix \ref{ap:6port}). In the future, it may be desirable to optimize the PLWFS for this linearity condition. However, if nonlinear reconstruction methods, such as the quadratic or cubic methods in this work or the neural-net approach from \cite{Norris:20}, can be developed that are fast and stable enough to compete with linear reconstruction, it may instead be desirable to optimize lanterns according to degenerate radius (equation \ref{eq:degen}). Both the linear $Q$ metric and the degenerate radius are only the first steps in analytically defining the sensing properties of the PLWFS. Next steps will be to derive expressions for other potentially more useful properties, such as linear range (different from our condition \ref{eq:lincond}, which only ensures local linearity about the origin). Collectively, these analytically-derived expressions will help inform the manufacture of real PLWFSs in the future. \\\\ Finally, we used our mathematical models to numerically simulate and compare the wavefront-sensing performance of an idealized standard, hybrid, and mode-selective 6-port PL. As expected, we recovered our analytic result that the mode-selective PL under even pupil illumination is insensitive to phase aberrations. 
We also found that the hybrid PL behaved more nonlinearly than the standard PL, suggesting that the latter may make a better wavefront sensor if used with a linear reconstruction scheme. In contrast, the larger degenerate radius of the 6-port hybrid lantern may make it a better choice with nonlinear reconstruction schemes. The next step will be to improve our model accuracy by accounting for manufacturing imperfections in simulated PLs, and to verify these models on an experimental testbed. \section{Conclusion} In this work, we provide an end-to-end mathematical analysis of the PLWFS. In Sections \S\ref{sec:analytic} and \S\ref{sec:analytic2}, we developed linear and higher-order mathematical models for the intensity response of the PLWFS. These models enable the reconstruction of wavefront aberrations from intensity responses, and allow the derivation of certain metrics, such as the degenerate radius, which estimates the maximum amount of RMS WFE an aberration can have before the mapping of aberrations to intensities is no longer one-to-one. Such metrics can be used to benchmark and control the sensing behavior of these devices. Higher-order reconstruction models, such as quadratic (\S\ref{ssec:second} and \S\ref{ssec:quadrecon}) and cubic (Appendix \ref{ap:cube}), can additionally enable greatly improved reconstruction accuracy over the standard linear model, but at the cost of added computation time and potentially increased numerical instability. Through our framework, we also show that a fully mode-selective lantern cannot sense wavefront aberrations with even pupil illuminations. \\\\ As a proof-of-concept, we apply our reconstruction models to a standard, hybrid, and mode-selective 6-port lantern in Section \S\ref{sec:results}, and successfully show that for the first two cases wavefront reconstruction of the first 5 non-piston Zernike modes is possible; 5 is the maximum number of modes that can be sensed by either 6-port variant.
We additionally confirm, numerically, that mode-selectivity (at least with an even pupil) hinders wavefront sensing. Comparing the performance of the standard and hybrid lanterns at a single output wavelength of 1.55 $\upmu$m, we find that the standard lantern has the highest linear range, accurately sensing the first five non-piston Zernike modes out to $\sim 0.5$ radians, followed by the hybrid lantern. Conversely, the 6-port hybrid PL outperformed the standard PL in terms of degenerate radius. In the second part of this paper, we extend our analysis and simulate reconstruction performance for a range of PLs in various configurations. We additionally provide initial investigations into new strategies through which the sensing properties of PLs can be controlled and optimized. In the near future, we hope to verify our results with real, imperfect photonic lanterns, through experimental and on-sky testing, and in doing so, add to the next generation of focal-plane wavefront sensors. \section*{Acknowledgements} This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE-2034835. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. This work was also supported by the National Science Foundation under Grant No. 2109232. \begin{appendices} \section{Defocus performance for standard 6-port lantern} \label{ap:6port} The linearity criterion from \S\ref{ssec:lincond} can be simplified for a standard 6-port lantern, located in the focal plane of a telescope with a filled circular aperture, in the presence of defocus. We order our basis of LP modes as (LP$_{01}$, LP$_{02}$, rest of the LP modes) and our output ports as (central port, rest of the ports). For simplicity, we also assume a reference phase $\bm{\phi}_0=0$. 
Due to symmetry, both an unaberrated wavefront and a defocused wavefront will only couple into LP$_{01}$ and LP$_{02}$. Furthermore, the coupling coefficients will be real. Therefore, we can set \begin{equation} P\mathcal{F}\bm{1}\equiv \begin{pmatrix} a \\ b \\ 0 \\ \vdots \end{pmatrix} , P\mathcal{F}\bm{z}\equiv \begin{pmatrix} d \\ f \\ 0 \\ \vdots \end{pmatrix} \end{equation} where $a$, $b$, $d$, and $f$ are real numbers and $\bm{z}$ is the vector corresponding to the defocus mode. Denoting the columns of the lantern propagation matrix $U$ as $\bm{c}_i$, we find that equation \ref{eq:lincond} becomes \begin{equation} Q = ad|\bm{c}_1|^2 + af\bm{c}_1 \odot \bm{c}_2^* + bd \bm{c}_2\odot\bm{c}_1^* + bf|\bm{c}_2|^2. \end{equation} We want $Q$ to be ``as imaginary as possible.'' Clearly, the first and last terms are real, so a lantern that satisfies \begin{equation} ad|\bm{c}_1|^2 + bf|\bm{c}_2|^2 =0 \end{equation} will behave ``more linearly'' than one that doesn't. The middle terms impose another condition: each element in $\bm{c}_1$ should be $90^\circ$ out of phase with its corresponding element in $\bm{c}_2$. In turn, this condition implies that the LP$_{01}$ and LP$_{02}$ components of each lantern mode must be $90^\circ$ out of phase. We have verified this behavior numerically. \\\\ It is also useful to consider the converse of the above conclusion. Suppose that the LP$_{01}$ and LP$_{02}$ mode coefficients are in phase. Then, $\bm{c}_1\odot\bm{c}_2^*$ will be real, and $Q$ will be purely real. Consequently, the linear $B$ matrix will be 0, and the lantern response is locally quadratic.
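These phase conditions are easy to check numerically. In the sketch below (the mode columns, scale $s$, and coupling coefficients are made-up toy values, not from a real lantern), $\bm{c}_2$ is constructed elementwise $90^\circ$ out of phase with $\bm{c}_1$ and $f$ is chosen to cancel the real terms, after which $Q$ comes out purely imaginary:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy lantern-mode columns: c2 is elementwise 90 degrees out of phase with c1.
c1 = rng.normal(size=6) + 1j * rng.normal(size=6)
s = 0.7                  # arbitrary positive scale factor
c2 = 1j * s * c1         # 90-degree phase offset, elementwise

# Coupling coefficients chosen so the real terms cancel:
# a*d*|c1|^2 + b*f*|c2|^2 = 0, using |c2|^2 = s^2 |c1|^2.
a, b, d = 1.0, 1.0, 1.0
f = -a * d / (b * s**2)

# Elementwise Q vector from the equation above.
Q = (a * d * np.abs(c1)**2 + a * f * c1 * np.conj(c2)
     + b * d * c2 * np.conj(c1) + b * f * np.abs(c2)**2)
```

With either condition broken (e.g. an in-phase $\bm{c}_2$), the real part of $Q$ no longer vanishes.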
\section{Cubic expansion}\label{ap:cube} Expand the incident electric field with a phase $\bm{\phi}$ about a reference phase $\bm{\phi}_0$: \begin{equation} \bm{u}_{\rm in} = e^{i\bm{\phi}_0} \odot \left[\bm{1} + i\bm{\Delta\phi} - \frac{1}{2}\bm{\Delta\phi}^2 - \frac{i}{6}\bm{\Delta\phi}^3 + \mathcal{O}\left(\bm{\Delta\phi}^4\right)\right], \end{equation} where $\bm{\Delta\phi}\equiv \bm{\phi}-\bm{\phi}_0$. As before, the intensity response of the WFS is \begin{equation} \bm{p}_{\rm out} = \left| A \bm{u}_{\rm in}\right|^2 \end{equation} where $A$ is the complex-valued transfer matrix of the overall optical system. Modifying $A_{ij} \rightarrow A_{ij}e^{i\phi_{0,j}}$ and combining the above two equations, keeping only terms up to third order, yields \begin{equation} \bm{p}_{\rm out} \approx \bm{p}_{\rm out ,\, quad} - \frac{1}{3} {\rm Im}\left[ A\bm{1} \odot A^* \bm{\Delta\phi}^3 \right] + {\rm Im}\left[A\bm{\Delta\phi}\odot A^* \bm{\Delta\phi}^2\right] \end{equation} where $\bm{p}_{\rm out ,\, quad}$ is the quadratic approximation for the output intensity, as per equation \ref{eq:2ndorder}. We now expand the above model to an arbitrary modal basis. Let $R$ be a change-of-basis matrix, such that $\bm{\Delta\phi}=R\bm{a}$. The additional terms from the cubic expansion can be expressed as a single tensor multiplication of the form \begin{equation} \sum_{lmn} D'_{ilmn} \, a_l a_m a_n \end{equation} where the tensor $D'_{ilmn}$ is defined as \begin{equation} D'_{ilmn} = {\rm Im} \left[ -\frac{1}{3}\sum_j A_{ij}\sum_k A^*_{ik} R_{kl}R_{km}R_{kn} +\sum_{jk}A_{ij}A^*_{ik}R_{jl}R_{km}R_{kn}\right]. \end{equation} \noindent The $D'$ tensor has dimensions $N\times M\times M\times M$ for an $N$-port lantern sensing $M$ aberration modes.
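The $D'$ contraction maps directly onto tensor routines. The sketch below (random matrices and made-up dimensions, purely illustrative) builds $D'$ with numpy's einsum and cross-checks the contraction against the vector form of the third-order terms, $-\frac{1}{3}{\rm Im}[A\bm{1}\odot A^{*}\bm{\Delta\phi}^{3}]+{\rm Im}[A\bm{\Delta\phi}\odot A^{*}\bm{\Delta\phi}^{2}]$ with $\bm{\Delta\phi}=R\bm{a}$ real:

```python
import numpy as np

rng = np.random.default_rng(2)
N, P, M = 6, 12, 5  # ports, pupil samples, aberration modes (toy values)
A = rng.normal(size=(N, P)) + 1j * rng.normal(size=(N, P))
R = rng.normal(size=(P, M))  # modal coefficients -> pupil phase

# Build D' following its definition: both terms share the A_ij A*_ik factor.
D = np.imag(-np.einsum('ij,ik,kl,km,kn->ilmn', A, A.conj(), R, R, R) / 3
            + np.einsum('ij,ik,jl,km,kn->ilmn', A, A.conj(), R, R, R))

a = 0.1 * rng.normal(size=M)
cubic_term = np.einsum('ilmn,l,m,n->i', D, a, a, a)

# Cross-check against the vector form of the third-order terms (dphi is real,
# so A* dphi^3 is the conjugate of A dphi^3, etc.).
dphi = R @ a
direct = (-np.imag((A @ np.ones(P)) * np.conj(A @ dphi**3)) / 3
          + np.imag((A @ dphi) * np.conj(A @ dphi**2)))
```

The two evaluations agree to floating-point precision, since the modal contraction is just the pupil-basis expression with $\bm{\Delta\phi}=R\bm{a}$ substituted in.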
The full cubic model, in modal basis, is \begin{equation} \label{eq:cubic_modal} \bm{p}_{{\rm out},i} \approx |A\bm{1}|^2_i + \left(B'\bm{a}\right)_i - \dfrac{1}{2}\sum_{jk} C'_{ijk} a_j a_k + |A'\bm{a}|^2_i + \sum_{lmn} D'_{ilmn}\, a_l a_m a_n \end{equation} Brief empirical testing with this model shows that it can provide a significant increase in reconstruction accuracy, especially for PLs that have already been optimized for linearity. Heatmaps of reconstruction error against total RMS WFE for 10,000 randomly sampled aberrations are shown in Figure \ref{fig:recon_all_modes}g, h, and i, for various 6-port lantern designs. Notably, going to higher order consistently extends the reconstruction range of the sensor, suggesting that the main downside of going to a higher-order model is additional computational complexity rather than numerical instability, at least for the first few orders. \end{appendices} \bibliography{refs}
Title: Spiral Arms in Broad-line Regions of Active Galactic Nuclei. I. Reverberation and Differential Interferometric Signals of Tightly Wound Cases
Abstract: As a major feature in spectra of active galactic nuclei, broad emission lines deliver information of kinematics and spatial distributions of ionized gas surrounding the central supermassive black holes (SMBHs), that is the so-called broad-line regions (BLRs). There is growing evidence for appearance of spiral arms in the BLRs. It has been shown by reverberation mapping (RM) campaigns that the characterized radius of BLRs overlaps with that of self-gravitating regions of accretion disks. In the framework of the WKB approximation, we show robust properties of observational features of the spiral arms. The resulting spiral arms lead to various profiles of the broad emission line. We calculate RM and differential interferometric features of BLRs with $m=1$ mode spiral arms. These features can be detected with high-quality RM and differential interferometric observations via such as GRAVITY onboard Very Large Telescope Interferometer. The WKB approximation will be relaxed and universalized in the future to explore more general cases of density wave signals in RM campaigns and differential spectroastrometry observations.
https://export.arxiv.org/pdf/2208.04095
\title{Spiral Arms in Broad-line Regions of Active Galactic Nuclei} \subtitle{I. Reverberation and Differential Interferometric Signals of Tightly Wound Cases} \author{Jian-Min Wang \inst{1, 2, 3} \and Pu Du \inst{1} \and Yu-Yang Songsheng \inst{1} \and Yan-Rong Li \inst{1}} \institute{ Key Laboratory for Particle Astrophysics, Institute of High Energy Physics, Chinese Academy of Sciences, 19B Yuquan Road, Beijing 100049, China \\ \and University of Chinese Academy of Sciences, 19A Yuquan Road, Beijing 100049, China \\ \and National Astronomical Observatories of China, Chinese Academy of Sciences, 20A Datun Road, Beijing 100020, China} \abstract{ As a major feature in the spectra of active galactic nuclei, broad emission lines deliver information on the kinematics and spatial distribution of the ionized gas surrounding the central supermassive black holes (SMBHs), in the so-called broad-line regions (BLRs). There is growing evidence for the appearance of spiral arms in BLRs. Reverberation mapping (RM) campaigns have shown that the characteristic radius of BLRs overlaps with that of the self-gravitating regions of accretion disks. In the framework of the WKB approximation, we show robust properties of the observational features of the spiral arms. The resulting spiral arms lead to various profiles of the broad emission lines. We calculate the RM and differential interferometric features of BLRs with $m=1$ mode spiral arms. These features can be detected with high-quality RM and differential interferometric observations, such as with GRAVITY on the Very Large Telescope Interferometer. The WKB approximation will be relaxed and generalized in future work to explore more general cases of density wave signals in RM campaigns and differential spectroastrometry observations.
} \keywords{galaxies: active -- quasars: emission lines -- quasars: general -- reverberation mapping} \section{Introduction} Active galactic nuclei (AGNs), discovered by \cite{Seyfert1943}, are characterized by the appearance of prominent broad emission lines in their spectra (e.g., see the composite spectra in \citealt{Berk2001} and \citealt{Hu2012}), which usually have full widths at half maximum (FWHM) of $\gtrsim 10^{3}\,\kms$ or even a few $10^{4}\,\kms$ \citep[e.g., see reviews of][]{Ho2008,Netzer2013}. Such broad line widths undoubtedly indicate very deep potential wells in the centers of AGNs, as first realized by \cite{Woltjer1959}, which directly motivated the establishment of the scenario that accretion onto supermassive black holes (SMBHs) powers the huge radiative output of AGNs \citep{ZelDovich1965,Salpeter1964,Lynden-Bell1969}. It is generally accepted that broad emission lines stem from BLR gas photoionized by ionizing photons emitted from accretion disks \citep{Osterbrock2006}. Great progress has been made in understanding AGN and quasar activity \citep[e.g.,][]{Netzer2013}, but two major issues regarding BLRs remain under debate: 1) the origin of the BLR gas; and 2) its structure and dynamics \citep[see a brief review in][]{Wang2017}. Much of our knowledge comes from reverberation mapping (RM) campaigns carried out since the 1980s, with the underlying principle proposed by \cite{Bahcall1972} and \cite{Blandford1982}. Photons of emission lines from structured ionized gas travel along different paths to observers, leading to time lags ($\tau$) of the emission lines with respect to the ionizing photons. RM campaigns focusing on broad Balmer lines have detected the anticipated lags in a number of Seyfert galaxies \citep[e.g.,][]{Peterson1993,Peterson1998, Bentz2013, Barth2015,U2021} and quasars \citep[e.g.,][]{Kaspi2000, Du2014, Du2018, Shen2019} over the past decades.
Along with the growing investment of observing resources and the development of analysis algorithms, the general geometry and kinematics of the BLRs in some AGNs have been revealed by velocity-resolved delay analysis \citep[e.g.,][]{Bentz2010, Denney2010, Grier2012, Grier2013,Du2016, U2021}, velocity-delay maps \citep[e.g.,][]{Xiao2018,Horne2021}, or dynamical modeling \citep[e.g.,][]{Bottorff1997,Pancoast2014,Li2018,Williams2020}. Many resolved BLRs show a disk-like geometry of the H$\beta$-emitting region (the others show inflow or outflow, or some mixture of the three configurations)\footnote{It turns out that high-ionization lines, such as the \civ\, line, favor an origin in outflows. See \cite{Bottorff1997} for a first detailed study of the \civ\, line in NGC 5548 observed by the Hubble Space Telescope and the International Ultraviolet Explorer.}. Moreover, the repeated RM observations of the same emission line and the RM results for emission lines of different ionization in a few AGNs (for instance NGC 5548, 3C 390.3, NGC 3783, NGC 7469, Mrk 817, etc.) approximately demonstrate a relation $V_{\rm FWHM} \propto \tau^{-1/2}$, providing evidence for the gravitational potential of the SMBHs \citep[e.g.,][]{Peterson2000,Peterson2004, Lu2021}, where $V_{\rm FWHM}$ is the full width at half maximum of the emission lines. Considering the disk-like geometry of the BLRs in some AGNs, this relation probably indicates nearly Keplerian rotation of the disk BLRs. More recently, the GRAVITY instrument mounted on the Very Large Telescope Interferometer (VLTI) spatially resolved the BLRs in several AGNs \citep[e.g., 3C\,273, NGC\,3783, IRAS\,09149 by][respectively]{Sturm2018, Gravity2020_09149,Gravity2021_3783} and also found that their BLRs are approximately characterized by Keplerian rotating disks. Moreover, there is growing evidence from RM observations for the existence of sub-structures or inhomogeneity in the BLR disks.
For example, a well-known phenomenon in RM is that the emission-line profiles in the mean spectra (corresponding to the entire line-emitting region) and the root mean square (RMS) ones (corresponding to the portion of the region that responds) are generally different in most AGNs \citep[e.g.,][]{Bentz2009, Denney2009, Barth2013, Du2018MAHA}, which implies that the BLRs are inhomogeneous in terms of gas distribution. Furthermore, the sub-features in the velocity-delay maps of NGC~5548 \citep[e.g.,][]{Xiao2018, Horne2021} indicate explicitly that there is probably gas inhomogeneity in BLRs. In particular, \cite{Horne2021} recently found helical ``Barber-Pole'' patterns in the Ly$\alpha$ and C {\sc iv} lines of NGC~5548, which suggests azimuthal structures in their emitting regions. Multiple-peaked and asymmetric line profiles also indicate complicated BLR structures. These increasing pieces of evidence suggest that sub-structures exist in BLRs. Questions naturally arise: are the motions of the BLR sub-structures chaotic or ordered? What is their physical origin? Motivated by the observational evidence, some preliminary efforts have been made in which spiral arms are invoked to explain the asymmetric double-peaked broad emission lines, however only by assuming some analytical form for the arm patterns, without considering their physical origin or introducing any dynamical physics. Such efforts were made by \cite{Gilbert1999} and \cite{Storchi-Bergman2003, Storchi-Bergman2017}, who adopted spiral arms to explain the asymmetric double-peaked emission lines and their variations in some AGNs. \cite{Horne2004} calculated the transfer function of BLRs with arms for RM, but all of them assumed some analytical forms for the spirals. All these mathematical models for observational data should be derived from first principles in order to advance our understanding of BLR physics.
The BLR radii measured by RM span from $10^3 R_{\rm g}$ to $10^5 R_{\rm g}$ depending on accretion rates and SMBH masses \citep[see Figure 6 in][]{Du2016V,Du2019}, which overlap with the outer part of accretion disks, where $\Rg=G\bhm/c^{2}=1.5\times 10^{13}\,M_{8}$\,cm is the gravitational radius, $G$ is the gravitational constant, $c$ is the speed of light, and $M_{8}=\bhm/10^{8}\sunm$ is the SMBH mass. Some pioneering works suggest that the outer regions may be self-gravitating \citep[SG, e.g.,][]{Paczynski1978a,Paczynski1978b}, given viscous mechanisms transferring angular momentum outward, in particular in AGN accretion disks \citep[see details of][]{Shlosman1989}. It is not difficult to give a rough estimate of the SG radius using the famous Toomre parameter $Q = \kappa a / (\pi G \sigma)$ \citep{Toomre1964}, which is the criterion for gravitational instability, where $\kappa$ is the epicyclic frequency (equal to the angular speed $\Omega$ in a Keplerian disk), $a$ is the sound speed, and $\sigma$ is the surface density. Using $Q=1$, \cite{Goodman2003} approximated $R_{\rm SG}/\Rg=3.1\times 10^{3}\,\alpha_{0.1}^{2/9}\left(L_{\rm Bol}/L_{\rm Edd}\right)^{4/9} M_{8}^{-2/9}$, where $\alpha_{0.1}=\alpha/0.1$ is the viscosity parameter and $L_{\rm Bol}/L_{\rm Edd}$ is the Eddington ratio. Interestingly, the SG parts of the disks generally overlap spatially with the BLRs in AGNs, which motivates the idea that BLR structures and dynamics are somehow linked to the SG region.
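Goodman's estimate is easy to evaluate numerically; the sketch below (parameter values are illustrative) confirms that for typical parameters $R_{\rm SG}$ falls inside the $10^{3}-10^{5}\,R_{\rm g}$ range of RM-measured BLR radii quoted above:

```python
def r_sg_over_rg(alpha=0.1, edd_ratio=0.1, m8=1.0):
    """Self-gravitating radius R_SG in units of R_g (Goodman 2003, Toomre Q = 1)."""
    return 3.1e3 * (alpha / 0.1) ** (2 / 9) * edd_ratio ** (4 / 9) * m8 ** (-2 / 9)

# For alpha = 0.1, L_Bol/L_Edd = 0.1, and M_BH = 1e8 Msun, R_SG is ~1.1e3 R_g,
# i.e. within the 1e3-1e5 R_g range of BLR radii measured by RM.
```

Note the weak dependence on $\alpha$ and $M_{8}$: the Eddington ratio is the main driver of where the SG region begins.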
The ultimate fate of the SG accretion disks remains a matter of debate; however, self-regulation processes, in which radiative cooling is balanced by heating generated internally by the dissipation of gravito-turbulence \citep[e.g.,][]{Paczynski1978a, Paczynski1978b, Lin1987, Gammie2001, Lodato2004} and probably also by magneto-rotational instabilities \citep[MRI, e.g.,][]{Balbus1998, Rafikov2015}, or externally by irradiation (driven by the inner part of the accretion disk, e.g., \citealt{Rice2011, Rafikov2015}) or star formation (inside the disks, e.g., \citealt{Collin1999, Sirko2003,Wang2011}), are believed to maintain the disks so that they stay in marginally stable states. In such states, non-axisymmetric perturbations (spiral structures) may inevitably grow in the SG parts, although clumps or stars may also form through condensation if the cooling timescale satisfies $t_{\rm cool} < \beta \Omega^{-1}$, where $\beta$ is a factor of a few \citep[e.g.,][]{Gammie2001, Rice2003, Rice2011, Kratter2016, Brucy2021}. The aforementioned phenomenological evidence for BLR substructures and inhomogeneity suggests that they may be connected with, or originate from, the spiral arms in the SG part of accretion disks, at least for those AGNs with disk-like BLRs. As the first paper of this series, we adopt the simple tight-winding approximation and use analytical formulations to discuss the observational characteristics of tightly wound density waves in BLRs. The basic formulations, equilibrium states, and boundary conditions are provided in Section 2. In Section 3, observational features of the arms are discussed for RM campaigns and for interferometric observations with GRAVITY/VLTI. Brief discussions and conclusions are provided in Sections 4 and 5, respectively. It should be noted that the purpose of this paper is to demonstrate the general features of BLR spiral arms in observations (e.g., RM) rather than to establish a perfect model.
\section{Model and Formulations} In the BLR, the ionized gas rotates with nearly Keplerian velocity around the central SMBH. This assumption is supported by the evidence that $V_{\rm FWHM}$ and $\sigma_{\ell}$ are roughly proportional to $\tau_{\rm H\beta}^{-1/2}$ in some AGNs from multiple RM campaigns \citep{Peterson2004,Lu2021}, where $\sigma_{\ell}$ is the velocity dispersion of the H$\beta$ profiles in the RMS spectra and $\tau_{\rm H\beta}$ are the H$\beta$ time lags with respect to the 5100\AA\, continuum variations. Although the current accuracy of RM data does not allow us to quantitatively determine deviations from exactly Keplerian rotation, it should be reasonable to assume that the disk-BLR rotation is nearly Keplerian. As one possible mechanism, the magneto-rotational instability (MRI) \citep[e.g.,][]{Balbus1998} drives radial motion with a velocity approximated by $u/u_{\rm K} \approx 0.1\alpha_{0.1}$, which is much smaller than the rotation, where $u_{\rm K}$ is the Keplerian rotation velocity. Moreover, for disks dominated by the central point mass, the weak self-gravity of the disks can still maintain spiral arms \citep[e.g.,][]{Lee1999,Tremaine2001}. In this paper, for simplicity, we apply the classic theory of density waves \citep[e.g.,][]{Lin1964, Lin1966,Lin1979} to calculate the broad-emission-line profiles and the transfer function for RM, as well as the differential phase curves for GRAVITY/VLTI. \subsection{Self-gravitating disk and BLR}\label{sec:BLR_structure} Following \cite{Paczynski1978a}, without specifying the regulation mechanisms (dusty gas, star formation, photoionization, accretion, etc.), we use the polytropic relation as a prescription for the $Q_{\rm disk}\sim 1$ region in general cases, given by $p=K_{0}\rho^{1+1/n}$, where $p$ is the pressure, $\rho$ is the density, $K_{0}$ is a constant, and $n$ is the polytropic index. Fortunately, $K_{0}$ can generally be constrained by observations of the BLR geometry.
The sound speed is given by $a_{0}=\left(p/\rho\right)^{1/2}=K_{0}^{1/2}\rho^{1/2n}$. The vertical equilibrium admits $H=a_{0}\Omega^{-1}$, and the Toomre parameter is given by $Q_{\rm disk}={\kappa a_0}/{\pi G\sigma_0}$, where $\sigma_{0}=2\rho H$ is the surface density of the SG region. Here we would like to point out that self-gravity is neglected in the vertical equilibrium for simplicity. The epicyclic frequency is given by $\kappa=2\Omega\left(1+\frac{1}{2}d\ln\Omega/d\ln R\right)^{1/2} \simeq \Omega$, and we adopt a Keplerian angular speed, $\Omega=\sqrt{G\bhm/R^{3}}\approx 2\times 10^{-9}\,M_{8}^{-1}r_{4}^{-3/2}\,{\rm s^{-1}}$. We have \begin{equation} a_0 = \frac{K_0^{1/2}}{(2 \pi G Q_{\rm disk})^{1/2n}}\,\Omega^{1/n}, \end{equation} \begin{equation} \sigma_{0} = \frac{2 K_0^{1/2}}{{(2 \pi G Q_{\rm disk})^{(1+2n)/2n}}}\,\Omega^{(1+n)/n}, \end{equation} and \begin{equation} H = \frac{K_0^{1/2}}{(2 \pi G Q_{\rm disk})^{1/2n}}\,\Omega^{(1-n)/n}. \end{equation} Given $n$, $K_{0}$ and $Q_{\rm disk}$, we can derive the radial structure of the BLR. The polytropic index $n$ is a free parameter in this paper, whose value can range from $0$ to $+\infty$. In practice, for instance, \cite{Paczynski1978a} discussed SG disks with $n=1.5$ and $3$, \cite{Lubow1998} chose $n=3$ in their work, and \cite{Korycansky1995} employed $n=5$ as a typical case in their discussion of axisymmetric waves in accretion disks. In the present paper, the polytropic index $n$ controls the radius-dependent thickness of the gaseous disk, i.e., $H/R \propto R^{(n-3)/2n}$. If $n > 3$, the disk becomes thicker at larger radii and the surface of the disk tends to be ``bowl-shaped'' (similar to \citealt{goad2012}). The bowl shape enables the gas on the disk surface to be illuminated by the ionizing photons from the inner part of the disk; otherwise, the geometrically thin disks are not able to be sufficiently ionized.
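These relations can be bundled into a short numerical sketch (illustrative CGS values; the $K_0$ below is arbitrary) that also verifies their internal consistency, $H=a_0\Omega^{-1}$ and $Q_{\rm disk}=\kappa a_0/\pi G\sigma_0$ with $\kappa\simeq\Omega$:

```python
import numpy as np

G = 6.674e-8  # gravitational constant, CGS

def disk_structure(Omega, n=3.0, K0=1.0e20, Q=1.0):
    """Sound speed a0, surface density sigma0, and half-thickness H of the
    polytropic self-gravitating disk, as functions of the angular speed Omega."""
    pref = np.sqrt(K0) / (2 * np.pi * G * Q) ** (1 / (2 * n))
    a0 = pref * Omega ** (1 / n)
    sigma0 = (2 * np.sqrt(K0) / (2 * np.pi * G * Q) ** ((1 + 2 * n) / (2 * n))
              * Omega ** ((1 + n) / n))
    H = pref * Omega ** ((1 - n) / n)
    return a0, sigma0, H
```

With a Keplerian $\Omega \propto R^{-3/2}$, the aspect ratio scales as $H/R \propto R^{(n-3)/2n}$, so for $n>3$ the sketch reproduces the flaring, bowl-shaped surface described above.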
Observational constraints on BLR gas indicate that the mass of H$\beta$-emitting gas could lie in a wide range of $10^{3}\sim 10^{4}\sunm$, or be even more massive \citep{Baldwin2003}. As a simple estimate, we integrate the surface density of the SG part of accretion disks and obtain a mass of $M_{\rm disk}\approx 4.1 \times 10^6 \alpha_{0.1}^{-4/5} M_8^{11/5} \mathdotM^{7/10} r_4^{5/4}\,\sunm$ in light of the standard model of accretion disks \citep{Shakura1973}, where $\dot{\mathscr{M}}=\dot{M}_{\bullet}/\dot{M}_{\rm Edd}$ is the dimensionless accretion rate, $\dot{M}_{\bullet}$ is the mass accretion rate, $\dot{M}_{\rm Edd}=L_{\rm Edd}/c^{2}$ is the Eddington rate, and $L_{\rm Edd}=1.3\times 10^{46}\,M_{8}\,{\rm erg\,s^{-1}}$ is the Eddington luminosity. This indicates that the BLRs are the disk surface, comprising only a tiny fraction of this SG portion. In the present paper, we assume that the density of the ionized BLR gas is proportional to the density of the disk, $\rho_{\rm ion} \propto \rho$ (see the following Section \ref{sec:line_profiles}). It is found that $M_{\rm disk}/\bhm\ll 1$, namely, the accretion disks are much lighter than the central SMBH except for cases with extremely super-Eddington accretion rates ($\mathdotM\gg 1$). Though AGNs with $\mathdotM\sim 900$ have been found from RM campaigns \citep{Du2014,Du2016,Du2016V,Du2018}, we limit the present scope to sub-Eddington accretion disks ($\mathdotM\lesssim 3$) with nearly Keplerian rotation in the point potential of the SMBH. We would also point out the possibility that sufficiently high-density outflows emitting high-ionization lines could partially shield the outer part of the BLRs, making the reverberation of the H$\beta$ line complicated \citep[][]{Dehghanian2021}. In such a case, the H$\beta$ line would undergo a holiday driven by the inner outflows (of the \civ\ line), as in NGC 5548 \citep[see Figure 7 in][]{Pei2017}, namely, showing a lack of reverberation.
However, the H$\beta$ line has had such a holiday only once over the last 20 years \citep{Pei2017}, indicating that such holidays are quite rare and that the obscurations are not common. \subsection{Equations and boundary conditions} \subsubsection{Perturbation equations} We adopt the formalisms and notations used in \cite{Lin1979}. For a perturbation with azimuthal wavenumber $m$, $(u,\vup,\sigma)=(u_{0},\vup_{0},\sigma_{0})+(u_{1},\vup_{1},\sigma_{1})e^{i(\omega t-m\varphi)}$, where $u$ and $\vup$ are the radial and azimuthal velocities, and $\sigma$ is the surface density. The quantities with subscript ``0'' correspond to the equilibrium state (we take $u_{0}=0$ in this paper) and are functions of radius. Perturbing the basic equations given in Appendix \ref{app:basic_equations}, we have \begin{equation} \frac{1}{R}\frac{d}{dR}(R\sigma_{0}u_{1})-\frac{im}{R}\sigma_{0}\vup_{1}+i(\omega-m\Omega)\sigma_{1}=0, \end{equation} \begin{equation} i(\omega-m\Omega)u_{1}-2\Omega \vup_{1}=-\frac{d(\psi_{1}+h_{1})}{dR}, \end{equation} and \begin{equation} \frac{\kappa^{2}}{2\Omega}u_{1}+i(\omega-m\Omega)\vup_{1}=im\frac{\psi_{1}+h_{1}}{R}, \end{equation} yielding the following differential equation \begin{equation}\label{eq:h1} \frac{d^{2}}{dR^{2}}(h_{1}+\psi_{1})+\calA\frac{d}{dR}(h_{1}+\psi_{1})+ \calB(h_{1}+\psi_{1})=-\calC h_{1}, \end{equation} where $h_{1}=a_0^{2}\sigma_{1}/\sigma_{0}$. Here the coefficients $\calA,\calB,\calC$ are given in Appendix \ref{app:coefficients}. The Poisson equation of the SG portion of the accretion disks reads $\nabla^{2}\psi=4\pi G \sigma \delta(z)$; vertical integration yields \begin{equation}\label{eq:Poisson} \frac{d\psi_{1}}{dR}=-\frac{\psi_{1}}{2R}-is_{\rm k}\Sigma h_{1}, \end{equation} where $\delta(z)$ is the Dirac $\delta$-function, $\Sigma=2\pi G\sigma_{0}/a_{0}^{2}$, and $s_{\rm k}=\mp 1$ is the sign of the wave vector $(\vec{k})$ for trailing and leading waves, respectively.
The Poisson equation holds approximately to order $(H/R)^{2}$. Combining the perturbation equations, we obtain the equation for the reduced enthalpy ($U$), \begin{equation}\label{eq:U} \frac{d^{2}U}{dR^{2}}+k_{3}^{2}U=0, \end{equation} where \begin{equation} U=h_{1}\left[\frac{\kappa^{2}(1-\nu^{2})}{\sigma_{0}R}\right]^{-1/2} \exp\left(\frac{i}{2}\int\Sigma dR\right), \end{equation} and \begin{equation} k_{3}^{2}=\left(\frac{\kappa}{a_{0}}\right)^{2}\left(Q_{\rm disk}^{-2}-1+\nu^{2}\right);\quad \nu=\frac{\omega-m\Omega}{\kappa}. \end{equation} \cite{Bertin2014} presents more detailed derivations of the above equations (\ref{eq:h1} and \ref{eq:U}). Equation (\ref{eq:U}) holds approximately to order $H/R$, consistent with the accuracy of the Poisson equation. As a first application of density waves to BLRs, we retain this order of approximation for simplicity. \subsubsection{Boundary conditions} In the context of spiral galaxies, the outer boundary conditions are imposed by the radiation condition \citep{Lin1979}. Since the dynamics of dusty and dust-free gas will be very different under the radiation pressure of accretion disks (either local or from the central part), the boundary can be identified with the dust sublimation radius. For the present SG disk, the outer boundary is fixed at the inner edge of the dusty torus (inside which the gas is dust-free), where the waves are evanescent. Although the dusty torus is generally not spatially resolved (except for NGC 1068, which shows a near-infrared cavity with a sharp edge in \citealt{Gravity2020}), fortunately, the RM campaigns of near-infrared continuum emission show that the inner edge of the torus is about $R_{\rm torus}\approx 0.1\, L_{43.7}^{0.5}\,{\rm pc}$ \citep{Suganuma2006, Koshida2014, Minezaki2019,Lyu2019}, where $L_{43.7}$ is the $V$-band luminosity in units of $10^{43.7}\ergs$ \citep[see the latest version in][]{Minezaki2019}.
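For orientation, this scaling can be evaluated numerically. The short sketch below (our illustration; the 100 Mpc distance is an arbitrary assumed value, not taken from the text) shows that a source with $L_{43.7}=1$ has $R_{\rm torus}\approx 0.1$ pc, i.e. about 119 light-days, subtending roughly 0.2 mas at 100 Mpc:

```python
import math

# Evaluate R_torus ~ 0.1 * L_{43.7}^{0.5} pc (the RM scaling quoted in the text)
# and convert it to light-days and to an angular size at an assumed distance.
PC_IN_M = 3.0857e16          # metres per parsec
LIGHT_DAY_IN_M = 2.59020e13  # metres per light-day (c * 86400 s)

def r_torus_pc(L_43_7):
    """Inner torus radius in pc for a V-band luminosity in units of 10^43.7 erg/s."""
    return 0.1 * L_43_7**0.5

r_pc = r_torus_pc(1.0)
r_ld = r_pc * PC_IN_M / LIGHT_DAY_IN_M          # ~119 light-days

# angular radius at an assumed distance of 100 Mpc (illustrative only)
D_M = 100.0 * 1e6 * PC_IN_M
theta_muas = math.degrees(r_pc * PC_IN_M / D_M) * 3600e6   # micro-arcseconds
print(round(r_ld, 1), round(theta_muas, 1))
```

The light-day value is the natural unit for comparison with reverberation lags, as used in Eqn (\ref{eq:Rtorus}) below.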
It has long been believed that the outer edge of the BLR is just the inner edge of the torus \citep[e.g.,][]{Netzer1993, Suganuma2006, Czerny2011}. Therefore, we can obtain the outer radius of the BLR \begin{equation}\label{eq:Rtorus} R_{\rm out}=205.9\,\eta_{0.1}^{1/2}\epsilon_{10}^{-1/2}\mathdotM^{1/2}M_{8}^{1/2}\,{\rm ltd}, \end{equation} from $R_{\rm out}=R_{\rm torus}$, where the bolometric luminosity is $L_{\rm Bol}=\epsilon L_{V}=\eta \dot{\mathscr{M}}\dot{M}_{\rm Edd}c^{2}$, $\epsilon=10\epsilon_{10}$ is the bolometric correction factor, and $\eta=0.1\eta_{0.1}$ is the radiative efficiency. This agrees with the observation of NGC 1068 \citep{Gravity2020} by GRAVITY on VLTI. We adopt a vanishing perturbation of density at the outer boundary. For the simplest case of the inner boundary, we assume \begin{equation}\label{eq:sigma1} \frac{dU}{dR}=0, \end{equation} just as in \cite{Lau1978}. We set the inner radius to be 10 percent of the outer radius, and have checked that the detailed value of the inner radius does not change the general features of the spiral arms. With these boundary conditions, solving Eqn (\ref{eq:U}) becomes an eigenvalue problem. Then, we can obtain the perturbation of the surface density \citep[more details can be found in, e.g.,][]{Lau1978, Lin1979}. It should be pointed out that the outer boundary conditions could be revised for individual AGNs if the spatially resolved conditions differ from the present ones. The adopted conditions reflect the obvious fact that any H$\beta$ photons will be extincted by the dusty gas of the torus, whether it is a kind of outflow \citep{Konigl1994,Elitzur2006}, a clumpy structure \citep{Nenkova2008}, or a classical continuous torus \citep{Antonucci1993}. On the other hand, accretion disks could extend outward into the mid-plane of the torus, and density waves would also extend (though this depends on the local radiation pressure).
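The eigenvalue problem can be made concrete with a shooting method. The sketch below is a toy illustration, not the calculation used in the paper: it assumes Keplerian rotation $\Omega=\kappa=R^{-3/2}$ (in units of $GM_{\bullet}=1$), a constant aspect ratio $H/R=0.05$, $Q_{\rm disk}=1$, and $m=1$, then scans $\omega$ for values at which $U'(R_{\rm in})=0$ propagates to $U(R_{\rm out})=0$:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Shooting-method sketch for U'' + k3^2(R; omega) U = 0 with U'(R_in) = 0
# (inner condition) and U(R_out) = 0 (vanishing perturbation at the torus edge).
# Toy ingredients: Keplerian Omega = kappa = R^{-3/2} (G M_bh = 1), constant
# aspect ratio h = H/R, Q_disk = 1, m = 1.  Radii in units of R_in.
h, m, Q = 0.05, 1, 1.0
R_in, R_out = 1.0, 5.0

def k3_sq(R, omega):
    Omega = R**-1.5
    a0 = h * R * Omega                  # sound speed for H = a0/Omega
    nu = (omega - m * Omega) / Omega    # kappa = Omega here
    return (Omega / a0)**2 * (Q**-2 - 1.0 + nu**2)

def shoot(omega):
    rhs = lambda R, y: [y[1], -k3_sq(R, omega) * y[0]]
    sol = solve_ivp(rhs, (R_in, R_out), [1.0, 0.0], rtol=1e-8, atol=1e-10)
    return sol.y[0, -1]                 # U(R_out); zero at an eigenvalue

# scan omega, bracket sign changes of U(R_out), and refine the first three roots
grid = np.linspace(0.2, 0.7, 201)
vals = [shoot(w) for w in grid]
eigenvalues = []
for a, b, fa, fb in zip(grid[:-1], grid[1:], vals[:-1], vals[1:]):
    if fa * fb < 0:
        eigenvalues.append(brentq(shoot, a, b))
    if len(eigenvalues) == 3:
        break
print(eigenvalues)
```

In this toy model the eigenvalues are densely spaced because the disk is cold ($H/R\ll 1$); the paper's calculation additionally carries the full coefficients of Eqn (\ref{eq:U}) and the equilibrium profiles.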
In some AGNs, ALMA (Atacama Large Millimeter/submillimeter Array) observations reveal spiral arms, as in a few Seyfert galaxies \citep{Combes2019}, and it would be interesting to test whether they are consistent with those in BLRs. Recent interferometric observations of NGC 1068 show much more complicated structures \citep{Rosas2022}, as well as a counter-rotating disk from 0.2 to 7 pc in ALMA observations \citep{Imanishi2018,Impellizzeri2019}. In this paper, the simplest conditions are taken for the outer boundary physics. \subsection{Line profiles and 2-dimensional transfer functions} \label{sec:line_profiles} The emissivity distributions in BLRs are still unclear from observations. From photoionization, the locally optimally emitting cloud model suggests that line emission emerges most efficiently from a relatively narrow range of the ionization parameter $U_{\rm ion} = Q_{\rm ion} / 4 \pi R^2 c n_{\rm ion}$, where $Q_{\rm ion}$ is the emission rate of ionizing photons, $n_{\rm ion} = \rho_{\rm ion} / m_{\rm H}$ is the number density of hydrogen, $\rho_{\rm ion} \propto \rho = \left[\sigma_{0}(R)+\sigma_{1}(R,\varphi)\right] / 2H = \left[\sigma_{0}(R)+\sigma_{1}(R,\varphi)\right] \Omega / 2 a_0$ is the ionized hydrogen density, assumed to be proportional to the density of the disk, and $m_{\rm H}$ is the mass of the hydrogen atom \citep{Baldwin1995, Korista1997, Korista2000}. For simplicity, we assume that the emissivity (reprocessing coefficient) is a Gaussian function of the ionization parameter, \begin{equation}\label{eq:Xi} \Xi_{R} \propto \frac{1}{\sqrt{2\pi} \sigma_U} e^{-(U_{\rm ion} - U_{\rm c})^2 / 2 \sigma_U^2}, \end{equation} where $U_{\rm c}=U_{\rm ion}(R_{\rm c})$ is the ionization parameter corresponding to the most efficient line emission at radius $R_{\rm c}$, and $\sigma_U = \tilde{\sigma}_U \times (U_{\rm ion,max} - U_{\rm ion,min})$ is a parameter that controls the range of line emission. Actually, the form of Eqn.
(\ref{eq:Xi}) is a simplified version of the popular model \citep[e.g.,][]{Pancoast2014,Li2018}. The typical BLR radii are on average smaller than the inner edges of tori by factors of $4-5$ \citep{Koshida2014, Minezaki2019}; thus we adopt $R_{\rm c} = R_{\rm torus}/4$ and assume $\tilde{\sigma}_U = 0.20$ (corresponding to a not very compact line-emitting region). Given the configuration of the disk-like BLR in Section \ref{sec:BLR_structure}, the emission-line profile can be expressed as \begin{equation} F_{\ell}(\lambda)=\int_{R_{\rm in}}^{R_{\rm out}}RdR\int_{0}^{2\pi}\,d\varphi\, \Xi_{R}\, \delta\left[\lambda-\lambda_{0}\left(1+\frac{\bm{\vup}\cdot\bmnobs}{c}\right)\right], \end{equation} where $\bm{\vup}$ is the velocity of the emitting gas, $\bmnobs=(0,\sin i_{0},\cos i_{0})$ is the unit vector along the line of sight, and $i_{0}$ is the inclination angle. \cite{Blandford1982} developed the linear reverberation technique to map BLRs. Denoting the ionizing continuum light curve and the broad-line light curve at velocity $\vup$ of the line profile as $L_{\rm c}(t)$ and $L_{\ell}(\vup,t)$, respectively, we have \begin{equation} L_{\ell}(\vup,t)=\int_{-\infty}^{\infty}dt^{\prime}\,L_{\rm c}(t^{\prime})\Psi(\vup,t-t^{\prime}), \end{equation} where $\Psi(\vup,t)$ is the 2D transfer function (velocity-delay map), with $\Psi(\vup,t)=0$ for $t<0$ and $\Psi(\vup,t)\ge 0$ for $t\ge 0$. RM campaigns obtain $L_{\ell}(\vup,t)$ and $L_{\rm c}(t)$, which can be used to infer $\Psi(\vup,t)$. With the geometric and kinematic configurations of BLRs with density waves, the 2D transfer function can be obtained from \begin{equation} \Psi(\vup,t)=\int d{\bmR}\,\frac{g({\bmR},\vup)\,\Xi_{R}}{4\pi R^{2}}\, \delta\left[t-\frac{(R+{\bmR}\cdot{\bmnobs})}{c}\right], \label{eqn_psi} \end{equation} where $g({\bmR},\vup)$ is the projected one-dimensional velocity distribution function. With Equation~(\ref{eqn_psi}), features of density waves can be calculated and then compared with observations.
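The profile integral can be evaluated directly on an $(R,\varphi)$ grid. The sketch below is a toy illustration (our assumptions: a flat radial emissivity in place of $\Xi_{R}$, a logarithmic spiral phase $\alpha\ln R$ standing in for the computed eigenfunction, and $\sigma_1/\sigma_0=0.2$); it shows that switching on the $m=1$ perturbation makes the otherwise symmetric double-peaked disk profile measurably asymmetric:

```python
import numpy as np

# Line profile F(v) of a thin Keplerian disk with an m=1 spiral perturbation.
# Units: G M_bh = 1, so V_K = R^{-1/2}; only the profile *shape* matters here.
# The spiral phase alpha*ln(R) and the flat emissivity are toy choices.
R_in, R_out, i0, alpha, A = 1.0, 10.0, np.radians(40.0), 3.0, 0.2

R, phi = np.meshgrid(np.linspace(R_in, R_out, 400),
                     (np.arange(400) + 0.5) * 2 * np.pi / 400, indexing="ij")
v_los = R**-0.5 * np.sin(i0) * np.cos(phi)          # line-of-sight velocity

def profile(amp):
    # surface density sigma_0 + sigma_1, weighted by the area element R dR dphi
    w = R * (1.0 + amp * np.cos(phi + alpha * np.log(R)))
    F, _ = np.histogram(v_los, bins=np.linspace(-0.8, 0.8, 81), weights=w)
    return F / F.sum()

F0, F1 = profile(0.0), profile(A)                    # unperturbed vs perturbed
asym = lambda F: np.abs(F - F[::-1]).max() / F.max() # blue/red mirror asymmetry
print(asym(F0), asym(F1))
```

The unperturbed case is mirror-symmetric in velocity by construction, while the perturbed case develops a percent-level peak asymmetry, the qualitative behaviour shown in Figure \ref{fig:profile}.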
In principle, high-fidelity data from RM campaigns can be used to generate 2D transfer functions through the maximum entropy method \citep{Horne2004} or the improved Pixon-based method \citep{Li2021}. Spiral arms, as prominent inhomogeneous components of BLRs, can then be tested directly, avoiding the uncertainties of interpretations based on complicated profiles alone. \subsection{Signals for GRAVITY/VLTI} As a powerful technique, spectroastrometry (SA), developed from ``Differential Speckle Interferometry" \citep{Becker1982,Petrov1989,Rakshit2015}, measures the center of photons and therefore greatly improves the spatial resolution. Spectroastrometry with GRAVITY/VLTI can reach an unprecedentedly high spatial resolution of $\sim 10\,\mu$as. It has been successfully applied to 3C 273 to determine the geometry and kinematics of its BLR \citep{Sturm2018}\footnote{The current accuracy of GRAVITY measurements of 3C 273 is only enough to test a simple disk model rather than sub-structures such as spiral arms \citep[see details in][]{Sturm2018}.}. The spiral arms developed from density waves may show signatures that can be detected by GRAVITY/VLTI. The detailed scheme of the spectroastrometry technique is described in \cite{Rakshit2015} and \cite{Songsheng2019}; below we give a brief outline for the sake of completeness. Given the surface brightness distribution, the photon center of the source at wavelength $\lambda$ is \begin{equation}\label{eq:ph-center} \bm{\epsilon}(\lambda) = \frac{\int \bm{\alpha} \calO(\bm{\alpha},\lambda) \dd[2]{\bm{\alpha}}} {\int \calO(\bm{\alpha},\lambda) \dd[2]{\bm{\alpha}}}, \end{equation} where $\calO(\bm{\alpha},\lambda)=\calO_{\ell}+\calO_{\rm c}$ is the surface brightness distribution of the source contributed by the BLR and continuum emission, and $\bm{\alpha}$ is the angular displacement on the celestial sphere.
Given the geometry and kinematics of a BLR, its $\calO_{\ell}$ can be calculated for a broad emission line with observed central wavelength $\lambda_{\rm cen}$ through \begin{equation}\label{eq:calO} \calO_{\ell}=\int \frac{\Xi_R F_{\rm c}}{4 \pi R^2} f(\bm{R},\bm{\vup})\, \delta\left(\bm{\alpha}-\bm{\alpha}' \right)\delta\left(\lambda -\lambda^{\prime}\right) \dd[3]{\bm{R}} \dd[3]{\bm{\vup}}, \end{equation} where $\lambda^{\prime}=\lambda_{\rm cen}\gamma_{0}\left(1+\bm{\vup}\vdot\nobs/c\right) \left(1-R_{\rm S}/R\right)^{-1/2}$ includes the gravitational redshift due to the central SMBH, $R_{\rm S}=2R_{\rm g}$ is the Schwarzschild radius, $\gamma_{0}=\left(1-\vup^2/c^2\right)^{-1/2}$ is the Lorentz factor, ${\bm \alpha}^{\prime}=\left[\bm{R}-\left(\bm{R}\vdot\bmn\right)\bmn\right]/D_{\rm A}$, with $D_{\rm A}$ the angular diameter distance, $\bm{R}$ is the position vector relative to the central SMBH, $f(\bmR,\bmv)$ is the velocity distribution of BLR clouds at $\bmR$, and $F_{\rm c}$ is the ionizing flux. By introducing the fraction of the emission-line flux to the total ($\fline$), we have \begin{equation}\label{eq:epsilon} \bm{\epsilon}(\lambda) = \fline\,\bm{\epsilon}_{\ell}(\lambda), \end{equation} where \begin{equation} \bm{\epsilon}_{\ell}(\lambda) = \frac{\int \bm{\alpha} \calO_{\ell} \dd[2]{\bm{\alpha}}}{\int \calO_{\ell} \dd[2]{\bm{\alpha}}},\,\,\,\fline = \frac{\Fline(\lambda)}{F_{\rm tot}(\lambda)},\, \ \ \Fline(\lambda) = \int \calO_{\ell} \dd[2]{\bm{\alpha}}, \end{equation} and \begin{equation} F_{\rm tot}(\lambda)=\Fline(\lambda) + F_{\rm c}(\lambda). \end{equation} For an interferometer with a baseline $\bm{B}$, an unresolved source, with a global angular size smaller than the resolution limit $\lambda / B$, has the interferometric phase \begin{equation}\label{eq:phase} \phi_*(\lambda,\lambda_{\rm r})=-2\pi\bm{u}\vdot[\bm{\epsilon}(\lambda)-\bm{\epsilon}(\lambda_{\rm r})], \end{equation} where $\bm{u}=\bm{B}/\lambda$ is the spatial frequency, and $\lambda_{\rm r}$ is the wavelength of a reference channel.
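The chain from Eqn (\ref{eq:ph-center}) to Eqn (\ref{eq:phase}) can be sketched numerically. Every number below is an illustrative assumption, not a value from the text (a thin Keplerian disk of 2--20 light-days, $D_{\rm A}=100$ Mpc, a 130 m baseline along the projected major axis, $\lambda_0=2.2\,\mu$m, and a toy line-to-continuum ratio); the sketch recovers differential phases of a few tenths of a degree:

```python
import numpy as np

# Photon-center and differential-phase sketch for a thin Keplerian BLR disk.
# Assumptions: continuum photon center at the origin; baseline B along the
# disk's projected major axis; u = B/lambda_0 evaluated at the line center.
LD_M, MPC_M = 2.59020e13, 3.0857e22
R_in, R_out, i0 = 2.0, 20.0, np.radians(30.0)   # radii in light-days
D_A = 100.0 * MPC_M                              # assumed distance
B, lam0 = 130.0, 2.2e-6                          # metres
v_scale = 0.01                                   # V_K(R_in)/c, toy value

rng = np.random.default_rng(2)
R = rng.uniform(R_in, R_out, 300_000)
phi = rng.uniform(0.0, 2 * np.pi, R.size)
v = v_scale * (R / R_in)**-0.5 * np.sin(i0) * np.cos(phi)   # v_los / c
x_sky = R * np.cos(phi) * LD_M / D_A    # angular offset along major axis (rad)

lam = lam0 * (1.0 + v)
half = 1.5 * v_scale
bins = lam0 * np.linspace(1.0 - half, 1.0 + half, 41)
line, _ = np.histogram(lam, bins)
cont = np.full(line.size, line.max())   # flat continuum; peak f_line = 0.5
eps_x = np.array([x_sky[(lam >= a) & (lam < b)].mean() if n else 0.0
                  for a, b, n in zip(bins[:-1], bins[1:], line)])
f_line = line / (line + cont)
phase_deg = np.degrees(-2 * np.pi * (B / lam0) * f_line * eps_x)
print(phase_deg.min(), phase_deg.max())
```

Because the photon-center offset is diluted by $\fline$, the resulting phases sit at the $\sim 0.1^{\circ}$ level for this toy geometry, which is why high-precision differential phase measurements are needed.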
Given the geometry and kinematics of the BLR, the spectroastrometric signals can therefore be calculated. \section{Results: SA and RM signals} \label{sec:results} It is known that nearly Keplerian disks dominated by the potential of the central source favor the $m=1$ mode, namely a single spiral arm \citep{Adams1989,Shu1990,Lee2019}. We can obtain $\omega$ from the eigenvalue problem of Equation (\ref{eq:U}), as well as the perturbed component ($\sigma_{1}/\sigma_{0}$) of the density waves. In this paper, we focus only on the general patterns of the arms for nearly Keplerian rotating disks, and we leave their growth rates, connected with the imaginary part of $\nu=\left(\omega-\Omega\right)/\kappa$, to the second paper of this series (Du et al. 2022, in preparation). The dispersion relation, $(\omega-\Omega)^{2}=\kappa^{2}+k^{2}a_{0}^{2}-2\pi G|k|\sigma_{0}$ \citep{Lin1979}, can be used to give a rough estimate of the tightness of winding of the spiral arms; we have \begin{equation} k=-k_{0}\left[1\pm\sqrt{1-Q_{\rm disk}^{2}(1-\nu^{2})}\right],\quad k_{0} =\frac{(2 \pi G)^{1/2n} Q_{\rm disk}^{(1-2n)/2n}}{K_0^{1/2} \Omega^{(1-n)/n}}. \end{equation} Pitch angles of the spiral arms are determined by $\tan i=1/kR$ for given $n$, $K_{0}$, and $Q_{\rm disk}$, and $k_{0}R$ can be taken as representative of the global pitch angles. In order to show the spiral arms conveniently, we use the proxy of pitch angles defined by $\overline{k_0 R}=\left(R_{\rm out}-R_{\rm in}\right)^{-1} \int_{R_{\rm in}}^{R_{\rm out}} k_{0}R\, dR$ along the radial axis as an input parameter in the following calculations, rather than $K_0$. Moreover, the validity of the tightly wound approximation can be readily judged from this parameter. Given $\mathdotM$ and $\bhm$, we show the dependence of $\overline{k_0 R}$ on $K_0$ and $n$ in Appendix \ref{app:wavenumbers}.
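The dispersion-relation estimate can be made concrete in a toy limit. In the sketch below we assume Keplerian rotation and a constant aspect ratio $H/R$; with the Toomre parameter written as $Q_{\rm disk}=\kappa a_{0}/\pi G\sigma_{0}$, the quadratic in $|k|$ gives $k_{0}=\kappa/a_{0}Q_{\rm disk}$, so that $k_{0}R=(H/R)^{-1}Q_{\rm disk}^{-1}$ is constant with radius (these closures are our assumptions, not the paper's $K_{0}$-parametrization):

```python
import numpy as np

# WKB wavenumber from the dispersion relation with k0 = kappa/(a0 * Q_disk).
# Assumptions: Keplerian rotation and constant aspect ratio h = H/R, which
# make k0*R = 1/(h*Q_disk) independent of radius.
h, Q = 0.05, 1.0
R = np.linspace(1.0, 10.0, 500)
kappa = R**-1.5                   # = Omega for Keplerian rotation (G M_bh = 1)
a0 = h * R * kappa                # sound speed for H = a0 / Omega
k0 = kappa / (a0 * Q)

nu = 0.0                          # evaluate at corotation for illustration
k = k0 * (1.0 + np.sqrt(1.0 - Q**2 * (1.0 - nu**2)))   # short-wave branch
pitch = np.degrees(np.arctan(1.0 / (k * R)))           # tan i = 1/(k R)

k0R_bar = (k0 * R).mean()         # uniform grid: mean equals the radial average
print(k0R_bar, pitch.mean())
```

For $H/R=0.05$ and $Q_{\rm disk}=1$ this gives $\overline{k_{0}R}=20$ and a pitch angle of $\arctan(1/20)\approx 2.9^{\circ}$, i.e. tightly wound arms, consistent with the trend that larger $\overline{k_{0}R}$ means tighter winding.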
In general, decreasing $K_0$ and $n$ leads to increases of $\overline{k_0 R}$, and $\overline{k_0 R}$ depends only slightly on $\mathdotM$ and $\bhm$. This is caused by the dependence of the inner and outer boundaries on $\mathdotM$ and $\bhm$ (see Eqn \ref{eq:Rtorus} and Appendix \ref{app:wavenumbers}). As a representative case, we focus on $\mathdotM = 1.0$, $\bhm = 10^8 M_{\odot}$, and $Q_{\rm disk}=1$ in the present paper. Figure \ref{fig:spiral-arm} shows the spiral arms of several cases with different $K_0$ and polytropic index $n$ for the $m=1$ mode. In this figure, panels in the same row have the same $\overline{k_0 R}$. The corresponding eigenvalues ($\Omega_{p}=\omega/m$) are marked in the left corners of the individual panels. Although the BLR gas may not be illuminated by the central ionizing radiation if $n < 3$ (see more details in Section \ref{sec:BLR_structure}), we still show the cases with $n=2$ in Figure \ref{fig:spiral-arm} for comparison. It is obvious that the arms wind more tightly if $\overline{k_0 R}$ is larger. In addition, the contrast of the perturbations in the central regions becomes clearer as $n$ increases. Figure \ref{fig:profile} shows the emission-line profiles of the case with $n=4$ and $\overline{k_0 R}=5.0$ (the arm pattern in the upper-right corner {\ccyan in Figure~\ref{fig:spiral-arm}}) for different azimuthal and inclination angles ($\theta$ and $i_{0}$) of the line of sight. The maximum value of $\sigma_1/\sigma_0$ is fixed at $(\sigma_1/\sigma_0)_{\rm max} = 0.2$; real values may be smaller or larger than adopted here. The azimuthal angle $\theta=0^{\circ}$ corresponds to a line of sight along which the observer views the BLR pattern in Figure \ref{fig:spiral-arm} from the right side, and the azimuthal angle increases counter-clockwise (the rotational velocity of the gas is also counter-clockwise).
For comparison, the profiles of the corresponding unperturbed disks without spiral arms are superposed in Figure~\ref{fig:profile}. As expected, the unperturbed (geometrically thin) disk structure of the BLR generates symmetric double-peaked profiles. The double peaks are blended if the disk is observed from a nearly face-on direction (e.g., $i_0=20^{\circ}$ in Figure \ref{fig:profile}). The line profiles of the disks with spiral arms are obviously asymmetric. For example, in the cases with $\theta=45^{\circ}$, the blue peaks are significantly higher than the red ones because the bright parts of the arms are approaching. As $\theta$ increases from $0^{\circ}$, the asymmetry first becomes stronger until $\theta\sim45^{\circ}-90^{\circ}$ and then weakens. The width of the line profile increases with the inclination angle. The differences between the profiles with and without spiral arms are also shown in Figure \ref{fig:profile}. {\ccyan If $\sigma_1/\sigma_0$ is larger, the asymmetry of the profiles becomes stronger, and vice versa.} In Figure \ref{fig:profile}, we also calculate the spectroastrometric signals (orange lines), which are detectable by GRAVITY/GRAVITY+ on VLTI. The signals differ significantly from the standard $S$-curves. The $S$-curves of the unperturbed BLR disks are symmetric, in the sense that the amplitudes of the blue and red peaks are the same. The phase curves of the BLR with the spiral arm for different azimuthal and inclination angles are asymmetric, just like their emission-line profiles. The differences between the perturbed and unperturbed BLR disks are also shown as residuals in Figure \ref{fig:profile}; they are functions of the inclination and azimuthal angles. Compared with the unperturbed BLR disk, the amplitudes of the blue troughs in the cases with the spiral arm are stronger, and those of the red peaks are weaker.
Similar to the line profiles, the asymmetry of the phase curve first increases as $\theta$ becomes larger, and then decreases after $\theta\sim45^{\circ}-90^{\circ}$. The width of the phase curve increases with increasing inclination angle. The current GRAVITY/VLTI can conveniently detect differential phase angles down to $\sim 0.1^{\circ}$, and GRAVITY+, its next generation\footnote{See more details at \url{https://www.mpe.mpg.de/7480772/GRAVITYplus_WhitePaper.pdf}}, will be able to detect features much weaker than the present ones in the near future. Figure \ref{fig:BLR} shows the transfer functions (velocity-delay maps) of the spiral arm with $n=4$ and $\overline{k_0 R}=5$ for different azimuthal and inclination angles. For a brief comparison, we refer readers to the figures showing the transfer functions of Keplerian disks in \cite{Welsh1991} and \cite{Wang2018}: a Keplerian disk shows a symmetric bell-like feature. For clarity, in Figure \ref{fig:BLR} we only plot the response of the density perturbations ($\sigma_{1}$). For comparison, an example of the transfer function for the unperturbed disk is shown in Figure \ref{fig:vdm_unperturbed} in Appendix \ref{app:transfer_function_unperturbed}. As shown in Figure \ref{fig:BLR}, the major influence of the spiral arm is the variation of the bell's waist. We find that the higher and lower density perturbations (wave crest and wave trough; see also Figure \ref{fig:spiral-arm}) generate positive and negative signals (stronger and weaker responses), respectively, superposed on the transfer functions of Keplerian disks. Actually, there is some deficit of the right waist in the bell-like transfer function of NGC 5548, as shown in \cite{Xiao2018}. When $\theta=0^{\circ}$, the positive signals are mainly located on the near side (smaller time lags) and the negative signals tend to be at larger time lags.
As $\theta$ increases from $0^{\circ}$, the positive and negative signals rotate clockwise. Measuring the rotation of these features is one of the keys to testing the presence and dynamics of spiral arms in BLRs. For cases with $Q_{\rm disk}=1+\Delta Q$, where $\Delta Q<0.3$, we have done the calculations and found that the general patterns of the arms do not change significantly for given $\overline{k_{0}R}$; we omit the figures. \section{Discussions} \subsection{Tight-winding approximation}\label{sec:complex_BLR} Employing the traditional WKB approximation \cite[e.g.,][]{Lin1979}, we apply, for the first time, the theory of density waves to BLRs to compute broad emission-line profiles, differential phase curves, and velocity-delay maps. The validity of this approximation can be estimated simply by comparing the global pitch-angle proxy $\overline{k_{0}R}$, as shown in Figure \ref{fig:spiral-arm}, with the detailed calculations in Du et al. (2022, in preparation), which relax the WKB approximation for the more loosely wound cases. We find that the difference in pitch angles is $\lesssim 20\%$ for $\overline{k_{0}R}=5$. It is generally believed that the WKB approximation is good enough for $\overline{k_{0}R}\gtrsim 5$, which is consistent with \cite{Lin1979}. Moreover, the non-linear effects have been extensively studied by \cite{Lee1999}, who concluded that single-armed density waves can exist even in nearly Keplerian disks with only weak self-gravity. Pitch angles of the arms can be significantly larger when non-linear effects are included. Relaxing the WKB approximation following the scheme of \cite{Adams1989} will generate more general results for AGN BLR issues, as shown in a forthcoming paper (Du et al. 2022, in preparation). On the other hand, some simulations show that $m=1$ is also favored when the disk mass is comparable with that of the central SMBH \cite[e.g.,][]{Lodato2004, Kratter2016}.
In such a context, the disk will be thicker than in the present cases, so that the Poisson equation has a more complicated expression than Eqn. (\ref{eq:Poisson}), and density waves will be modified by the radial self-gravity of the disks. \subsection{Observational appearance} The present profiles of broad emission lines can be conveniently compared with observations. The spiral arms suggested to explain the asymmetric profiles of some AGNs \citep[e.g.,][]{Eracleous1994,Storchi-Bergman2017} could originate from self-gravitational instability. The observational appearance should be tested by examining individual AGNs and statistical properties. Asymmetric profiles of Palomar-Green quasars are common, and statistically, the asymmetries are significantly correlated with the strengths of \Feii\, (which clearly depend on accretion rates $\mathdotM$; see Figure 5 in \citealt{Boroson1992}). This has a clear implication that the homogeneity of ionized gas distributions is governed by accretion rates. \cite{Marziani2003, Marziani2009, Marziani2010} investigated the line profiles of low- and intermediate-redshift AGNs, and concluded that the line asymmetry changes systematically along the so-called quasar ``eigenvector 1 sequence''. The parameters $n$, $K_{0}$, and $Q_{\rm disk}$ may depend directly on $\mathdotM$ in reality, resulting in a dependence of the arms on accretion rate, which could probably explain these phenomena. We also note that the asymmetries of H$\beta$ profiles change with time, from red to blue asymmetries or the {\ccyan reverse}, which could be naturally explained by the pattern motion of $m=1$ mode spiral arms.
High-fidelity RM of AGNs, employing high spectral resolution and homogeneous cadence, will finally reveal the sub-structures of BLRs through detailed 2D transfer functions of the response to the ionizing sources \citep{Welsh1991,Horne2004,Wang2018,Songsheng2019,Songsheng2020}, as will the spectroastrometric observations of GRAVITY/VLTI with the predicted characteristics. Detailed comparisons with observations will be deferred to a future paper. However, the true situation can be more complicated than the simplified model adopted here, which could make the density waves (spiral arms) more complex, especially in the cases of radiation pressure-driven warped disks \citep{Pringle1996}, MRI-driven turbulence or other instabilities \citep[e.g., a brief review in][]{Ogilvie2013}, or star formation \citep{Shlosman1989, Collin1999, Gammie2001, Collin2008, Wang2011, Wang2012}. Magnetic fields could be very important in some cases. Moreover, fast cooling could make the disk suffer from violent instability, condense into discrete clouds, and even generate filamentary spiral patterns \cite[e.g.,][]{Gammie2001, Rice2003, Rice2011, Kratter2016, Brucy2021}. However, on one hand, heating caused by MRI or radiation from the inner region may balance the cooling; on the other hand, from the observational perspective, the high-resolution spectroscopy of \cite{Arav1997, Arav1998} provides a lower limit to the number of clouds in BLRs and excludes the possibility that BLRs consist of ``discrete'' clouds. However complicated the physics in BLRs may be, future detections will advance our understanding of their mysteries. \subsection{Relation between BLR and accretion disk}\label{sec:BLR-disk} The origin of BLRs and their relation with accretion disks are still a puzzle. In the present paper, we assume that the BLR is the illuminated surface layer of the accretion disk (in the SG region). This kind of assumption was also adopted by, e.g., \cite{goad2012}.
Only in the $n>3$ cases can the gas on the surface be illuminated by the central ionizing photons. It should be noted that radiation pressure from the disk may puff up the height of this region \citep[e.g.,][]{Emmering1992, Murray1995, Chiang1996, Czerny2011, Elvis2017, Baskin2018} and provide a covering factor large enough to explain the BLR observations, which would ease the restriction on the polytropic index. However, a thicker disk may be more stable against non-axisymmetric perturbations, reducing the lifetime of the spiral arms \citep{Ghosh2021}. The observations in \cite{Horne2021} have indicated azimuthal structures in BLRs, which constrains the lifetime of the arms not to be too short. The influence of thick BLR layers should be investigated both theoretically and observationally in the future. \subsection{Self-gravity of the accretion disks} Maintaining density waves requires $Q_{\rm disk}\sim 1-1.3$ \citep[e.g.,][]{Lodato2004}, which can be sustained by the several mechanisms mentioned in Section 1; it is expected that observations can distinguish among them. Star formation in self-gravitating disks could support this state of $Q_{\rm disk}$, as suggested by, e.g., \cite{Shlosman1989,Collin1999,Thompson2005,Wang2011}. As a self-regulation, higher star formation rates will decrease the surface density of the disks, increasing $Q_{\rm disk}$, whereas lower star formation rates increase the surface density, decreasing $Q_{\rm disk}$. In addition to the release of gravitational energy by the accreting gas, star formation and supernova explosions will supply additional energy to this region. Fortunately, as independent evidence for this, AGNs and quasars are known to be metal-rich, providing observational constraints on these processes. It would be interesting to test the potential dependence of broad emission-line profiles on the metallicity ($Z$).
As a hint, the asymmetries of H$\beta$ profiles strongly correlate with the \Feii\, strength (${\cal R}_{\rm Fe}$) \citep[see Figure 5 in][]{Boroson1992}, while ${\cal R}_{\rm Fe}$ is a proxy of accretion rates \citep[e.g.,][]{Boroson1992,Marziani2003,Hu2012,Shen2014} correlating with $Z$. Finally, self-gravity has been neglected in the vertical direction of the BLR in computing its height (i.e., in the relation $H=a_{0}\Omega^{-1}$). A sophisticated treatment of the vertical structure will include self-gravity as well as radiation pressure from viscous dissipation and star formation (and supernova explosions). We leave this to a future paper. \section{Conclusions} There is growing evidence for the appearance of spiral arms in the broad-line regions of active galactic nuclei. In this paper, using the WKB approximation, we start from the perturbation equations to study the dynamics and structures of ionized gas in {\ccyan the SG} regions around SMBHs, which constitute the major parts of BLRs. We calculate the major properties of density waves excited in the $m=1$ mode. The features of density waves can be detected through asymmetric profiles, differential interferometric signals (by GRAVITY/GRAVITY+ on VLTI), and 2D transfer functions from RM campaigns. In particular, the patterns of spiral arms in the 2D transfer functions are unique owing to the rotational motion of the arms. These features help to better understand the physical connection between SG disks and BLRs in AGNs. Under the hypothesis that SG regions maintain the Toomre parameter $Q\sim 1$, we show that the excited density waves arising from perturbations of SG disks are responsible for the inhomogeneous distributions of BLRs. It is possible to observationally test density waves in BLRs with current instruments. Our preliminary results show that density waves in BLRs provide a new avenue for studying BLR structures and dynamics, as well as for resolving the long-standing issues of BLRs.
\vglue 0.5cm \begin{acknowledgements} We are grateful to an anonymous referee for a large number of comments and suggestions in a helpful report that improved the manuscript. We acknowledge support from the National Key R\&D Program of China (grant 2016YFA0400701), from the NSFC through grants {NSFC-11991054, -11991051, -12022301, -11873048, -11833008, -11573026}, from Grant No. QYZDJ-SSW-SLH007 of the Key Research Program of Frontier Sciences, CAS, and from the Strategic Priority Research Program of the Chinese Academy of Sciences, grant No. XDB23010400. \end{acknowledgements} \begin{appendix} \section{Basic equations} \label{app:basic_equations} We start from the ideal fluid equations in cylindrical coordinates ($R,\varphi,z$). For the reader's convenience, we list the classical equations, which can be found in \cite{Lau1978}, \cite{Lin1979} and \cite{Binney2008}. The continuity equation reads \begin{equation}\label{eq:continuity} \frac{\partial\sigma}{\partial t}+ \frac{1}{R}\frac{\partial}{\partial R}(R\sigma u)+ \frac{1}{R}\frac{\partial}{\partial\varphi}(\sigma \vup)=0, \end{equation} and the equations of motion are \begin{equation} \frac{\partial u}{\partial t}+ u\frac{\partial u}{\partial R}+ \frac{\vup}{R}\frac{\partial u}{\partial \varphi}-\frac{\vup^{2}}{R} =-\frac{\partial}{\partial R}(\mathscr{V}_{0}+\psi+h), \end{equation} and \begin{equation} \frac{\partial \vup}{\partial t}+ u\frac{\partial \vup}{\partial R}+ \frac{\vup}{R}\frac{\partial \vup}{\partial \varphi}+\frac{u\vup}{R} =-\frac{1}{R}\frac{\partial}{\partial \varphi}(\psi+h), \end{equation} where $\mathscr{V}_{0}$ is the potential. We note that viscosity is neglected in the above equations, which is valid for a quasi-Keplerian rotating disk as in the standard disk model \citep{Shakura1973}.
\section{Coefficients} \label{app:coefficients} The coefficients are given as follows: \begin{equation} \calA=-\frac{1}{R}\frac{d\ln{\mathscr{A}}}{d\ln R},\quad {\mathscr{A}}=\frac{\kappa^{2}(1-\nu^{2})}{\sigma_{0}R},\quad \nu=\frac{\omega-m\Omega}{\kappa}; \end{equation} \begin{equation} \calB=-\frac{m^{2}}{R^{2}}-\frac{4m\Omega(R\nu^{\prime})}{\kappa R^{2}\left(1-\nu^{2}\right)}+ \frac{2m\Omega}{R^{2}\kappa\nu}\frac{d\ln\left(\kappa^2/\sigma_{0}\Omega\right)}{d\ln R}, \,\, \calC=-\frac{\kappa^{2}\left(1-\nu^{2}\right)}{a_{0}^{2}}. \end{equation} \section{Wave numbers of spiral arms} \label{app:wavenumbers} We present the dependence of $\overline{k_0 R}$ on the parameter $K_0$ and the polytropic index $n$ in Figure \ref{fig:k0r}. $\overline{k_0 R}$ decreases as $K_0$ and $n$ increase. In addition, $\overline{k_0 R}$ increases slightly if $\mathdotM$ and $\bhm$ increase. This dependence on $\mathdotM$ and $\bhm$ in the present model is mainly caused by the dependence of the inner and outer boundaries on these two parameters (see Eqn \ref{eq:Rtorus}). In reality, $K_0$, $n$, and even $Q_{\rm disk}$ may depend on $\mathdotM$ and $\bhm$ more directly. \section{An example of the transfer function for the unperturbed disk} \label{app:transfer_function_unperturbed} In Figure \ref{fig:BLR} of Section \ref{sec:results}, we present the transfer functions of the spiral arms ($\sigma_1$). Here, for comparison, we provide an example of the transfer function of the unperturbed disk ($\sigma_0$) in Figure \ref{fig:vdm_unperturbed}. \end{appendix}
Title: Jupiter and Saturn as Spectral Analogs for Extrasolar Gas Giants and Brown Dwarfs
Abstract: With the advent of direct imaging spectroscopy, the number of spectra from brown dwarfs and extrasolar gas giants is growing rapidly. Many brown dwarfs and extrasolar gas giants exhibit spectroscopic and photometric variability, which is likely the result of weather patterns. However, for the foreseeable future, point-source observations will be the only viable method to extract brown dwarf and exoplanet spectra. Models have been able to reproduce the observed variability, but ground truth observations are required to verify their results. To that end, we provide visual and near-infrared spectra of Jupiter and Saturn obtained from the \emph{Cassini} VIMS instrument. We disk-integrate the VIMS spectral cubes to simulate the spectra of Jupiter and Saturn as if they were directly imaged exoplanets or brown dwarfs. We present six empirical disk-integrated spectra for both Jupiter and Saturn with phase coverage of $1.7^\circ$ to $133.5^\circ$ and $39.6^\circ$ to $110.2^\circ$, respectively. To understand the constituents of these disk-integrated spectra, we also provide end member (single feature) spectra for permutations of illumination and cloud density, as well as for Saturn's rings. In tandem, these disk-integrated and end member spectra provide the ground truth needed to analyze point source spectra from extrasolar gas giants and brown dwarfs. Lastly, we discuss the impact that icy rings, such as Saturn's, have on disk-integrated spectra and consider the feasibility of inferring the presence of rings from direct imaging spectra.
https://export.arxiv.org/pdf/2208.05541
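The disk-integration step described in the abstract amounts to collapsing each VIMS spectral cube over its two spatial axes, weighted by the solid angle per pixel, so that the resolved planet is reduced to the single point-source spectrum a direct-imaging instrument would record. A minimal sketch (the array layout and uniform pixel solid angle are assumptions of this sketch; the real pipeline also handles limb geometry and photometric calibration):

```python
import numpy as np

def disk_integrate(cube, pixel_solid_angle):
    """Collapse a spectral cube I(lambda, y, x) into one spectrum.

    cube              : array of shape (n_wave, ny, nx), intensity per pixel
    pixel_solid_angle : solid angle subtended by one pixel (scalar)

    Returns F(lambda) = sum over spatial pixels of I * dOmega, i.e. the
    flux density of the planet treated as an unresolved point source.
    """
    return cube.sum(axis=(1, 2)) * pixel_solid_angle
```

Off-disk pixels contribute zero intensity, so the sum over the full frame is equivalent to a sum over the illuminated disk.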
Title: Millimeter Gap Contrast as a Probe for Turbulence Level in Protoplanetary Disks
Abstract: Turbulent motions are believed to regulate angular momentum transport and influence dust evolution in protoplanetary disks. Measuring the strength of turbulence through gas line observations is challenging because it requires data with high spatial and spectral resolution, together with an exquisite determination of the temperature. In this work, taking the well-known HD 163296 disk as an example, we investigated the contrast of gaps identified in high angular resolution continuum images as a probe for the level of turbulence. With self-consistent radiative transfer models, we simultaneously analyzed the radial brightness profiles along the disk major and minor axes, and the azimuthal brightness profiles of the B67 and B100 rings. By fitting all the gap contrasts measured from these profiles, we constrained the gas-to-dust scale height ratio $\Lambda$ to be $3.0_{-0.8}^{+0.3}$, $1.2_{-0.1}^{+0.1}$ and ${\ge}\,6.5$ for the D48, B67 and B100 regions, respectively. The varying gas-to-dust scale height ratios indicate that the degree of dust settling changes with radius. The inferred values for $\Lambda$ translate into a turbulence level of $\alpha_{\rm turb}\,{<}\,3\times10^{-3}$ in the D48 and B100 regions, which is consistent with previous upper limits set by gas line observations. However, turbulent motions in the B67 ring are strong, with $\alpha_{\rm turb}\,{\sim}1.2\,{\times}\,10^{-2}$. Due to the degeneracy between $\Lambda$ and the depth of dust surface density drops, the turbulence strength in the D86 gap region is not constrained.
https://export.arxiv.org/pdf/2208.09230
\ensubject{subject} \ArticleType{Article} \Year{2017} \Month{January} \Vol{60} \No{1} \DOI{10.1007/s11432-016-0037-0} \ArtNo{000000} \ReceiveDate{January 11, 2017} \AcceptDate{April 6, 2017}
\title{Millimeter Gap Contrast as a Probe for Turbulence Level in Protoplanetary Disks}{Millimeter Gap Contrast as a Probe for Turbulence Level in Protoplanetary Disks}
\author[1,2]{Yao Liu}{{yliu@pmo.ac.cn}}
\author[3]{Gesa H.-M. Bertrang}{}
\author[3]{Mario Flock}{}
\author[4,5]{Giovanni P. Rosotti}{}
\author[1]{Ewine F. van Dishoeck}{}
\author[6]{\\ Yann Boehler}{}
\author[7]{Stefano Facchini}{}
\author[8]{Can Cui}{}
\author[9]{Sebastian Wolf}{}
\author[1]{Min Fang}{}
\AuthorMark{Liu, Y.}
\AuthorCitation{Liu, Y.; Bertrang, G.H.-M.; Flock, M.; Rosotti, G.P.; van Dishoeck, E.F.; Boehler, Y.; Facchini, S.; Cui, C.; Wolf, S.; Fang, M.}
\address[1]{Max-Planck-Institut f\"ur Extraterrestrische Physik, Giessenbachstrasse 1, 85748 Garching, Germany}
\address[2]{Purple Mountain Observatory \& Key Laboratory for Radio Astronomy, Chinese Academy of Sciences, Nanjing 210023, China}
\address[3]{Max-Planck-Institut f\"ur Astronomie, K\"onigstuhl 17, D-69117 Heidelberg, Germany}
\address[4]{Leiden Observatory, Leiden University, P.O. Box 9531, NL-2300 RA Leiden, the Netherlands}
\address[5]{School of Physics and Astronomy, University of Leicester, Leicester LE1 7RH, UK}
\address[6]{Univ. Grenoble Alpes, CNRS, IPAG, F-38000 Grenoble, France}
\address[7]{European Southern Observatory, Karl-Schwarzschild-Str. 2, 85748 Garching, Germany}
\address[8]{DAMTP, University of Cambridge, CMS, Wilberforce Road, Cambridge CB3 0WA, UK}
\address[9]{Institut f\"ur Theoretische Physik und Astrophysik, Christian-Albrechts-Universit\"at zu Kiel, Leibnizstr. 15, 24118 Kiel, Germany}
\abstract{Turbulent motions are believed to regulate angular momentum transport and influence dust evolution in protoplanetary disks.
Measuring the strength of turbulence is challenging through gas line observations because of the requirement for high spatial and spectral resolution data, and an exquisite determination of the temperature. In this work, taking the well-known HD\,163296 disk as an example, we investigated the contrast of gaps identified in high angular resolution continuum images as a probe for the level of turbulence. With self-consistent radiative transfer models, we simultaneously analyzed the radial brightness profiles along the disk major and minor axes, and the azimuthal brightness profiles of the B67 and B100 rings. By fitting all the gap contrasts measured from these profiles, we constrained the gas-to-dust scale height ratio $\Lambda$ to be $3.0_{-0.8}^{+0.3}$, $1.2_{-0.1}^{+0.1}$ and ${\ge}\,6.5$ for the D48, B67 and B100 regions, respectively. The varying gas-to-dust scale height ratios indicate that the degree of dust settling changes with radius. The inferred values for $\Lambda$ translate into a turbulence level of $\alpha_{\rm turb}\,{<}\,3\times10^{-3}$ in the D48 and B100 regions, which is consistent with previous upper limits set by gas line observations. However, turbulent motions in the B67 ring are strong with $\alpha_{\rm turb}\,{\sim}1.2\,{\times}\,10^{-2}$. Due to the degeneracy between $\Lambda$ and the depth of dust surface density drops, the turbulence strength in the D86 gap region is not constrained.} \keywords{protoplanetary disks, radiative transfer, planet formation} \PACS{97.82.Jw, 95.30.Jx, 97.82.Fs} \begin{multicols}{2} \section{Introduction} \label{sec:intro} Protoplanetary disks, as the birthplace of planetary systems, always exhibit turbulent motions \cite{Lesur2022}. 
There are several mechanisms currently discussed as main contributors: hydrodynamical instabilities such as the vertical shear instability \cite{Goldreich1967, Fricke1968, Flock2017}, the convective overstability \cite{Lyra2014, Klahr2014}, the zombie vortex instability \cite{Marcus2015, Marcus2016}, and magneto-hydrodynamical instabilities like the magnetorotational instability \cite{Balbus1991, Balbus1996, Balbus1998}. Turbulence regulates the angular momentum transport to sustain gas accretion onto the central star \cite{Shakura73a,Pringle1981}, influences the evolution of dust grains in disks \cite{Birnstiel2016}, and plays an important role in controlling the dynamics of embedded planets \cite{Kley2012}. Hence, a detailed understanding of disk evolution and planet formation requires knowledge of the strength of turbulent motions. Placing constraints on the turbulence level is also important in interpreting observational data with numerical simulations. In recent years, high-resolution images at infrared and (sub-)millimeter wavelengths have shown that gaps and rings are frequently observed in planet-forming disks \cite{Avenhaus2018, Long2018, Andrews2018}. These substructures are often thought to be created by planet-disk interaction \cite{Dipierro2018,Zhang2018,Liu2019}. The description of the underlying physics relies heavily on (magneto-)hydrodynamical simulations in which turbulence strongly affects the resulting depth and number of gaps \cite{Pinilla2012a,Flock2015,Rosotti2016,Bertrang2018,Dong2018}. As a consequence, the inferred properties (e.g., mass and location) and number of the ``unseen'' (proto)planets depend on the input strength of turbulence in the simulation. 
However, measuring turbulence with gas line observations is very challenging because on the one hand it demands data at high spatial and spectral resolution, and on the other hand thermal motion usually dominates the broadening of lines, leading to substantial difficulties when separating its contribution from the measured total line width \cite{Teague2016}. Therefore, the measurement of turbulence via gas line data has so far been limited to a small number of disks, revealing low turbulent velocities typically below $5\%\,{\sim}\,10\%$ of the local sound speed ($c_s$) \cite{Guilloteau2012,Flaherty2015,Flaherty2017,Teague2018a,Flaherty2020}. An exception is the DM Tau disk, where the measured turbulent velocity approaches $0.25\,{\sim}\,0.33\,c_s$ \cite{Flaherty2020}. Turbulence also affects the motion of the dust, either in the radial direction or in the vertical one. Dullemond \& Penzlin \cite{Dullemond2018b} suggested that the dependence of turbulence on the dust-to-gas mass ratio together with the radial drift of dust particles could be the origin of the ring structures commonly found in protoplanetary disks. By comparing the width of the millimeter continuum emission ring with the pressure scale height of the disk, Dullemond et al. \cite{Dullemond2018} found strong evidence of dust trapping operating in all the rings analyzed in their sample, and put constraints on the quantity $\alpha_{\rm turb}/{\rm St}$, where $\alpha_{\rm turb}$ is the turbulence parameter, and $\rm{St}$ is the Stokes number of the dust particles. Vertical stirring induced by turbulent motions counteracts the settling of dust grains. Theoretically speaking, millimeter continuum emission is dominated by millimeter-sized dust particles that are located near the midplane of the disk. However, material residing in the adjacent rings, located above the midplane, would hide the gap due to beam smearing. 
How severe this smoothing effect is depends on the scale height of millimeter-sized dust grains \cite{Pinte2016}. In more turbulent disks, dust grains are more vertically extended, leading to a more substantial reduction of the gap depth. Recently, Doi \& Kataoka \cite{Doi2021} discussed the feasibility of analyzing the intensity variation as a function of azimuth on the rings to estimate the degree of dust settling. When the disk is optically thin and viewed at an oblique inclination, the optical depth $\tau$ along the line of sight differs between the major and minor axes. Such a difference in $\tau$ forms a peak and dip in the brightness profile at the azimuthal angle of the major and minor axis, respectively. The ratio between the brightness peak and dip depends on the millimeter dust scale height. The authors fitted the azimuthal brightness profiles of two rings in the HD\,163296 disk, and constrained the gas-to-dust scale height ratio and therefore the turbulence level. In their analysis, the disk is assumed to be vertically isothermal with a fixed midplane temperature profile. How such a simplification affects the result, particularly for rings with a large millimeter dust scale height (i.e., high-turbulence regions), needs to be investigated. In this work, we take the HD\,163296 disk as an example to investigate in detail the link between millimeter gap contrasts and the strength of turbulence, and highlight some features and degeneracies that can be encountered. Sect.~\ref{sec:obs} introduces the HD\,163296 disk. The modeling assumptions are presented in Sect.~\ref{sec:modeling}, while the process of dedicated fitting to the ALMA image is described in Sect.~\ref{sec:fitalma}. We discuss our results in Sect.~\ref{sec:discussion}. The paper ends with a summary in Sect.~\ref{sec:summary}. 
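The peak-and-dip effect just described can be reproduced with a toy line-of-sight integration through an inclined, vertically Gaussian ring. This is only an illustrative sketch: the function name and all numbers (ring radius, widths) are ours and not taken from the paper's or Doi \& Kataoka's code.

```python
import numpy as np

def ring_tau(X, Y, inc, R0=67.0, w=8.0, H=5.0):
    """Relative line-of-sight column through a ring with Gaussian radial
    profile (radius R0, width w) and Gaussian vertical profile (scale
    height H), all in AU, viewed at inclination inc (radians).
    Sky coordinates: X along the disk major axis, Y along the projected
    minor axis. Toy geometry only; the opacity is set to unity."""
    s = np.linspace(-200.0, 200.0, 8001)      # path length along the LOS
    y = Y * np.cos(inc) + s * np.sin(inc)     # disk-frame coordinates
    z = -Y * np.sin(inc) + s * np.cos(inc)
    R = np.hypot(X, y)
    rho = np.exp(-0.5 * ((R - R0) / w) ** 2) * np.exp(-0.5 * (z / H) ** 2)
    return np.trapz(rho, s)

inc = np.radians(46.7)                        # HD 163296 inclination
for H in (1.0, 5.0):                          # settled vs thick dust layer
    tau_major = ring_tau(67.0, 0.0, inc, H=H)
    tau_minor = ring_tau(0.0, 67.0 * np.cos(inc), inc, H=H)
    print(H, 1.0 - tau_minor / tau_major)     # azimuthal "gap" contrast
```

For a thin dust layer the two columns converge and the azimuthal contrast vanishes; for a thick layer the minor-axis column is lower, producing dips at the minor-axis azimuths, which is the signature exploited in this work.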
\section{Circumstellar disk of HD\,163296} \label{sec:obs} HD\,163296 is a Herbig Ae star (A1 spectral type) located at a distance of $D\,{=}\,101\,{\pm}\,2\,\rm{pc}$ \cite{gaia2018}. Its mass ($M_{\star}$) and age are $1.9\,M_{\odot}$ and $10.4\,\rm{Myr}$ \cite{Setterholm2018}. It has a luminosity of $L_{\star}\,{=}\,17\,L_{\odot}$, and an effective temperature of $T_{\rm eff}\,{=}\,9250\,\rm{K}$ \cite{Fairlamb2015}. Spatially resolved observations in both the infrared and millimeter regimes have revealed ring structures in the disk around HD\,163296 \cite{Grady2000,Wisniewski2008,Muro-Arena2018,Isella2016,Notsu2019}. Analysis of the interferometric data taken with the Very Large Telescope Interferometer PIONIER and MATISSE yielded brightness asymmetries in the near-infrared emission, which may originate from a vortex near the inner rim ($R\,{\sim}\,0.4\,\rm{AU}$) of the disk \cite{Lazareff2017,Varga2021}. As one of the 20 targets selected in the Disk Substructures at High Angular Resolution Program (DSHARP), HD\,163296 was observed with the Atacama Large Millimeter/submillimeter Array (ALMA) in Band 6 at an unprecedented spatial resolution of $4.8\,{\times}\,3.8\,\rm{AU}$ \cite{Andrews2018}. The rms noise of the fiducial ALMA image generated by the DSHARP team is $\sigma_{\rm rms}\,{=}\,23\,\mu{\rm Jy/beam}$. The continuum image shows a few pairs of concentric rings/gaps, see panel (a) in Figure~\ref{fig:imgres}. The D48 and D86 gaps are located at a radial distance of 48 and 86\,AU, with a width of 20 and 16\,AU, respectively. The B67 and B100 rings are centered at a radial distance of 67 and 100\,AU, with a width of 16 and 12\,AU, respectively \cite{Huang2018}. We extracted the surface brightness along the disk major and minor axes, given the position angle (PA) of $133.33^{\circ}$. 
Along a PA of $99^{\circ}$, there is a crescent-like structure centered at a radial distance of $55\,\rm{AU}$ \cite{Isella2018}, which is probably caused by a Jupiter mass planet \cite{Rodenkirch2021}. Such an asymmetry contaminates the measurement of the gap contrast. Hence, we only considered the data on the semi-major axis to the northwest. On the minor axis, however, an average of both sides of the disk was performed to improve the signal-to-noise ratio. To apply the methodology introduced by Doi \& Kataoka \cite{Doi2021}, we also extracted the azimuthal brightness profiles on the B67 and B100 rings. The reference for the azimuthal coordinate ($\phi$) is given in panel (a) of Figure~\ref{fig:imgres}. The extracted brightnesses are shown with red dots in panels (c)-(f) of Figure~\ref{fig:imgres}. It should be noted that the mechanism responsible for generating the crescent-like structure also likely causes azimuthal perturbations to the B67 ring, which may be one of the reasons why the brightness profile shows non-axisymmetric features. The width between two adjacent points (i.e., 1.5\,AU) is about one third of the ALMA beam, which means that the brightness is first averaged over such a bin size and then extracted. The errors for each of the data points on the major axis, B67 and B100 rings are all set to $23\,\mu{\rm Jy/beam}$, but on the minor axis they are calculated to be $\frac{23}{\sqrt{2}}\,\mu{\rm Jy/beam}$ due to the average of both sides of the disk. 
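The extraction step above (deprojecting the image with the disk inclination and PA, then averaging the brightness in azimuthal bins on a ring) can be sketched as follows. The function name, the bin count, and the assumption of a square, star-centred image are ours; the DSHARP pipeline itself is not reproduced here.

```python
import numpy as np

def azimuthal_profile(image, pix_au, inc_deg, pa_deg, r0, dr, nbins=240):
    """Mean brightness versus azimuth phi on a ring of deprojected
    radius r0 and width dr (both in AU), for a disk viewed at
    inclination inc_deg with position angle pa_deg."""
    ny, nx = image.shape
    y, x = np.indices((ny, nx))
    dx = (x - nx / 2) * pix_au                 # sky-plane offsets in AU
    dy = (y - ny / 2) * pix_au
    pa = np.radians(pa_deg)
    xm = dx * np.sin(pa) + dy * np.cos(pa)     # along the major axis
    ym = (-dx * np.cos(pa) + dy * np.sin(pa)) / np.cos(np.radians(inc_deg))
    r = np.hypot(xm, ym)                       # deprojected radius
    phi = np.degrees(np.arctan2(ym, xm)) % 360.0
    on_ring = np.abs(r - r0) < dr / 2
    edges = np.linspace(0.0, 360.0, nbins + 1)
    idx = np.digitize(phi[on_ring], edges) - 1
    vals = image[on_ring]
    prof = np.array([vals[idx == i].mean() if np.any(idx == i) else np.nan
                     for i in range(nbins)])
    return 0.5 * (edges[:-1] + edges[1:]), prof
```

In the paper the bins are instead set by the 1.5\,AU sampling quoted above, i.e., about one third of the ALMA beam.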
\begin{table*}[!t] \caption{Gap contrasts of the HD\,163296 disk.} \centering \footnotesize \linespread{1.2}\selectfont \begin{tabular}{lcccccccc} \hline \multirow{2}{*}{} & \multicolumn{2}{c}{Major axis} & \multicolumn{2}{c}{Minor axis} & \multicolumn{2}{c}{B67 ring} & \multicolumn{2}{c}{B100 ring} \\ \cline {2-3} \cpartlineleft{4,1em}\cline {5-5} \cpartlineleft{6,1em}\cline {7-7} \cpartlineleft{8,1em}\cline {9-9} & D48 & D86 & D48 & D86 & $\phi=90^{\circ}$ & $\phi=270^{\circ}$ & $\phi=90^{\circ}$ & $\phi=270^{\circ}$ \\ \hline ALMA data & $0.98\,{\pm}\,0.03$ & $0.96\,{\pm}\,0.05$ & $0.94\,{\pm}\,0.02$ & $0.82\,{\pm}\,0.04$ & $0.22\,{\pm}\,0.03$ & $0.21\,{\pm}\,0.03$ & $0.00\,{\pm}\,0.07$ & $0.15\,{\pm}\,0.07$ \\ Model \texttt{I1} & 0.97 & 0.97 & 0.88 & 0.34 & 0.24 & 0.21 & 0.42 & 0.41 \\ Model \texttt{I2} & 0.97 & 0.97 & 0.94 & 0.80 & 0.13 & 0.11 & 0.20 & 0.18 \\ Model \texttt{I3} & 0.97 & 0.97 & 0.94 & 0.87 & 0.11 & 0.10 & 0.11 & 0.11 \\ Model \texttt{I4} & 0.97 & 0.97 & 0.92 & 0.81 & 0.21 & 0.18 & 0.10 & 0.10 \\ \hline \end{tabular} \linespread{1.0}\selectfont \label{tab:gapcont} \end{table*} The gap contrast is defined as $1\,{-}\,I_{\rm min}/I_{\rm max}$, where $I_{\rm min}$ is the minimum brightness within the gap, and $I_{\rm max}$ is the maximum brightness of its immediately exterior ring. The brightness profile of the B67 ring displays two dips at $\phi\,{=}\,90^{\circ}$ and $270^{\circ}$, which resemble gaps. For simplicity of description, we also refer to them as ``gaps'' hereafter in this work. The contrasts are defined as $1\,{-}\,I_{\phi{=}90^{\circ}}/I_{\phi{=}180^{\circ}}$ and $1\,{-}\,I_{\phi{=}270^{\circ}}/I_{\phi{=}180^{\circ}}$. On the B100 ring, the profile is quite flat on the western side, and shows only one ``gap'' at $\phi\,{=}\,270^{\circ}$. In addition to the chi-square ($\chi^2$) metric, the observed gap contrasts summarized in Table~\ref{tab:gapcont} are the key characteristics used to evaluate the quality of fit of our models. 
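The contrast definition above amounts to a few lines of code. In this sketch the search windows for the gap and the exterior ring are purely illustrative, not the paper's exact measurement windows.

```python
import numpy as np

def gap_contrast(radius, brightness, gap_range, ring_range):
    """Gap contrast 1 - I_min/I_max: I_min is the minimum brightness
    inside the gap window, I_max the maximum brightness inside the
    exterior-ring window (both windows in AU)."""
    in_gap = (radius >= gap_range[0]) & (radius <= gap_range[1])
    in_ring = (radius >= ring_range[0]) & (radius <= ring_range[1])
    return 1.0 - brightness[in_gap].min() / brightness[in_ring].max()

# synthetic radial profile with a deep gap around 48 AU
r = np.linspace(30.0, 120.0, 300)
b = 1.0 - 0.9 * np.exp(-0.5 * ((r - 48.0) / 8.0) ** 2)
c = gap_contrast(r, b, gap_range=(38.0, 58.0), ring_range=(59.0, 75.0))
```

For the azimuthal ``gaps'', the same definition applies with $I_{\rm min}$ taken at $\phi\,{=}\,90^{\circ}$ or $270^{\circ}$ and $I_{\rm max}$ at $\phi\,{=}\,180^{\circ}$.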
The difference between gap contrasts measured along the disk major and minor axes is due to a projection effect. Because the disk is geometrically thick and tilted to an inclination of $46.7^{\circ}$, the apparent width of the gap varies with azimuthal angle and is smallest along the minor axis, leading to the lowest gap contrast. \section{Full radiative transfer modeling} \label{sec:modeling} The key to our work is to constrain the scale height of the millimeter-sized dust grains by fitting the contrasts of gaps with self-consistent radiative transfer models, and then to link the scale height to the strength of turbulence. In fact, the HD\,163296 disk has more gaps, i.e., D10 and D145. However, they are either not fully spatially resolved, or show evidence for being multiple gaps \cite{Huang2018}. We will not discuss them in detail throughout the paper, although our modeling methodology automatically captures both features. The radiative transfer models are parameterized in the framework of the \texttt{RADMC-3D} code\footnote{http://www.ita.uni-heidelberg.de/~dullemond/software/radmc-3d/.} \cite{radmc3d2012}. We assume that the disk is passively heated by stellar irradiation. The stellar spectrum is taken from the \texttt{Kurucz} database \cite{Kurucz1994}, assuming a gravity of ${\rm log}\,g\,{=}\,3.5$ and solar metallicity. The remaining model assumptions concern the density distribution and dust opacities, which are described below. \subsection{Dust density distribution} \label{sec:moddens} We consider a disk that extends from an inner radius of $R_{\rm in}\,{=}\,0.4\,\rm{AU}$ to an outer radius of $R_{\rm out}\,{=}\,169\,\rm{AU}$ \cite{Huang2018}. The model has two distinct dust grain populations, i.e., a small grain population (SGP) and a large grain population (LGP). The temperature structure of the disk is mainly governed by the SGP, whereas the LGP dominates the millimeter continuum emission. 
We fixed the mass fraction of the LGP to $f_{\rm LGP}\,{=}\,0.85$, a value that has been commonly used in previous modeling works of protoplanetary disks \cite{andrews2011,Liu2022}. The SGP is assumed to be well-mixed with the underlying gas distribution. Therefore, its scale height is set to the gas scale height ($H_{\rm gas}$) that is solved under the condition of vertical hydrostatic equilibrium. Large dust grains are expected to settle towards the midplane \cite{Dubrulle1995,Dullemond2004}. We characterize the degree of dust settling with the parameter $\Lambda$, and the scale height of the LGP is given by $H_{\rm gas}/{\Lambda}$. The volume density of the dust grains is parameterized as \begin{equation} \rho_{\rm{SGP}}(R,z)\,{=}\,\frac{(1-f_{\rm LGP})\,\Sigma_{\rm d}(R)}{\sqrt{2\pi}\,H_{\rm gas}}\,\exp\left[-\frac{1}{2}\left(\frac{z}{H_{\rm gas}}\right)^2\right], \\ \label{eqn:sgp} \end{equation} \begin{equation} \rho_{\rm{LGP}}(R,z)\,{=}\,\frac{f_{\rm LGP}\,\Sigma_{\rm d}(R)}{\sqrt{2\pi}\,H_{\rm gas}/{\Lambda}}\,\exp\left[-\frac{1}{2}\left(\frac{z}{H_{\rm gas}/{\Lambda}}\right)^2\right], \\ \label{eqn:lgp} \end{equation} where $\Sigma_{\rm d}(R)$ is the dust surface density, and $R$ is the distance from the central star measured in the disk midplane. Studies in the literature usually adopted analytic forms for $\Sigma_{\rm d}(R)$, e.g., a power law or a power law with an exponential taper. However, such simple expressions have been demonstrated to be insufficient to capture the fine-scale features revealed by high-resolution ALMA observations \cite{Pinte2016,Liu2017}. Instead, we build the surface density by iteratively fitting the surface brightnesses at the ALMA wavelength where the optical depth is generally low, see Sect.~\ref{sec:surdens}. \subsection{Dust properties} \label{sec:dustopac} For the dust composition, we made use of the recipe by the DiscAnalysis (\texttt{DIANA}) project \cite{Woitke2016}. 
The dust grains consist of 60\% silicate ($\rm{Mg_{0.7}Fe_{0.3}SiO_{3}}$) \cite{dorschner1995}, 15\% amorphous carbon (BE$-$sample) \cite{Zubko1996}, and 25\% porosity. These percentages are volume fractions of each component, which are used to derive the effective refractive indices of the dust ensemble by applying the Bruggeman mixing rule \cite{Bruggeman1935}. We used a distribution of hollow spheres with a maximum hollow volume ratio of 0.8 \cite{Min2005}. The mean solid density of the dust ensemble $\rho_{\rm grain}\,{=}\,2.1\,\rm{g\,cm}^{-3}$ is estimated from an average between the silicate density ($3.01\,\rm{g\,cm}^{-3}$) and carbon density ($1.8\,\rm{g\,cm}^{-3}$) taking the volume fractions as the weighting factors. The distribution of grain sizes ($a$) follows a power law ${\rm d}n(a)\,{\propto}\,{a^{-3.5}} {\rm d}a$ with a minimum ($a_{\rm{min}}$) and maximum size ($a_{\rm{max}}$). For the SGP, $a_{\rm{min}}$ and $a_{\rm{max}}$ are fixed to $0.01\,\mu{\rm m}$ and $2\,\mu{\rm m}$, respectively. For the LGP, $a_{\rm{min}}$ is set to $2\,\mu{\rm m}$. Regarding $a_{\rm{max}}$, we will set it based on models that can reproduce the observed millimeter spectral slope, see Sect.~\ref{sec:sedmodel}. \subsection{Building the dust surface density} \label{sec:surdens} Previous studies have shown that surface density profiles in simple analytic expressions (e.g., a smooth power law with density drops at the gap locations) have difficulty capturing the detailed features revealed by ALMA \cite{Liu2017,Muro-Arena2018}. Using an iterative procedure, we built the surface densities by reproducing the millimeter surface brightnesses along the disk major axis, which offers the maximum spatial resolution. This approach was introduced by Pinte et al. \cite{Pinte2016}, and several works by other teams demonstrated its success \cite{Muro-Arena2018,Liu2019}. The iterative process consists of the following steps. 
\begin{itemize} \item[a)] We took a starting surface density profile $\Sigma_{\rm d}(R)\,{=}\,\Sigma_{0}\left(R/R_{\rm c}\right)^{-\gamma}{\rm exp}\left[-(R/R_{\rm c})^{2-\gamma}\right]$ with $R_{\rm{c}}\,{=}\,90\,\rm{AU}$ and $\gamma\,{=}\,0.1$ \cite{Isella2016}. For the starting point, we did not introduce any gap, and using other forms will not have a significant impact on the final result. \item[b)] With an initial guess for $H_{\rm gas}$, the dust density distribution is given by Eq.~\ref{eqn:sgp} and \ref{eqn:lgp}. Radiative transfer modeling is performed to obtain the dust temperature. Then, the dust density structure is solved assuming that the disk is in vertical hydrostatic equilibrium. We run the radiative transfer modeling with the new dust density distribution to get the new dust temperature. The iteration for the dust temperature and density goes back and forth, and convergence can be achieved after ${\sim}\,5$ iterations. For the initial choice of $H_{\rm gas}$, we assume $H_{\rm gas}\,{=}\,\sqrt{kT(R)R^3/(GM_{\star}{\mu}m_{p})}$, where $G$ is the gravitational constant, $k$ is the Boltzmann constant, $m_{p}$ is the proton mass, $\mu\,{=}\,2.3$ is the mean molecular weight, and $T(R)=18.7\,{\rm K}\,(R/400\,\rm{AU})^{-0.14}$ is the midplane temperature given by Dullemond et al. \cite{Dullemond2020}. The black solid line in Figure~\ref{fig:s2hgas} shows the initial $H_{\rm gas}$. This step is time-consuming because a smooth temperature structure is required to get the solution for the corresponding dust density. Thus, we use a total number of $3\,{\times}\,10^7$ photons in the simulation. \item[c)] From step b), the gas scale height ($H_{\rm gas}$) is derived self-consistently. Then, we simulate a model image at 1.25\,mm, which is convolved with the ALMA beam that has a size and position angle of $0.048^{\prime\prime}\times0.038^{\prime\prime}$ and $82^{\circ}$, respectively. 
\item[d)] We extract the model surface brightness along the disk major axis to the northwest, identical to what is done on the ALMA image. \item[e)] A ratio as a function of radius $\zeta(R)$ is obtained by dividing the observed brightness profile by the model brightness profile. \item[f)] The surface density used as input for the model is scaled by the point-by-point ratio $\zeta(R)$. The process then goes back to step b). \end{itemize} The iteration for $\Sigma_{\rm d}$ typically converges after about 25 loops, when the change in the model brightness profile is less than 5\% at all radii. \begin{table*}[!t] \centering \footnotesize \begin{threeparttable} \caption{Overview of parameter values for different models.} \label{tab:paras} \doublerulesep 0.1pt \tabcolsep 7pt % \linespread{1.2}\selectfont \begin{tabular}{lcccccccl} \toprule Parameter & Fixed/free & Model \texttt{S1} & Model \texttt{S2} & Model \texttt{I1} & Model \texttt{I2} & Model \texttt{I3} & Model \texttt{I4} & Note \\ \hline $T_{\rm eff}$\,[K] & Fixed & \multicolumn{6}{c}{9250} & Effective temperature \\ $L_{\star}\,[L_{\odot}]$ & Fixed & \multicolumn{6}{c}{17} & Stellar luminosity \\ $D$\,[pc] & Fixed & \multicolumn{6}{c}{101} & Distance \\ $i\,[^{\circ}]$ & Fixed & \multicolumn{6}{c}{46.7} & Disk inclination \\ ${\rm PA\,[^{\circ}]}$ & Fixed & \multicolumn{6}{c}{133.33} & Position angle \\ $R_{\rm in}$\,[AU] & Fixed & \multicolumn{6}{c}{0.4} & Disk inner radius \\ $R_{\rm out}$\,[AU] & Fixed & \multicolumn{6}{c}{169} & Disk outer radius \\ $f_{\rm LGP}$ & Fixed & \multicolumn{6}{c}{0.85} & Mass fraction of the LGP \\ $a_{\rm min.SGP}$\,[$\mu{\rm m}$]& Fixed & \multicolumn{6}{c}{0.01} & Minimum grain size for the SGP \\ $a_{\rm max.SGP}$\,[$\mu{\rm m}$]& Fixed & \multicolumn{6}{c}{2} & Maximum grain size for the SGP \\ $a_{\rm min.LGP}$\,[$\mu{\rm m}$]& Fixed & \multicolumn{6}{c}{2} & Minimum grain size for the LGP \\ $a_{\rm max.LGP}$\,[cm] & Free & 0.1 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 
Maximum grain size for the LGP \\ $\Sigma_{\rm d}\,\rm{[g\,cm^{-2}]}$ & Free & Figure~\ref{fig:s1s2surdens} & Figure~\ref{fig:s1s2surdens} & Figure~\ref{fig:surdensout} & Figure~\ref{fig:surdensout} & Figure~\ref{fig:surdensout} & Figure~\ref{fig:surdensout} & Dust surface density \\ $M_{\rm dust}\,[10^{-4}\,M_{\odot}]$ \tnote{(a)} & $-$ & 1.2 & 2.4 & 2.3 & 2.4 & 2.4 & 2.5 & Total dust mass \\ $\Lambda$ & Free & 5.0 & 5.0 & 1.0 & 2.6 & 10.6 & $-$ & $\Lambda$ for the entire disk, see Sect.~\ref{sec:conhratio} \\ $\Lambda{1}$ & Free & $-$ & $-$ & $-$ & $-$ & $-$ & $3.0_{-0.8}^{+0.3}$ & $\Lambda$ for ${R\,{<}\,59\,\rm{AU}}$, see Sect.~\ref{sec:varhratio} \\ $\Lambda{2}$ & Free & $-$ & $-$ & $-$ & $-$ & $-$ & $1.2_{-0.1}^{+0.1}$ & $\Lambda$ for ${59\,{\le}\,R\,{<}\,78\,\rm{AU}}$, see Sect.~\ref{sec:varhratio} \\ $\Lambda{3}$ & Free & $-$ & $-$ & $-$ & $-$ & $-$ & $1.9_{-0.1}^{+15.9}$ & $\Lambda$ for ${78\,{\le}\,R\,{<}\,94\,\rm{AU}}$, see Sect.~\ref{sec:varhratio} \\ $\Lambda{4}$ & Free & $-$ & $-$ & $-$ & $-$ & $-$ & $16.3_{-9.8}^{+3.7}$ & $\Lambda$ for ${R\,{\ge}\,94\,\rm{AU}}$, see Sect.~\ref{sec:varhratio} \\ \hline $\chi_{\rm tot}^2$ & $-$ & $-$ & $-$ & 975 & 460 & 478 & 242 & Chi-square of the model, see Eq.~\ref{eqn:chitot} \\ \bottomrule \end{tabular} \begin{tablenotes} \item[(a)] The total dust mass $M_{\rm dust}$ is obtained by integrating the surface density $\Sigma_{\rm d}$ that is constructed in the fitting procedure. Hence, $M_{\rm dust}$ is not a direct fitting parameter. \end{tablenotes} \end{threeparttable} \end{table*} \subsection{Setting $a_{\rm{max}}$ for the LGP based on SED modeling} \label{sec:sedmodel} Our model has three free parameters/quantities: the dust surface density ($\Sigma_{\rm d}$), the ratio of gas-to-dust scale height ($\Lambda$) and maximum grain size ($a_{\rm max}$) for the LGP. Note that the total dust mass ($M_{\rm dust}$) is not a free parameter, because integrating $\Sigma_{\rm d}$ within the disk naturally gives the result. 
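That last point can be sketched directly: $M_{\rm dust}$ follows from $\Sigma_{\rm d}$ via the integral $M_{\rm dust}\,{=}\,\int 2{\pi}R\,\Sigma_{\rm d}(R)\,{\rm d}R$. The example below uses the tapered power-law profile of step a) in Sect.~\ref{sec:surdens} with a purely illustrative normalization $\Sigma_0$; constants are in cgs.

```python
import numpy as np

AU, MSUN = 1.496e13, 1.989e33          # cm, g

def dust_mass(r_au, sigma_d):
    """M_dust = integral of 2 pi R Sigma_d(R) dR for a tabulated
    surface density profile (g cm^-2) on radii r_au (AU); the result
    is returned in solar masses."""
    r = np.asarray(r_au) * AU
    return np.trapz(2.0 * np.pi * r * np.asarray(sigma_d), r) / MSUN

# tapered power law of step a) with an illustrative normalization Sigma_0
r_au = np.linspace(0.4, 169.0, 2000)
rc, gamma, sigma0 = 90.0, 0.1, 5e-3    # sigma0 is not a fitted value
sigma = sigma0 * (r_au / rc) ** -gamma * np.exp(-(r_au / rc) ** (2 - gamma))
m_dust = dust_mass(r_au, sigma)        # total dust mass in solar masses
```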
A population of large dust grains flattens the spectral index at millimeter wavelengths \cite{Ricci2010,Testi2014}. We collected photometric data from various catalogs and individual studies \cite{Mannings1994,Oudmaijer2001,cutri2003,Isella2007,ishihara2010,Sandell2011,cutri2013,Pascual2016,Tripathi2017,Andrews2018,Guidi2022}. The observed spectral energy distribution (SED) is shown as red dots in Figure~\ref{fig:bestsed}. The spectral index measured at wavelengths $\lambda\,{\ge}\,1\,\rm{mm}$ is $\alpha_{\rm mm.obs}\,{=}\,2.7\,{\pm}\,0.06$. Assuming that the emission is optically thin and in the Rayleigh-Jeans tail, this translates into a millimeter slope of the dust absorption coefficient $\beta\,{=}\,\alpha_{\rm mm.obs}{-}\,2\,{=}\,0.7$. The $\beta$ value for interstellar medium dust is ${\sim}\,1.7$ \cite{Li2001}. A lower $\beta$ in the HD\,163296 disk suggests that dust grains have grown to millimeter and even centimeter sizes. To quantify the extent of grain growth in the HD\,163296 disk, we build a grid of SED models in which the ratio of gas-to-dust scale height is fixed to $\Lambda\,{=}\,5$, a typical value used in the literature \cite{andrews2011,Liu2022}. In Sect.~\ref{sec:fitalma}, we will conduct an extensive parameter study on $\Lambda$ through a dedicated fitting to the ALMA image. However, this parameter is not expected to have a significant impact on the constraint on $a_{\rm max}$ as long as the optical depth is not large. We sample 16 different values of $a_{\rm max}$ that are logarithmically distributed from $10\,\mu{\rm m}$ to 1\,cm. The procedure of iteration for $\Sigma_{\rm d}$, as laid out in Sect.~\ref{sec:surdens}, is performed separately for each of the 16 models. As a result, 16 model SEDs are simulated. The model with $a_{\rm max}\,{=}\,1\,\rm{cm}$ (Model \texttt{S2}) best matches the observation, see Figure~\ref{fig:bestsed}. 
Its converged surface density is shown in Figure~\ref{fig:s1s2surdens}, and Table~\ref{tab:paras} gives an overview of the model parameters. For the subsequent fitting to the ALMA data, we fixed $a_{\rm max}\,{=}\,1\,\rm{cm}$ for the LGP, leaving $\Sigma_{\rm d}$ and $\Lambda$ as the only two free parameters. The discrepancies in the mid- and far-infrared fluxes between model and observation are due to the presence of a puffed-up inner rim. This type of rim is a natural outcome when solving the disk structure in vertical hydrostatic equilibrium, particularly for Herbig disks \cite{Dullemond2007}. The blue solid line in Figure~\ref{fig:s2hgas} shows the gas scale height of Model \texttt{S2}. The overall geometry of the disk is flared. Disk regions just behind the inner rim are not exposed to direct stellar light, leading to a reduced mid-infrared excess. Beyond a certain radial distance, the disk emerges from the shadow cast by the inner rim. The surface layer of these outer regions directly absorbs stellar photons, and hence produces more far-infrared emission than the observed level. One can fully parameterize the scale height with analytic forms, e.g., a power law, and fit the infrared SED to constrain the geometry \cite{Harvey2012}. However, there are degeneracies between the geometric parameters in SED models. Moreover, modeling the SED is not able to constrain the scale height of millimeter dust grains, which is the key quantity of this work. Therefore, we do not attempt further fine-tuning of the SED fitting, and keep the number of assumptions (i.e., free parameters) as small as possible. \section{Fitting the DSHARP ALMA image} \label{sec:fitalma} In this section, we will fit the surface brightnesses along the major and minor axes of the disk, and on the B67 and B100 rings, to constrain $\Lambda$. Our strategy starts from the simple assumption of a constant $\Lambda$ in the radial direction, and then moves to a more complex scenario in which $\Lambda$ varies with $R$. 
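Every model entering this fitting inherits the hydrostatic vertical structure of Sect.~\ref{sec:surdens}. For reference, the initial-guess gas scale height of step b) can be evaluated directly; a minimal sketch with cgs constants and the stellar mass from Sect.~\ref{sec:obs} (the function name is ours, not from the paper's code):

```python
import numpy as np

G, K_B, M_P = 6.674e-8, 1.381e-16, 1.673e-24   # cgs constants
AU, MSUN = 1.496e13, 1.989e33

def gas_scale_height_au(r_au, mstar=1.9, mu=2.3):
    """H_gas = sqrt(k T R^3 / (G M* mu m_p)) with the midplane
    temperature T(R) = 18.7 K (R/400 AU)^-0.14 of Dullemond et al.;
    this is only the starting guess for the hydrostatic iteration."""
    T = 18.7 * (r_au / 400.0) ** -0.14
    R = np.asarray(r_au, dtype=float) * AU
    return np.sqrt(K_B * T * R ** 3 / (G * mstar * MSUN * mu * M_P)) / AU
```

At $100$\,AU this gives $H_{\rm gas}/R$ of order 0.07, i.e., a flared geometry consistent with Figure~\ref{fig:s2hgas}.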
The contrasts of gaps, as presented in Table~\ref{tab:gapcont}, are sensitive to the degree of dust settling. Therefore, to quantify the quality of fit, we first check whether or not the gap contrasts of the model are consistent with the observation. Then, we calculate the $\chi^2$ along the major axis ($\chi_{\rm major}^2$) and minor axis ($\chi_{\rm minor}^2$), and on the B67 ($\chi_{\rm B67}^2$) and B100 ring ($\chi_{\rm B100}^2$). To exclude the effect of the crescent-like substructure along ${\rm PA}\,{\sim}\,99^{\circ}$, data points between $\phi\,{=}\,{-}45^{\circ}$ and $45^{\circ}$ are not taken into account when calculating $\chi_{\rm B67}^2$ and $\chi_{\rm B100}^2$. The goodness of fit is evaluated according to \begin{equation} \chi_{\rm tot}^2\,{=}\,g_{1}\,\chi_{\rm major}^2+g_{2}\,\chi_{\rm minor}^2+g_{3}\,\chi_{\rm B67}^2+g_{4}\,\chi_{\rm B100}^2. \label{eqn:chitot} \end{equation} Four factors, i.e., $g_{1}$, $g_{2}$, $g_{3}$ and $g_{4}$, are introduced to balance the weightings. First, we calculate the factors as \begin{equation} g_{i} = \frac{\sum_{j=1}^{4} N_{j}}{N_{i}}, \end{equation} where $N_{i}$ is the number of data points taken into account in the calculation of $\chi^2$ for the major and minor axes, and the B67 and B100 rings, respectively. Then, a normalization is performed to ensure that the sum of $g_{i}$ equals unity. \subsection{Constant $\Lambda$ in the radial direction} \label{sec:conhratio} We first take the simplest assumption in which the ratio of gas-to-dust scale height does not change with radius ($R$). We sample 20 values for $\Lambda$, which are logarithmically distributed between 1 and 20. The case of $\Lambda\,{=}\,1$ means that millimeter dust grains are well coupled with the gas. Strongly settled models feature large values of $\Lambda$. The iteration process for $\Sigma_{\rm d}$ is performed from scratch for each of these 20 models, ensuring that all the models are fully independent and self-consistent. 
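The weighting scheme just defined can be written compactly (a sketch; the function name is ours):

```python
import numpy as np

def total_chi2(chi2, npts):
    """chi^2_tot = sum_i g_i chi^2_i, with g_i proportional to
    (sum_j N_j)/N_i and then normalized so that sum_i g_i = 1.
    chi2 and npts list the four profiles: major axis, minor axis,
    B67 ring, B100 ring."""
    chi2 = np.asarray(chi2, dtype=float)
    npts = np.asarray(npts, dtype=float)
    g = npts.sum() / npts       # give sparse profiles comparable weight
    g /= g.sum()                # normalize the weights to unity
    return float((g * chi2).sum())
```

With equal $N_{i}$ all weights reduce to $1/4$; a profile with fewer data points receives a proportionally larger weight, so no single dense profile dominates $\chi_{\rm tot}^2$.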
None of the 20 models can reproduce all of the gap contrasts within the uncertainties simultaneously. Panels (c)-(f) of Figure~\ref{fig:imgres} show a comparison of the brightnesses between observation and three representative models with $\Lambda\,{=}\,1.0$ (model \texttt{I1}), 2.6 (model \texttt{I2}) and 10.6 (model \texttt{I3}), respectively. Model \texttt{I2} has the lowest $\chi_{\rm tot}^2\,{=}\,460$ among the 20 samples. Figure~\ref{fig:surdensout} shows the reconstructed surface densities, whereas the gap contrasts extracted from the models are given in Table~\ref{tab:gapcont}. Along the disk major axis, the three models reproduce the data at a similar quality, see panel (c) of Figure~\ref{fig:imgres}, and the model gap contrasts in Table~\ref{tab:gapcont}. The ALMA beam can dilute the ring emission, and contributes to the adjacent gap emission. In vertically thicker (smaller $\Lambda$) disks, dust grains are located at greater heights above the midplane, where the temperature is higher. In this case, the ring emission is stronger, and its contribution to the gap emission is higher, which can reduce the gap contrast since the intrinsic emission from the gap is low. In addition to the millimeter dust scale height, the depth of surface density drops is another quantity influencing the gap contrast. A comparison between model \texttt{I1} and model \texttt{I3} indicates that deeper surface density drops in more turbulent disks can produce similar gap contrasts measured on the disk major axis to those generated by shallower surface density drops in more quiescent disks. This means that fitting the data on the major axis alone cannot break the degeneracy. Along the disk minor axis, the gap contrast changes with $\Lambda$ because of the projection effect. Panel (d) of Figure~\ref{fig:imgres} shows that models with a higher degree of dust settling produce more separate rings and deeper gaps, and vice versa. 
This fact is consistent with the findings reported by Pinte et al. \cite{Pinte2016}. Neither the D48 nor the D86 gap can be explained by model \texttt{I1}. Though both models \texttt{I2} and \texttt{I3} are consistent with the data of the D48 gap, only the former reproduces the D86 gap within the uncertainty, see Table~\ref{tab:gapcont}. The gas-to-dust scale height ratio $\Lambda$ has a strong impact on the brightness variation along the B67 and B100 rings. The well-mixed disk (model \texttt{I1}) shows two pronounced dips at $\phi\,{=}\,90^{\circ}$ and $\phi\,{=}\,270^{\circ}$, due to the difference in the optical depth ($\tau$) along the line of sight between $\phi\,{=}\,0^{\circ}$ (or $180^{\circ}$, major axis) and $\phi\,{=}\,90^{\circ}$ (or $270^{\circ}$, minor axis) \cite{Doi2021}. Such a difference in $\tau$ decreases with increasing $\Lambda$. Consequently, the contrasts of ``gaps'' on the rings are reduced in more settled disks, see for instance model \texttt{I3}. Panels (e) and (f) of Figure~\ref{fig:imgres} suggest that the degree of dust settling is different between B67 and B100. While B67 is close to a well-mixed situation, B100 favors a scenario in which large dust grains are strongly concentrated in the midplane. \subsection{Varying $\Lambda$ in the radial direction} \label{sec:varhratio} Although the experiment under the assumption of a constant $\Lambda$ does not return a satisfactory solution, it provides clues to improve the model. The fitting results imply that the degree of dust settling changes with $R$. Therefore, we parameterize the gas-to-dust scale height ratio with a piecewise function \begin{equation} \Lambda = \left\{ \begin{array}{rcl} \Lambda_{1} & : & {R\,{<}\,59\,\rm{AU}} \\ \Lambda_{2} & : & {59\,\rm{AU}\,{\le}\,R\,{<}\,78\,\rm{AU}} \\ \Lambda_{3} & : & {78\,\rm{AU}\,{\le}\,R\,{<}\,94\,\rm{AU}} \\ \Lambda_{4} & : & {R\,{\ge}\,94\,\rm{AU}}. \\ \end{array} \right.
\end{equation} The boundaries of the four radial bins are chosen according to the locations and widths of the gaps and rings, see Sect.~\ref{sec:obs}. We did not vary these boundaries in the fitting process. Using a piecewise form may introduce some artifacts at the boundaries. Nevertheless, how the gas-to-dust scale height ratio varies smoothly from one radial bin to another is difficult to investigate, because it requires observational data at extremely high spatial resolutions that fully resolve the transition region between two adjacent bins. In the new model configuration, the ratios $\Lambda_{2}$ and $\Lambda_{4}$ are expected to play the dominant role in controlling the gap contrasts of the B67 and B100 rings, respectively. The gap contrasts of D48 and D86 are mainly influenced by a combination of $\Lambda_{1}$ and $\Lambda_{2}$, and a combination of $\Lambda_{3}$ and $\Lambda_{4}$, respectively. This is because the definition of the contrast of a gap on the major/minor axis is related to the brightnesses both in the gap and in its exterior ring, see Sect.~\ref{sec:obs}. The parameter space becomes $\left\{\Lambda_{1},\,\Lambda_{2},\,\Lambda_{3},\,\Lambda_{4},\,\Sigma_{d}\right\}$. To maintain self-consistency and independence, the time-consuming process of iterating $\Sigma_{d}$ has to be conducted for each of the sampled sets $\left\{\Lambda_{1},\,\Lambda_{2},\,\Lambda_{3},\,\Lambda_{4}\right\}$. Therefore, it is impractical to perform the parameter study using the Markov Chain Monte Carlo approach. Instead, we invoke the grid search method to finish the task. We first search for the optimum combination of $\Lambda_{1}$ and $\Lambda_{2}$, and then for that of $\Lambda_{3}$ and $\Lambda_{4}$. We sample 20 values for $\Lambda_{1}$, which are logarithmically spaced from 1 to 20.
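The piecewise parameterization can be written, for instance, as the following sketch; the bin edges are those quoted in the text and are not varied:

```python
import numpy as np

def Lambda_of_R(R, L1, L2, L3, L4):
    """Gas-to-dust scale height ratio as a piecewise-constant function
    of radius R (in AU). Bin edges follow the gap/ring locations."""
    edges = [59.0, 78.0, 94.0]
    values = [L1, L2, L3, L4]
    # side="right" puts R = 59 AU into the second bin, etc.
    return values[np.searchsorted(edges, R, side="right")]

# Best-fit values quoted in the text (model I4):
best = dict(L1=3.0, L2=1.2, L3=1.9, L4=16.3)
```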
Before the parameter study, we ran many test simulations, and found that models with $\Lambda_{2}$ deviating even slightly from ${\sim}\,1.2$ are not able to generate gap contrasts of B67 comparable to the observation. Hence, to reduce the computational time while remaining conservative, we consider 10 points for $\Lambda_{2}$, logarithmically spaced from 1 to 4. At this stage, $\Lambda_{3}$ and $\Lambda_{4}$ are fixed to 2.6, i.e., the value of model \texttt{I2}. We run the iteration procedure for $\Sigma_{d}$ from scratch for each of the 200 different combinations of $\Lambda_{1}$ and $\Lambda_{2}$, and obtain 200 models. Then, we fix $\Lambda_{1}$ and $\Lambda_{2}$ to the values of the model with the lowest $\chi_{\rm tot}^2$. The exploration of $\Lambda_{3}$ and $\Lambda_{4}$ is similar. However, both parameters have the same grid points as those for $\Lambda_{1}$, and therefore they form 400 different combinations. The final best-fit model (model \texttt{I4}) features $\Lambda_{1}\,{=}\,3.0$, $\Lambda_{2}\,{=}\,1.2$, $\Lambda_{3}\,{=}\,1.9$, $\Lambda_{4}\,{=}\,16.3$, and $\chi_{\rm tot}^2\,{=}\,245$. Its dust surface density and millimeter optical depth are shown with the blue line in Figure~\ref{fig:surdensout}. The model image and brightness profiles are compared with the observation in Figure~\ref{fig:imgres}. The gap contrasts and model parameters are summarized in Tables~\ref{tab:gapcont} and \ref{tab:paras}, respectively. The best-fit model is able to explain all of the gap contrasts. We separately vary the gas-to-dust scale height ratio in each radial bin around its best-fit value with a step width of 0.1, and investigate how well the parameters are constrained. The variations of $\chi_{\rm tot}^2$ are shown in Figure~\ref{fig:chi2tot}. The dots overlaid with a red cross refer to models that cannot reproduce all of the observed gap contrasts within their errors.
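The staged grid search can be sketched as below. The `chi2_tot` function here is a dummy quadratic standing in for the expensive radiative transfer pipeline (its minimum is placed near the best-fit values quoted in the text for illustration only):

```python
import numpy as np
from itertools import product

L1_grid = np.logspace(0.0, np.log10(20.0), 20)  # 20 points, 1 to 20
L2_grid = np.logspace(0.0, np.log10(4.0), 10)   # 10 points, 1 to 4

def chi2_tot(L1, L2, L3, L4):
    """Dummy stand-in for the full radiative transfer + Sigma_d iteration."""
    return ((L1 - 3.0) ** 2 + (L2 - 1.2) ** 2
            + (L3 - 1.9) ** 2 + 0.01 * (L4 - 16.3) ** 2)

# Stage 1: scan (L1, L2) with L3 = L4 = 2.6 fixed (model I2 value);
# 20 x 10 = 200 combinations.
L1_best, L2_best = min(product(L1_grid, L2_grid),
                       key=lambda p: chi2_tot(p[0], p[1], 2.6, 2.6))

# Stage 2: fix (L1, L2) and scan (L3, L4) on the L1 grid;
# 20 x 20 = 400 combinations.
L34_grid = np.logspace(0.0, np.log10(20.0), 20)
stage2 = min(product(L34_grid, L34_grid),
             key=lambda p: chi2_tot(L1_best, L2_best, p[0], p[1]))
```

In the real pipeline each grid point requires a full $\Sigma_{d}$ iteration, which is why the search is staged rather than run over the full four-dimensional grid.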
Therefore, we exclude them from the estimation of the parameter uncertainties, which are deduced from the models with $\chi_{\rm tot}^2$ less than 1.05 times the minimum $\chi_{\rm tot}^2$. For instance, all the models with $\Lambda_{1}\,{\lesssim}\,2.2$ produce lower contrasts (i.e., ${<}\,0.92$) for the D48 gap measured on the disk minor axis than the observed value ($0.94\,{\pm}\,0.02$). Therefore, they are considered to be invalid although some of them have a better $\chi^2_{\rm tot}$ than that of the best-fit model. The profiles of $\chi_{\rm tot}^2\,{-}\,\Lambda_{1}$, $\chi_{\rm tot}^2\,{-}\,\Lambda_{2}$ and $\chi_{\rm tot}^2\,{-}\,\Lambda_{4}$ show a clear minimum, indicating that the gas-to-dust scale height ratios in the D48, B67 and B100 regions are well constrained. Their validity ranges are estimated to be [2.2, 3.3], [1.1, 1.3], and ${\ge}\,6.5$, respectively. The distribution of $\chi_{\rm tot}^2$ as a function of $\Lambda_{3}$ is quite flat, and all the $\Lambda_{3}$ values in the considered range can reproduce the data well. Hence, $\Lambda_{3}$ is basically unconstrained. \section{Discussion} \label{sec:discussion} Using self-consistent radiative transfer models, we have placed constraints on the degree of dust settling by fitting the gap contrasts of the D48, B67, D86 and B100 features. Our results suggest a radially varying gas-to-dust scale height ratio in the HD\,163296 disk. In this section, we compare our results with literature studies, and link the derived gas-to-dust scale height ratio to the turbulence strength in the HD\,163296 disk. \subsection{Comparison of $\Lambda$ between different works} Ohashi et al. \cite{Ohashi2019} found that the dust scale height is the key parameter for reproducing the azimuthal variation of the polarization pattern in the gaps.
By analyzing the ALMA data of the 0.87\,mm dust polarization from the HD\,163296 disk, they constrained the dust scale height to be less than one-third of the gas scale height for the D48 gap, and to be two-thirds of the gas scale height for the D86 gap. Recently, Doi \& Kataoka \cite{Doi2021} showed that the azimuthal variation in the continuum along rings is sensitive to the degree of dust settling. Assuming that the disk is vertically isothermal with a fixed power-law temperature profile, they fit the DSHARP continuum data of the B67 and B100 rings, and inferred the ratio of gas-to-dust scale height to be 1.1 and ${>}\,9.5$ for the B67 and B100 ring, respectively. Figure~\ref{fig:hdustcompare} (upper panel) shows a comparison of $\Lambda$ between different works. The blue solid line refers to our best fit, whereas brown dots and green dots mark the results by Ohashi et al. \cite{Ohashi2019} and Doi \& Kataoka \cite{Doi2021}, respectively. As can be seen, our results are overall consistent with these literature values. However, going one step further, our analysis provides constraints on $\Lambda$ both for the ring and gap regions in the framework of self-consistent radiative transfer simulations. The black dashed line in the upper panel of Figure~\ref{fig:hdustcompare} shows the dust scale height. In the inner ($R\,{<}\,60\,\rm{AU}$) or outermost ($R\,{>}\,94\,\rm{AU}$) regions, the millimeter dust disk is quite thin, with scale heights less than ${\sim}\,2\,\rm{AU}$. Disk regions in the vicinity of B67 have a millimeter dust scale height of ${\sim}\,4\,\rm{AU}$. Disks viewed at high inclinations have the specific advantage that the vertical extent of the emission layers can be directly constrained by spatially resolved images. Villenave et al. \cite{Villenave2020} presented ALMA continuum observations of 12 edge-on disks, at an angular resolution of ${\sim}\,0.1^{\prime\prime}$.
A comparison between a set of radiative transfer models and the data indicates that at least three disks in their sample are consistent with a millimeter dust scale height of a few AU. Our inferred dust scale height for the HD\,163296 disk, inclined at $46.7^{\circ}$, is comparable to those of the observed edge-on disks. \subsection{Comparison of $\alpha_{\rm turb}/\rm{St}$ and $\alpha_{\rm turb}$ between different works} Assuming an equilibrium between dust settling and vertical stirring by turbulent motions, the dust scale height and gas scale height follow the relation \cite{Youdin2007,Birnstiel2010} \begin{equation} H_{\rm dust} = H_{\rm gas}\left(1+\frac{\rm St}{\alpha_{\rm turb}} \frac{\rm 1+2\,St}{\rm 1+St}\right)^{-1/2}, \label{eqn:dusth} \end{equation} where the Stokes number St is given by \begin{equation} {\rm St}\,{=}\,\frac{\rho_{\rm grain}\bar{a}}{\Sigma_{\rm g}(R)}\frac{\pi}{2}. \end{equation} The gas surface density $\Sigma_{\rm g}(R)\,{=}\,\Sigma_{0}\,(R/R_{\rm c})^{-\gamma}\,{\rm exp}[-(R/R_{\rm c})^{2-\gamma}]$, with $\Sigma_{0}\,{=}\,8.8\,\rm{g\,cm^{-2}}$, $R_{\rm{c}}\,{=}\,165\,\rm{AU}$ and $\gamma\,{=}\,0.8$, is constrained by high resolution multiple CO line observations \cite{Zhang2021}. Considering a grain size distribution like the one prescribed for the LGP, $\bar{a}$ stands for the representative size of the dust grains that dominate the continuum emission at 1.25\,mm. We check how the mass absorption coefficient $\kappa_{\rm abs}$ at 1.25\,mm changes with $a$, and find that it peaks at $a\,{\sim}\,0.2\,\rm{mm}$. This value is close to the number given by $\lambda/2\pi$. Therefore, in our calculation of St, we took $\bar{a}\,{=}\,0.2\,\rm{mm}$. \begin{table}[H] \caption{$\alpha_{\rm turb}/{\rm St}$ for the B67 and B100 rings from different studies.} \centering \linespread{1.2}\selectfont \begin{tabular}{lcc} \hline Reference & B67 ring & B100 ring \\ \hline Dullemond et al. \cite{Dullemond2018} & 0.33 & $0.13\,{\sim}\,0.77$ \\ Rosotti et al.
\cite{Rosotti2020} & 0.23 & 0.04 \\ Doi \& Kataoka \cite{Doi2021} & ${>}\,2.4$ & ${<}\,0.011$ \\ This work & $2.3_{-0.9}^{+2.5}$ & $0.0038_{-0.0013}^{+0.02}$ \\ \hline \end{tabular} \linespread{1.0}\selectfont \label{tab:alpha} \end{table} The St value varies from ${\sim}\,10^{-5}$ in the inner disk to ${\sim}\,10^{-2}$ in the outer regions. Because St is much less than unity, Eq.~\ref{eqn:dusth} can be simplified as $H_{\rm dust}\,{=}\,H_{\rm gas}\left(1+\frac{\rm St}{\alpha_{\rm turb}}\right)^{-1/2}$. Therefore, the constrained $\Lambda$ directly translates into a ratio $\alpha_{\rm turb}/\rm{St}$, which is shown with the blue solid line in the bottom panel of Figure~\ref{fig:hdustcompare}. Based on different methodologies, other groups have derived the $\alpha_{\rm turb}/\rm{St}$ values for the B67 and B100 rings. For instance, Rosotti et al. \cite{Rosotti2020} determined $\alpha_{\rm turb}/\rm{St}$ by measuring the deviation from Keplerian rotation of the gas in the proximity of the continuum peaks. Under the assumption that dust rings are caused by dust trapping in radial pressure bumps, Dullemond et al. \cite{Dullemond2018} constrained $\alpha_{\rm turb}/{\rm St}$ by analyzing the widths of the dust rings. In Doi \& Kataoka \cite{Doi2021}, the $\alpha_{\rm turb}/\rm{St}$ value was inferred by investigating the azimuthal intensity variation along dust rings. Table~\ref{tab:alpha} summarizes the reported values together with our best-fit results. As can be seen, our results are in good agreement with the values derived by Doi \& Kataoka \cite{Doi2021}. This is not surprising because the idea behind constraining $\alpha_{\rm turb}/\rm{St}$ is the same. However, our methodology is more realistic, and data points not only on the rings but also along the major/minor axes are simultaneously taken into account in the analysis. We note that the best-fit $\alpha_{\rm turb}/{\rm St}$ for B67 is about one order of magnitude larger than those obtained in Dullemond et al.
and Rosotti et al. There are several possibilities to explain such a difference. First, our methodology is sensitive to the strength of turbulent motions in the vertical direction, while the constraints by Dullemond et al. are more related to the radial diffusion of dust grains. Second, the B67 ring has a neighboring crescent, implying that the ring itself may not be perfectly axisymmetric, thus undermining the axisymmetry assumption of our modeling procedure. Third, if the gaps are indeed opened by planets \cite{Pinte2018,Teague2018b,Teague2021}, the B67 ring can be substantially stirred by meridional gas flows. Numerical simulations have shown that massive planets can stir sub-millimeter-sized dust grains up to ${\sim}\,70\%$ of the gas scale height at the gap edges \cite{Bi2021,Binkert2021}. For the B100 ring, we obtain a lower $\alpha_{\rm turb}/{\rm St}$ than that inferred by Dullemond et al. A lower turbulence in the vertical direction than in the radial direction can be explained under several physical scenarios, such as dust feedback on turbulence \cite{Xu2022}, disk self-gravity \cite{Baehr2021}, and radial (pseudo-)diffusion \cite{Hu2021}. The black dashed line in the bottom panel of Figure~\ref{fig:hdustcompare} shows the derived turbulence strength. Except for the B67 ring, the disk has a turbulence level of $\alpha_{\rm turb}\,{<}\,3\times10^{-3}$. Theoretical works have shown that pure hydrodynamic mechanisms or the magnetorotational instability suppressed by nonideal magnetohydrodynamic effects can generate similar turbulence levels in protoplanetary disks \cite{Bai2011,Bai2015,Flock2017,Cui2020,Cui2021}. In the B67 ring, the turbulence is strong, with $\alpha_{\rm turb}\,{\sim}\,1.2\,{\times}\,10^{-2}$. Several studies have tried to measure turbulence in the HD\,163296 disk through detailed analysis of gas line observations. Boneberg et al.
\cite{Boneberg2016} found that models with $\alpha_{\rm turb}\,{=}\,(0.1\,{-}\,6.3)\,{\times}\,10^{-3}$ match the ${\rm C^{18}O}\,J\,{=}\,2{-}1$ line profile well within 90\,AU of the disk. Based on CO isotopologue and DCO$^{+}$ line observations, Flaherty et al. \cite{Flaherty2015,Flaherty2017} derived the gas turbulence velocity in the disk, which is less than a few percent of the sound speed, corresponding to $\alpha_{\rm turb}\,{\lesssim}\,3\,{\times}\,10^{-3}$. Our inferred value of $\alpha_{\rm turb}$, except for the B67 ring, is consistent with the limits set by the gas observations. The value of $\alpha_{\rm turb}$ for B67 from our modeling is larger than the upper limit in either Boneberg et al. or Flaherty et al. The discrepancy may be explained by two reasons. First, as demonstrated by our analysis, $\Lambda$, and therefore $\alpha_{\rm turb}$, may vary in the radial direction. The spatial resolution of the gas line observations in Boneberg et al. and Flaherty et al. is ${\sim}\,0.5^{\prime\prime}$, which is 10 times worse than that of the DSHARP data. Consequently, their constraints on $\alpha_{\rm turb}$ represent a mean level of turbulence over a much broader range of radii than ours. Due to beam smearing, the low turbulence outside B67 results in a small $\alpha_{\rm turb}$ being probed by the gas lines. Second, the turbulence strength we measure describes the role of dust stirring in the vertical direction. This may be different from the turbulence of the gas motions. Recent numerical simulations of dust evolution have started to use different values of $\alpha_{\rm turb}$ for gas evolution, radial diffusion and vertical stirring \cite{Pinilla2021}. Isella et al. \cite{Isella2016} presented Band 6 ALMA observations of HD\,163296 with a lower angular resolution than the DSHARP data, revealing three dust gaps at 60, 100, and 160\,AU in the continuum as well as CO depletion in the middle and outer dust gaps. Liu et al.
\cite{Liu2018} investigated these gaps by performing 2D global hydrodynamic simulations of planet-disk interaction, and found that three half-Jovian-mass planets in a disk with an effective viscosity that varies with radius can explain most of the observational features. Within $R\,{=}\,100\,\rm{AU}$, their model has a turbulence level of $\alpha_{\rm turb}\,{<}\,3\,{\times}\,10^{-4}$, which is weaker than ours. Such an inconsistency can be explained by the difference in the quality of the data used in the analyses. As shown in the left column of Figure 3 in Liu et al. \cite{Liu2018}, the best-fit $\alpha_{\rm turb}$ is sensitive to how well the dust surface densities in the gap region are constrained. In the ALMA observation used by Liu et al. \cite{Liu2018}, the beam size is ${\sim}\,0.2^{\prime\prime}$ and the widths of the inner two gaps are narrower than ${\sim}\,0.27^{\prime\prime}$, indicating that the gaps are not fully resolved. In contrast, our constraints are placed using the DSHARP data with four times better spatial resolution and sensitivity. \subsection{The effect of model assumptions on the results} The direct constraint from our radiative transfer analysis is on the gas-to-dust scale height ratio $\Lambda$. The scenario of dust settling that links $\Lambda$ and $\alpha_{\rm turb}$ is given by Eq.~\ref{eqn:dusth}, and the relation is based on numerical simulations performed by Dubrulle et al. \cite{Dubrulle1995} and Youdin \& Lithwick \cite{Youdin2007}. Models with more realistic physics of dust growth, sedimentation and radial mixing may alter the connection between the dust and gas scale heights, and therefore change the result. To calculate the Stokes number characterizing the coupling between gas and dust, one needs to know the gas surface density. In our calculation, we take the result from Zhang et al. \cite{Zhang2021}, who modeled the high resolution ALMA data of CO and its isotopologue lines.
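In the St $\ll$ 1 limit, the settling relation reduces to $\Lambda^2 = 1 + {\rm St}/\alpha_{\rm turb}$, i.e. $\alpha_{\rm turb}/{\rm St} = 1/(\Lambda^2 - 1)$. A minimal sketch of this conversion and of the Stokes number calculation follows; the grain material density $\rho_{\rm grain} = 1.675\,{\rm g\,cm^{-3}}$ is an assumption (a DSHARP-like value, not quoted in the text):

```python
import numpy as np

# Gas surface density from the CO isotopologue fits (Zhang et al. values).
Sigma0, Rc, gamma = 8.8, 165.0, 0.8  # g/cm^2, AU, dimensionless

def Sigma_g(R):
    return Sigma0 * (R / Rc) ** (-gamma) * np.exp(-(R / Rc) ** (2.0 - gamma))

rho_grain = 1.675  # g/cm^3, assumed grain material density
a_bar = 0.02       # cm, representative grain size (0.2 mm)

def stokes(R):
    """St = (pi/2) * rho_grain * a_bar / Sigma_g(R)."""
    return np.pi * rho_grain * a_bar / (2.0 * Sigma_g(R))

def alpha_over_St(Lam):
    """Invert H_dust = H_gas (1 + St/alpha)^(-1/2) for St << 1."""
    return 1.0 / (Lam ** 2 - 1.0)
```

As a consistency check, $\Lambda = 1.2$ (B67) and $\Lambda = 16.3$ (B100) give $\alpha_{\rm turb}/{\rm St} \approx 2.3$ and $\approx 0.0038$, matching the best-fit values quoted above.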
How well the CO molecular lines probe the underlying total gas surface density remains uncertain. This uncertainty will not affect our constraints on $\Lambda$ from the continuum radiative transfer modeling, but it will cause uncertainties when inferring $\alpha_{\rm turb}$ from $\Lambda$, see Eq.~\ref{eqn:dusth}. \section{Summary} \label{sec:summary} Constraining the strength of turbulence plays a key role in building up our knowledge of disk evolution and planet formation. It is also crucial for running numerical models to interpret high-resolution ALMA observations. In this work, we took the HD\,163296 disk as an example, and investigated in detail the millimeter gap contrast as a probe of the turbulence level. With self-consistent radiative transfer modeling, we fit the gap contrasts measured for the D48, B67, D86 and B100 substructures that are spatially resolved by the DSHARP observation. We constrained the gas-to-dust scale height ratio $\Lambda$ to be $3.0_{-0.8}^{+0.3}$, $1.2_{-0.1}^{+0.1}$ and ${\ge}\,6.5$ for the D48, B67 and B100 regions, respectively. Our results show that the degree of dust settling varies with radius in the HD\,163296 disk. The $\Lambda$ value for the D86 region is unconstrained due to the degeneracy between $\Lambda$ and the depth of the surface density drops. Based on the constrained gas-to-dust scale height ratio $\Lambda$, we estimate $\alpha_{\rm turb}/\rm{St}$ to be $2.3_{-0.9}^{+2.5}$ and $0.0038_{-0.0013}^{+0.02}$ for the B67 and B100 rings, respectively. These values are in good agreement with those reported by Doi \& Kataoka \cite{Doi2021}, but differ from the numbers inferred by Dullemond et al. \cite{Dullemond2018} and Rosotti et al. \cite{Rosotti2020}. The discrepancy may be due to the fact that our modeling is sensitive to the turbulence responsible for the vertical stirring of dust grains, while the literature studies more likely reflect the turbulence responsible for the radial diffusion of dust grains or the turbulent motion of gas species.
We calculate the turbulence level to be $\alpha_{\rm turb}\,{<}\,3\times10^{-3}$ for the D48 and B100 regions, which agrees well with the upper limits set by Boneberg et al. \cite{Boneberg2016} and Flaherty et al. \cite{Flaherty2017} from analyzing the widths of gas lines. According to our analysis, the B67 ring has a strong turbulence level of $\alpha_{\rm turb}\,{\sim}1.2\,{\times}\,10^{-2}$. Future multi-wavelength continuum observations with a spatial resolution comparable to that of the DSHARP data are required to better constrain the degree of dust settling, and therefore the scale height of dust grains of different sizes. Higher resolution observations of multiple gas lines are pivotal to directly measure the turbulent motions, and to confirm whether the strong turbulence in the local region of B67 inferred from our analysis is also seen with gas tracers. \Acknowledgements{We thank the anonymous referees for their constructive comments that greatly improved the manuscript. YL acknowledges the financial support by the Natural Science Foundation of China (Grant No. 11973090), and the science research grants from the China Manned Space Project with NO. CMS-CSST-2021-B06. GHMB and MF acknowledge funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No. 757957). GR acknowledges support from the Netherlands Organisation for Scientific Research (NWO, program number 016.Veni.192.233) and from an STFC Ernest Rutherford Fellowship (grant number ST/T003855/1). We thank Tilman Birnstiel, Guo Chen, Ke Zhang and Richard Teague for insightful discussions. We acknowledge the DSHARP team for making the calibrated CASA measurement sets, fiducial images, and the scripts used for calibration and image cleaning available to the public. ALMA is a partnership of ESO (representing its member states), NSF and NINS, together with NRC, MOST and ASIAA, and KASI, in cooperation with the Republic of Chile.
The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ.} \InterestConflict{The authors declare that they have no conflict of interest.} \bibliographystyle{scichina} \bibliography{hd163296} \begin{appendix} \renewcommand{\thesection}{Appendix} \section{More information about the SED models} \label{sec:moresed} In Sect.~\ref{sec:sedmodel}, we modeled the SED of the HD\,163296 disk to constrain the maximum grain size ($a_{\rm max}$) for the LGP. Two models with $a_{\rm max}\,{=}\,1\,\rm{mm}$ (model \texttt{S1}) and $a_{\rm max}\,{=}\,1\,\rm{cm}$ (model \texttt{S2}) are shown in Figure~\ref{fig:bestsed}. Because the SED analysis is not the focus of this study, we do not present all of the details in the main text. The blue solid line in Figure~\ref{fig:s2hgas} shows the gas scale height of model \texttt{S2}. The black solid line indicates the assumption made by Doi \& Kataoka \cite{Doi2021}, which is also our initial choice for $H_{\rm gas}$ in the iteration (see Sect.~\ref{sec:surdens}). The black dashed line stands for a typical profile found from modeling the SEDs of T Tauri disks. Figure~\ref{fig:s1s2surdens} shows the reconstructed surface densities ($\Sigma_{\rm d}$) for both models. As can be seen, they follow a similar pattern. However, the surface densities of model \texttt{S2} are systematically larger than those of model \texttt{S1}. This is because the mass absorption coefficient at a wavelength of 1.25\,mm for a dust grain population with $a_{\rm max}\,{=}\,1\,\rm{mm}$ is larger than that for a population of dust grains with $a_{\rm max}\,{=}\,1\,\rm{cm}$. Therefore, higher surface densities are required to fit the observed millimeter flux when $a_{\rm max}\,{=}\,1\,\rm{cm}$. \end{appendix} \end{multicols}
Title: Magnetic Flux Transport in Radiatively Inefficient Accretion Flows and the Pathway towards a Magnetically Arrested Disk
Abstract: Large-scale magnetic fields play a vital role in determining the angular momentum transport and in generating jets/outflows in the accreting systems, yet their origin remains poorly understood. We focus on radiatively inefficient accretion flows (RIAF) around the black holes, and conduct three-dimensional general-relativistic magnetohydrodynamic (GRMHD) simulations using the Athena++ code. We first re-confirm that the dynamo action alone cannot provide sufficient magnetic flux required to produce a strong jet. We next investigate the other possibility, where the large-scale magnetic fields are advected inward from external sources (e.g. the companion star in X-ray binaries, magnetized ambient medium in AGNs). Although the actual configuration of the external fields could be complex and uncertain, they are likely to be closed. As a first study, we treat them as closed field loops of different sizes, shapes and field strengths. Unlike earlier studies of flux transport, where magnetic flux is injected in the initial laminar flow, we injected the magnetic field loops in the quasi-stationary turbulent RIAF in inflow equilibrium and followed their evolution. We found that a substantial fraction ($\sim15\%-40\%$) of the flux injected at the large radii reaches the black hole with a weak dependence on the loop parameters except when the loops are injected at high latitudes, away from the mid-plane. Relatively high efficiency of flux transport observed in our study hints that a magnetically dominated RIAF, potentially a magnetically-arrested disk, might be formed relatively easily close to the black hole, provided that a source of the large-scale field exists at the larger radii.
https://export.arxiv.org/pdf/2208.02269
\begin{document} \title{Magnetic Flux Transport in Radiatively Inefficient Accretion Flows and the Pathway towards a Magnetically Arrested Disk} \email{prasundhang@gmail.com} \email{xbai@tsinghua.edu.cn} \author[0000-0001-9446-4663]{Prasun Dhang} \affiliation{Institute for Advanced Study, Tsinghua University, Beijing 100084, China} \affiliation{IUCAA, Post Bag 4, Ganeshkhind, Pune, Maharashtra 411007, India} \author[0000-0001-6906-9549]{Xue-Ning Bai} \affiliation{Institute for Advanced Study, Tsinghua University, Beijing 100084, China} \affiliation{Department of Astronomy, Tsinghua University, Beijing 100084, China} \author{Christopher J. White} \affiliation{Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544} \keywords{Accretion --- Accretion disk --- GRMHD} \section{Introduction} \label{sect:intro} Astrophysical accretion disks influence systems over a large range of scales, from planet formation to galaxy evolution. They also energise the most powerful sources in the Universe. For example, disks around stellar-mass black holes (BHs) and neutron stars are among the most luminous X-ray sources in the sky (\citealt{Remillard2006}). Active galactic nuclei (AGNs), powered by the accretion of matter onto a supermassive black hole at the centre of galaxies, are not only among the most powerful sources, but the energy released by AGNs also provides feedback to the entire galaxy and determines its evolution (\citealt{Silk_Rees_1998,Harrison2017,Morganti2017}). Broadly speaking, accretion occurs via three different modes: (i) geometrically thin and optically thick Keplerian disks (standard disks; \citealt{Shakura1973, Novikov1973}), (ii) geometrically thick and optically thin radiatively inefficient accretion flows (RIAF; \citealt{Chakrabarti1989, Narayan1994, Blandford1999}) and (iii) geometrically and optically thick slim disks (\citealt{Abramowicz1988}).
Slim disks accrete matter at super-Eddington rates, while the mass accretion rate $\dot{m}$ is sub-Eddington both in the standard disk ($10^{-4} \lesssim \dot{m} / \dot{M}_{\rm Edd} \lesssim 1$) and in RIAFs ($\dot{m} / \dot{M}_{\rm Edd} \lesssim 10^{-4}$). In this paper, we focus on RIAFs, in which disks likely spend most of their time (e.g., \citealt{Yuan2014}; examples include the disks around Sgr A$^*$ and the SMBH in M87), and whose dynamics is relatively simple compared to the other two modes. The structure and evolution of rotationally supported accretion disks are primarily determined by the process of angular momentum transport. The current consensus is that the magnetorotational instability (MRI; \citealt{Balbus1991}) gives rise to angular momentum transport and vigorous turbulence in a fully ionised accretion flow (e.g., in X-ray binaries, the inner part of AGN disks, and the sufficiently ionised parts of protoplanetary disks). The MRI becomes more efficient in transporting angular momentum if the accretion disk is threaded by a net vertical magnetic flux. Local shearing-box simulations have shown that the presence of a net vertical magnetic flux enhances the MRI turbulence and hence the angular momentum transport (\citealt{Bai2013}). Additionally, a net flux threading the disk helps to launch winds/outflows (\citealt{Bai2013, Suzuki2014}). A large-scale magnetic field close to the central accretor (a BH or a neutron star) is a necessary ingredient for jet production in accreting systems (\citealt{Blandford1977, Blandford1982}). It has been proposed that a RIAF saturated with a strong poloidal magnetic flux close to the BH provides an ideal condition for jet production (\citealt{Bisnovatyi-Kogan1974a,Esin1997, Fender1999a, Narayan2003, Meier2005c}). The idea has been verified in numerical simulations (\citealt{Igumenshchev2003, Narayan2012}).
These studies found that a strongly magnetized RIAF, namely a magnetically arrested disk (MAD), around a spinning BH produces strong jets, extracting net energy from the BH spin via the Penrose-Blandford-Znajek process (\citealt{Tchekhovskoy2011, McKinney2012}). The MAD model predicts a correlation among the mass accretion rate, the magnetic flux threading the BH and the jet power, which is found to be in agreement with observations of radio-loud AGNs (\citealt{Zamaninasab2014,Ghisellini2014}). Recent polarization studies of M87 at 230 GHz from Event Horizon Telescope (EHT) observations (\citealt{EHT2021a, EHT2021b, Yuan2022}) also infer the presence of a dynamically important, organized poloidal magnetic flux near the horizon, consistent with GRMHD models of MAD. What could be the possible source of the magnetic flux close to the BH? Most numerical simulations of MAD start with a strong enough large-scale poloidal flux, which is eventually brought close to the BH and accumulated by flux-freezing (\citealt{Tchekhovskoy2011, McKinney2012}). However, the source of the large-scale field is not entirely obvious. It can potentially be generated in the disk itself by a dynamo action, or be advected in from some external source. The efficiency of the dynamo action in generating the coherent and strong large-scale poloidal field required to produce strong jets differs among numerical simulations. The $\alpha$-effect (responsible for the generation of the poloidal field in a dynamo) is found to be weak in simulations that start with small poloidal magnetic loops (\citealt{Hogg2018, Dhang2019, Dhang2020}). The quasi-stationary states of those simulations are in a weakly magnetized regime, popularly known as ``standard and normal evolution'' (SANE; \citealt{Narayan2012}).
However, recent simulations with a very strong (gas to magnetic pressure ratio $\beta \approx 5$) and coherent initial toroidal field showed the production of large-scale poloidal field loops of the size of the scale-height $H\propto R$ and eventually led to a MAD (\citealt{Liska2020}). It is therefore worth noting that simulations need to start with a strong and coherent large-scale field (either poloidal or toroidal) to achieve a MAD. In addition to the in-situ generation of the magnetic field by a dynamo process, it is possible that an initially weak field supplied to the disk (from the outer part of the disk, a companion star in the case of XRBs, or the ambient medium in AGNs) can in principle be amplified by flux freezing. Flux accumulation near the BH depends on the relative efficiency of the inward advection by the accretion flow and the outward diffusion due to turbulent resistivity (\citealt{Lubow1994}). Additionally, turbulent pumping can also cause outward transport of the large-scale magnetic field in a dynamo-active accretion flow (\citealt{Dhang2020}). However, a few studies proposed that vertical magnetic field accretion can be efficient in the hot tenuous surface layer (the coronal region, where the radial velocity is higher than in the mid-plane) of a hot accretion flow (\citealt{Beckwith2009}). It is also interesting to note that simulations of the large-scale accretion flow around the Galactic centre fed by the magnetized winds of Wolf-Rayet stars show efficient inward transport of magnetic field towards the centre (\citealt{Ressler2020a,Ressler2020b}). This paper studies the magnetic flux transport in a fully turbulent RIAF, unlike previous studies where magnetic flux is injected in the initial laminar condition. Therefore, we first run a simulation to attain a quasi-stationary RIAF in the SANE regime (weakly magnetized). 
Then we inject the external magnetic flux on top of the existing magnetic field in this turbulent SANE RIAF. In the later part of the paper, we refer to it as the Initial RIAF run. It is customary to use a net vertical magnetic flux threading the disk to investigate the flux transport (\citealt{Beckwith2009,Zhu2018, Mishra2019a}). However, we argue that the geometry of the external magnetic field is likely to be closed. In this paper, as a first step, we use magnetic field loops as the simplest possible form of the external magnetic field, study their transport in the turbulent RIAF, and assess whether they can saturate the BH with magnetic flux towards the MAD regime. The paper is organized as follows. In Section 2, we discuss the solution method and physical set-up of the RIAF simulations. In Section 3, we discuss the evolution of the flow, convergence and magnetic state of the Initial RIAF run. We describe the method of flux injection and its results in Section 4. Finally, the key points of the results are discussed and summarized in Sections 5 and 6. \section{Method} \label{sect:method} \subsection{Equations solved} \label{sect:method_eqs} We solve the ideal general relativistic magnetohydrodynamic (GRMHD) equations \bea \label{eq:mass} && \partial_0 \left(\sqrt{-g} \rho u^0 \right) + \partial_j \left (\sqrt{-g} \rho u^j \right) = 0\\ \label{eq:mass_energy} && \partial_0 \left(\sqrt{-g} T^{0}_{\mu} \right) + \partial_j \left( \sqrt{-g} T^{j}_{\mu} \right) = \frac{1}{2}\sqrt{-g} T^{\nu \sigma} \partial_{\mu} g_{\nu \sigma} \\ \label{eq:maxwell} && \partial_0 \left(\sqrt{-g} B^{i} \right) + \partial_j \left( \sqrt{-g} \ F^{*ij} \right) = 0 \\ \label{eq:monopole} && \frac{1}{\sqrt{-g}} \partial_{i} \left( \sqrt{-g} B^{i} \right) = 0 \eea in spherical-like Kerr-Schild coordinates ($t,\ r, \ \theta, \ \phi$) with $G=c=M_{BH}=1$. 
All length and time scales in this work are expressed in units of the gravitational radius $r_g=GM_{BH}/c^2$ and the gravitational time $t_g=r_g/c$ respectively, unless stated otherwise. Here, $g_{\mu \nu} $ and $g$ are the metric coefficients and the metric determinant respectively. Following convention, Greek indices run through [0,1,2,3], while $i$ denotes a spatial index. Equations \ref{eq:mass}, \ref{eq:mass_energy}, \ref{eq:maxwell} \& \ref{eq:monopole} describe the conservation of particle number, the conservation of energy-momentum, the source-free Maxwell equations and the no-magnetic-monopole constraint respectively. Here \be \label{eq:stress_tensor} T^{\mu \nu } = \left(\rho h + b^2 \right) u^{\mu} u^{\nu} + \left( p_{\rm gas} + \frac{b^2}{2} \right) g^{\mu \nu} - b^{\mu} b^{\nu} \ee is the stress-energy tensor and \be \label{eq:dual} F^{* \mu \nu} = b^{\mu} u^{\nu} - b^{\nu} u^{\mu} \ee is the dual of the electromagnetic field tensor, where $\rho$ is the comoving rest mass density, $p_{\rm gas}$ is the comoving gas pressure, $u^{\mu}$ is the coordinate frame 4-velocity, $\Gamma=5/3$ is the adiabatic index of the gas, $h=1 + \Gamma/(\Gamma -1) p_{\rm gas}/\rho$ is the comoving enthalpy per unit mass, and $B^{i}= F^{*i0}$ is the magnetic field in the coordinate frame. The four-magnetic field $b^{\mu}$ is related to the 3-magnetic field $B^{i}$ as \bea \label {eq:b0} && b^0 = g_{i \mu} B^{i} u^{\mu},\\ \label{eq:b1} && b^i = \frac{B^i + b^0 u^i}{u^0}. \eea For diagnostics, we also use the magnetic field components ($B_r, \ B_{\theta}, \ B_{\phi}$) defined in a spherical-polar-like quasi-orthonormal frame as \be \label{eq:B_r_th_phi} B_r = B^1, \ B_{\theta} = rB^2, \ B_{\phi} = r \sin \theta B^3. \ee We use the GRMHD code {\tt Athena++} \citep{White2016} to perform the simulations. We employ the HLLE solver \citep{Einfeldt1988} with a third-order piecewise parabolic method (PPM; \citealt{Colella1984}) for spatial reconstruction. 
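The code-unit convention ($G=c=M_{BH}=1$) maps to physical scales once a BH mass is chosen. A minimal Python sketch (the Sgr A$^*$-like BH mass below is an illustrative assumption, not a value from this paper):

```python
# Physical values of r_g and t_g for a chosen BH mass.
# Constants in SI units; the BH mass used below is an illustrative assumption.
G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
c = 2.998e8          # speed of light [m s^-1]
M_SUN = 1.989e30     # solar mass [kg]

def gravitational_scales(m_bh_in_msun):
    """Return (r_g [m], t_g [s]) for a BH of the given mass in solar masses."""
    m = m_bh_in_msun * M_SUN
    r_g = G * m / c**2   # gravitational radius
    t_g = r_g / c        # gravitational time
    return r_g, t_g

# Example: a Sgr A*-like BH mass of ~4.3e6 M_sun (assumed value)
r_g, t_g = gravitational_scales(4.3e6)
```

For such a mass, $r_g$ is of order $10^{10}$ m and $t_g$ of order tens of seconds, so a run time of $10^5\,t_g$ corresponds to weeks of physical time.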
For time integration, a second-order accurate van Leer integrator is used with a CFL number of 0.3. We use a constrained transport (CT; \citealt{Gardiner2005, White2016}) update of the face-centered magnetic fields to maintain the no-magnetic-monopole condition. \subsection{Initial Condition} \label{sect:method_init} We initialise a geometrically semi-thick disk of aspect ratio $\epsilon^{\text *}_{\rm in}=H_G/R=0.23$ embedded in a hot corona. Here $H_G$ is the Gaussian scale-height. The rest mass density distribution of the initial disk is given by \be \rho_d(r,\theta) = e^{-z^2/2H_G^2} \left( \frac{r_{\rm in}}{R} \right)^{q_d} \delta(r_{\rm in}); \ee and the gas pressure is given by \be p_{\rm gas,d} = \rho_d c^2_{sd} = \rho_d \ \epsilon^{\text * 2}_{\rm in} \left(\frac{M_{\rm BH}}{R}\right). \ee Here, $z=r {\rm cos} \theta$, $R=r {\rm sin}\theta$ and $\delta(r_{\rm in})=1/[1+e^{-(R-r_{\rm in})/5\epsilon^{\text *}_{\rm in} H_G}]$ is a tapering function with an inner disk radius $r_{\rm in}=15$. We consider $q_d=-1.5$ and take the mass of the BH to be $M_{\rm BH}=1$. It is to be noted that the Gaussian scale height $H_G$ is related to the density-weighted scale height \be H = \frac{\int \sqrt{-g} \ \rho |\frac{\pi}{2} -\theta| \ d\theta \ d\phi } {\int \sqrt{-g} \ \rho \ d\theta \ d\phi} \ee as $H=\sqrt{2/\pi} H_G$, and hence the disk aspect ratio is $\epsilon=\sqrt{2/\pi} \epsilon^{\text *}_{\rm in}$. The disk is surrounded by an atmosphere defined by \bea \rho_c = \rho_{\rm in} \left(\frac{r_{\rm in}}{r}\right)^{q_c}; ~~~~ p_{\rm gas,c} = \rho_c \frac{M_{BH}}{r} \eea with $q_c=-1.5$ and $\rho_{\rm in}=10^{-5}$. The tenuous atmosphere is static, while the gas within the disk rotates with the Keplerian speed given by \be u^{3} = \frac{r}{r-2}R^{-3/2} \ee in Boyer-Lindquist coordinates. 
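The relation $H=\sqrt{2/\pi}\,H_G$ follows from averaging $|z|$ over the Gaussian vertical profile. A quick numerical check of this identity (a sketch, not the simulation code):

```python
import numpy as np

# Check that the density-weighted scale height of a Gaussian vertical profile
# rho ~ exp(-z^2 / 2 H_G^2) equals sqrt(2/pi) * H_G.
H_G = 1.0
z = np.linspace(-10.0 * H_G, 10.0 * H_G, 200001)   # uniform, well-resolved grid
rho = np.exp(-z**2 / (2.0 * H_G**2))

# Discrete version of H = int(rho |z| dz) / int(rho dz) on a uniform grid,
# where common grid factors cancel in the ratio
H = np.sum(np.abs(z) * rho) / np.sum(rho)
```

The computed ratio $H/H_G$ reproduces $\sqrt{2/\pi}\approx0.798$ to high accuracy.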
In order to attain a quasi-stationary weakly magnetized RIAF (SANE; \citealt{Narayan2012}), we initialise multiple magnetic field loops using the vector potential (\citealt{Penna2013_init}) \begin{equation} \label{eq:vec_poten} A_{\phi} = \begin{cases} Q~ \sin \left[ f(r) - f(r_{\rm in}) \right], ~~ Q>0 \\ 0, {\rm otherwise}. \end{cases} \end{equation} Here, \bea && Q = C_B ~ {\rm sin}^3 \theta \left( \frac{p_1}{p_2} - p_{\rm cut} \right),\\ && f(r) = \left(r^{2/3} + \frac{15}{8r^{2/5}} \right) \frac{1}{\lambda_B}; \eea with $p_{1} (r, \theta)=p_{\rm gas} (r,\theta) - p_{\rm gas}(r_{B0},\pi/2)$, $p_{2} (r) = p_{\rm gas} (r,\pi/2) - p_{\rm gas}(r_{B0}, \pi/2)$. The vector potential $A_{\phi}$ vanishes for $r>r_{B0}=200$. We choose $C_B=0.5$, $p_{\rm cut}=0.4$ and $\lambda_B=0.75$, giving rise to an average plasma $\beta=800$ for the initial disk (averaging is done over the region within one scale-height of the disk). \subsection{Numerical setup} \label{sect:method_setup} We carry out all the simulations of RIAFs around a non-spinning BH (spin parameter $a=0$). The computational domain spans $r \in [1.94 , 300]$, $\theta \in [0, \pi]$, $\phi \in [0, 2\pi/3]$. It is to be noted that one grid point lies inside the event horizon in the radial direction at the root level. This allows us to obtain a causally disconnected inner boundary. Radial grids are spaced logarithmically, while meridional grids are compressed towards the mid-plane using \be \theta = \theta_u + \frac{1-s}{2} \ \sin 2\theta_u \ee with $s=0.49$, which gives rise to $\Delta \theta_{\rm pole}/\Delta \theta_{\rm eq} \approx 3.0$. Uniform grids are employed in the azimuthal direction. To improve the effective resolution, we use two levels of static refinement with a root grid resolution of $160 \times 56 \times 32$, giving rise to $\Delta r : r \Delta \theta: r \Delta \phi = 1.2:1:2.3$ at the equator in the Newtonian limit. 
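The quoted pole-to-equator cell-width ratio can be verified directly from the compression formula above; a small sketch using the root-level cell count:

```python
import numpy as np

# Meridional grid compression: theta = theta_u + (1 - s)/2 * sin(2 theta_u).
# With s = 0.49 the pole-to-equator cell-width ratio should be close to 3.
s = 0.49
n = 56                                    # root-level theta cells
theta_u = np.linspace(0.0, np.pi, n + 1)  # uniform grid in theta_u
theta = theta_u + 0.5 * (1.0 - s) * np.sin(2.0 * theta_u)

dtheta = np.diff(theta)                   # mapped cell widths
ratio = dtheta[0] / dtheta[n // 2]        # polar cell vs equatorial cell
```

Analytically, $d\theta/d\theta_u = 1 + (1-s)\cos 2\theta_u$, so the ratio approaches $(2-s)/s \approx 3.08$, consistent with the quoted value of $\approx 3.0$.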
While the first level of refinement covers $r_{L1} \in [1.94, 180], \theta_{L1} \in [\pi/3,2\pi/3]$, a second level of refinement is applied to the region $8 < r < 140$, $4 \pi/9 < \theta < 5 \pi/9$, such that the number of $\theta$-cells per scale-height is $H/r \Delta \theta \approx 40$ in the quasi-stationary state. We use a pure inflow boundary condition ($u^{1} \leq 0$) at the radial inner boundary, while at the radial outer boundary, primitive variables are set according to their initial radial gradients. Magnetic fields in the inner ghost zones are copied from the nearest computation zone. On the other hand, magnetic fields at the outer ghost zones are set according to $B_r, B_{\phi} \propto r^{-2}$, while $B_{\theta}$ is kept unchanged from the last computation zone. Polar and periodic boundary conditions are used at the meridional and azimuthal boundaries respectively. We will introduce various diagnostics as we discuss simulation results, where the data will be averaged in different ways. Here, for future reference, we mention that the symbol `$\ \bar{} \ $' is reserved for the azimuthally averaged mean quantities, while any additional averaging (e.g. vertical or time averaging) of the quantities will be indicated by $\la . \ra$ in this paper. \section{Evolution of the Initial RIAF Run} \label{sect:results_riaf} To begin with, we would like to investigate the plausibility of the conversion of a weakly magnetized RIAF (SANE) into a highly magnetized one (MAD) by a dynamo action. Therefore, we perform the Initial RIAF simulation as previously mentioned. We run the simulation until $t=8 \times 10^4$ to probe whether a SANE to MAD conversion occurs. In this section, we describe the evolution of the Initial RIAF towards stationarity, its convergence and its magnetic state. \subsection{Flow evolution of the Initial RIAF} \label{sect:flow_evo_riaf} Fig. 
\ref{fig:flow_evo} shows the evolution of the flow with time for the Initial RIAF run; colors describe the mean toroidal field ($\phi$-averaged, for definition see equation \ref{eq:mean_B}), and streamlines depict the mean poloidal fields $\bar{\mathbf{B}}_p= \bar{\mathbf{B}}_r + \bar{\mathbf{B}}_{\theta}$. The first panel of Fig. \ref{fig:flow_evo} shows the magnetic initial condition: poloidal field loops of alternate signs, aiming to achieve a weakly magnetized RIAF (SANE; \citealt{Narayan2012}) in the quasi-stationary phase. Shear in the accretion flow converts poloidal field into toroidal field, while the MRI amplifies the poloidal field. Therefore, both poloidal and toroidal fields grow exponentially on a dynamical time ($t_{\rm dyn} \approx 1/\Omega \propto R^{3/2}$), and after a few dynamical times the system likely enters the non-linear regime under the influence of parasitic instabilities (\citealt{Goodman1994}) or of different super-Alfv\'enic rotational instabilities (SARIs; \citealt{Goedbloed_Keppens2022}). The second panel of Fig. \ref{fig:flow_evo} shows the time $t=10050$, when MHD turbulence is fully developed throughout the region of interest ($r \leq 120$). However, it is worth noting that the system still remembers the initial field geometry, as indicated by the alternate signs of the mean toroidal field at different radii. As time evolves, fields of alternate polarity reconnect and the accretion flow gradually removes the signature of the initial field geometry, as can be seen in the third panel of Fig. \ref{fig:flow_evo}. Around the time $t=40200$ (fourth panel of Fig. \ref{fig:flow_evo}), the accretion flow largely forgets its magnetic initial condition and the magnetic fields generated by an in-situ dynamo start to dominate. Finally, the subsequent disk evolution is self-regulated by a combination of the MRI turbulence, dynamo and angular momentum transport. 
Magnetic fields generated by the dynamo in the quasi-stationary phase of the RIAF are large-scale not only in the azimuthal direction \citep{Dhang2019}, but also in the radial direction, as can easily be seen from the radially extended structures of the mean poloidal and toroidal fields in the last panels of Fig. \ref{fig:flow_evo}. However, the strength of the dynamo-generated large-scale field is insufficient to form a MAD, as we will discuss in section \ref{sect:sane_or_mad}. \subsection{Convergence} \label{sect:convergence} Before analyzing simulation results, we first verify that our simulations have achieved proper numerical convergence. In doing so, we calculate different numerical metrics which were found to be useful in defining the convergence of the MRI turbulence. In this work, we focus on the quality factors \bea && Q_{\theta} = \frac{2 \pi}{\Omega} \frac{| \bar{b}^{\hat{\theta}} |} {\sqrt{{\overline{w}_{\rm tot}}}} \frac{1} {dx^{\hat{\theta}}}, \\ && Q_{\phi} = \frac{2 \pi}{\Omega} \frac{| \bar{b}^{\hat{\phi}} |} {\sqrt{{\overline{w}_{\rm tot}}}} \frac{1} {dx^{\hat{\phi}}} \label{eq:quality_f} \eea and the magnetic tilt angle \be \theta_{B} = -\frac{\overline{b^{\hat{r}}b^{\hat{\phi}}}} {\overline{p}_{\rm mag}} \label{eq:tilt_f} \ee measured in an orthonormal fluid frame (\citealt{White2019_tilt}). Here the angular velocity is defined as $\Omega (r,\theta)= \bar{u}^{3}/\bar{u}^{0}$, and the total enthalpy is given by $\overline{w}_{\rm tot} (r,\theta) = \overline{\rho + \Gamma/(\Gamma-1) p_{\rm gas} + p_{\rm mag}}$. Line elements are given by $dx^{\hat{\theta}} = g_{\mu \nu} e^{\mu}_{\hat{\theta}} dx^{\nu}_{BL}$, $dx^{\hat{\phi}} = g_{\mu \nu} e^{\mu}_{\hat{\phi}} dx^{\nu}_{BL}$, where $dx^{\mu}_{BL} = \left[0,~\Delta r,~ \Delta \theta, ~\Delta \phi - a r/(r^2 - 2Mr + a^2) \Delta r \right]$. 
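In Newtonian terms, the quality factors above count grid cells per fastest-growing MRI wavelength, $\lambda_{\rm MRI} \approx 2\pi v_{\rm A}/\Omega$. A minimal sketch of this diagnostic with illustrative numbers (a Newtonian proxy, not the orthonormal-frame GR expression):

```python
import numpy as np

# Quality factor Q ~ lambda_MRI / dx, with lambda_MRI ~ 2 pi v_A / Omega.
def quality_factor(b, w_tot, omega, dx):
    """b: field component, w_tot: total enthalpy proxy,
    omega: angular velocity, dx: cell size in the corresponding direction."""
    v_a = abs(b) / np.sqrt(w_tot)        # directional Alfven speed
    return 2.0 * np.pi * v_a / (omega * dx)

# Illustrative values: halving the cell size doubles Q.
q_coarse = quality_factor(b=0.01, w_tot=1.0, omega=1.0, dx=0.025)
q_fine = quality_factor(b=0.01, w_tot=1.0, omega=1.0, dx=0.0125)
```

This makes explicit why the product $Q_\theta Q_\phi$ tracks resolution: both factors scale inversely with the local cell size.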
The quality factors $Q_{\theta}$ and $Q_{\phi}$ give the number of cells per wavelength of the fastest growing mode in the $\theta$- and $\phi$-directions respectively, while $\theta_B$ measures the magnetic field anisotropy, a key factor behind angular momentum transport. Earlier studies suggested that the toroidal and poloidal resolutions are coupled and that the product of the quality factors $Q_{\theta} Q_{\phi} \geq 200-250$ is a good indicator of convergence in MRI simulations (\citealt{Sorathia2012, Narayan2012, Dhang2019, grmhd_code2019}). We also note that $\theta_B$ takes a narrow range of values, $10^{\circ}-14^{\circ}$, for converged runs (e.g. \citealt{Sorathia2012, Hogg2018a, Dhang2019}) and turns out to be a better indicator of convergence. The top and bottom panels of Fig. \ref{fig:quality_fac} show the radial profiles of the average (averaged over $\phi$, $\theta$ and time) quality factors ($\la Q_{\theta} \ra$, $\la Q_{\phi} \ra$) and magnetic tilt angle ($\la \theta_B \ra$) close to the mid-plane of the disk for the Initial RIAF run. The meridional average is done over one scale-height above and below the mid-plane, while the azimuthal average is done over all cells. The time average is done over the interval $t=(5-10) \times 10^4$. While the quality factors indicate that our simulation is marginally resolved with $\la Q_{\theta} \ra \la Q_{\phi} \ra \gtrsim 180$ up to $r=100$, the radial profile of $\la \theta_B \ra $ clearly shows that our Initial RIAF simulation is well resolved up to $r=120$, beyond which resolvability declines because of the poorer resolution at larger radii. \subsection{Characterizing the magnetic state of the Initial RIAF} \label{sect:sane_or_mad} In this section, we characterize our Initial RIAF simulation, particularly examining the indicators that distinguish the SANE from the MAD state. 
Following \cite{Narayan2012}, we study the time evolution of the specific angular momentum of the accreting material \be j_{\rm net}(r,t) = -\frac{1}{\dot{m}}\int T^{1}_{3} \ dS_r \ee and the MAD parameter \be \phi_{BH}(r,t) = \frac{\sqrt{4 \pi}}{2\sqrt{\dot{m}}}\int |B^1| \ dS_r \ee to investigate the magnetic state of the Initial RIAF run. Here, the mass accretion rate at any radius $r$ is defined as \be \dot{m} (r,t) = - \int \rho u^{1} \ dS_r, \ee where the area element is given by $dS_r = \sqrt{-g} \ d\theta \ d\phi$. The MAD parameter $\phi_{BH}$ is a dimensionless number which is found to be useful in characterizing the magnetic state of the simulations. Earlier studies suggest that an accretion flow attains a MAD state once $\phi_{BH}$ reaches a critical value $\phi_{BH,c}\approx 40$ at the event horizon (\citealt{Tchekhovskoy2011}). Additionally, $j_{\rm net}$ is highly sub-Keplerian at the event horizon in MAD simulations, where angular momentum transport is highly efficient due to the large-scale Maxwell stress. On the contrary, $j_{\rm net}$ maintains a slightly sub-Keplerian value at the event horizon in SANE simulations (\citealt{Narayan2012}). Moreover, $j_{\rm net}$ is shown to be a good indicator of convergence in an MRI-active turbulent accretion flow. For a converged simulation, $j_{\rm net}$ maintains a sub-Keplerian value inside the ISCO with a non-decreasing trend in time throughout the simulation (\citealt{Hawley2013, Dhang2019}). Fig. \ref{fig:jnet_mad_param} shows the time evolution of $j_{\rm net}$ and $\phi_{BH}$ at two different radii, the ISCO and the event horizon. We also plot the time variation of the signed flux threading the event horizon ($r_H=2$) of the BH in the northern hemisphere, $\Phi_{NH} (r_H)$ (equation \ref{eq:flux_r}), for future reference in section \ref{sect:results_flux}. The value of $\phi_{\rm BH}$ always remains around unity, well below the value ($\ge 40$) required for the MAD state. 
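The surface integrals defining $\dot{m}$ and $\phi_{BH}$ can be sketched on a spherical shell. A minimal Python illustration with uniform synthetic data (standing in for simulation output) and the flat-space area element $\sqrt{-g}\approx r^2\sin\theta$ for simplicity:

```python
import numpy as np

# Sketch of the mdot and MAD-parameter surface integrals on an r = const
# shell. Uniform synthetic fields stand in for simulation output, and the
# flat-space area element is an assumption made for illustration.
r = 2.0                                   # shell radius (event horizon here)
n_th, n_ph = 256, 64
dth, dph = np.pi / n_th, 2.0 * np.pi / n_ph
th = (np.arange(n_th) + 0.5) * dth        # cell-centred theta
TH = np.repeat(th[:, None], n_ph, axis=1)

B1 = np.full((n_th, n_ph), 0.05)          # synthetic radial field B^1
rho_ur = np.full((n_th, n_ph), -1e-3)     # synthetic rho*u^1 (inflow)

dS = r**2 * np.sin(TH) * dth * dph        # area element sqrt(-g) dtheta dphi
mdot = -np.sum(rho_ur * dS)               # mass accretion rate
phi_bh = np.sqrt(4.0 * np.pi) / (2.0 * np.sqrt(mdot)) * np.sum(np.abs(B1) * dS)
```

Because $\phi_{BH}$ normalises the flux by $\sqrt{\dot{m}}$, the same field strength yields a larger MAD parameter at lower accretion rates.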
Such a low value of $\phi_{\rm BH}$ implies that the magnetic state of the Initial RIAF run is in the SANE regime. The slightly sub-Keplerian value of the specific angular momentum $j_{\rm net}$ is also an indicator of the SANE magnetic state of our Initial RIAF simulation. \subsection{Inflow equilibrium} We will inject external magnetic flux into the quasi-stationary turbulent RIAF to study magnetic flux transport (ref. section \ref{sect:results_flux}). Therefore, it is important to find the inflow equilibrium radius, i.e. the radius within which the flow attains a quasi-stationary state, for the Initial RIAF run. Following \cite{Narayan2012}, we investigate the variation of the average mass accretion rate $\la \dot{m}(r) \ra$ with time to determine the inflow equilibrium radius. Spatial averages are done over all $\theta$ and $\phi$. We use five different intervals, $\Delta t_0=(3.75-7.5)\times 10^3$, $\Delta t_1=(7.5-15)\times 10^3$, $\Delta t_2=(1.5-3)\times 10^4$, $\Delta t_3=(3-6)\times 10^4$ and $\Delta t_4=(6-12)\times 10^4$, for the time average. The top panel of Fig. \ref{fig:mdot_H_riaf} shows $\la \dot{m}(r) \ra$ at different time intervals for the Initial RIAF run. It can be inferred from the radial profiles of $\dot{m}$ that the inflow equilibrium radius for the Initial RIAF run reaches $r_{\rm eq}\approx60$ at late times. This is the radius that guides us in determining the injection radius for the external magnetic field loops. The bottom panel of Fig. \ref{fig:mdot_H_riaf} shows the radial variation of the disk aspect ratio $\epsilon=H/r$ in the quasi-stationary state. The disk aspect ratio $\epsilon$ slowly increases with radius up to the inflow equilibrium radius, and its value lies around $\epsilon = 0.25$ for $r\gtrsim20$, where general relativistic effects are negligible. Such a variation of the scale height in our simulation also agrees with that observed in previous GRMHD simulations of the SANE RIAF (e.g. \citealt{Narayan2012}). 
\subsection{Large-scale magnetic field and dynamo} We find that our Initial RIAF simulation is in the SANE state and that an MRI dynamo generates the large-scale magnetic fields and governs the magnetic field evolution at late times, as discussed in section \ref{sect:flow_evo_riaf}. To characterize the dynamo action, it is customary to visualise the spatio-temporal variation of the mean magnetic field. We define the mean magnetic field as the azimuthally averaged field \be \label{eq:mean_B} \bar{B}_{i} (r,\theta) = \frac{1}{\phi_{\rm ext}} \int_{0}^{\phi_{\rm ext}} B_i(r,\theta,\phi) d \phi, \ee where $\phi_{\rm ext} $ is the extension in the $\phi$ direction, and $i \in (r, \theta, \phi)$. Fig. \ref{fig:butter_br_bphi} shows the variation of the mean radial field $\bar{B}_r (R_0, \theta,t)$ (top panel) and the mean toroidal field $\bar{B}_{\phi}(R_0,\theta,t)$ (bottom panel) with latitude ($90^{\circ} -\theta$) and time at the radius $R_0=60$. This is also known as the butterfly diagram. Both radial and toroidal fields show irregular behaviour in their butterfly diagrams. Additionally, the radial field is less coherent than the toroidal field, as observed in earlier studies of the dynamo in the SANE RIAF \citep{Hogg2018a, Dhang2020}. This intermittent dynamo cycle in the RIAF is in contrast to the very regular dynamo cycles observed in a thin Keplerian disk (e.g. see \cite{Flock2012a}). The irregularity in the dynamo cycle arises because of the slightly sub-Keplerian angular velocity of the geometrically thick RIAF (\citealt{Dhang2019}). Earlier studies found that while a large-scale dynamo generates large-scale magnetic fields at high latitudes, a fluctuation dynamo dominates close to the disk mid-plane, suppressing the production of large-scale magnetic field there (\citealt{Dhang2019}). 
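The azimuthal average in equation (\ref{eq:mean_B}) filters out non-axisymmetric fluctuations while preserving the mean field. A small synthetic sketch (not simulation data) showing that a fluctuation with whole periods over $\phi_{\rm ext}$ averages to zero:

```python
import numpy as np

# Sketch of the azimuthal average defining the mean field: a fluctuation
# with an integer number of periods over phi_ext averages to zero, so the
# phi-average recovers the mean (axisymmetric) part exactly.
n_r, n_th, n_ph = 8, 16, 48
phi_ext = 2.0 * np.pi / 3.0                 # azimuthal extent of the domain
ph = (np.arange(n_ph) + 0.5) * phi_ext / n_ph

rng = np.random.default_rng(0)
mean_part = rng.normal(size=(n_r, n_th))    # synthetic axisymmetric "mean"
fluct = 0.3 * np.sin(2.0 * np.pi * 6.0 * ph / phi_ext)  # 6 periods in phi_ext

B = mean_part[:, :, None] + fluct[None, None, :]
B_bar = B.mean(axis=2)     # discrete (1/phi_ext) * integral of B over phi
```

The same averaging, applied at fixed radius over a sequence of snapshots, produces the butterfly diagram discussed above.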
This can be qualitatively understood by looking at the large and coherent magnetic structures (especially for the toroidal fields) at high latitudes and the patchier distribution near the disk mid-plane ($90^{\circ}-\theta=0^{\circ}$) in the butterfly diagram in Fig. \ref{fig:butter_br_bphi}, and also in the last two panels of Fig. \ref{fig:flow_evo}. However, it should be emphasized that although the MRI dynamo does produce large-scale magnetic field, it is not strong enough to create a MAD, which is conducive to strong jets (section \ref{sect:sane_or_mad}). This inefficiency is likely due to the weak $\alpha$-effect (\citealt{Dhang2020}), which is responsible for poloidal field generation. Additionally, the strong turbulent pumping present in an MRI-active RIAF tends to prevent the accumulation of large-scale magnetic field near the BH, as suggested in \cite{Dhang2020}. \section{Transport of external magnetic field loops} \label{sect:results_flux} In this section, we study the accretion of external magnetic flux injected on top of the fully turbulent SANE state obtained in Section \ref{sect:results_riaf}. Our aim is to investigate whether or not the system can bring the external magnetic flux available at the outer radii all the way to the central BH, eventually leading to a MAD state. While the actual configuration of the external field is unknown and could be complex, we anticipate that it is likely to be closed. Therefore, instead of the commonly used net vertical field, we inject poloidal magnetic field loops of different strengths and of different radial and vertical sizes, as shown in Fig. \ref{fig:beta_loop}, and study their transport. 
As controlled experiments, the field loops confined between the radii $r_{l_1}$ and $r_{l_2}$ are prescribed by \be \label{eq:loop} A_{\phi, l} = \sqrt{\frac{2 p_{\rm gas}(r_{lc},\pi/2)}{C_{l}}}\left[\frac{\rho(r,\theta^{\prime})}{\rho(r,\pi/2)} - \delta_{l}\right]^2 \sin \left[ \kappa(R-r_{l1})\right] \ee where $p_{\rm gas}$, $\rho$ are the initial pressure and density profiles respectively, $\theta^{\prime}=\theta + \theta_{\rm shift}$, $\kappa = \pi/(r_{l_2}-r_{l_1})$ and $r_{lc}=(r_{l_1}+r_{l_2})/2$. A vanishing $\theta_{\rm shift}$ implies that the loop centre is at the mid-plane, while a positive value of $\theta_{\rm shift}$ indicates that the loop is off-centred. The vertical size of the loop is set by $\delta_l$. The magnetization of the loop is controlled by the parameter $C_l$ and characterized by $\beta_l=\la p_{\rm gas} \ra/ \la p_{\rm mag,l} \ra$, where $\la p_{\rm gas} \ra$ and $\la p_{\rm mag,l} \ra$ are the gas pressure of the Initial RIAF at the time of loop injection and the magnetic pressure of the injected loop respectively. Note that the averages are performed within the loop. We choose $r_{l_1}$ to be the inflow equilibrium radius $r_{eq}=60$, and different values of $r_{l_2}$ as tabulated in Table \ref{tab:loop}. We restart the Initial RIAF run at $t= 8.06 \times 10^4$, inject the external field loops (equation \ref{eq:loop}) and run till $t=t_{\rm end}$ as tabulated in Table \ref{tab:loop}. It is to be noted that we also run the Initial RIAF simulation longer to compare it with the simulations with injected magnetic field loops. We inject loops of different strengths spanning a wide range of plasma $\beta$, from $\beta_l=7000$ (weak, but stronger than the pre-existing mean fields produced by the MRI dynamo) to $\beta_l=70$ (a very strong field typical of MAD simulations, although there the injected field is usually of much larger size than in our simulations). 
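The poloidal loop field follows from the curl of $A_{\phi,l}$. A sketch of this construction in axisymmetric cylindrical coordinates (flat space, with an illustrative analytic potential rather than equation \ref{eq:loop} itself), showing that the resulting field is divergence-free by construction:

```python
import numpy as np

# Recover the poloidal field from a vector potential A_phi in axisymmetric
# cylindrical coordinates (flat space, for illustration):
#   B_R = -dA_phi/dz,   B_z = (1/R) d(R A_phi)/dR.
R = np.linspace(0.5, 2.0, 301)
z = np.linspace(-1.0, 1.0, 301)
RR, ZZ = np.meshgrid(R, z, indexing="ij")

A_phi = RR * np.exp(-ZZ**2)     # illustrative analytic potential (assumption)

B_R = -np.gradient(A_phi, z, axis=1, edge_order=2)
B_z = np.gradient(RR * A_phi, R, axis=0, edge_order=2) / RR

# div B = (1/R) d(R B_R)/dR + dB_z/dz should vanish to truncation error
div_B = (np.gradient(RR * B_R, R, axis=0, edge_order=2) / RR
         + np.gradient(B_z, z, axis=1, edge_order=2))
```

Initialising the loops through $A_{\phi,l}$ rather than through the field components guarantees that no magnetic monopoles are introduced at injection.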
We also explore the effects of other parameters, such as the radial and vertical sizes and the injection latitude of the loops, on the flux transport process, considering loops with the fiducial plasma $\beta$ values $\beta_l=3500$ and $\beta_l=1500$. Additionally, we study the transport of big magnetic loops of strength ($\beta_l=12200$) similar to that of the mean fields produced by the MRI dynamo in the quasi-stationary phase of the Initial RIAF run. The configurations of the injected field loops from all these restarts are illustrated in Figure \ref{fig:beta_loop}. \begin{table*} \begin{tabular}{lp{2.5cm}lp{1.8cm} p{1cm} p{1cm} p{1cm} p{1cm} p{1cm} p{1cm} p{1cm} p{1cm} } \hline \multicolumn{10}{|c|}{Parameters} \\ \hline Name & $C_l$ &$\delta_l$ &$r_{l1}$ & $r_{l2}$ & $z_{l} $ & $\beta_{l}$ & $\theta_{\rm shift}$ & $\Phi_{l, {\rm max}}$ & $t_{\rm end}/10^5$ \\ \hline Initial RIAF &- &- &- &- &- &- &- &- & 1.2 \\ $\beta \_7000$ & $10^{-3}$ &0.2 & 60 &90 & 1.5 H & 7000 &$0^{\circ}$ & 1.48 & 1 \\ $\beta \_3500$ & $5 \times 10^{-4}$ &0.2 &60 &90 & 1.5 H & 3500 &$0^{\circ}$ & 2.01 & 1.2 \\ $\beta \_1500$ & $ 2.21 \times 10^{-4}$ & 0.2 &60 &90 & 1.5 H & 1500 &$0^{\circ}$ & 2.93 & 1.2\\ $\beta \_700$ & $10^{-4}$ &0.2 &60 &90 &1.5 H & 700 &$0^{\circ}$ & 4.28 & 1.2\\ $\beta \_200$ & $2.78 \times 10^{-5}$ &0.2 &60 &90 &1.5H & 200 &$0^{\circ}$ &7.95 & 1\\ $\beta \_70$ &$10^{-5}$ &0.2 &60 &90 & 1.5 H & 70 &$0^{\circ}$ & 13.14 & 1.2\\ $\beta \_3500 \_{\rm tall}$ & $1.168\times 10^{-3}$ &0.0016 &60 &90 &2.5H &3500 &$0^{\circ}$ & 2.04 & 1\\ $\beta \_1500 \_{\rm tall}$ & $5 \times 10^{-4}$ &0.0016 &60 &90 &2.5 H &1500 &$0^{\circ}$ &2.97 & 1 \\ $\beta \_3500 \_{\rm big}$ &$1.429 \times 10^{-5}$ &0.2 &60 &120 &1.5H &3500 &$0^{\circ}$ &3.04 & 1\\ $\beta \_1500 \_{\rm big}$ &$ 6.12 \times 10^{-5}$ & 0.2 &60 &120 &1.5 H & 1500 &$0^{\circ}$ & 4.42 & 1\\ $\beta \_12200 \_{\rm big}$ & $5 \times 10^{-4}$ &0.2 &60 &120 &1.5 H & 12200 &$0^{\circ}$ &1.70 & 1.1\\ $\beta \_3500 \_{\rm 
offc}$ & $5.88 \times 10^{-4}$ &0.2 &60 &90 &1.5H & 3500 & $15^{\circ}$ & 1.56 & 1\\ $\beta \_1500 \_{\rm offc}$ & $2.55 \times 10^{-4}$ &0.2 &60 &90 &1.5H & 1500 & $15^{\circ}$ & 2.32 & 1\\ \hline \end{tabular} \caption{Details of the injected external field loops, characterized by plasma $\beta_l$, vertical size $z_l$ and confinement radii $r_{l1}$ and $r_{l2} \ge r_{l1}$. $\theta_{\rm shift}$ and $\Phi_{l, {\rm max} }$ are the tilt of the loop with respect to the mid-plane and the total magnetic flux of the injected loop in code units, respectively. Magnetic field loops are injected at $t=8.06 \times 10^4$ and the runs continue till $t=t_{\rm end}$. } \label{tab:loop} \end{table*} \subsection{Diagnostics} Before discussing the results in detail, we define the following quantities used to characterize the transport of external magnetic flux: the radial magnetic flux threading the $r=const$ surface in the northern hemisphere \be \label{eq:flux_r} \Phi_{NH} (r) = \int_{\theta=0}^{\pi/2} \int_{0}^{\phi_{ext}} \sqrt{4 \pi} B_r(r,\theta,\phi) \sqrt{-g} \ d\theta d\phi\ , \ee the vertical flux threading the mid-plane region \be \label{eq:flux_th} \Phi_{\rm mid} (r) = \int_{r=r_H}^{r} \int_{0}^{\phi_{ext}} - \frac{\sqrt{4 \pi}}{r} B_{\theta}(r,\theta=\pi/2,\phi) \sqrt{-g} \ dr \ d\phi\ , \ee and the total flux available for accretion at different $r$ in the northern hemisphere \be \label{eq:flux_mid} \Phi_{\rm tot} (r) = \Phi_{\rm NH} (r_H) + \Phi_{\rm mid} (r); \ee where $r_H=2r_g$ is the event horizon radius of the BH. We also define a normalised flux, representing the efficiency of the flux transport, as \be \label{eq:efficiency} f_B=\frac{\Phi_{NH} (r_H)}{ \Phi_{l, {\rm max}} }. \ee Here, $\Phi_{l, {\rm max}}$ is the total flux at the loop centre at the time of injection or at the beginning of the simulation (only for the Initial RIAF run; also see Table \ref{tab:loop}). 
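These diagnostics obey a consistency relation for any divergence-free field: $\Phi_{\rm tot}(r)=\Phi_{NH}(r_H)+\Phi_{\rm mid}(r)$ must equal $\Phi_{NH}(r)$. A sketch verifying this for a uniform vertical field in flat space (with $\sqrt{-g}\approx r^2\sin\theta$ and full azimuthal coverage assumed for illustration):

```python
import numpy as np

# Consistency check of the flux diagnostics for a uniform vertical field
# B_z = B0: in spherical components, B_r = B0 cos(theta) and
# B_theta = -B0 sin(theta). Since div B = 0, Phi_NH(r_H) + Phi_mid(r)
# must equal Phi_NH(r).
B0, r_H, phi_ext = 0.1, 2.0, 2.0 * np.pi

def phi_nh(r, n=4000):
    """Radial flux through the northern hemisphere of the r = const sphere."""
    dth = 0.5 * np.pi / n
    th = (np.arange(n) + 0.5) * dth
    B_r = B0 * np.cos(th)
    return np.sqrt(4.0 * np.pi) * phi_ext * np.sum(B_r * r**2 * np.sin(th)) * dth

def phi_mid(r, n=4000):
    """Vertical flux through the mid-plane annulus between r_H and r."""
    dr = (r - r_H) / n
    rr = r_H + (np.arange(n) + 0.5) * dr
    B_th_mid = -B0                       # B_theta at theta = pi/2
    # integrand: -sqrt(4 pi)/r * B_theta * sqrt(-g), with sqrt(-g) = r^2
    return -np.sqrt(4.0 * np.pi) * phi_ext * np.sum(B_th_mid * rr) * dr

r_out = 10.0
phi_tot = phi_nh(r_H) + phi_mid(r_out)   # should equal phi_nh(r_out)
```

The same bookkeeping applied to simulation data is what allows $\Phi_{\rm tot}(r,t)$ to track how injected flux is partitioned between the horizon and the mid-plane at any instant.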
\subsection{Results for the fiducial parameter: plasma $\beta$} First, we discuss the dependence of the flux transport and the emergent accretion properties on the strength of the loops, defined by the plasma $\beta_l$ (also see Table \ref{tab:loop} and Fig. \ref{fig:beta_loop}). We start by discussing the qualitative picture of the evolution of the magnetic flux injected between the radii $r=60$ and $r=90$ on top of the existing magnetic field in the quasi-stationary Initial RIAF. It is worth noting that the plasma $\beta$ of the total (mean + fluctuation) magnetic field is $\beta_{\rm tot}=70$, while that of the mean field alone is $\beta_{\rm mean}=12250$, for the Initial RIAF run. Fig. \ref{fig:loop_evo} shows the evolution of external magnetic field loops for three different values of $\beta_l$. Colour shows the intensity of the mean radial field $\bar{B}_r$, while streamlines describe the mean poloidal fields $\mathbf{\bar{B}_p}=\mathbf{\bar{B}_r} + \mathbf{\bar{B}_{\theta}}$. The top panels show the time evolution of the weakly magnetized loop of strength $\beta_l=7000$. Injection of the weak external magnetic field loops re-excites the MRI in the accretion flow, enhances the accretion stresses (see Fig. \ref{fig:acc_stress}) and hence leads to higher mass accretion rates (see Fig. \ref{fig:mdot_strength}). Poloidal flux slowly drifts towards the BH and a fraction of the injected flux accumulates near the BH (shown quantitatively in Fig. \ref{fig:flux_r_strength}). We see an increase in the radial magnetic flux threading the BH compared to that in the Initial RIAF run. This can be seen by comparing the snapshot at $t=80601$ (the flux level close to the BH does not change significantly in the quasi-steady state of the Initial RIAF, as shown in Fig. \ref{fig:flow_evo}) and the last panel at $t=10^5$. Next, we discuss the transport of moderately strong magnetic field loops of $\beta_l=1500$, as shown in the middle panels of Fig. \ref{fig:loop_evo}. 
Magnetic flux reaches the BH in a shorter time than in the weak field case of $\beta_l=7000$. This is due to the stronger accretion stresses (see Fig. \ref{fig:acc_stress}) produced in the accretion flow by the injection of stronger magnetic field loops. The radial magnetic field strength in the polar region is found to be stronger than in the weak field case at late times. This is because of the larger amount of flux associated with the loop of $\beta_l=1500$ compared to the loop of $\beta_l=7000$. Finally, we examine the transport of the strong field loops with $\beta_l=70$, similar to the strength of the total (mean + fluctuation) magnetic field in the quasi-stationary phase of the Initial RIAF. Unlike the previous two weak-field cases, here the injected loops are so strong that the most unstable MRI wavelength becomes comparable to the disk scale height, driving strong channel flows over the entire vertical extent of the disk. This further generates a spike in the large-scale Maxwell stress (see Fig. \ref{fig:acc_stress}), producing a strong inflow of mass and magnetic flux. Magnetic flux reaches the BH very quickly and fills the polar region. The system remains in a strongly turbulent state till the end of the simulation. The qualitative pictures discussed above are representative of all our simulations. The animations of the other runs with external magnetic field loops, along with the Initial RIAF run, can be viewed in this \href{https://www.youtube.com/watch?v=3GyW2jRDbcU&list=PLUKo6vYd0sPJ69kl5JdWoBdzXAI84o7V4}{playlist}. In the upcoming sub-sections, we quantify the magnetic flux transport with different metrics in greater detail for all the runs listed in Table \ref{tab:loop}. \subsubsection{Evolution of magnetic flux} Fig. 
\ref{fig:flux_r_strength}(a) shows the time evolution of the magnetic flux through the event horizon in the northern hemisphere, $\Phi_{NH} (r_H)$, for runs with different strengths of injected magnetic field loops, compared with that of the Initial RIAF run. It is clearly visible that the injection of an external loop enhances the amount of flux at the event horizon. In the runs with low $\beta_l\lesssim1000$, there is a transient rise of $\Phi_{\rm NH}$ due to the fast transport by the strong MRI channel flow. In the most extreme case of $\beta_l=70$, the transient phase leads to a strong initial spike in $\Phi_{\rm NH}$, followed by a gradual decline towards a more steady flux level. For other runs with $\beta_l\gtrsim1000$, the build-up of magnetic flux at the BH horizon is more gradual, and the build-up is slower for runs with higher $\beta_l$. It is worth noting that none of our simulations with injected loops reach the MAD state, with the MAD parameter ranging from $\phi_{BH}=2$ to $\phi_{BH}=10$. We show the spatio-temporal variation of the total flux $\Phi_{\rm tot} (r,t)$ available for accretion in the northern hemisphere in Fig. \ref{fig:flux_mid_rt_strength} to obtain a more complete picture of the flux transport at different radii. Each panel of Fig. \ref{fig:flux_mid_rt_strength} describes the evolution of the radial profile of $\Phi_{\rm tot}$ over time for runs with different $\beta_l$. The first panel in the top row corresponds to the Initial RIAF run. It again demonstrates that the system forgets its initial magnetic field configuration after the time $3-4\times10^4$. The rest of the panels show the spatio-temporal evolution of $\Phi_{\rm tot}(r,t)$ for the other runs after we inject external magnetic field loops. In accordance with Fig. \ref{fig:flux_r_strength}, we see that there are two regimes of flux transport depending on the strength of the injected loop. 
With very strong external flux ($\beta_l=70, \ 200$), the external flux is quickly transported both inwards and outwards, characteristic of the channel flows with flow directions alternating over height, as seen in the last two bottom panels of Fig. \ref{fig:flux_mid_rt_strength}. The channel flows lead to an initial transient transport of a large fraction of the initial flux into the BH, followed by subsequent relaxation and diffusion towards a more steady flux level. With weak external flux ($\beta_l \gtrsim3500$), the initial external magnetic flux gradually diffuses while being advected inwards. In the end, a fraction of the flux overcomes diffusion to reach the BH, which will be discussed more quantitatively in the next subsection. In between these two regimes lies the case of moderately strong external flux ($\beta_l=700, \ 1500$), for which the flux transported by the channel flows diffuses before reaching the BH, and subsequent transport is likely mediated by a combination of advection and diffusion. \subsubsection{The efficiency of transport} Till now, we have considered the total flux as the primary diagnostic, irrespective of the amount of flux associated with the injected loops. However, it is worth noting that different loops carry different amounts of flux. Therefore, the normalised flux $f_B$ defined in equation \ref{eq:efficiency} is a better indicator of the efficiency of flux transport. Fig. \ref{fig:flux_r_strength}(b) shows the time variation of the fraction $f_B$ for different runs. In the regime of very strong injected flux ($\beta_l=70, \ 200$), the efficiency is quite high (up to $\sim50\%$) during the initial phase when channel flows dominate. Later, the efficiency goes down to around 15-20$\%$. In the weak field regime ($\beta_l \geq 3500$), although flux at the BH accumulates gradually, the efficiency of flux transport is similar, around 15-20 per cent. 
For comparison, we also show the result for the Initial RIAF run, where we define $\Phi_{l, {\rm max}}$ by calculating the flux at a radius $r=75$ at $t=0$. Finally, it is interesting to note that flux transport appears to be more efficient, reaching about 20-40$\%$, when the field strength is in between the two regimes, i.e. for $\beta_l=700$ and $\beta_l=1500$. \subsubsection{Effects on mass accretion rate and accretion stresses} \label{sect:mdot_strength} In this subsection, we study how the injection of external magnetic flux influences the accretion properties, such as the mass accretion rate and accretion stresses. Fig. \ref{fig:mdot_strength} shows the time history of the mass accretion rate at the event horizon for runs with a range of $\beta_l$. We further show in Fig. \ref{fig:acc_stress} the space-time plot of the azimuthally and vertically (over one scale-height) averaged total accretion stress $\la W_{\rm Tot} \ra$, which is a combination of the Maxwell and Reynolds stresses defined in the orthonormal fluid frame (see section \ref{sect:convergence}) as \bea && W_{\rm Max} = 2 p_{\rm mag} \ u^{\hat{r}} \ u^{\hat{\phi}} - b^{\hat{r}}\ b^{\hat{\phi}}, \\ && W_{\rm Rey} = \left(\rho + \frac{\gamma}{\gamma-1} p_{\rm gas} \right) u^{\hat{r}} \ u^{\hat{\phi}}, \\ && \la W_{\rm Tot} \ra= \la W_{\rm Max} \ra+ \la W_{\rm Rey} \ra \ , \eea where the Maxwell stress is the dominant component. Fresh injection of external field loops reignites the linear MRI, leads to higher accretion stresses and hence to an increase in the mass accretion rate. We find that with high field strength in the loop (i.e. $\beta_l\lesssim200$), there is a substantially enhanced accretion stress, leading to a rapid, strong and transient increase of the accretion rate. The stresses are reduced after the transient phase, but are still much stronger than those in the Initial RIAF run within the simulation time, reflecting the prolonged influence of the initial flux loop. 
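The stress definitions above can be assembled into a short sketch. The sample cell values and the adiabatic index $\gamma=5/3$ below are assumptions for illustration, not quantities taken from the simulations.

```python
# Minimal sketch of the accretion stress diagnostics:
#   W_Max = 2 p_mag u^r u^phi - b^r b^phi,   with p_mag = b^2 / 2
#   W_Rey = (rho + gamma/(gamma-1) p_gas) u^r u^phi
# All quantities are in the orthonormal fluid frame; the sample values
# and gamma = 5/3 are illustrative assumptions.
GAMMA = 5.0 / 3.0

def maxwell_stress(b_r, b_phi, b_sq, u_r, u_phi):
    p_mag = 0.5 * b_sq                      # magnetic pressure
    return 2.0 * p_mag * u_r * u_phi - b_r * b_phi

def reynolds_stress(rho, p_gas, u_r, u_phi):
    enthalpy = rho + GAMMA / (GAMMA - 1.0) * p_gas
    return enthalpy * u_r * u_phi

def total_stress(b_r, b_phi, b_sq, rho, p_gas, u_r, u_phi):
    return (maxwell_stress(b_r, b_phi, b_sq, u_r, u_phi)
            + reynolds_stress(rho, p_gas, u_r, u_phi))

# Sample cell: weak field, slow inflow, sub-Keplerian rotation.
b_r, b_phi, b_sq = 0.01, -0.05, 0.01**2 + 0.05**2
rho, p_gas = 1.0, 0.1
u_r, u_phi = -0.02, 0.4
print(total_stress(b_r, b_phi, b_sq, rho, p_gas, u_r, u_phi))
```

In a simulation these expressions would be evaluated cell by cell and then azimuthally and vertically averaged to form $\la W_{\rm Tot} \ra$.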
Simulations with $\beta_l\gtrsim3500$ show only a modest increase of the accretion stress and accretion rates compared to the Initial RIAF run, indicating that the injection of external flux has only a minor impact on disk turbulence. For simulations with intermediate $\beta_l$, there is a modest enhancement of the accretion stress, resulting in a modest enhancement of the accretion rate. We also note that after $t=1.1 \times 10^5$, despite having a higher level of magnetic flux (compared to the Initial RIAF run; Fig. \ref{fig:flux_mid_rt_strength}), the mass accretion rate in the runs with injected loops is very similar to that in the Initial RIAF run. This is because the enhanced stresses in the runs with injected loops deplete the mass supply in the disk at earlier times. \subsubsection{Disk and flow structures} In this subsection, we further examine how external magnetic flux changes the disk structure and flow properties. We start by considering the radial profiles of the surface density, defined as\footnote{We note that the standard definition (\ref{eq:Sigma}) asymptotes to $\rho r^2d\theta$ at large radii, with an extra $r$ factor compared to the Newtonian definition (assuming surface density is defined by integrating along spherical shells).} \be \Sigma(t,r) = \frac{1}{\phi_{\rm ext}} \int_{\phi=0}^{\phi_{\rm ext}} \int_{\theta=0}^{\pi} \rho \ \sqrt{-g} \ d \theta \ d\phi\ ,\label{eq:Sigma} \ee and the radial velocity $\la u^1 (r) \ra$, averaged within one scale height about the midplane. The results for the Initial RIAF run and the runs with external field loops, time-averaged over $t=9\times 10^4-10^5$, are shown in Fig. \ref{fig:surf_den}. In the Initial RIAF run, we see that the accretion velocity approaches the free-fall velocity ($v_{ff} = \sqrt{2/r}$) within the ISCO, while further out, up to the radius of inflow equilibrium, the accretion velocity ranges between $0.01-0.5$ of the Keplerian velocity. 
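The extra factor of $r$ noted in the footnote to the surface-density definition can be verified directly in a simple limit: for a constant density $\rho_0$, and with the flat-space $r^2\sin\theta$ standing in for $\sqrt{-g}$ (an illustrative substitute for the Kerr metric factor), Eq. (\ref{eq:Sigma}) gives $\Sigma = 2\rho_0 r^2$. A short numerical sketch, with arbitrary sample values:

```python
import numpy as np

# Numerical check of the surface-density definition for constant density
# rho0, with sqrt(-g) approximated by the flat-space r^2 sin(theta).
# Analytically Sigma(r) = 2 * rho0 * r^2, showing the extra factor of r
# relative to the Newtonian definition noted in the footnote.
N_th = 512
d_th = np.pi / N_th
th = (np.arange(N_th) + 0.5) * d_th   # cell-centred theta (midpoint rule)

def sigma_const_rho(r, rho0):
    # the phi-average of a phi-independent integrand is just the theta integral
    integrand = rho0 * r**2 * np.sin(th)
    return integrand.sum() * d_th

r, rho0 = 50.0, 1.0e-3
print(sigma_const_rho(r, rho0), 2.0 * rho0 * r**2)
```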
Upon imposing an external field, the higher accretion stresses lead to higher accretion velocities. The enhancement can be up to a factor of $\sim10$ in the strong field case with $\beta_l=70$ at the representative radii of $r\sim30-60r_g$, while for weak field runs (e.g., $\beta_l\gtrsim3500$), the accretion velocity is only enhanced by a modest factor of $\sim2$. We also note that the profile of $\la u^1(r) \ra$ evolves over time, accompanying the magnetic flux transport, but qualitatively, the profiles shown in Fig. \ref{fig:surf_den} are representative over the duration of our simulations. The altered accretion velocity profile $\la u^1(r) \ra$ further modifies the surface density profile. Generally, after imposing an external field loop, the surface density becomes steeper compared to the surface density profile in the Initial RIAF run, though the deviation is only modest. The surface density profile also evolves over time. We note that earlier RIAF simulations of the SANE state already indicated that there might not be any universal power law for the surface density profile and other flow properties \citep{White2020}. When supplied with external magnetic flux in the outer disk, our results suggest additional surface density variations during the process of magnetic flux transport. In other words, the dynamics of RIAFs depend on the magnetised mass reservoir at larger radii. \subsection{Results for the other parameters} In this section, we assess the robustness of our fiducial simulation results by considering different geometries for the injected field loops. In particular, we change the loop sizes (both vertical and radial) and the injection latitudes. We focus on loops of the fiducial strengths $\beta_l=3500$ and $\beta_l=1500$, respectively. Additionally, we study the transport of a big loop of strength ($\beta_l=12200$) similar to that of the mean poloidal fields produced by the MRI dynamo in the quasi-stationary phase of the Initial RIAF run. Fig. 
\ref{fig:flux_r_size} shows the time evolution of the radial magnetic flux threading the event horizon in the northern hemisphere, $\Phi_{\rm NH} (r_H)$ (top panels), and the flux transport efficiency (bottom panels) for these additional simulations. \subsubsection{Vertical and radial sizes} The left panels of Fig. \ref{fig:flux_r_size} compare the flux transport for taller loops of vertical size $z_l =2.5H$ with loops of similar strength but of the fiducial size $z_l=1.5H$. We find that the vertical size of the loops does not affect the amount of flux reaching the BH, and the efficiency of flux transport remains almost unaltered with the change of the vertical size of the loops. Similarly, the middle panels of Fig. \ref{fig:flux_r_size} compare the flux transport between loops of different radial sizes, where we consider bigger loops of radial size $\Delta r_l=60$ as opposed to the fiducial radial size of $\Delta r_l=30$. We observe that a larger amount of flux reaches the BH for the bigger loops, which is reasonable because more magnetic flux is available in these loops compared to their smaller counterparts. However, the fraction of the flux reaching the BH remains similar for both the smaller and bigger loop cases with the same plasma $\beta_l$. This result also holds for our additional run with $\beta_l=12200$. This indicates that the efficiency of flux transport remains unaffected by the radial extent of the injected loops. \subsubsection{Injection latitude} In addition to studying the effects of the strength and size of the loops on the transport process, we also consider injecting off-centred loops with plasma $\beta_l=3500$ and $\beta_l=1500$ to examine whether loop injection away from the mid-plane facilitates flux transport or not. The comparison with our fiducial injection prescription is shown in the right panels of Fig. \ref{fig:flux_r_size}. Surprisingly, injection of off-centred loops leads to a distinctly lower flux level at the event horizon. 
While it is not entirely clear why this is the case, we speculate that it is related to stronger magnetic reconnection in the off-centred case, which leads to more considerable destruction of magnetic flux during the interplay between the injected field and the dynamo-generated background field. The stark difference between the magnetic field evolution in the off-centred and fiducial cases can also be seen by comparing the movies describing the magnetic field loop evolution for the runs beta\_1500 (\href{https://www.youtube.com/shorts/duxJRtFO0zI}{movie-beta-1500}) and beta\_1500\_offc (\href{https://www.youtube.com/shorts/y0lweEVgfGI}{movie-beta-1500-offc}), respectively. \section{Discussion} \subsection{Inefficiency of the dynamo in SANE/RIAF} We threaded the initial geometrically semi-thick disk ($H/R\approx 0.2$) with small magnetic field loops of alternating polarity and attained a quasi-stationary, weakly magnetized RIAF (SANE; see Fig. \ref{fig:jnet_mad_param}) that does not remember the initial field geometry (see sections \ref{sect:flow_evo_riaf} and \ref{sect:sane_or_mad}). An MRI dynamo is responsible for generating and sustaining magnetic fields (both small-scale and large-scale) in the quasi-stationary RIAF (\citealt{Hawley2013,Hogg2018}). A large-scale dynamo does operate (\citealt{Dhang2019}) and generates large-scale magnetic fields in the weakly magnetized RIAF (see the last two panels of Fig. \ref{fig:flow_evo}), but it is not efficient enough to produce magnetic fields strong enough to convert a SANE to a MAD. This result aligns with earlier works which found that dynamo action in a SANE RIAF does not lead to jet formation (\citealt{Beckwith2008,Narayan2012}). This inefficiency is likely attributable to insufficient poloidal field generation (a weak $\alpha$-effect) and strong turbulent pumping, which transports large-scale magnetic field radially outward in a RIAF (\citealt{Dhang2020}). 
Recently, \citet{Liska2020} reported that when starting the simulation with an unusually strong ($\beta \approx 5$) and coherent toroidal magnetic field, the MAD state can be achieved at late times. They argued that an MRI dynamo can produce strong poloidal field loops of size $H \propto R$ from the very strong and coherent initial toroidal field. The further out the creation location is, the bigger the loops are. Most of the loops move outward, while a few `lucky' loops created at large radii are somehow arrested and stretched inward, leading to the MAD state. However, how the accretion disk can possess such a coherent initial toroidal field of the same polarity spanning several decades in radius in the first place remains questionable. Overall, we reaffirm that the MRI dynamo in the standard SANE state does not spontaneously generate a strong, coherent, large-scale poloidal field to turn the disk into the MAD state. In the absence of an initial poloidal field, achieving the MAD state may require an unusually strong and coherent toroidal field that may be impractical in reality. \subsection{Plausible sources of external magnetic fields} \label{sect:source_loop_discuss} In this work, we considered the possibility that the disk acquires external poloidal field in the form of field loops of different sizes and shapes. What can be the source of such external field loops? While definitive evidence is lacking, we speculate that accreting such external field loops is plausible in a variety of systems. In the hard state of the XRBs, a RIAF close to the BH is proposed to be connected to an outer thin disk (\citealt{Esin1997, Done2007}), which can supply large-scale magnetic flux to the inner RIAF. The outer thin disk can in principle harbour a large-scale magnetic field due to efficient dynamo action (\citealt{Flock2012a, Gressel2015}), or due to coronal accretion of magnetic flux (\citealt{Guilet2012}) from the companion/donor star, or a combination of both. 
The donor stars in low-mass XRBs are likely to be either K or M type dwarf stars (\citealt{Fragos2015}) or evolved stars (e.g., as in GRS 1915+105). The donor stars in XRBs are expected to be tidally locked to the rotation period of the binaries, with orbital periods of hours to days (\citealt{Coriat2012}). These fast-rotating dwarf stars show vigorous magnetism, with surface magnetic fields of strength ($\sim 10^3$ G) similar to sunspots (\citealt{West2008,Davenport2016}). Additionally, in active regions, the magnetic field is one order of magnitude stronger than the average stellar magnetic field. Magnetized matter from the donor star passes through the first Lagrange point ($L_1$) and enters the Roche lobe of the primary (accretor) almost ballistically, circularizing at the circularization radius (\citealt{Frank2002}). We speculate that the mass-loss from the $L_1$-nozzle may proceed through a chain of mass blobs encircled by field loops (e.g., as also considered in \citealt{Ju2017}), which may get amplified and become quasi-axisymmetric during the circularization process. Thus, if this external flux can be brought in through the outer thin Keplerian disk, it may further feed the inner RIAF, where flux transport is efficient, and saturate the BH with flux. The accretion flow in a low-luminosity AGN is also thought to be a RIAF. In this case, the gas supplied by the ambient medium to the accretion flow is magnetized. It can harbour a large-scale magnetic field, as inferred from the observation of large-scale poloidal flux in the Galactic centre (\citealt{Nishiyama2010}). Recent numerical simulations by \citet{Ressler2020a,Ressler2020b} found that the large-scale accretion flow around the Galactic centre, fed by the winds of Wolf-Rayet stars, can achieve the MAD state, with efficient inward transport of the magnetic field embedded in the accreting material. 
Their injected magnetic fields have a purely toroidal component with random orientation; thus we may consider that such fields effectively enter the accretion disk in the form of closed field loops from random directions. Our results are in line with their findings, while our controlled experiments further provide a physical basis for better understanding the efficient flux transport around SMBHs. \subsection{Transport efficiency in the SANE and the possibility of transformation to MAD} In Section \ref{sect:results_flux}, we have seen that, except for off-centred loops, transport of externally injected magnetic flux loops is relatively efficient, with typically $\sim20\%$ of the available flux ending up being accreted onto the central BH, regardless of the initial field strength and size. Note that none of our simulations reach the MAD state, but if this relatively high efficiency of flux transport obtained from our controlled experiments is universal, we can estimate the requirement on the external flux to potentially transform a SANE disk into a MAD. The MAD parameter $\phi_{\rm BH}$ (the normalised unsigned flux threading the BH) is related to the (signed) magnetic flux $\Phi_{NH}$ threading the northern hemisphere of the BH as follows \be \phi_{\rm BH} \approx \frac{\Phi_{NH} (r_H)}{\sqrt{\dot{m}}r_g c^{1/2}}. \ee We have found that a certain fraction $f_B = \Phi_{NH} (r_H)/\Phi_{in}$ (equation \ref{eq:efficiency}) of the injected flux $\Phi_{\rm in}$ reaches the BH. Earlier numerical experiments suggest that the MAD state could be achieved if the MAD parameter exceeds a critical value $\phi_{BH,c}$ at the event horizon (e.g., see \citealt{Tchekhovskoy2011}). This indicates a critical value of the injected flux $\Phi_{in,c}$, the plausible minimum flux required for the MAD state, given by \be \Phi_{\rm in,c} = \left(\frac{r_g c^{1/2}}{f_B} \right) \phi_{{\rm BH},c} \sqrt{\dot{m}}. 
\ee Therefore, the minimum poloidal magnetic field required at the injection location is given by \be B_{\rm in,c} = \left(\frac{r_g c^{1/2}}{2 \pi f_B} \right) \left(\frac{r}{\Delta r} \right) \left( \frac{\phi_{{\rm BH},c}} {r^2} \right) \sqrt{\dot{m}}, \ee where we have estimated that, for a loop centred on radius $r$ with half-width $\Delta r$, $\Phi_{\rm in,c}\approx 2\pi B_{\rm in,c} r\Delta r$. If we take $\Delta r/r=0.2$, then the value of $B_{\rm in,c}$, in terms of the Eddington accretion rate $\dot{M}_{\rm Edd}=1.5 \times 10^{19} \ M_{10} \ {\rm g} \ {\rm s}^{-1}$, is given by \be B_{\rm in,c} \approx \frac{10^{4}}{f_B} \ \phi_{40} \ r_{100}^{-2} \ \dot{m}_{-4}^{1/2} \ M_{10} ^{-1/2} \ {\rm G}, \ee where $\phi_{40}=\phi_{{\rm BH},c}/40$, $r_{100}=r/(100r_g)$, $\dot{m}_{-4} = \dot{m}/10^{-4}\dot{M}_{\rm Edd}$, and $M_{10}=M_{BH}/10M_{\odot}$. Would this amount of magnetic field be available for accretion at the outer radii of the RIAF? We estimate the poloidal field strength available for accretion in the case of an XRB. In the low/hard state, the RIAF close to the BH is proposed to be connected to an outer thin disk. A plausible source of the large-scale magnetic field in the thin disk could be dynamo action. Another scenario would be the advection of large-scale field loops from the companion star, as discussed in section \ref{sect:source_loop_discuss}. Independent of the mechanism, we can estimate the characteristic poloidal field strength in the thin accretion disk, given that the accretion is driven by radial transport of angular momentum in the disk, as (e.g., \citealt{Bai2009}) \be -\overline{B_r B_{\phi}} \approx \frac{\dot{m} \Omega}{h_{a}}. \ee Here, $h_a=\xi H_{\rm thin}$ is the thickness of the disk over which accretion proceeds. 
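As a quick numerical check, the scaling relation for $B_{\rm in,c}$ can be evaluated directly. The parameter choices below ($f_B=0.2$ and unity for the scaled variables) are illustrative fiducial values, not fitted quantities.

```python
# Evaluate the critical injected poloidal field B_in,c (in Gauss) from the
# scaling relation in the text.  The parameter choices (f_B = 0.2 and
# unity for the scaled variables) are illustrative assumptions.

def B_in_crit(f_B=0.2, phi40=1.0, r100=1.0, mdot_m4=1.0, M10=1.0):
    """B_in,c ~ (1e4 / f_B) phi40 r100^-2 mdot_m4^(1/2) M10^(-1/2) Gauss."""
    return 1.0e4 / f_B * phi40 * r100**-2 * mdot_m4**0.5 * M10**-0.5

# With a transport efficiency f_B = 0.2, a loop at r = 100 r_g around a
# 10 M_sun BH accreting at 1e-4 Eddington needs a poloidal field of ~5e4 G.
print(B_in_crit())
```

Since $B_{\rm in,c}\propto f_B^{-1}$, a higher transport efficiency proportionally relaxes the requirement on the injected field strength.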
Further, if we assume that $|B_r|\approx 1/5 |B_{\phi}|$ (which is found to be consistent with MRI simulations of accretion disks) and $\xi = 6$, then the total radial magnetic field in a thin disk of aspect ratio $\epsilon_{\rm thin} = H_{\rm thin}/R$ is given by \be \label{eq:Br_thin_disk} B_{r,d} \approx 10^4 \ \epsilon^{-1/2}_{0.05} \ \dot{m}_{-4}^{1/2} \ r_{100}^{-5/4} \ M_{10}^{-1/2} \ {\rm G}, \ee where $\epsilon_{0.05}=\epsilon_{\rm thin}/0.05$. It is to be noted that in the strongly magnetized coronal region of the thin disk, a large share of this estimated total field $B_{r,d}$ will likely be in the mean, coherent part of the magnetic field. In reality, the mass accretion rate in the outer thin disk is expected to be higher than that in a RIAF (\citealt{Yuan2014}), and hence the total poloidal field will also be higher. Therefore, comparison of $B_{{\rm in,c}}$ and $B_{r,d}$ leads to the inference that, in an XRB, the outer thin disk reservoir can potentially supply an adequate amount of magnetic flux to the inner RIAF, which may eventually form a MAD close to the BH. \section{Summary} In this paper, we studied magnetic field generation and transport in a geometrically thick RIAF. We initialize the disk with magnetic field loops of alternating polarity so that the quasi-stationary RIAF is in the weakly magnetized, i.e. SANE, regime. In this quasi-stationary, turbulent SANE RIAF, we study the transport of external magnetic flux (in the form of loops) of different strengths, sizes and shapes. Here we outline the key findings of our work. \begin{itemize} \item We reconfirm that the MRI dynamo in a standard SANE RIAF does not generate a strong, coherent, large-scale poloidal field to turn the SANE state into the MAD state. \item Magnetic flux transport is relatively efficient in the SANE RIAF: fifteen to forty per cent of the external magnetic flux injected at the outer radii is able to reach the BH. 
\item Flux transport efficiency is independent of loop parameters such as strength and size. However, if the loops are injected at high latitudes rather than at the mid-plane, the efficiency becomes poor. \end{itemize} We also find that accretion flow profiles (e.g. surface density, accretion velocity) are altered as external magnetic flux is injected into the disk. We propose that the dynamics of the RIAF depends on the magnetized mass reservoir at the outer radii. Based on our results, we argue that it might be easier to transform a SANE disk into a MAD by supplying external poloidal field loops at the outer disk, provided that the relatively high efficiency of flux transport obtained from our controlled experiments is universal. It should also be noted that we have studied the transport of external magnetic flux in the quasi-stationary turbulent RIAF in a limited parameter space. For example, we have considered only one injection location, with the inner edge of the loop being at $r=60$, whereas, in reality, the loops are supposed to be available for accretion as far out as the disk truncation region in XRBs, or even at larger radii in low-luminosity AGNs. We plan to explore magnetic flux transport with different configurations and with larger dynamical ranges. Furthermore, future work should extend this study to the thin disk regime, which is applicable to regions beyond the truncation radius in the low/hard state of the XRBs, as well as in luminous AGNs. We thank Ramesh Narayan for initial inputs into this project. This research was supported by NSFC grant 11873033. Numerical simulations are conducted on TianHe-1 (A) at the National Supercomputer Center in Tianjin, China, and on the Orion cluster at the Department of Astronomy, Tsinghua University. All the movies of the simulations mentioned in Table \ref{tab:loop} are available at this \href{https://www.youtube.com/watch?v=3GyW2jRDbcU&list=PLUKo6vYd0sPJ69kl5JdWoBdzXAI84o7V4}{link}. 
\bibliography{bibtex_tran}{} \bibliographystyle{aasjournal}
Title: The effect of magnetic field on the inner Galactic rotation curve
Abstract: In the past few decades, some studies pointed out that magnetic field might affect the rotation curves in galaxies. However, the impact is relatively small compared with the effects of dark matter and the baryonic components. In this letter, we revisit the impact of magnetic field on the rotation curve of our Galaxy. We show that the inner Galactic rotation curve could be affected significantly by the magnetic field. The addition of the inner bulge component, which has been proposed previously to account for the inner rotation curve data, is not necessary. The magnetic field contribution can fully account for the excess of the inner rotation velocity between 5 pc to 50 pc from the Galactic Centre. Our analysis can also constrain the azimuthal component of the central regular magnetic field strength to $B_0 \sim 50-60$ $\mu$G, which is consistent with the observed range.
https://export.arxiv.org/pdf/2208.06098
\date{Accepted XXXX, Received XXXX} \pagerange{\pageref{firstpage}--\pageref{lastpage}} \pubyear{XXXX} \label{firstpage} \date{\today} \begin{keywords} Galaxy: centre; Galaxy: kinematics and dynamics \end{keywords} \section{Introduction} The rotation curves of galaxies are important indicators of the mass distribution in galaxies. Many rotation curves were revealed by neutral atomic and molecular gas (e.g. HI, CO), which is believed to be a very good tracer of the gravitational field (e.g. Spitzer Photometry and Accurate Rotation Curves \citep{Lelli}). In particular, many past studies have shown that a large-scale magnetic field can affect the gas dynamics in spiral galaxies \citep{Piddington,Nelson,Battaner,Battaner2}. Some studies even show that this magnetic field effect can explain the flatness of rotation curves in galaxies without the need for dark matter \citep{Nelson,Battaner,Battaner2}. This is known as the `magnetic alternative to dark matter' \citep{Sanchez}. Nevertheless, later studies have shown that the boost of rotation curves due to the magnetic field contribution is less than 20 km/s at the outermost point of rotation curves \citep{Sanchez,Sanchez2}. Since then, the `magnetic alternative to dark matter' has no longer been a popular model. In the past decade, the idea of the magnetic field effect on galactic rotation curves was revived. Some studies have shown that the magnetic field effect can explain why rotation curves in some galaxies start to rise again at the outer edges of the HI discs, such as in our Galaxy \citep{Ruiz2} and the M31 galaxy \citep{Ruiz}. Considering the effects of the magnetic field can somewhat improve the fits of the outer part of galactic rotation curves. However, some other studies have argued that the effect in the outer rotation curve region is not very significant \citep{Sanchez2,Elstner}. 
Although the effect of the magnetic field in the outer rotation curve region has been greatly debated, such an effect in the inner region of a galaxy has not been discussed thoroughly. In this letter, we particularly investigate the magnetic field effect on the inner rotation curve of our Galaxy. There is a small rotation velocity excess range (between 5 pc and 50 pc from the Galactic Centre) which could not be accounted for by the contributions of the supermassive black hole and the central bulge component \citep{Sofue}. An extra inner bulge has to be added to account for this abnormal excess range. We show that this small excess range could be explained by the magnetic field effect, so that adding the extra inner bulge component is not necessary. \section{Magnetic field effect on rotation curve} In a gaseous disc in equilibrium, magnetic field effects on the gas can be modelled as a pressure term in the asymmetric drift \citep{Sanchez2}. Such asymmetric drift is a consequence of the support by thermal, turbulent, cosmic ray and magnetic pressures. The dynamical effects of the regular magnetic field can significantly boost the gravitational orbital velocity due to the magnetic tension \citep{Nelson}. The total magnetic field in a galaxy can be simply expressed as a sum of a regular field term (the azimuthal component) $B_{\phi}$ and a random field term (the turbulent magnetic field component) $B_{\rm ran}$ \citep{Sanchez2,Elstner}. In particular, the random field can be isotropic or anisotropic. The contribution of the regular magnetic field component to the circular velocity is given by \citep{Ruiz} \begin{equation} v_{\rm B1}^2=\frac{r}{4\pi \rho_g} \left(\frac{B_{\phi}^2}{r}+\frac{1}{2} \frac{dB_{\phi}^2}{dr} \right), \end{equation} where $\rho_g$ is the gas density and $r$ is the radial distance from the Galactic Centre. 
The random magnetic field component would contribute to the circular velocity via the magnetic pressure term $P_B$ as \citep{Sanchez2} \begin{equation} v_{\rm B2}^2=\frac{r}{\rho_g}\frac{dP_B}{dr}=\frac{r}{\rho_g}\frac{d}{dr} \frac{\langle B_{\rm ran}^2 \rangle}{8 \pi}, \end{equation} where $\langle B_{\rm ran}^2 \rangle$ is the mean-square value of the random magnetic field strength. Therefore, the total contribution of the magnetic field to the circular velocity is: \begin{equation} v_{\rm mag}^2=v_{\rm B1}^2+v_{\rm B2}^2=\frac{r}{8\pi \rho_g} \left[\frac{2B_{\phi}^2}{r}+\frac{d}{dr}(B_{\phi}^2+\langle B_{\rm ran}^2 \rangle) \right]. \end{equation} In the inner Galactic Centre region ($r \le 500$ pc), the rotation curve is also contributed by the supermassive black hole $v_{\rm BH}$ and the baryonic bulge $v_{\rm bulge}$ components. Therefore, including the magnetic field contribution, the observed total rotation curve is \begin{equation} v^2=v_{\rm BH}^2+v_{\rm bulge}^2+v_{\rm mag}^2. \end{equation} The supermassive black hole rotation curve contribution is \begin{equation} v_{\rm BH}^2=\frac{GM_{\rm BH}}{r}, \end{equation} where $M_{\rm BH}=(4.154 \pm 0.014) \times 10^6M_{\odot}$ is the mass of the supermassive black hole \citep{Abuter}. The bulge mass density can be modeled by the exponential spheroid model with scale radius $a$ as \citep{Sofue}: \begin{equation} \rho_{\rm bulge}(r)=\rho_ce^{-r/a}, \end{equation} where $\rho_c$ is the central bulge mass density. Therefore, the bulge rotation curve contribution is given by \begin{equation} v_{\rm bulge}^2=\frac{GM_0}{r}F \left(\frac{r}{a}\right), \end{equation} where $M_0=8\pi a^3\rho_c$ and $F(x)=1-e^{-x}(1+x+x^2/2)$ \citep{Sofue}. We take the values $M_0=8.4\times 10^9M_{\odot}$ and $a=0.12$ kpc obtained in \citet{Sofue} to perform our analysis. 
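The black-hole and bulge terms in Eqs. (5)-(7) can be evaluated directly. A minimal sketch, assuming $G = 4.30091\times10^{-6}$ kpc (km/s)$^2\,M_{\odot}^{-1}$ and the parameter values quoted in the text:

```python
import numpy as np

# Rotation-curve contributions of the supermassive black hole (Eq. 5) and
# the exponential-spheroid bulge (Eq. 7).  G is in kpc (km/s)^2 / M_sun;
# M_BH, M0 and a are the values quoted in the text.
G = 4.30091e-6

def v_BH(r_kpc, M_BH=4.154e6):
    """Keplerian velocity (km/s) around the central black hole."""
    return np.sqrt(G * M_BH / r_kpc)

def F(x):
    """F(x) = 1 - e^{-x}(1 + x + x^2/2) for the exponential spheroid."""
    return 1.0 - np.exp(-x) * (1.0 + x + 0.5 * x**2)

def v_bulge(r_kpc, M0=8.4e9, a_kpc=0.12):
    """Bulge rotation velocity (km/s), Eq. (7)."""
    return np.sqrt(G * M0 / r_kpc * F(r_kpc / a_kpc))

# At 5 pc the black hole dominates; by 500 pc the bulge does.
for r in (0.005, 0.05, 0.5):
    print(r, v_BH(r), v_bulge(r))
```

Note that $F(x)\to x^3/6$ for small $x$, so the bulge contribution vanishes rapidly towards the centre, which is why the innermost rotation curve is sensitive to additional components.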
To match the inner Galactic rotation curve data without considering the magnetic field contribution, two bulge components (inner bulge and outer bulge) were assumed in previous studies \citep{Sofue}. However, in what follows, we investigate whether the magnetic field contribution can mimic the effect of the inner bulge component. Therefore, we assume that there is only one bulge component to minimise the number of parameters in our analysis. Moreover, the dark matter contribution is not considered here because it is not significant in the inner Galactic Centre region ($r \le 500$ pc). Assuming the Navarro-Frenk-White (NFW) profile \citep{Navarro}, the total mass of dark matter is less than 5\% of the bulge mass inside 500 pc, with the fitted parameters in \citet{Sofue2}. Therefore, we neglect the contribution of dark matter for simplicity. We also neglect the gravitational contribution of gas in the inner Galactic Centre region, as the gas mass is less than 3\% of the bulge mass inside 500 pc \citep{Ferriere}. Over $\sim 300$ pc along the Galactic plane and $\sim 150$ pc in the vertical direction at the Galactic Centre, the magnetic field is approximately horizontal and its strength is very high, ranging from $B \sim 0.01-1$ mG for the general intercloud medium \citep{Ferriere2}. The equipartition between magnetic field energy and turbulent energy suggests that $B \propto \rho_g^{1/2}$ \citep{Schleicher}. This is also supported by the observed relation between magnetic field and star-formation rate \citep{Heesen,Tabatabaei}. Therefore, we assume that the regular magnetic field profile follows the exponential spheroid model as \begin{equation} B_{\phi}=B_0e^{-r/2a}, \end{equation} where $B_0$ is the central regular magnetic field strength. Note that Eq.~(8) follows from Eq.~(6) only if the gas density is assumed to be proportional to the baryonic bulge mass density $\rho_{\rm bulge}(r)$.
The random magnetic field is an important component of the interstellar medium of galaxies \citep{Beck}. We define $\eta$ as the ratio of the regular magnetic field to the total magnetic field, so that the random magnetic field strength can be expressed as $\langle B_{\rm ran}^2 \rangle=(\eta^{-2}-1)B_{\phi}^2$. Although some earlier studies found that the ratio is around $\eta \sim 0.6-0.7$ in some galaxies \citep{Fletcher,Beck2,Sanchez2}, recent studies have shown that these ordered fields are dominated by anisotropic random fields and that the ratio should be $\eta \sim 0.01-0.3$ in the galactic disc region \citep{Beck}. Nevertheless, the actual value of $\eta$ at the Galactic Centre is uncertain and may be larger. In what follows, we first assume $\eta=0.65$, and then also consider the cases $\eta=0.3$ and $\eta=0.9$ for comparison. For the gas density, observational data show that the gas number density is close to $n_H \sim 10-100$ cm$^{-3}$ at the inner Galactic Centre and $n_H \sim 1$ cm$^{-3}$ out to $\sim 220$ pc along the Galactic plane \citep{Ferriere,Ferriere2}. We follow \citet{Ferriere} and assume that the gas density decreases exponentially with $r$ for small $r$ and then approaches a constant value $\rho_0'$ as $r$ becomes large. Therefore, we write the gas density profile as \begin{equation} \rho_g=\rho_0 \exp \left(-\frac{r}{1~\rm pc} \right)+\rho_0'. \end{equation} Putting Eq.~(8) and Eq.~(9) into Eq.~(3), we get \begin{equation} v_{\rm mag}^2=\frac{v_0^2e^{-r/a}[1-\eta^{-2}(r/2a)]}{\exp(-r/1~{\rm pc})+y}, \end{equation} where $v_0^2=B_0^2/(4 \pi \rho_0)$ and $y=\rho_0'/\rho_0$. Here, $v_0$ and $y$ are the free parameters of our model. \section{Data analysis} The inner rotation curve data have been obtained in \citet{Sofue}. We focus on the region $r \le 360$ pc because the rotation curve attains its maximum at around $r=360$ pc.
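A minimal sketch of Eq.~(10) (an illustration, using the best-fit values $v_0=16.4$ km/s and $y=0.023$ quoted later in the text): setting the bracket to zero shows that the magnetic term changes sign at $r=2\eta^2 a$.

```python
import numpy as np

a = 0.12                # kpc, bulge scale radius (Sofue)
eta = 0.65              # regular-to-total field ratio assumed in the text

def v_mag_squared(r, v0, y, eta=eta, a=a):
    # Eq. (10); r in kpc, v0 in km/s.
    # Note exp(-r / 1 pc) = exp(-1000 r) for r in kpc.
    bracket = 1.0 - (r / (2.0 * a)) / eta**2
    return v0**2 * np.exp(-r / a) * bracket / (np.exp(-1000.0 * r) + y)

# The bracket changes sign at r = 2 eta^2 a, so the magnetic contribution
# becomes slightly negative beyond that radius.
print(2 * eta**2 * a)   # ~0.101 kpc

# Peak contribution near r ~ 12 pc, ~97 km/s with the best-fit parameters.
print(np.sqrt(v_mag_squared(0.012, 16.4, 0.023)))
```

The sign-change radius reproduces the $r>2\eta^2a \approx 0.1$ kpc threshold stated in the text.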
At this position, the percentage contribution of the bulge component is maximised. Beyond $r=360$ pc, the dark matter and disc components start to contribute more to the Galactic rotation curve. We fit our predicted total rotation curve $v(r)$ to the observed data $v_{\rm obs}(r)$. To quantify the goodness of fit, we calculate the reduced $\chi^2$ value, defined as \begin{equation} \chi_{\rm red}^2=\frac{1}{N-M}\sum_{i=1}^{N} \frac{(v_i-v_{{\rm obs},i})^2}{\sigma_i^2}, \end{equation} where $\sigma_i$ is the uncertainty of the observed rotation curve data, $N$ is the total number of data points and $M$ is the number of free parameters. Here, we have $M=2$. In Fig.~1, we present the best fit of our model (with $\eta=0.65$) together with the individual rotation curve contributions. The best-fit values are $v_0=16.4$ km/s and $y=0.023$, with $\chi_{\rm red}^2=0.14$. We can see that including the magnetic field contribution provides an excellent fit to the observed inner rotation curve without introducing the inner bulge component suggested in \citet{Sofue}. The reduced $\chi^2$ value for adding an inner bulge component without magnetic field contribution is $\chi_{\rm red}^2=0.12$, almost the same goodness of fit. Note that the contribution of the magnetic field effect to the rotation velocity could be slightly negative when $r>2\eta^2a \approx 0.1$ kpc. Fig.~2 compares the residuals of the two scenarios against the Galactic total rotation curve data. No systematic trend in the residuals is seen for either scenario. At the Galactic Centre, the gas density is $\rho_g \approx 1.4m_pn_H$, where $m_p$ is the proton mass. If we take the asymptotic number density $n_H=1$ cm$^{-3}$ \citep{Ferriere}, we have $\rho_0' \approx 2.3 \times 10^{-24}$ g/cm$^3$. Using the best-fit values $y=0.023$ and $v_0=16.4$ km/s, we get $\rho_0=1.0\times 10^{-22}$ g/cm$^3$ (i.e. $n_H \approx 43$ cm$^{-3}$) and $B_0=58$ $\mu$G.
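The unit conversions behind these derived quantities can be checked directly in CGS units (a back-of-envelope sketch, not the authors' code):

```python
import numpy as np

m_p = 1.6726e-24            # proton mass, g
# Asymptotic density rho_0' = 1.4 m_p n_H with n_H = 1 cm^-3
rho0_prime = 1.4 * m_p * 1.0
# Best-fit values from the text
y, v0 = 0.023, 16.4e5       # dimensionless, cm/s

rho0 = rho0_prime / y                   # central gas density, g/cm^3
n_H0 = rho0 / (1.4 * m_p)               # central number density, ~43 cm^-3
B0 = v0 * np.sqrt(4.0 * np.pi * rho0)   # from v0^2 = B0^2 / (4 pi rho0)
print(rho0, n_H0, B0 * 1e6)             # B0 in microgauss, ~58 uG
```

This reproduces the quoted $\rho_0 \approx 1.0\times 10^{-22}$ g/cm$^3$, $n_H \approx 43$ cm$^{-3}$ and $B_0 \approx 58$ $\mu$G.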
These values are consistent with the observed number density and with the constraints on the central total magnetic field strength ($B \sim 10-100$ $\mu$G) \citep{Ferriere,Ferriere2,Guenduez}. As the actual value of $\eta$ at the Galactic Centre is uncertain, we also investigate the cases $\eta=0.3$ and $\eta=0.9$. For $\eta=0.9$, we again obtain a very good fit, with $v_0=15.2$ km/s ($B_0=54$ $\mu$G) and $y=0.023$ ($\chi_{\rm red}^2=0.11$). However, for $\eta=0.3$, a relatively poor fit is obtained ($\chi_{\rm red}^2=1.46$). We plot the corresponding components and the fitted total rotation curves in Fig.~3. Generally speaking, for $\eta>0.6$, a very good fit is obtained ($\chi_{\rm red}^2<0.17$). \section{Discussion} In this letter, we show that adding the magnetic field contribution can satisfactorily explain the inner Galactic rotation curve data without invoking an inner bulge component. The effect of magnetic fields on rotation curves is predicted by magnetohydrodynamics (MHD). Our results provide indirect evidence that magnetic fields can affect the galactic rotation curve significantly. Previous studies have shown that the magnetic field contribution cannot boost the gas rotation speed by more than 20 km/s in the outermost region \citep{Sanchez2}. Here we show that the effect can be large in the central region of a galaxy. The maximum contribution of the magnetic field to the rotation curve is 97 km/s at $r \sim 12$ pc. In our analysis, we have assumed that the magnetic field strength traces the baryonic distribution (i.e. the bulge density), as predicted by theoretical models. For example, numerical simulations and the equipartition theory show that the magnetic field strength follows the baryonic density in galaxy clusters and galaxies \citep{Dolag,Govoni,Schleicher}, and this is supported by observational data \citep{Heesen,Tabatabaei,Weeren}.
Our constrained magnetic field strength at the Galactic Centre ($B_0 \sim 50-60$ $\mu$G) is also consistent with the order of magnitude of the observed total magnetic field strength, $B \sim 10-100$ $\mu$G \citep{Ferriere2,Guenduez}. These results could be verified by future observations of the magnetic field at the Galactic Centre. Moreover, we first took the value of $\eta$ to be a constant, $\eta=0.65$. Some other studies have found that the magnetic field strength might be dominated by the anisotropic random field rather than the regular field \citep{Houde,Beck}. The value of $\eta$ can be as small as $\sim 0.1$ for the disc region in spiral galaxies \citep{Beck}. However, some studies have revealed a very large regular field strength $\sim 1$ mG at the Galactic Centre \citep{Eatough}. Therefore, the actual value of $\eta$ in the Galactic Centre region is uncertain. We have particularly investigated the cases of $\eta=0.3$ and $\eta=0.9$ for comparison. We have found that $\eta=0.9$ can also give a good fit to the data. Generally, $\eta>0.6$ provides good fits to the rotation curve data without invoking the inner bulge component. Further radio observations are definitely required to examine the value of $\eta$ as well as the model presented here. On the other hand, we have also assumed that the gas density follows the exponential density profile in the deep central region and approaches a constant value at a relatively large $r \sim 100$ pc. The constant gas density at $r \sim 100$ pc is supported by observational data \citep{Ferriere,Ferriere2}. We have also tried the isothermal density profile, which is commonly used as a model for the interstellar medium \citep{Kalashnikov}, to model the gas density distribution. A good fit can still be obtained (with $B_0=62$ $\mu$G and $\chi_{\rm red}^2=0.12$). Therefore, our results are not very sensitive to the assumed gas density profile. To conclude, we have shown that magnetic fields can affect the inner rotation curve significantly.
We anticipate that such an effect can also be seen in the inner regions of other galaxies. Future high-quality observations of inner galactic rotation curves could verify our suggestion. \section{Acknowledgements} We thank the anonymous referee for useful constructive feedback and comments. This work was partially supported by the Seed Funding Grant (RG 68/2020-2021R) and the Dean's Research Fund (activity code: 04628) from The Education University of Hong Kong. \section{Data availability statement} The data underlying this article will be shared on reasonable request to the corresponding author. \label{lastpage}
Title: A dusty starburst masquerading as an ultra-high redshift galaxy in JWST CEERS observations
Abstract: Lyman Break Galaxy (LBG) candidates at $z\gtrsim12$ are rapidly being identified in JWST/NIRCam imaging. Due to the (redshifted) break produced by neutral hydrogen absorption of rest-frame UV photons, these sources are expected to drop out in the bluer filters like F150W and F200W while being well-detected in redder filters (e.g., F277W, F356W, F444W). However, dust-enshrouded star-forming galaxies at lower redshifts ($z\lesssim7$) may also mimic the near-infrared colors of $z>12$ LBGs, representing potential contaminants in LBG candidate samples. Here, we report a galaxy, CEERS_DSFG_1, that drops out in the F115W, F150W, and F200W filters, for which a photometric redshift fit to the JWST data alone predicts a redshift of $z_{\rm phot}\sim18$. However, we show it is a dusty star-forming galaxy (DSFG) at $z\approx5$ based on deep millimeter interferometric observations conducted with NOEMA. We also present a $2.6\sigma$ SCUBA-2 detection at 850$\,\mu\rm m$ around the position of a recently reported $z\approx16.7$ LBG candidate in the same field, CEERS-93316. While we cannot conclusively show that this detection is astrophysical or associated with this object, we illustrate that if it is associated, the available photometry is consistent with a $z\sim5$ DSFG with strong nebular emission lines despite its blue NIR colors. Hence, we conclude that robust (sub)millimeter detections in NIRCam dropout galaxies likely imply $z\sim4-6$ redshift solutions, where the observed near-infrared break would be the result of a strong rest-frame optical Balmer break combined with high dust attenuation and strong nebular line emission, rather than the rest-frame UV Lyman break. This provides evidence that DSFGs may contaminate searches for ultra high-redshift LBG candidates from JWST observations.
https://export.arxiv.org/pdf/2208.01816
\newcommand{\vdag}{(v)^\dagger} \newcommand\aastex{AAS\TeX} \newcommand\latex{La\TeX} \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \shorttitle{\rm DSFGs masquerading as ultra-high redshift galaxies} \shortauthors{The CEERS collaboration} \graphicspath{{./}{figures/}} \begin{document} \title{Dusty starbursts masquerading as ultra-high redshift galaxies in JWST CEERS observations} \suppressAffiliations \author[0000-0002-7051-1100]{Jorge A. Zavala} \affiliation{National Astronomical Observatory of Japan, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan} \author[0000-0003-3441-903X]{V\'eronique Buat} \affiliation{Aix Marseille Univ, CNRS, CNES, LAM Marseille, France} \author[0000-0002-0930-6466]{Caitlin M. Casey} \affiliation{Department of Astronomy, The University of Texas at Austin, Austin, TX, USA} \author[0000-0001-8519-1130]{Steven L. Finkelstein} \affiliation{Department of Astronomy, The University of Texas at Austin, Austin, TX, USA} \author[0000-0002-4193-2539]{Denis Burgarella} \affiliation{Aix Marseille Univ, CNRS, CNES, LAM Marseille, France} \author[0000-0002-9921-9218]{Micaela B. Bagley} \affiliation{Department of Astronomy, The University of Texas at Austin, Austin, TX, USA} \author[0000-0003-0541-2891]{Laure Ciesla} \affiliation{Aix Marseille Univ, CNRS, CNES, LAM Marseille, France} \author[0000-0002-3331-9590]{Emanuele Daddi} \affiliation{Universit\'e Paris-Saclay, Universit\'e Paris Cit\'e, CEA, CNRS, AIM, 91191, Gif-sur-Yvette, France} \author[0000-0001-5414-5131]{Mark Dickinson} \affiliation{NSF's National Optical-Infrared Astronomy Research Laboratory, 950 N. Cherry Ave., Tucson, AZ 85719, USA} \author[0000-0001-7113-2738]{Henry C. Ferguson} \affiliation{Space Telescope Science Institute, Baltimore, MD, USA} \author[0000-0002-3560-8599]{Maximilien Franco} \affiliation{Department of Astronomy, The University of Texas at Austin, Austin, TX, USA} \author[0000-0002-2640-5917]{E.~F. 
Jim\'enez-Andrade} \affiliation{Instituto de Radioastronomía y Astrofísica, UNAM Campus Morelia, Apartado postal 3-72, 58090 Morelia, Michoacán, México} \author[0000-0001-9187-3605]{Jeyhan S. Kartaltepe} \affiliation{Laboratory for Multiwavelength Astrophysics, School of Physics and Astronomy, Rochester Institute of Technology, 84 Lomb Memorial Drive, Rochester, NY 14623, USA} \author[0000-0002-6610-2048]{Anton M. Koekemoer} \affiliation{Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA} \author[0000-0002-9466-2763]{Aur{\'e}lien Le Bail} \affil{Universit{\'e} Paris-Saclay, Université Paris Cit{\'e}, CEA, CNRS, AIM, 91191, Gif-sur-Yvette, France} \author[0000-0001-7089-7325]{E.~J. Murphy} \affiliation{National Radio Astronomy Observatory, 520 Edgemont Road, Charlottesville, VA 22903, USA} \author[0000-0001-7503-8482]{Casey Papovich} \affiliation{Department of Physics and Astronomy, Texas A\&M University, College Station, TX, 77843-4242 USA} \affiliation{George P.\ and Cynthia Woods Mitchell Institute for Fundamental Physics and Astronomy, Texas A\&M University, College Station, TX, 77843-4242 USA} \author[0000-0002-8224-4505]{Sandro Tacchella} \affiliation{Kavli Institute for Cosmology, University of Cambridge, Madingley Road, Cambridge, CB3 0HA, UK}\affiliation{Cavendish Laboratory, University of Cambridge, 19 JJ Thomson Avenue, Cambridge, CB3 0HE, UK} \author[0000-0003-3903-6935]{Stephen M.~Wilkins} % \affiliation{Astronomy Centre, University of Sussex, Falmer, Brighton BN1 9QH, UK} \affiliation{Institute of Space Sciences and Astronomy, University of Malta, Msida MSD 2080, Malta} \author[0000-0002-6590-3994]{Itziar Aretxaga} % \affiliation{Instituto Nacional de Astrof\'isica, \'Optica y Electr\'onica, Luis Enrique Erro 1, Tonantzintla CP 72840, Puebla, M\'exico} \author[0000-0002-2517-6446]{Peter Behroozi} \affiliation{Department of Astronomy and Steward Observatory, University of Arizona, Tucson, AZ 85721, USA} 
\affiliation{Division of Science, National Astronomical Observatory of Japan, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan} \author[0000-0002-6184-9097]{Jaclyn B. Champagne} \affiliation{Department of Astronomy, The University of Texas at Austin, Austin, TX, USA} \collaboration{102}{and The CEERS Team:} \affiliation{Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA} \author[0000-0003-3820-2823]{Adriano Fontana} \affiliation{INAF - Osservatorio Astronomico di Roma, via di Frascati 33, 00078 Monte Porzio Catone, Italy} \author[0000-0002-7831-8751]{Mauro Giavalisco} \affiliation{University of Massachusetts Amherst, 710 North Pleasant Street, Amherst, MA 01003-9305, USA} \author[0000-0002-5688-0663]{Andrea Grazian} \affiliation{INAF--Osservatorio Astronomico di Padova, Vicolo dell'Osservatorio 5, I-35122, Padova, Italy} \author[0000-0001-9440-8872]{Norman A. Grogin} \affiliation{Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA} \author[0000-0001-8152-3943]{Lisa J. Kewley} \affiliation{Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138, USA} \author[0000-0002-8360-3880]{Dale D. Kocevski} \affiliation{Department of Physics and Astronomy, Colby College, Waterville, ME 04901, USA} \author[0000-0002-5537-8110]{Allison Kirkpatrick} \affiliation{Department of Physics and Astronomy, University of Kansas, Lawrence, KS 66045, USA} \author[0000-0003-3130-5643]{Jennifer M. Lotz} \affiliation{Gemini Observatory/NSF's National Optical-Infrared Astronomy Research Laboratory, 950 N. Cherry Ave., Tucson, AZ 85719, USA} \author[0000-0001-8940-6768]{Laura Pentericci} \affiliation{INAF - Osservatorio Astronomico di Roma, via di Frascati 33, 00078 Monte Porzio Catone, Italy} \author[0000-0003-4528-5639]{Pablo G. P\'erez-Gonz\'alez} \affiliation{Centro de Astrobiolog\'{\i}a (CAB/CSIC-INTA), Ctra. 
de Ajalvir km 4, Torrej\'on de Ardoz, E-28850, Madrid, Spain} \author[0000-0003-3382-5941]{Nor Pirzkal} \affiliation{ESA/AURA Space Telescope Science Institute} \author[0000-0002-5269-6527]{Swara Ravindranath} \affiliation{Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA} \author[0000-0002-6748-6821]{Rachel S.~Somerville} \affiliation{Center for Computational Astrophysics, Flatiron Institute, 162 5th Avenue, New York, NY 10010, USA} \author[0000-0002-1410-0470]{Jonathan R. Trump} \affiliation{Department of Physics, 196 Auditorium Road, Unit 3046, University of Connecticut, Storrs, CT 06269, USA} \author[0000-0001-8835-7722]{Guang Yang} \affiliation{Kapteyn Astronomical Institute, University of Groningen, P.O. Box 800, 9700 AV Groningen, The Netherlands} \affiliation{SRON Netherlands Institute for Space Research, Postbus 800, 9700 AV Groningen, The Netherlands} \author[0000-0003-3466-035X]{L. Y. Aaron\ Yung} \affiliation{Astrophysics Science Division, NASA Goddard Space Flight Center, 8800 Greenbelt Rd, Greenbelt, MD 20771, USA} \author[0000-0001-9328-3991]{Omar Almaini} \affiliation{School of Physics and Astronomy, University of Nottingham, University Park, Nottingham NG7 2RD, UK} \author[0000-0001-5758-1000]{Ricardo O. Amor\'{i}n} \affiliation{Instituto de Investigaci\'{o}n Multidisciplinar en Ciencia y Tecnolog\'{i}a, Universidad de La Serena, Raul Bitr\'{a}n 1305, La Serena 2204000, Chile} \affiliation{Departamento de Astronom\'{i}a, Universidad de La Serena, Av. Juan Cisternas 1200 Norte, La Serena 1720236, Chile} \author[0000-0002-8053-8040]{Marianna Annunziatella} \affiliation{Centro de Astrobiolog\'ia (CSIC-INTA), Ctra de Ajalvir km 4, Torrej\'on de Ardoz, 28850, Madrid, Spain} \author[0000-0002-7959-8783]{Pablo Arrabal Haro} \affiliation{NSF's National Optical-Infrared Astronomy Research Laboratory, 950 N. Cherry Ave., Tucson, AZ 85719, USA} \author[0000-0001-8534-7502]{Bren E. 
Backhaus} \affiliation{Department of Physics, 196 Auditorium Road, Unit 3046, University of Connecticut, Storrs, CT 06269} \author[0000-0002-0786-7307]{Guillermo Barro} \affiliation{Department of Physics, University of the Pacific, Stockton, CA 90340 USA} \author[0000-0002-5564-9873]{Eric F.\ Bell} \affiliation{Department of Astronomy, University of Michigan, 1085 S. University Ave, Ann Arbor, MI 48109-1107, USA} \author[0000-0003-0883-2226]{Rachana Bhatawdekar} \affiliation{European Space Agency, ESA/ESTEC, Keplerlaan 1, 2201 AZ Noordwijk, NL} \author[0000-0003-0492-4924]{Laura Bisigello} \affiliation{Dipartimento di Fisica e Astronomia "G.Galilei", Universit\'a di Padova, Via Marzolo 8, I-35131 Padova, Italy} \affiliation{INAF--Osservatorio Astronomico di Padova, Vicolo dell'Osservatorio 5, I-35122, Padova, Italy} \author[0000-0002-2861-9812]{Fernando Buitrago} \affiliation{Departamento de F\'{i}sica Te\'{o}rica, At\'{o}mica y \'{O}ptica, Universidad de Valladolid, 47011 Valladolid, Spain} \affiliation{Instituto de Astrof\'{i}sica e Ci\^{e}ncias do Espa\c{c}o, Universidade de Lisboa, OAL, Tapada da Ajuda, PT1349-018 Lisbon, Portugal} \author[0000-0003-2536-1614]{Antonello Calabr{\`o}} \affiliation{Osservatorio Astronomico di Roma, via Frascati 33, Monte Porzio Catone, Italy} \author[0000-0001-9875-8263]{Marco Castellano} \affiliation{INAF - Osservatorio Astronomico di Roma, via di Frascati 33, 00078 Monte Porzio Catone, Italy} \author[0000-0003-2332-5505]{\'Oscar A. Ch\'avez Ortiz} \affiliation{Department of Astronomy, The University of Texas at Austin, Austin, TX, USA} \author[0000-0003-4922-0613]{Katherine Chworowsky} \affiliation{Department of Astronomy, The University of Texas at Austin, Austin, TX, USA} \author[0000-0001-7151-009X]{Nikko J. 
Cleri} \affiliation{Department of Physics and Astronomy, Texas A\&M University, College Station, TX, 77843-4242 USA} \affiliation{George P.\ and Cynthia Woods Mitchell Institute for Fundamental Physics and Astronomy, Texas A\&M University, College Station, TX, 77843-4242 USA} \author[0000-0003-3329-1337]{Seth H. Cohen} \affiliation{School of Earth and Space Exploration, Arizona State University, Tempe, AZ, 85287 USA} \author[0000-0002-6348-1900]{Justin W. Cole} \affiliation{Department of Physics and Astronomy, Texas A\&M University, College Station, TX, 77843-4242 USA} \affiliation{George P.\ and Cynthia Woods Mitchell Institute for Fundamental Physics and Astronomy, Texas A\&M University, College Station, TX, 77843-4242 USA} \author[0000-0002-2200-9845]{Kevin C. Cooke} \affiliation{AAAS S\&T Policy Fellow hosted at the National Science Foundation, 1200 New York Ave, NW, Washington, DC, US 20005} \author[0000-0003-1371-6019]{M. C. Cooper} \affiliation{Department of Physics \& Astronomy, University of California, Irvine, 4129 Reines Hall, Irvine, CA 92697, USA} \author[0000-0002-3892-0190]{Asantha R. Cooray} \affiliation{Department of Physics \& Astronomy, University of California, Irvine, 4129 Reines Hall, Irvine, CA 92697, USA} \author[0000-0001-6820-0015]{Luca Costantin} \affiliation{Centro de Astrobiolog\'ia (CSIC-INTA), Ctra de Ajalvir km 4, Torrej\'on de Ardoz, 28850, Madrid, Spain} \author[0000-0002-1803-794X]{Isabella G. 
Cox} \affiliation{Laboratory for Multiwavelength Astrophysics, School of Physics and Astronomy, Rochester Institute of Technology, 84 Lomb Memorial Drive, Rochester, NY 14623, USA} \author[0000-0002-5009-512X]{Darren Croton} \affiliation{Centre for Astrophysics \& Supercomputing, Swinburne University of Technology, Hawthorn, VIC 3122, Australia} \affiliation{ARC Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D)} \author[0000-0003-2842-9434]{Romeel Dav\'e} \affiliation{Institute for Astronomy, University of Edinburgh, Blackford Hill, Edinburgh, EH9 3HJ UK} \affiliation{Department of Physics and Astronomy, University of the Western Cape, Robert Sobukwe Rd, Bellville, Cape Town 7535, South Africa} \author[0000-0002-6219-5558]{Alexander de la Vega} \affiliation{Department of Physics and Astronomy, Johns Hopkins University, Baltimore, MD, USA} \author[0000-0003-4174-0374]{Avishai Dekel} \affil{Racah Institute of Physics, The Hebrew University of Jerusalem, Jerusalem 91904, Israel} \author[0000-0002-7631-647X]{David Elbaz} \affil{Universit{\'e} Paris-Saclay, Université Paris Cit{\'e}, CEA, CNRS, AIM, 91191, Gif-sur-Yvette, France} \author[0000-0001-8489-2349]{Vicente Estrada-Carpenter} \affiliation{Department of Astronomy \& Physics, Saint Mary's University, 923 Robie Street, Halifax, NS, B3H 3C3, Canada} \author[0000-0003-0531-5450]{Vital Fern\'{a}ndez} \affiliation{Instituto de Investigaci\'{o}n Multidisciplinar en Ciencia y Tecnolog\'{i}a, Universidad de La Serena, Raul Bitr\'{a}n 1305, La Serena 2204000, Chile} \author[0000-0003-0792-5877]{Keely D. 
Finkelstein} \affiliation{Department of Astronomy, The University of Texas at Austin, Austin, TX, USA} \author[0000-0002-5245-7796]{Jonathan Freundlich} \affiliation{Université de Strasbourg, CNRS, Observatoire Astronomique de Strasbourg, UMR 7550, F-67000 Strasbourg, France} \author[0000-0001-7201-5066]{Seiji Fujimoto} \affiliation{Cosmic Dawn Center (DAWN), Jagtvej 128, DK2200 Copenhagen N, Denmark} \affiliation{Niels Bohr Institute, University of Copenhagen, Lyngbyvej 2, DK2100 Copenhagen \O, Denmark} \author[0000-0002-8365-5525]{\'Angela Garc\'ia-Argum\'anez} \affiliation{Departamento de Física de la Tierra y Astrofísica, Facultad de CC Físicas, Universidad Complutense de Madrid, E-28040, Madrid, Spain} \affiliation{Instituto de Física de Partículas y del Cosmos IPARCOS, Facultad de CC Físicas, Universidad Complutense de Madrid, 28040 Madrid, Spain} \author[0000-0003-2098-9568]{Jonathan P. Gardner} \affiliation{Astrophysics Science Division, NASA Goddard Space Flight Center, 8800 Greenbelt Rd, Greenbelt, MD 20771, USA} \author[0000-0003-1530-8713]{Eric Gawiser} \affiliation{Department of Physics and Astronomy, Rutgers, the State University of New Jersey, Piscataway, NJ 08854, USA} \author[0000-0002-4085-9165]{Carlos G{\'o}mez-Guijarro} \affil{Universit{\'e} Paris-Saclay, Universit{\'e} Paris Cit{\'e}, CEA, CNRS, AIM, 91191, Gif-sur-Yvette, France} \author[0000-0002-4162-6523]{Yuchen Guo} \affiliation{Department of Astronomy, The University of Texas at Austin, Austin, TX, USA} \author[0000-0002-9753-1769]{Timothy S. Hamilton} \affiliation{Shawnee State University, Portsmouth, OH, USA} \author[0000-0001-6145-5090]{Nimish P. Hathi} \affiliation{Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA} \author[0000-0002-4884-6756]{Benne W. 
Holwerda} \affil{Physics \& Astronomy Department, University of Louisville, 40292 KY, Louisville, USA} \author[0000-0002-3301-3321]{Michaela Hirschmann} \affiliation{Institute of Physics, Laboratory of Galaxy Evolution, Ecole Polytechnique Fédérale de Lausanne (EPFL), Observatoire de Sauverny, 1290 Versoix, Switzerland} \author[0000-0002-1416-8483]{Marc Huertas-Company} \affil{Instituto de Astrof\'isica de Canarias, La Laguna, Tenerife, Spain} \affil{Universidad de la Laguna, La Laguna, Tenerife, Spain} \affil{Universit\'e Paris-Cit\'e, LERMA - Observatoire de Paris, PSL, Paris, France} \author[0000-0001-6251-4988]{Taylor A. Hutchison} \affiliation{NSF Graduate Fellow} \affiliation{Department of Physics and Astronomy, Texas A\&M University, College Station, TX, 77843-4242 USA} \affiliation{George P.\ and Cynthia Woods Mitchell Institute for Fundamental Physics and Astronomy, Texas A\&M University, College Station, TX, 77843-4242 USA} \author[0000-0001-9298-3523]{Kartheik G. Iyer} \affiliation{Dunlap Institute for Astronomy \& Astrophysics, University of Toronto, Toronto, ON M5S 3H4, Canada} \author[0000-0002-6790-5125]{Anne E. Jaskot} \affiliation{Department of Astronomy, Williams College, Williamstown, MA, 01267, USA} \author[0000-0001-8738-6011]{Saurabh W. Jha} \affiliation{Department of Physics and Astronomy, Rutgers, the State University of New Jersey, Piscataway, NJ 08854, USA} \author[0000-0002-1590-0568]{Shardha Jogee} \affiliation{Department of Astronomy, The University of Texas at Austin, Austin, TX, USA} \author[0000-0002-0000-2394]{St{\'e}phanie Juneau} \affiliation{NSF's NOIRLab, 950 N. 
Cherry Ave., Tucson, AZ 85719, USA} \author[0000-0003-1187-4240]{Intae Jung} \affil{Department of Physics, The Catholic University of America, Washington, DC 20064, USA } \affil{Astrophysics Science Division, NASA Goddard Space Flight Center, 8800 Greenbelt Rd, Greenbelt, MD 20771, USA} \affil{Center for Research and Exploration in Space Science and Technology, NASA/GSFC, Greenbelt, MD 20771} \author{Susan A. Kassin} \affiliation{Space Telescope Science Institute, Baltimore, MD, 21218, USA} \affiliation{Dept. of Physics \& Astronomy, Johns Hopkins University, 3400 N. Charles St., Baltimore, MD, 21218, USA} \author[0000-0002-8816-5146]{Peter Kurczynski} \affiliation{Observational Cosmology Laboratory, Code 665, NASA Goddard Space Flight Center, Greenbelt, MD 20771} \author[0000-0003-2366-8858]{Rebecca L. Larson} \affiliation{NSF Graduate Fellow} \affiliation{Department of Astronomy, The University of Texas at Austin, Austin, TX, USA} \author[0000-0002-9393-6507]{Gene C. K. Leung} \affiliation{Department of Astronomy, The University of Texas at Austin, Austin, TX, USA} \author[0000-0002-7530-8857]{Arianna S. Long} \affiliation{Department of Astronomy, The University of Texas at Austin, Austin, TX, USA} \author[0000-0003-1581-7825]{Ray A. 
Lucas} \affiliation{Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA} \author[0000-0002-6777-6490]{Benjamin Magnelli} \affiliation{Universit\'e Paris-Saclay, Universit\'e Paris Cit\'e, CEA, CNRS, AIM, 91191, Gif-sur-Yvette, France} \author{Kameswara Bharadwaj Mantha} \affiliation{Minnesota Institute for Astrophysics, University of Minnesota, 116 church St SE, Minneapolis, MN, 55455, USA.} \author[0000-0002-7547-3385]{Jasleen Matharu} \affiliation{Department of Physics and Astronomy, Texas A\&M University, College Station, TX, 77843-4242 USA} \affiliation{George P.\ and Cynthia Woods Mitchell Institute for Fundamental Physics and Astronomy, Texas A\&M University, College Station, TX, 77843-4242 USA} \author[0000-0001-8688-2443]{Elizabeth J.\ McGrath} \affiliation{Department of Physics and Astronomy, Colby College, Waterville, ME 04901, USA} \author{Daniel H. McIntosh} \affiliation{Division of Energy, Matter and Systems, School of Science and Engineering, University of Missouri-Kansas City, Kansas City, MO 64110, USA} \author{Aubrey Medrano} \affiliation{Department of Astronomy, The University of Texas at Austin, Austin, TX, USA} \author[0000-0001-6870-8900]{Emiliano Merlin} \affiliation{INAF Osservatorio Astronomico di Roma, Via Frascati 33, 00078 Monteporzio Catone, Rome, Italy} \author[0000-0001-5846-4404]{Bahram Mobasher} \affiliation{Department of Physics and Astronomy, University of California, 900 University Ave, Riverside, CA 92521, USA} \author[0000-0003-4965-0402]{Alexa M.\ Morales} \affiliation{Department of Astronomy, The University of Texas at Austin, Austin, TX, USA} \author[0000-0001-8684-2222]{Jeffrey A.\ Newman} \affiliation{Department of Physics and Astronomy and PITT PACC, University of Pittsburgh, Pittsburgh, PA 15260, USA} \author[0000-0003-0892-5203]{David C. 
Nicholls} \affiliation{Research School of Astronomy and Astrophysics, Australian National University, Canberra, ACT 2600, Australia} \author[0000-0002-2499-9205]{Viraj Pandya} \affiliation{Columbia Astrophysics Laboratory, Columbia University, 550 West 120th Street, New York, NY 10027, USA} \author[0000-0002-9946-4731]{Marc Rafelski} \affiliation{Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA} \affiliation{Department of Physics and Astronomy, Johns Hopkins University, Baltimore, MD 21218, USA} \author[0000-0001-5749-5452]{Kaila Ronayne} \affiliation{Department of Physics and Astronomy, Texas A\&M University, College Station, TX, 77843-4242 USA} \affiliation{George P.\ and Cynthia Woods Mitchell Institute for Fundamental Physics and Astronomy, Texas A\&M University, College Station, TX, 77843-4242 USA} \author[0000-0002-8018-3219]{Caitlin Rose} \affil{Laboratory for Multiwavelength Astrophysics, School of Physics and Astronomy, Rochester Institute of Technology, 84 Lomb Memorial Drive, Rochester, NY 14623, USA} \author[0000-0003-0894-1588]{Russell E.\ Ryan Jr.} \affiliation{Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA} \author[0000-0002-9334-8705]{Paola Santini} \affiliation{INAF - Osservatorio Astronomico di Roma, via di Frascati 33, 00078 Monte Porzio Catone, Italy} \author[0000-0001-7755-4755]{Lise-Marie Seill\'e} \affiliation{Aix Marseille Univ, CNRS, CNES, LAM Marseille, France} \author[0000-0001-7811-9042]{Ekta A. Shah} \affiliation{Department of Physics and Astronomy, University of California,Davis, One Shields Ave, Davis, CA 95616, USA} \author[0000-0001-9495-7759]{Lu Shen} \affil{CAS Key Laboratory for Research in Galaxies and Cosmology, Department of Astronomy, University of Science and Technology of China, Hefei 230026, China} \affil{School of Astronomy and Space Sciences, University of Science and Technology of China, Hefei, 230026, China} \author[0000-0002-6386-7299]{Raymond C. 
Simons} \affiliation{Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA} \author{Gregory F. Snyder} \affiliation{Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA} \author[0000-0002-8770-809X]{Elizabeth R. Stanway} \affiliation{Department of Physics, University of Warwick, Coventry, CV4 7AL, United Kingdom} \author[0000-0002-4772-7878]{Amber N. Straughn} \affiliation{Astrophysics Science Division, NASA Goddard Space Flight Center, 8800 Greenbelt Rd, Greenbelt, MD 20771, USA} \author[0000-0002-7064-5424]{Harry I. Teplitz} \affiliation{IPAC, Mail Code 314-6, California Institute of Technology, 1200 E. California Blvd., Pasadena, CA 91125, USA} \author[0000-0002-8163-0172]{Brittany N. Vanderhoof} \affil{Laboratory for Multiwavelength Astrophysics, School of Physics and Astronomy, Rochester Institute of Technology, 84 Lomb Memorial Drive, Rochester, NY 14623, USA} \author[0000-0003-2338-5567]{Jes\'us Vega-Ferrero} \affil{Instituto de Astrof\'isica de Canarias, La Laguna, Tenerife, Spain} \author[0000-0002-9593-8274]{Weichen Wang} \affiliation{Department of Physics and Astronomy, Johns Hopkins University, 3400 N. Charles Street, Baltimore, MD 21218, USA} \author[0000-0001-6065-7483]{Benjamin J. Weiner} \affiliation{MMT/Steward Observatory, University of Arizona, 933 N. Cherry St, Tucson, AZ 85721, USA} \author[0000-0001-9262-9997]{Christopher N. A. 
Willmer} \affiliation{Steward Observatory, University of Arizona, 933 N.\ Cherry Ave, Tucson, AZ 85721, USA} \author[0000-0003-3735-1931]{Stijn Wuyts} \affiliation{Department of Physics, University of Bath, Claverton Down, Bath BA2 7AY, UK} \keywords{High-redshift galaxies (734) --- Starburst galaxies (1570) --- Lyman-break galaxies (979) --- Emission line galaxies (412) --- James Webb Space Telescope (2291) --- Galaxy photometry (611) --- Dust continuum emission (412) --- Submillimeter astronomy (1647) --- Near infrared astronomy (1093) --- Radio interferometry (1346) } \section{Introduction} \label{sec:intro} The superb sensitivity of JWST, coupled with its high angular resolution and its near-infrared detectors \citep{Rigby2022a}, provides a unique view of the Universe previously invisible to other telescopes, from nearby star-forming regions to the furthest, faintest galaxies ever found. In the field of extragalactic astronomy, JWST allows us to extend the Lyman Break Galaxy (LBG) selection technique beyond $z\gtrsim11$, the redshift at which the Lyman break is redshifted beyond the reach of {\it Hubble Space Telescope} coverage (Hubble served as the work-horse instrument for the identification of such galaxies before the arrival of JWST; see reviews by \citealt{Finkelstein2016,Stark2016,Robertson2021} and references therein). The identification of very high-redshift LBGs has strong implications for our understanding of galaxy formation and evolution. For example, the confirmation of large numbers of $z>11$ galaxies can provide strong constraints on the formation epoch of the first galaxies and their star formation efficiencies. Their existence can shed light on the dark matter halo mass function in the early Universe, particularly with the presence of very luminous sources found $\lesssim$400\,Myr after the Big Bang (e.g. \citealt{Behroozi2019a}). 
In the first few days after the release of JWST observations, an increasing number of samples of LBG candidates at $z\gtrsim10$ were identified (\citealt{Adams2022a,Atek2022,Castellano2022,Donnan2022,Finkelstein2022,Harikane2022,Naidu2022,Yan2022}). The abundance and masses of these sources start to be in tension with the predictions from most galaxy formation models (\citealt{Boylan-Kolchin2022,Finkelstein2022,Lovell2022}). Nevertheless, the observed colors for some of these very high-redshift candidates may be degenerate with those of other populations of galaxies at lower redshifts. This results from confusion between the Lyman-$\alpha$ forest break at $z > 12$ and the Balmer and 4000~\AA\ breaks, combined with dust attenuation and/or strong nebular emission. This means that Dusty Star Forming Galaxies (DSFGs) at significantly lower redshifts ($z\lesssim6-7$) can mimic the JWST/NIRCam colors of $z\gtrsim10$ LBGs, particularly in the shortest-wavelength filters. While models tend to assume these galaxies are universally red in color, thus distinguishable from the typically very blue LBGs, the complex environments of the ISM within DSFGs plus contamination from nebular emission lines could lead to a mix of observed near-infrared colors \citep{howell2010a,casey2014b}, further obfuscating the secure identification of ultra high-redshift LBGs. The phenomenon of DSFGs contaminating high-redshift LBG searches is, in fact, not new to JWST, as $z\sim2-3$ DSFGs were often found to contaminate $z\sim6-8$ LBG samples selected by {\it HST} (e.g. \citealt{Dunlop2007}); here, both the contaminants (DSFGs at $z\sim4-6$) and LBG targets ($z\sim10-20$) for JWST have shifted to higher redshifts. 
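The break confusion described above can be made concrete with a short numerical sketch (pure Python; the rest-frame break wavelengths are standard values, and the example redshifts are illustrative, not measurements from this work):

```python
# Rest-frame wavelengths (micron) of the two spectral breaks discussed above.
LYA_BREAK_UM = 0.1216    # Lyman-alpha break
D4000_UM = 0.4000        # 4000 Angstrom break

def observed_um(rest_um, z):
    """Observed-frame wavelength of a rest-frame feature at redshift z."""
    return rest_um * (1.0 + z)

# The Lyman break of a z = 12.2 galaxy ...
lam_obs = observed_um(LYA_BREAK_UM, 12.2)   # ~1.61 micron, i.e. near F150W

# ... lands at the same observed wavelength as the 4000 A break of a
# galaxy at z ~ 3, which is why broad-band colors alone are degenerate.
z_interloper = lam_obs / D4000_UM - 1.0     # ~3.0
```

Adding dust attenuation or strong emission lines to the low-redshift interloper only deepens this degeneracy, which is why longer-wavelength data are needed to break it.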
The secure identification of LBGs is thus important not only to quantify the contamination fraction in $z\gtrsim10$ LBG samples (which could relax the observed tension between observations and model predictions) but also to constrain the volume density and physical properties of early massive quiescent galaxies and high-redshift DSFGs, an important step towards our ultimate goal of understanding galaxy formation and evolution. However, distinguishing these galaxies from other populations has proven challenging and requires spectroscopic or multi-wavelength observations probing the older stellar populations (for the quiescent systems) or the dust thermal emission (for DSFGs). Here, we use JCMT/SCUBA-2 $850\,\rm\mu m$ and NOEMA 1.1\,mm interferometric observations, in combination with the JWST data from the Cosmic Evolution Early Release Science (CEERS) Survey (\citealt{Finkelstein2017a}; \citealt{Bagley2022}; Finkelstein et al., in prep), to search for dust emission around $z>12$ galaxy candidates and NIRCam dropout sources. We report on a galaxy, \zavalasource, that is undetected in the NIRCam F115W and F150W filters, but whose photometric redshift is well-constrained to be around $z=5$ after including (sub)millimeter data. We also study the $z\sim16.7$ candidate, \donnansource, reported in \citet{Donnan2022}, for which we find a tentative $2.6\sigma$ detection at $850\,\rm\mu m$, and show that, if this emission is real and associated with this source, it would imply a lower redshift solution around $z\sim5$. Finally, we examine all the available long-wavelength (mid-IR to millimeter) observations around the $z\approx12$ candidate known as \finkelsteinsource\ (\citealt{Finkelstein2022}), finding no evidence of continuum emission. This manuscript is organized as follows: \S\ref{secc:observations} describes the new observations and the ancillary datasets. 
In \S\ref{secc:sed_fitting} we describe the SED fitting methodology and the best-fit SED for \zavalasource\ along with the inferred physical properties. Then, \S\ref{secc:z16_source} introduces our search for potential contamination from other DSFGs in samples of $z>12$ LBG candidates in the CEERS field, including \donnansource\ and \finkelsteinsource. Finally, our conclusions are summarized in \S\ref{secc:conclusions}. In this manuscript, we assume $H_0=67.3\,\rm km\,s^{-1}\,Mpc^{-1}$, $\Omega_\Lambda=0.68$, and $\Omega_{\rm M}=0.32$ (\citealt{Planck2016a}). \section{Observations}\label{secc:observations} \subsection{NOEMA observations} We obtained NOEMA continuum observations of a sample of 19 DSFG candidates in the Extended Groth Strip (EGS) field in preparation for CEERS JWST data, as part of the NOEMA Program W20CK (PIs: Buat \& Zavala). The targets were selected from the original sample reported in \cite{Zavala2017a,Zavala2018a} based on deep observations at both 450 and 850$\,\mu\rm m$ obtained with the SCUBA-2 camera on the James Clerk Maxwell Telescope (JCMT). Here, we only focus on CEERS-DSFG-1 (known as 850.027 in \citealt{Zavala2017a,Zavala2018a}). The rest of the observations, along with a detailed description of the sample selection, will be presented elsewhere (Ciesla et al. in preparation). NOEMA observations were performed using the wideband correlator {\it Polyfix} covering the frequency ranges $252.5-260\,\rm GHz$ (with the lower side band) and $268-275.5\,\rm GHz$ (with the upper side band). The on-source integration time varied from $\sim10$ to $\sim50\,$min and was determined based on the 850$\,\mu\rm m$ flux densities of each target. For the main target of this paper, \zavalasource, the on-source integration time was around $25\,$min. 
Calibration and imaging of the {\it uv} visibilities were then performed with {\sc gildas}\footnote{\url{www.iram.fr/IRAMFR/GILDAS}}, producing continuum maps with $0\farcs15\times0\farcs15$ pixels centered at $270\,\rm GHz$. For \zavalasource, the achieved RMS is $\sigma_{\rm 1.1mm}=0.10\,$mJy\,beam$^{-1}$ and the beam-size is $1\farcs35\times0\farcs85$. The continuum flux density at 1.1\,mm was extracted using an aperture of $1.5\times$ the beam-size to recover any potential extended emission resolved by the beam. Our NOEMA observations did not explicitly target the other two sources we include in this paper, \donnansource\ or \finkelsteinsource, although the former is covered in a low-sensitivity, outlying part of the primary beam of the observations of \zavalasource. We discuss this further in \S~\ref{secc:z16_source} below. \subsection{CEERS data} JWST/NIRCam observations were conducted as part of the CEERS (Finkelstein et al., in prep) Survey program, one of the early release science surveys (\citealt{Finkelstein2017a}). Here, we only use data from CEERS pointing \#2, which covers all three objects we study in seven filters: F115W, F150W, F200W, F277W, F356W, F410M, and F444W. Using a three-dither pattern, the total exposure time was typically 47\,min per filter, with the exception of F115W, whose integration time is longer (see details in \citealt{Finkelstein2022} and Finkelstein et al., in prep.). We performed a detailed reduction as described in \citet{Bagley2022} and \citet{Finkelstein2022b}. What follows is a brief summary of the main steps, and we refer the reader to these two papers for more details. We used version 1.7.2 of the \textit{JWST} Calibration Pipeline\footnote{\url{jwst-pipeline.readthedocs.io}}, with custom modifications. Raw images were processed through Stages 1 and 2 of the pipeline, which apply detector-level corrections, flat fielding, and photometric flux calibration. 
We also applied a custom step to measure and remove $1/\rm f$ noise. We aligned the F200W images to an \textit{HST}/WFC3 F160W reference catalog created from $0\farcs03$/pixel mosaics in the EGS field with astrometry tied to Gaia-EDR3 \citep[see][for more details about the methodology]{koekemoer2011}. We then aligned each NIRCam filter to F200W, achieving a median astrometric offset of $\lesssim0\farcs005$. Our steps represent an initial reduction that will be iteratively improved with updates to the Calibration Pipeline and reference files. The flux extraction was done following Finkelstein et al. (in prep.). Briefly, we use a multi-wavelength photometric catalog created with Source Extractor \citep{Bertin1996a}, using the sum of the F277W and F356W images as the detection image, with colors measured in small Kron apertures on images PSF-matched to F444W. Total fluxes were estimated following an aperture correction based on the ratio between the large Kron (MAG\_AUTO) flux and the small Kron flux in the F444W image, with an additional correction for missing light in the large aperture based on simulations. Finally, a systematic offset of 1-5\% was applied based on comparing the colors of best-fitting model templates to the photometry of $\sim$800 spectroscopically confirmed galaxies. \subsection{Other ancillary data} Photometric constraints at 450 and 850$\,\mu\rm m$ were obtained from \citet{Zavala2017a}, who reported deep observations with central depths of $\sigma_{450\mu\rm m} = 1.2\,$mJy\,beam$^{-1}$ and $\sigma_{850\mu\rm m} = 0.2\,$mJy\,beam$^{-1}$, respectively, with beam-sizes of $\theta_{450\mu\rm m}\approx8''$ and $\theta_{850\mu\rm m}\approx14\farcs5$. We also make use of {\it Spitzer} IRAC 8$\,\mu$m (\citealt{Barro2011a}) and MIPS 24$\,\mu$m observations \citep{Magnelli2009}, as well as {\it Herschel} photometry from PACS (at 100 and 160$\,\mu\rm m$; \citealt{Lutz2011a}) and SPIRE (at 250, 350, and 500$\,\mu\rm m$; \citealt{oliver2012a}). 
Note, however, that the sources studied here are not detected in the {\it Spitzer} or {\it Herschel} maps, and so we adopt only upper limits. In addition, we use a 3\,GHz mosaic of the EGS field (Dickinson, private communication) obtained using observations from the {\it Karl G. Jansky} Very Large Array (VLA) as part of the program 21B-292 (PI: M. Dickinson). It reaches a sensitivity of $1.5\,\rm \mu Jy\,beam^{-1}$ and an angular resolution of $2.3\times2.3\,\rm arcsec$. The photometry extracted from these observations is summarized in Table \ref{tab:tab2}. \section{A JWST/NIRCam dropout: a DSFG at redshift five}\label{secc:sed_fitting} \begin{deluxetable}{ccc}[h] \vspace{2mm} \tabletypesize{\small} \tablecaption{Measured Photometry of \zavalasource} \tablewidth{\textwidth} \tablehead{ \colhead{Instrument/Filter} & \colhead{Wavelength} & \colhead{Flux Density}\\ } \startdata NIRCam/F115W & 1.15\,\um & $-$8$\pm$11\,nJy \\ NIRCam/F150W & 1.50\,\um & 18$\pm$13\,nJy \\ NIRCam/F200W & 2.00\,\um & 41$\pm$13\,nJy \\ NIRCam/F277W & 2.77\,\um & 137$\pm$8\,nJy \\ NIRCam/F356W & 3.56\,\um & 259$\pm$8\,nJy \\ NIRCam/F410M & 4.10\,\um & 420$\pm$15\,nJy \\ NIRCam/F444W & 4.44\,\um & 438$\pm$12\,nJy \\ {\sc PACS}/100\,\um & 100\,\um & 0.11$\pm$0.51\,mJy \\ {\sc PACS}/160\,\um & 160\,\um & 0.1$\pm$3.5\,mJy \\ {\sc SPIRE}/250\,\um & 250\,\um & $-$1.1$\pm$5.8\,mJy \\ {\sc SPIRE}/350\,\um & 350\,\um & $-$4.5$\pm$6.3\,mJy \\ {\sc Scuba-2}/450\,\um & 450\,\um & $-$2.5$\pm$1.7\,mJy \\ {\sc SPIRE}/500\,\um & 500\,\um & $-$1.0$\pm$6.8\,mJy \\ {\sc Scuba-2}/850\,\um & 850\,\um & 2.25$\pm$0.36\,mJy \\ {\sc NOEMA}/1.1\,mm & 1.1\,mm & 1.92$\pm$0.11\,mJy \\ \enddata \tablecomments{AB magnitudes can be derived via: $-2.5\,\rm log_{10}(f_\nu[\rm nJy])+31.4$. 
\zavalasource\ is formally not detected in F115W, all of the {\it Herschel} bands from 100\,$\mu$m to 500\,$\mu$m, and SCUBA-2 450\,$\mu$m.} \label{tab:tab2} \end{deluxetable} The sub-arcsecond positional accuracy of the NOEMA observations allows us to directly identify the submillimeter-selected galaxy, \zavalasource, in the JWST/NIRCam observations (see Figure \ref{fig:RGB_postage}) without any ambiguity. Interestingly, \zavalasource\ is well-detected in F200W and redder bands but abruptly drops out in F150W and F115W and in all the {\it HST} filters, as can be seen in Figure \ref{fig:postages}. The drop-out nature of this source satisfies some of the color criteria used to identify $z>10$ galaxy candidates. Indeed, it satisfies the criterion of $m_{150\rm W}-m_{200\rm W}>0.8$ used in \citet{Yan2022}, and some (but not all) of the criteria used in \citet{Donnan2022}, with $2\sigma$ non-detections in F115W and F150W, and $>3\sigma$ detections in redder filters (see Figure \ref{fig:postages} and Table \ref{tab:tab2}). However, the identification of this source as a DSFG calls into question such a very high-redshift scenario, given that the highest-redshift dust continuum detections ever reported are at $z\sim7-8$ \citep{Laporte2017a,Strandet2017a,Marrone2018a,Tamura2019a,Inami2022}. Moreover, the (sub)millimeter emission would imply an extreme IR luminosity in excess of $\sim10^{13}\,\rm L_\odot$ and a large dust mass in tension with current models. This is thoroughly discussed in Appendix \ref{app:k-correction}, where we show that relatively bright (sub)millimeter sources are unlikely to lie at $z>10$. Here we conduct a more thorough investigation into the possible redshift of \zavalasource\ using JWST constraints alone, (sub)millimeter constraints alone, and a combination of both JWST and long-wavelength millimeter data. The results depend strongly on the available photometric constraints, and the inferred redshifts differ significantly, as discussed below. 
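The magnitude conversion quoted in the note to Table \ref{tab:tab2}, together with the drop-out and detection-significance conditions above, can be checked directly from the measured flux densities (a minimal Python sketch using the tabulated F150W and F200W values):

```python
import math

def ab_mag(f_nJy):
    """AB magnitude from a flux density in nJy: m = -2.5 log10(f) + 31.4."""
    return -2.5 * math.log10(f_nJy) + 31.4

# Measured NIRCam flux densities for CEERS-DSFG-1 (nJy; Table 2).
f150, e150 = 18.0, 13.0   # NIRCam/F150W
f200, e200 = 41.0, 13.0   # NIRCam/F200W

color = ab_mag(f150) - ab_mag(f200)   # F150W - F200W ~ 0.9 mag
satisfies_dropout = (
    color > 0.8             # red F150W - F200W color (Yan et al. criterion)
    and f150 / e150 < 2.0   # < 2 sigma in F150W
    and f200 / e200 > 3.0   # > 3 sigma in F200W
)
```

With these numbers the source formally passes the single-color cut, illustrating how a $z\sim5$ DSFG can slip into a $z>10$ candidate list.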
\subsection{SED fitting procedure and redshift constraints} \subsubsection{{\sc EAZY}} We first fit the SED of \zavalasource\ using the JWST/NIRCam photometry alone with the {\sc eazy} (\citealt{Brammer2008a}) spectral energy distribution (SED) fitting code. The fitting was performed in a fashion identical to that in \citet{Finkelstein2022}. To summarize, EAZY makes use of a user-supplied template set to generate linear combinations of stellar populations that fit the data and generate redshift probability distributions. The template set used in our case includes the ``tweak\_fsps\_QSF\_12\_v3'' set of 12 templates as well as six additional templates that span bluer colors (\citealt{Larson2022}). As shown in Figure \ref{fig:zPDF}, the redshift probability density distribution from EAZY shows two significant peaks at $z\sim3$ and $z\sim5$, and a non-negligible probability ($\sim6\%$) at $z\approx12-14$. To put these fits in context with the (sub)millimeter data, we show in Figure \ref{fig:fullsed} the best-fit SED from EAZY at $z=5.5$, corresponding to the redshift with the maximum probability. \subsubsection{{\sc CIGALE}}\label{secc:cigale} We also fit the photometry using {\sc CIGALE} \citep{Burgarella2005,Noll2009,Boquien2019} assuming a delayed star formation history (SFH): SFR(t) $\propto t \exp(-t/\tau)$, with stellar models from \citet{Bruzual_Charlot2003} (BC03). A \citet{Calzetti2000} law is adopted for the dust attenuation of the stellar continuum, while the nebular emission (continuum and lines) is attenuated with a screen model and an SMC extinction curve (\citealt{Pei1992a}). Finally, the dust emission re-emitted in the infrared (IR) is modeled with the \citet{Draine2014} models. Including only JWST/NIRCam photometry in the fit results in a redshift distribution similar to the one obtained with EAZY, with significant probability at $z\approx3-5$ and a moderate probability of $\approx22\%$ of being at $z>10$. 
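The behavior of the delayed SFH adopted in the CIGALE runs can be verified numerically: SFR$(t)\propto t\exp(-t/\tau)$ rises roughly linearly at early times, peaks at $t=\tau$, and then declines exponentially. A quick check (the value of $\tau$ below is illustrative only, not the fitted one):

```python
import math

def delayed_sfr(t_myr, tau_myr):
    """Un-normalized delayed-tau SFH: SFR(t) proportional to t * exp(-t/tau)."""
    return t_myr * math.exp(-t_myr / tau_myr)

tau = 500.0  # Myr -- illustrative timescale, not a fitted parameter
grid = [float(t) for t in range(1, 3001)]  # 1..3000 Myr
t_peak = max(grid, key=lambda t: delayed_sfr(t, tau))
# Analytically, d/dt [t exp(-t/tau)] = 0 at t = tau, so the numerical
# peak of the SFH coincides with the e-folding timescale.
```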
To illustrate how well the high-redshift solutions fit the available data, we include the best-fit SED at $z=13.5$ in Figure \ref{fig:fullsed}. In addition, we fit the JWST data along with the SCUBA-2 and NOEMA detections ({\it Herschel} upper limits were not included in the fit) using the same CIGALE configuration described above. The addition of the long-wavelength data significantly impacts the results, narrowing down the redshift probability distribution of \zavalasource\ (see Figure \ref{fig:zPDF}). The best-fit photometric redshift when using all the available photometric constraints is $z=5.09^{+0.62}_{-0.72}$, where the error bars encompass the 68\% confidence interval. As shown in Figure \ref{fig:fullsed}, the fitted SED from this analysis is in good agreement with all the available photometric constraints, including upper limits. \subsubsection{MMPz} Finally, though the long-wavelength data on \zavalasource\ are somewhat limited, we are able to calculate an independent photometric redshift for the source based on the long-wavelength data alone using the {\sc MMPz} package \citep{Casey2020a}. {\sc MMPz} presumes that sources with significant (sub)mm emission follow an empirically measured relationship between the rest-frame peak wavelength of emission, $\lambda_{\rm peak}$, which is inversely proportional to the characteristic luminosity-weighted dust temperature of the ISM, and the total emergent IR luminosity, $L_{\rm IR}$. This $L_{\rm IR}-\lambda_{\rm peak}$ relation is fairly well constrained out to $z\sim5$ \citep{casey2018a,drew2022a}, where more intrinsically luminous sources have warmer temperatures. {\sc MMPz} generates a redshift probability distribution by computing the $L_{\rm IR}$ and $\lambda_{\rm peak}$ at all possible redshifts, and contrasts that against the empirical distribution of measured SEDs. 
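The temperature dependence of $\lambda_{\rm peak}$ follows from the shape of an optically-thin modified blackbody, $S_\nu\propto\nu^\beta B_\nu(T)$: its peak in $S_\nu$ satisfies $x=(3+\beta)(1-e^{-x})$ with $x=h\nu/k_{\rm B}T$. A sketch (the emissivity index $\beta=1.8$ and the two temperatures are assumed, illustrative values; this is not the MMPz implementation itself):

```python
import math

HC_OVER_K = 14387.77  # h*c/k_B in micron*Kelvin

def peak_wavelength_um(t_dust, beta=1.8):
    """Rest-frame peak (in S_nu) of a modified blackbody nu^beta * B_nu(T).

    The maximum of x^(3+beta) / (e^x - 1) satisfies
    x = (3 + beta) * (1 - exp(-x)); solve by fixed-point iteration.
    """
    x = 3.0 + beta
    for _ in range(100):
        x = (3.0 + beta) * (1.0 - math.exp(-x))
    return HC_OVER_K / (x * t_dust)

# Warmer dust peaks at shorter rest-frame wavelengths -- the temperature
# <-> redshift degeneracy that broadens the MMPz probability distributions.
lam_cold = peak_wavelength_um(35.0)  # ~86 micron
lam_warm = peak_wavelength_um(60.0)  # ~50 micron
```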
By design, redshift solutions found using {\sc MMPz} are very broad (due to the degeneracy between ISM dust temperature, constrained via $\lambda_{\rm peak}$, and redshift). The best-fit redshift generated from the long-wavelength data alone (including the only two detections and all the non-detections) is most consistent with the joint CIGALE fit, but shifted to higher values, with a best-fit redshift of $z=7.77^{+2.55}_{-1.69}$ (see Figure \ref{fig:zPDF}). \subsubsection{The moral of the story} From the above analysis, it is clear that a single-color (i.e. drop-out) selection criterion used to identify high-redshift ($z\gtrsim10$) candidates might include contamination from lower-redshift sources, such as \zavalasource. This contamination could be more severe in studies using pre-flight calibrations, since these render the colors of some galaxies more akin to those expected for very high-redshift systems (see discussion by \citealt{Adams2022a}). This is clearly illustrated in Figure \ref{fig:zPDF}, where we have also included the photometric redshift constraints from EAZY using a pre-flight calibration. That fit suggests a very high redshift of $z=18.2^{+1.2}_{-0.7}$ with an almost negligible probability at $z<15$. Careful selection criteria (with several conditions) are thus necessary to produce cleaner samples of high-redshift galaxies. \citet{Finkelstein2022b} and \citet{Harikane2022}, for example, implemented a further criterion based on the significance of the high-redshift solution against secondary lower-redshift solutions (defined by the difference between the $\chi^2$ values of the high-redshift and low-redshift solutions) to select robust candidates (see also \citealt{Donnan2022}). Similarly, other studies used a two-color criterion to minimize contaminants (e.g. \citealt{Adams2022a,Atek2022,Castellano2022,Harikane2022}) that would have prevented the selection of \zavalasource\ as a very high-redshift candidate given its red colors at longer wavelengths (e.g. 
$m_{277\rm W}-m_{444\rm W}>1.26$). Note, however, that despite these extra selection criteria, lower-redshift systems might still masquerade (and be misidentified) as very high-redshift galaxies, as discussed in \S\ref{secc:z16_source}. \subsection{On the physical properties of \zavalasource} As mentioned above, the joint fit of CIGALE using the JWST/NIRCam and the (sub)millimeter data provides tight constraints on the redshift of our target and its physical properties. Hence, here we adopt these results as our fiducial values. The inferred physical properties are summarized in Table \ref{tab:properties} and discussed below. \begin{deluxetable}{lc} \vspace{2mm} \tabletypesize{\small} \tablecaption{Properties of \zavalasource} \tablewidth{\textwidth} \tablehead{\multicolumn{1}{c}{Property} & \multicolumn{1}{c}{Value}} \startdata Source ID&CEERSJ141938.19$+$525613.9\\ RA~(J2000 [deg])&214.9091152\\ Dec~(J2000 [deg])&52.9371977\\ $z_{\rm CIGALE}$ & $5.09^{+0.62}_{-0.72}$\\ M$_{\star}$~(M$_{\odot}$) & $(2.1 \pm 0.8)\times10^{10}$ \\ L$_{\rm IR}$~(L$_{\odot}$) & $(1.1 \pm 0.3)\times10^{12}$ \\ SFR~(M$_{\odot}$ yr$^{-1}$) & $110 \pm 30$\\ sSFR~(Gyr$^{-1}$) & $5.2 \pm 2.5$\\ E(B-V)~(mag) & $1.6\pm 0.1$ \\ Age~(Myr) & 490$\pm 240$\\ Mass weighted Age~(Myr) & 170$\pm 90$\\ \enddata \tablecomments{The redshift and the listed physical properties were derived from the joint fit of CIGALE using the JWST/NIRCam data and the available (sub)millimeter constraints.} \label{tab:properties} \vspace{-8mm} \end{deluxetable} Assuming the best-fit redshift of $z=5.1$, the stellar mass of \zavalasource\ is constrained to be $(2.1 \pm 0.8)\times10^{10} M_\sun$. This is a factor of $\sim4$ smaller than the average mass of DSFGs detected by single-dish telescopes (e.g. 
\citealt{daCunha2015a}), but aligned with expectations, since our source was selected from one of the deepest SCUBA-2 850\um\ surveys and has a fainter 850\um\ flux density than typical galaxies identified in shallower single-dish telescope surveys. Indeed, the stellar mass of our target is in better agreement with other SCUBA-2 galaxies identified in this field, which have an average stellar mass of $\approx5.6\times10^{10} M_\sun$ (\citealt{Cardona-Torres2022a}), and with the masses derived for galaxies identified in recent deeper ALMA surveys (e.g. \citealt{Gomez-Guijarro2022}; see also \citealt{Khusanova2021}). Similarly, the SFR of \zavalasource\ of $\rm 110\pm 30\, M_\sun~yr^{-1}$ (averaged over the last 10\,Myr) lies between those of SMGs and of fainter DSFGs identified in deeper ALMA observations (\citealt{daCunha2015a,Zavala2018b,Aravena2020a,Casey2021,Khusanova2021,Gomez-Guijarro2022}). These properties imply a specific star formation rate of $\rm sSFR=5.2\pm2.5\,\rm Gyr^{-1}$, meaning that \zavalasource\ lies on the main-sequence of star forming galaxies, similar to the so-called population of ``{\it HST}-dark'' galaxies\footnote{\zavalasource\ is, by definition, an ``HST-dark'' galaxy.} (e.g. \citealt{Wang2019a}). At $z = 5.1$, the NIRCam photometry samples rest-frame wavelengths from 0.2 to 0.7$\,\mu$m, allowing us to constrain the stellar dust attenuation. The red spectral shape in the NIRCam bands implies a strong dust attenuation (as typically found for this kind of galaxy; e.g. \citealt{Simpson2017}) with $E(B-V)=1.6\pm 0.04$, which results in a dust luminosity of $(1.1 \pm 0.3)\times10^{12}\,\rm L_\sun$. \section{Searching for DSFG contaminants in high-redshift LBG candidates identified with JWST}\label{secc:z16_source} The SCUBA-2 observations from \citet{Zavala2017a} partially overlap with the CEERS NIRCam survey and thus can be used to look for dust continuum emission around $z>10$ candidates in the field. 
Here we focus on two recently reported high-redshift candidates: \donnansource, reported to be at $z\approx16.7$ \citep{Donnan2022}, and \finkelsteinsource\ at $z\approx11.8$ (\citealt{Finkelstein2022}). \subsection{A deeper look into \donnansource} A $2.6\sigma$ tentative detection around the position of \donnansource\ (\citealt{Donnan2022}; RA=214.91450, DEC=52.943033) was found in the $850\,\mu\rm m$ SCUBA-2 map with a flux density of 0.65$\pm$0.26\,mJy (see Figure \ref{fig:edinburgh_source}). Unfortunately, this source was not formally targeted by our NOEMA observations and, although it is only $26''$ away from \zavalasource\ and within the coverage of the NOEMA map described above, it lies on the edge of the map, where the sensitivity is very low (with a primary beam response of $\lesssim0.1$, implying an RMS of $\sigma_{\rm 1.1mm}\gtrsim1\,$mJy\,beam$^{-1}$). \subsubsection{Caveats of a Marginal SCUBA-2 Detection} We emphasize that there are two primary reasons why this marginal detection may not conclusively imply that \donnansource\ is a significant thermal dust emitter. The first concern is the significance of the signal itself and the possibility that it is spurious. At 2.6$\sigma$, simulations of blind detections of single-dish submillimeter sources indicate false-positive rates as high as $\sim30-40$\,\%\ \citep{Casey2013a,Casey2014a}. These false-positive rates are estimated both by searching SCUBA-2 maps for negative significance peaks at $-$2.6$\sigma$ and by conducting source injection tests on SCUBA-2 jackknife maps \citep[following the same methodology as][see their Figure 7]{Casey2013a}. To complement these results, we test the reliability of these low signal-to-noise ratio peaks by creating a catalog of $2.5\sigma$ to $3.0\sigma$ SCUBA-2 sources and searching for counterparts in the deep VLA 3\,GHz map (Dickinson, private communication). 
We find clear associations for at least $50\%$ of the SCUBA-2 sources\footnote{Given the surface densities of the radio and SCUBA-2 sources, the probability of a chance alignment is $<5\%$.}, implying a $\sim50\%$ fidelity rate. A similar result is obtained using the 24\um\ map. Note, however, that this reliability fraction of $50\%$ should be considered a lower limit, since it is well known that a significant fraction (as high as 30-40\%) of submm sources lack radio or mid-infrared counterparts (particularly those at $z>3$; \citealt{chapman2003,barger2007,Pope2006,Dye2008}). We thus conclude that the $2.6\sigma$ SCUBA-2 signal around CEERS-93316 has a $\gtrsim50\%$ probability of being real. The second significant concern is that, even if the detection is real, the SCUBA-2 beamsize is large enough that the 850$\,\mu$m emission could arise from another galaxy at a close angular separation from \donnansource\ on the sky. Figure~\ref{fig:edinburgh_source} shows the neighboring sources within the beamsize of the SCUBA-2 tentative detection, with contours overlaid for {\it Spitzer} 8\,\um\ emission, 24\um\ emission, and VLA 3\,GHz continuum. Unfortunately, there is no secure emitter at these wavelengths to which we can definitively assign the 850$\,\mu$m emission, which would have allowed us to unequivocally rule out an association with \donnansource. Note that the lack of such a counterpart does not imply the tentative SCUBA-2 emission is spurious, since $z>3$ galaxies are usually undetected in these bands (this is indeed the case for \zavalasource). This lack of detection rather means that it is not implausible to associate the 850\um\ emission with \donnansource, although it also does not confirm the association. Another possible counterpart could be the 8\um\ emitter (with a $\sim2.5\sigma$ significance) to the northwest, which has a photometric redshift of $z\sim5$, though it is farther from the signal-to-noise peak in the SCUBA-2 map than \donnansource. 
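Chance-alignment probabilities such as the one quoted in the footnote above can be estimated with the standard Poisson argument: for counterpart candidates of surface density $n$, the probability of at least one unrelated source falling within a matching radius $r$ is $P=1-e^{-\pi n r^2}$. A sketch with assumed, purely illustrative numbers (not the values used in this work):

```python
import math

def p_chance(n_per_arcmin2, r_arcsec):
    """Poisson probability of >=1 unrelated source of surface density n
    falling within a matching radius r: P = 1 - exp(-pi * n * r^2)."""
    n_per_arcsec2 = n_per_arcmin2 / 3600.0
    return 1.0 - math.exp(-math.pi * n_per_arcsec2 * r_arcsec ** 2)

# Assumed for illustration: ~1 counterpart candidate per arcmin^2 and a
# matching radius of half the ~14.5 arcsec SCUBA-2 850um beam.
p = p_chance(1.0, 7.25)   # a few per cent
```

For denser counterpart populations or larger matching radii the probability grows quickly, which is why low surface-density radio counterparts make for relatively clean associations.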
At present, we lack sufficient data to clearly associate the emission with \donnansource\ or other neighboring sources. Follow-up interferometric observations would be necessary to provide both a confirmation of the emission and astrometric localization to \donnansource\ or to a neighboring source. Nevertheless, given the remarkable properties of \donnansource\ (being one of the highest-redshift candidates ever reported, with a bright UV magnitude of $M_{\rm UV}=-21.7$), below we explore the impact that the tentative submm detection might have on its redshift solution {\it if} the dust emission is real {\it and} associated with it. \subsubsection{Implications if Dust Emission is associated with \donnansource} First, we consider what the implications would be if \donnansource\ had significant dust emission at its proposed redshift of $z=16.7$. The observed 850\um\ emission would probe the rest-frame $\sim$50\um\ regime; in this scenario, the IR luminosity would be above $10^{12}$\,L$_\odot$ with a dust mass of $\sim$10$^{8}$\,M$_\odot$. A system with such a high dust mass found $\sim$230\,Myr after the Big Bang would surely be extraordinary, likely implausibly so (e.g., \citealt{Dwek2014a}). This is further discussed in Appendix \ref{app:k-correction}, where we show the predicted IR luminosity and dust mass as a function of redshift for a hypothetical submm detection with a flux density similar to that of the tentative emission discussed here. We alternatively explore whether a lower redshift solution would be plausible given the JWST/NIRCam photometric constraints and the observed blue colors in these bands (which contrast with those of \zavalasource). To do that, we fit the JWST/NIRCam data\footnote{Note that since we performed our own data reduction and followed our own source extraction procedure designed to measure accurate colors, the NIRCam fluxes for \donnansource\ used in this paper could differ from those in \citet{Donnan2022}. 
We list the adopted fluxes for the SED fitting in Appendix \ref{appendix}. } along with the tentative $850\,\rm\mu m$ flux density with CIGALE. While the redshift distribution from this fitting strongly favors a high-redshift solution in agreement with the \citet{Donnan2022} result (with $z_{\rm CIGALE}\approx 16.4$; see Figure \ref{fig:zPDF_donnan}), the best-fit SED does not satisfactorily reproduce the tentative submillimeter flux density, which is underestimated by more than an order of magnitude (see Figure~\ref{fig:fullsed_donnan}). Interestingly, the redshift probability distribution does show a secondary peak at $z\sim4.8$, although with a low integrated probability of less than $3\%$ (see Figure \ref{fig:zPDF_donnan}). This peak is seen even without the inclusion of the long-wavelength emission, and it is also seen in the redshift probability density distribution presented by \citet[see their Figure A1]{Donnan2022}. This lower-redshift solution clearly dominates the probability distribution of the EAZY fitting when imposing a maximum redshift\footnote{The $z=10$ threshold was chosen based on the discussion presented in Appendix \ref{app:k-correction}. Note that other works have also followed this strategy to better assess the feasibility of low-redshift solutions (e.g. \citealt{Finkelstein2022b}).} of $z=10$, as shown in Figure \ref{fig:zPDF_donnan}. To further explore the feasibility of this alternative redshift solution, we re-run CIGALE with the redshift fixed to $z=4.8$. The resulting SED is shown in Figure \ref{fig:fullsed_donnan} along with the best-fit $z\sim16$ SEDs from EAZY and CIGALE, for comparison. In the low-redshift scenario, the strong break seen between F200W and F277W in \donnansource\ is attributable to strong [OIII] and H$\beta$ emission in the F277W band (see Figure \ref{fig:fullsed_donnan}). Similarly, the excess flux in F356W above the continuum, which produces a blue F356W-F410M color, would be attributable to H$\alpha$ emission. 
The measured NIRCam photometry would thus require a young starburst with strong nebular line emission to satisfy a $z\sim4.8$ solution, but this would be within the realm of expectation for an early-stage DSFG in formation at these redshifts. The $z\approx4.8$ best-fit SED would imply a SFR averaged over 10 Myr of $\rm 20 \pm 10\,\rm M_\sun\,yr^{-1}$ and a stellar mass of $(1.4 \pm 0.5)\times10^{9}\,\rm M_\sun$, with a dust attenuation (for both continuum and lines) of $E(B-V)=0.5\pm 0.1$ and a dust luminosity of $(1.7 \pm 0.8)\times10^{11}\,\rm L_\sun$. These properties are in broad agreement with those derived for the relatively faint population of $z\sim7$ dusty galaxies in the REBELS survey (\citealt{Bouwens2020a,Inami2022}). In addition, the line fluxes required to reproduce the given NIRCam photometry range from $\sim1\times10^{-18}-1\times10^{-17}$\,erg\,s$^{-1}$\,cm$^{-2}$, which are within the range of those predicted for \zavalasource. While deep interferometric observations at millimeter wavelengths are required to confirm or refute dust continuum emission in this high-redshift candidate, here we show (see also Appendix \ref{app:k-correction}) that a $z\sim4.8$ scenario associated with a DSFG with strong nebular emission is plausible for \donnansource\ and highly likely if the submillimeter emission is confirmed, despite its blue NIR colors, which are usually associated with the emission of dust-free systems \citep[e.g.][and references therein]{Finkelstein2016}. If this lower redshift solution is true, it would contrast with the low probability of being at $z<15$ inferred from the different redshift probability distributions shown in Figure \ref{fig:zPDF_donnan} (see also \citealt{Donnan2022,Finkelstein2022b}). The reason for this low probability might be related to the low significance of the tentative SCUBA-2 detection in the case of CIGALE, or to the adopted templates and the fitting approach in the case of EAZY.
Interestingly, \citet{Perez-Gonzalez2022}, who used a novel 2D fitting approach with a new set of SED templates, found a best-fit redshift of $4.59\pm0.03$ for this source (known as nircam2-2159 in \citealt{Perez-Gonzalez2022}), with a low probability of being at $z>10$. \subsection{A deeper look into Maisie's Galaxy} Given that the recently reported $z=11.8$ galaxy candidate from \citet{Finkelstein2022} lies close to the two galaxies described above ($\sim78''$ and $\sim65''$ away from \zavalasource\ and from \donnansource, respectively), we carefully examined the available long-wavelength observations to investigate any possible detection of dust emission. Because this source is not covered by our NOEMA observations, we started by looking at the deep SCUBA-2 850\um\ map (\citealt{Zavala2017a}). As shown in Figure \ref{fig:maisies_source}, no significant detection is found (with a measured flux density of S$_{\rm 850}=-0.40\pm0.25$\,mJy at the position of the source). We also searched for significant emission in the {\it Spitzer} 8\um\ and 24\um\ maps, Herschel 100, 160, 250, 350, and 500\um\ imaging, and SCUBA-2 450\um\ observations, finding only non-detections. We thus conclude that a lower redshift scenario for \finkelsteinsource, in which the source is instead associated with a DSFG, is very unlikely. Given the lack of FIR-to-submm detections, the best-fit SEDs and their associated redshift probability distributions for \finkelsteinsource\ would be similar to those reported in \citet{Finkelstein2022}. Hence, to avoid duplication, they are not included in this paper. \section{Conclusions}\label{secc:conclusions} Using the available datasets from the JWST CEERS survey in combination with NOEMA and SCUBA-2 observations, we have demonstrated that DSFGs at $z\sim4-6$ can drop out in the bluest JWST/NIRCam filters while being well-detected in the redder filters.
Such galaxies can even show a significant probability of being at high redshift when performing SED fitting. This is illustrated by the source \zavalasource, an 850\um-selected galaxy with robust interferometric observations at 1.1\,mm by NOEMA that is undetected in the F115W and F150W bands. A joint SED fitting analysis including the NIRCam constraints and the long-wavelength (sub-)millimeter data implies a photometric redshift of $5.09^{+0.62}_{-0.72}$, with physical properties that resemble other DSFGs: $\rm M_\star=(2.1 \pm 0.8)\times10^{10}\,M_\odot$; $\rm SFR=110\pm30\,M_\odot\,yr^{-1}$; $\rm L_{\rm dust}=(1.1 \pm 0.3)\times10^{12}\,\rm L_\sun$. Hence, searches for $z>10$ LBGs that rely only on a dropout selection could suffer significant contamination from lower redshift systems. This could be minimized by adopting multi-color selection criteria or by defining alternative conditions (such as a minimum redshift probability or $\chi^2$ goodness-of-fit; e.g. \citealt{Adams2022a,Castellano2022,Donnan2022,Finkelstein2022,Harikane2022}). Taking advantage of the available submillimeter data in the field, we extended the search for dust continuum emission to two recently reported nearby $z>10$ LBG candidates, \donnansource\ at $z\approx16.7$ (\citealt{Donnan2022}) and \finkelsteinsource\ at $z\approx11.8$ (\citealt{Finkelstein2022}). We found a tentative $2.6\sigma$ detection at 850\um\ around the position of \donnansource. A confirmation of this flux density measurement and a firm spatial association require higher resolution sub-mm imaging. This is particularly important given its high probability of being spurious and the large beam-size ($\approx 14.6''$) of the SCUBA-2 observations, which encompasses several galaxies.
While additional observations are required to corroborate this identification, we use this possible association to illustrate that $z\sim5$ DSFGs can also exhibit blue colors in the JWST/NIRCam bands when strong nebular emission lines are present (with line fluxes on the order of $\sim$10$^{-18}$--10$^{-17}$\,erg\,s$^{-1}$\,cm$^{-2}$), and conclude that (sub)millimeter emission in samples of $z>10$ LBGs likely implies misidentifications of DSFGs at lower redshifts ($z\lesssim7$). Indeed, if \donnansource\ is confirmed to be a dust emitter, our analysis suggests that it would rather lie at $z\sim5$. This work has illustrated both the importance and potential of combining JWST observations with submillimeter/millimeter data, a synergy that allows us to identify and characterize populations of galaxies that were previously unreachable, including both $z\gtrsim5$ DSFGs as well as ultra high-redshift $z>10$ LBGs. In particular, it will become crucial for searches of ultra high-redshift LBGs to closely consider contamination from lower redshift ($z\sim4-7$) dusty sources with significant nebular line emission that can mimic the colors of a higher redshift Lyman break. Despite sitting at lower redshift, new discoveries and characterizations of $z\sim5$ DSFGs will also shed new light on an otherwise mysterious population, of which only a few dozen systems are currently known. Such discoveries will enable a major step forward in our understanding of massive galaxy formation in the first $\sim$1\,Gyr of the Universe's history. \vspace{1cm} \begin{acknowledgments} We thank the reviewer for a constructive report that improved the clarity of our results. We also thank Jim Dunlop for helpful discussions. V.B and D. B. thank the Programme National de Cosmologie et Galaxies and CNES for their support. We thank Médéric Boquien and Yannick Roehlly for their help.
CMC thanks the National Science Foundation for support through grants AST-1814034, and AST-2009577 and additionally the Research Corporation for Science Advancement from a 2019 Cottrell Scholar Award sponsored by IF/THEN, an initiative of Lyda Hill Philanthropies. IA acknowledges support from CONACyT CB-382947. We acknowledge support from STScI through award JWST-ERS-1345. This work is based on observations made with the NASA/ESA/CSA James Webb Space Telescope. The data were obtained from the Mikulski Archive for Space Telescopes at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-03127 for JWST. These observations are associated with program \#1345 and can be accessed in a raw format via \dataset[DOI]{https://doi.org/10.17909/4abm-k128}. This work is based on observations carried out under project number W20CK with the IRAM NOEMA Interferometer. IRAM is supported by INSU/CNRS (France), MPG (Germany) and IGN (Spain). \end{acknowledgments} \facilities{JWST, NOEMA, JCMT} \appendix \section{Assessing the reliability of high-redshift galaxy candidates via dust emission}\label{app:k-correction} Continuum observations at submillimeter and millimeter wavelengths probe galaxies' dust thermal emission for a wide range of redshifts. Here, adopting typical dust SEDs and relationships between dust continuum emission and other physical properties, we estimate the IR luminosity, SFR, and dust mass as a function of redshift implied by a dust continuum detection similar to the one reported in this work. Then, we compare these quantities with the expected galaxies' properties at $z>10$ to assess whether or not they lie within the realm of high-redshift galaxies. For these calculations, we adopt a modified black-body distribution with a dust emissivity index of $\beta=1.8$ for the dust SED (e.g. \citealt{Casey2012a}). Two different dust temperatures of $35$ and $75\,$K are explored. 
Then, the IR luminosity at a given redshift is estimated by, first, scaling the redshifted SED to the 850\,\um\ flux density and, second, integrating over 8–1000\,\um\ (in the rest-frame). The CMB effects on the observed flux density are also taken into account following \citet{daCunha2013a}. The inferred IR luminosity as a function of redshift for a $S_{850\rm\mu m}=1\,$mJy dust detection is shown in the left panel of Figure \ref{fig:k-correction}. The corresponding dust-obscured SFR estimated directly from the IR luminosity (\citealt{Kennicutt2012a}) is also indicated on the right axis. Then, we calculate the dust mass as follows. At a given redshift, we estimate the rest-frame 850\,\um\ flux density, $S_{850\rm\mu m, rest}$, from the scaled SED described above (which takes into account the CMB effects) and use the following equation: \begin{equation} M_{\rm d}=\frac{S_{850\rm\mu m, rest}\,D_L^2}{(1+z)\,\kappa_{\rm ref}\,B(\nu_{\rm ref},T_{\rm d})}, \end{equation} where $\kappa_{\rm ref}$ represents the dust mass absorption coefficient at a reference wavelength and $B(\nu_{\rm ref},T_{\rm d})$ the Planck function evaluated at the same frequency. We adopt $\kappa(\rm 850\mu m)=0.043\,\rm m^2\,kg^{-1}$ (\citealt{Li2001}) for this calculation. The implied dust mass as a function of redshift is plotted in the right panel of Figure \ref{fig:k-correction}. As clearly seen in the figure, a dust continuum detection of $S_{850\rm\mu m}\sim1\,$mJy (or $S_{1.1\rm mm}\sim0.4\,$mJy) at $z>10$ would imply a SFR in excess of $\sim100\,\rm M_\odot\, yr^{-1}$, rapidly reaching $\sim1,000\,\rm M_\odot\, yr^{-1}$ at $z\sim15$ (depending on the adopted temperature). This SFR is significantly higher than what is measured in any $z\gtrsim8$ object and around two orders of magnitude higher than the SFRs inferred for JWST-selected candidates.
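The two steps above (scaling a $\beta=1.8$ modified blackbody to the observed 850 $\mu$m flux density, then integrating for $L_{\rm IR}$ and applying the dust-mass equation) can be sketched numerically. This is a simplified version of the calculation: the flat $\Lambda$CDM parameters are illustrative, and the CMB corrections of da Cunha et al. (2013) are omitted for brevity, so the outputs are only order-of-magnitude checks:

```python
import numpy as np

# Flat LambdaCDM with H0 = 70 km/s/Mpc, Om = 0.3 -- illustrative values only.
H0_KMS_MPC, OMEGA_M = 70.0, 0.3
C_KMS, C_MS, MPC_M = 2.99792458e5, 2.99792458e8, 3.0857e22
H_PLANCK, K_BOLTZ = 6.62607e-34, 1.38065e-23
MSUN_KG, LSUN_W = 1.989e30, 3.828e26

def _trapz(y, x):
    """Trapezoidal integration (avoids NumPy version differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def lum_dist_m(z, n=2000):
    """Luminosity distance in metres from a numerical comoving-distance integral."""
    zz = np.linspace(0.0, z, n)
    ez = np.sqrt(OMEGA_M * (1.0 + zz) ** 3 + 1.0 - OMEGA_M)
    d_c = (C_KMS / H0_KMS_MPC) * _trapz(1.0 / ez, zz)   # comoving distance, Mpc
    return d_c * (1.0 + z) * MPC_M

def planck_si(nu, t_dust):
    """Planck function B_nu in W m^-2 Hz^-1 sr^-1."""
    return 2.0 * H_PLANCK * nu**3 / C_MS**2 / np.expm1(H_PLANCK * nu / (K_BOLTZ * t_dust))

def lir_lsun(s850_mjy, z, t_dust=35.0, beta=1.8):
    """L_IR (rest-frame 8-1000 um) from a beta=1.8 modified blackbody scaled
    to the observed 850 um flux density (no CMB correction -- a simplification)."""
    nu_rest_obs = (1.0 + z) * C_MS / 850e-6   # rest frequency sampled by observed 850 um
    norm = (s850_mjy * 1e-29) / (nu_rest_obs**beta * planck_si(nu_rest_obs, t_dust))
    nu = np.linspace(C_MS / 1000e-6, C_MS / 8e-6, 5000)
    l_nu_int = _trapz(norm * nu**beta * planck_si(nu, t_dust), nu)
    return 4.0 * np.pi * lum_dist_m(z) ** 2 * l_nu_int / (1.0 + z) / LSUN_W

def dust_mass_msun(s850_mjy, z, t_dust=35.0, beta=1.8, kappa=0.043):
    """Dust-mass equation with kappa(850um) = 0.043 m^2/kg (Li & Draine 2001);
    the rest-frame 850 um flux is extrapolated along the same modified blackbody."""
    nu_ref = C_MS / 850e-6
    nu_rest_obs = (1.0 + z) * nu_ref
    colour = ((nu_ref / nu_rest_obs) ** beta
              * planck_si(nu_ref, t_dust) / planck_si(nu_rest_obs, t_dust))
    s_rest = s850_mjy * 1e-29 * colour        # mJy -> W m^-2 Hz^-1, then colour term
    md_kg = s_rest * lum_dist_m(z) ** 2 / ((1.0 + z) * kappa * planck_si(nu_ref, t_dust))
    return md_kg / MSUN_KG
```

For $S_{850}=1$ mJy at $z=5$ with $T_{\rm d}=35$ K, this sketch returns an IR luminosity of order $10^{12}\,L_\odot$ and a dust mass of order $10^{8}\,M_\odot$, broadly consistent with the magnitudes discussed in the text.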
On the right panel of Figure \ref{fig:k-correction}, the inferred dust mass is compared with the maximum mass limit allowed by a $\Lambda$CDM Universe (which depends on redshift and survey volume\footnote{The survey area used for the maximum halo mass calculation corresponds to 34.5\,sq.\,arcmin, the area covered by the current CEERS/NIRCam observations.}). To estimate this limit, we use the halo mass function from \citet{Harrison2013}, scaling, first, the halo mass down by a factor of 20 (following \citealt{Marrone2018a}; see also \citealt{Casey2021a}) to approximate the corresponding galaxy ISM mass, and, second, by a factor of 100, which corresponds to the ISM-to-dust ratio typically measured in massive galaxies (e.g. \citealt{Magdis2012a,Remy-Ruyer2014a,Scoville2016a}). As shown in the figure, within the volume probed by the CEERS observations, a submillimeter detection at $z\gtrsim6$ starts to be in tension (at the $1\sigma$ level) with the maximum mass limit inferred from the halo mass function (when adopting $T_{\rm d}=35\,$K). This could be slightly alleviated if the dust temperature is higher. Nevertheless, \citet{Scoville2016a} argue that, even though the luminosity-weighted dust temperature could be higher at higher redshifts (e.g. \citealt{Faisst2017,Bakx2020,Sommovigo2022}), the mass-weighted temperature is usually cold ($\approx25-35\,$K). Furthermore, even taking the results for this relatively high dust temperature at face value, the implied dust masses exceed the expected mass limit at $z\gtrsim12$, implying that such a system is unlikely to exist. While these estimates represent zero-order approximations and depend strongly on the adopted assumptions (which might not be valid at very high redshifts), it is clear that dust continuum detections (on the order of $S_{850\rm\mu m}\sim1\,$mJy) strongly disfavour high-redshift ($z>10$) solutions for galaxies discovered in small surveys such as those conducted to date by the JWST.
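The halo-to-dust scaling described above amounts to two successive division factors, which can be encoded as a one-line check (the input halo mass below is illustrative, not the actual Harrison & Hotchkiss limit at any particular redshift):

```python
def max_dust_mass(halo_mass_msun, ism_fraction=1.0 / 20.0, dust_to_ism=1.0 / 100.0):
    """Maximum plausible dust mass: halo mass -> ISM mass (factor ~20,
    following Marrone et al. 2018) -> dust mass (ISM-to-dust ratio ~100)."""
    return halo_mass_msun * ism_fraction * dust_to_ism

# A 1e12 Msun maximum halo in the surveyed volume allows at most ~5e8 Msun of dust.
m_dust_max = max_dust_mass(1e12)
```

Any inferred dust mass above this ceiling for the probed volume signals tension with the $\Lambda$CDM expectation, which is the comparison made in the right panel of the figure.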
Submillimeter/millimeter surveys can thus be used to efficiently identify lower-redshift interlopers (i.e. dusty, star-forming galaxies) in samples of very high-redshift galaxy candidates. \section{Extracted photometry for \donnansource}\label{appendix} The photometry used during the SED fitting procedure on \donnansource\ is listed in Table~\ref{tab:donnanphot}. Our fluxes are systematically brighter than those reported by \citet{Donnan2022} in all the detected bands, although the difference is small (with an average magnitude difference of $-0.09\,$mag). This could be related to the different processes used to reduce the data and the applied correction factors, as discussed in \citet{Finkelstein2022b}. \begin{deluxetable}{ccccccccc}[h] \tabletypesize{\small} \tablecaption{Measured Photometry of \donnansource} \tablewidth{\textwidth}\label{tab:donnanphot} \tablehead{ \colhead{Instrument/Filter} & \colhead{Wavelength} & \colhead{Flux Density}\\ } \startdata NIRCam/F115W & 1.15\,\um & -4.1$\pm$5.9\,nJy \\ NIRCam/F150W & 1.50\,\um & 7.0$\pm$6.7\,nJy \\ NIRCam/F200W & 2.00\,\um & 22.5$\pm$4.9\,nJy \\ NIRCam/F277W & 2.77\,\um & 94.2$\pm$4.6\,nJy \\ NIRCam/F356W & 3.56\,\um & 95.8$\pm$3.7\,nJy \\ NIRCam/F410M & 4.10\,\um & 102.4$\pm$7.3\,nJy \\ NIRCam/F444W & 4.44\,\um & 89.7$\pm$5.4\,nJy \\ {\sc Scuba-2}/850\,\um & 850\,\um & 0.65$\pm$0.26\,mJy \\ \enddata \end{deluxetable} \bibliography{sample631}{} \bibliographystyle{aasjournal} \allauthors
Title: The formation of the stripped envelope type IIb Supernova progenitors: Rotation, Metallicity and Overshooting
Abstract: Type IIb supernovae are believed to originate from core-collapse progenitors that have kept only a very thin hydrogen envelope. We aim to explore how physical factors such as rotation, metallicity, overshooting, and the initial orbital period in binaries significantly affect the Roche lobe overflow and the formation of type IIb supernovae. It is found that binaries are the main channel capable of producing type IIb supernova progenitors for initial masses below 20 $M_{\odot}$. The formation of type IIb supernova progenitors is extremely sensitive to the initial orbital period. A less massive hydrogen envelope implies a smaller radius and a higher effective temperature, and vice versa. Binary systems with initial periods between 300 and 720 days produce type IIb progenitors that are red supergiants. Those with an initial period between 50 and 300 days produce yellow supergiant progenitors, and those with initial periods shorter than 50 days, blue supergiant progenitors. Both rapid rotation and larger overshooting can enlarge the carbon-oxygen core mass and lead to a higher core temperature and a lower central density at the pre-collapse phase. They also favour surface nitrogen enrichment but restrict the efficiency of the first dredge-up. SN IIb progenitors with low metallicity have smaller hydrogen envelope masses and radii than their high metallicity counterparts. Ultra-stripped binary models have a systematically higher core mass fraction of $\rm ^{12}C$ left, which has an important influence on the compactness of type IIb progenitors.
https://export.arxiv.org/pdf/2208.11329
\newcommand{\vdag}{(v)^\dagger} \newcommand\aastex{AAS\TeX} \newcommand\latex{La\TeX} \graphicspath{{./}{figures/}} \usepackage[figuresright]{rotating} \usepackage{color} \usepackage{amsmath} \usepackage{epstopdf} \usepackage{threeparttable} \usepackage{booktabs} \begin{document} \title{The formation of the stripped envelope type $\rm \uppercase\expandafter{\romannumeral2}$b Supernova progenitors: Rotation, Metallicity and Overshooting} \author{Gang Long} \affiliation{College of Physics, Guizhou University, Guiyang city, Guizhou Province, 550025, P.R. China} \author{Hanfeng Song} \affiliation{College of Physics, Guizhou University, Guiyang city, Guizhou Province, 550025, P.R. China} \affiliation{Geneva Observatory, Geneva University, CH-1290 Sauverny, Switzerland} \author{Georges Meynet} \affiliation{Geneva Observatory, Geneva University, CH-1290 Sauverny, Switzerland} \author{Andre Maeder} \affiliation{Geneva Observatory, Geneva University, CH-1290 Sauverny, Switzerland} \author{Ruiyu Zhang} \affiliation{College of Physics, Henan Normal University, Xinxiang, Henan Province, 453007, P.R. China} \author{Ying Qin} \affiliation{Department of Physics, Anhui Normal University, Wuhu city, Anhui Province, 241000, P.R. China} \author{Sylvia Ekstr\"om} \affiliation{Geneva Observatory, Geneva University, CH-1290 Sauverny, Switzerland} \author{Cyril Georgy} \affiliation{Geneva Observatory, Geneva University, CH-1290 Sauverny, Switzerland} \author{Liuyan Zhao} \affiliation{College of Physics, Guizhou University, Guiyang city, Guizhou Province, 550025, P.R.
China} \correspondingauthor{Hanfeng Song; Georges Meynet; Andre Maeder} \email{hfsong@gzu.edu.cn;Georges.Meynet@unige.ch; Andre.Maeder@unige.ch} \keywords{Unified Astronomy Thesaurus concepts: Close binary stars (254); Massive stars (732); Stellar rotation (1629); Stellar properties (1624); Stellar structures (1631)} \section{Introduction} Massive stars explode as core-collapse supernovae with varying amounts of hydrogen in their envelopes. Supernovae are classified as Type $\rm \uppercase\expandafter{\romannumeral1}$ and Type $\rm \uppercase\expandafter{\romannumeral2}$ according to the absence or presence of hydrogen lines in the spectrum, and further subdivided into $\rm \uppercase\expandafter{\romannumeral1}$a, $\rm \uppercase\expandafter{\romannumeral1}$b, $\rm \uppercase\expandafter{\romannumeral1}$c, $\rm \uppercase\expandafter{\romannumeral2}$P, $\rm \uppercase\expandafter{\romannumeral2}$L, $\rm \uppercase\expandafter{\romannumeral2}$b, and IIn \citep[e.g.,][]{Branch1991,Filippenko1991}. The spectra of type $\rm \uppercase\expandafter{\romannumeral1}$ core-collapse supernovae show an absence of hydrogen lines. The presence of strong Si lines and He lines defines Type $\rm \uppercase\expandafter{\romannumeral1}$a and Type $\rm \uppercase\expandafter{\romannumeral1}$b, respectively, while Type $\rm \uppercase\expandafter{\romannumeral1}$c does not display any hydrogen or helium features in its spectrum. There may be a few signatures of hydrogen in the spectra of type $\rm \uppercase\expandafter{\romannumeral1}$b.
It has been suggested that the diversity of SNe $\rm \uppercase\expandafter{\romannumeral2}$ originates from different main-sequence mass ranges of the progenitor, i.e., SNe $\rm \uppercase\expandafter{\romannumeral2}$L from about 7-10$M_{\odot}$, SNe $\rm \uppercase\expandafter{\romannumeral2}$P above 10$M_{\odot}$, and Type $\rm \uppercase\expandafter{\romannumeral1}$b/$\rm \uppercase\expandafter{\romannumeral1}$c supernovae (SNe $\rm \uppercase\expandafter{\romannumeral1}$b/$\rm \uppercase\expandafter{\romannumeral1}$c) originating from He stars of different mass ranges in binary systems \citep{Nomoto1990}. However, the exact connection of these types to their progenitors has been a controversial issue. In this paper, we focus on stripped-envelope SNe, e.g., SNe of Type $\rm \uppercase\expandafter{\romannumeral2}$b, Ib and Ic. Type $\rm \uppercase\expandafter{\romannumeral2}$b SNe are considered a transitional class, showing clear hydrogen signatures in their early spectra. However, these signatures gradually disappear over a period of 30-90 days after the explosion, after which the spectra become virtually indistinguishable from those of Type $\rm \uppercase\expandafter{\romannumeral1}$b SNe. Possibly the simplest explanation for these observational differences among the stripped envelope (SE) SNe subclasses is that different amounts of the helium/hydrogen envelopes have been stripped from the star prior to the SN explosions. In this channel, type $\rm \uppercase\expandafter{\romannumeral1}$c SNe are stripped the most, whereas type $\rm \uppercase\expandafter{\romannumeral2}$b SNe are stripped the least. The light curve of a SN $\rm \uppercase\expandafter{\romannumeral2}$b can show an early fireball phase but is powered near maximum and beyond by radioactive decay. The thin envelope model has been confirmed by spectral variations which show growing features of helium and oxygen.
The light curve of a SN $\rm \uppercase\expandafter{\romannumeral2}$b is significantly different from the previously known light curves of SNe II. It is obvious that the peculiar light curve of a SN $\rm \uppercase\expandafter{\romannumeral2}$b cannot be explained by the explosion of an ordinary red supergiant with a massive hydrogen-rich envelope, which produces the light curve of a SN II-P. The light curve of a SN $\rm \uppercase\expandafter{\romannumeral2}$b can be understood as the explosion of a red supergiant whose hydrogen-rich envelope is as small as $M < 1.0M_{\odot}$. For example, supernova 1993J in M81 has been identified as a type $\rm \uppercase\expandafter{\romannumeral2}$b \citep{Schmidt1993}. This behaviour implies that the core-collapse progenitor has a very small hydrogen mass at the time of explosion, $\rm M_{H}\simeq 0.03-0.5M_{\odot}$ \citep[e.g.,][]{Woosley1994,Meynet2015,Yoon2017}, with a possible mass down to $\rm M_{H} \simeq 0.001M_{\odot}$ \citep{Dessart2011,Eggleton1983}. In a population synthesis investigation, \cite{Sravan2018} considered the mass of the hydrogen-rich envelope of the progenitor at the onset of explosion to be $0.01M_{\odot} \leq \rm M_{H} \leq 1.0 M_{\odot} $. However, the mechanisms driving the stripping of the hydrogen envelope and the parameter regimes that dominate the formation of SNe $\rm \uppercase\expandafter{\romannumeral2}$b are still open questions. Two physical mechanisms for envelope removal have been proposed to explain the progenitors of SNe $\rm \uppercase\expandafter{\romannumeral2}$b. In the first scenario, a very massive star with initial mass ($\rm >25 M_{\odot}$) is required in order for the mass-loss rate to be large enough \citep[e.g.,][]{Heger2003,Woosley1993}, at sufficiently high initial metallicity for strong stellar winds to be triggered.
This scenario is supported by the analysis of the environments of type $\rm \uppercase\expandafter{\romannumeral1}$b/c SNe, in which very massive stars ($\rm \geq 30 M_{\odot}$) have lost their envelopes through stellar winds \citep{Maund2018}. In order to match the complete set of observations of SN 2008ax, \cite{Georgy2012} constructed a type $\rm \uppercase\expandafter{\romannumeral2}$b progenitor of $20 M_{\odot}$ ending with a suitable core mass, color, luminosity, and hydrogen content. \cite{Groh2013} found that the final stage of their rotating model is a luminous blue variable (LBV) star and proposed that LBVs may be the progenitors of some core-collapse SNe. However, the observed stripped-envelope SN rates are too high to be explained solely by single-star evolution. The biggest difficulty with the single-star scenario is that single-star evolution requires extremely precise fine tuning of the initial parameters to leave a thin hydrogen envelope prior to the explosion. Moreover, the clumping in stellar winds suggests that the currently used mass-loss rates are too high; the hot wind mass-loss rates are lower by a factor of 2 or 3 than those typically used in stellar evolution calculations. The lower wind mass-loss rate makes it difficult to produce SNe $\rm \uppercase\expandafter{\romannumeral2}$b by single-star evolution. The second scenario is close binary interactions involving mass transfer via Roche lobe overflow (RLOF) and possibly common-envelope evolution \citep[e.g.,][]{Podsiadlowski1992, Podsiadlowski1993, Nomoto1993, Yoon2010, Yoon2017}, stellar evolution with rotation \citep[e.g.,][]{Georgy2012,Groh2013}, and nuclear burning instabilities \citep[e.g.,][]{Arnett2011,Strotjohann2015}.
The typical ejecta mass of stripped-envelope SNe may be very low ($\sim 2-4 M_{\odot}$), indicating that the progenitors are lower-mass stars ($ \leq 20 M_{\odot}$) that have lost their envelopes through binary interaction \citep{Lyman2016, Taddia2018, Prentice2019}. \cite{Joss1988} have shown that binary evolution can generate stripped supergiants with small H-rich envelopes. \cite{Claeys2011} have identified binary progenitor models for extended type $\rm \uppercase\expandafter{\romannumeral2}$b SNe. The hydrogen envelope of the primary star is stripped by the RLOF, and only several tenths of a solar mass of hydrogen envelope can remain at the time of explosion. The most direct way to distinguish between these scenarios is to search for a surviving binary companion after the supernova. Such searches have been successful for type $\rm \uppercase\expandafter{\romannumeral2}$b SNe, where putative surviving companions were discovered, e.g. SN1993J \citep{Maund2004}, SN2011dh \citep{Folatelli2014,Maund2019}, and SN2001ig \citep{Ryder2018}. These findings are important indicators that these SNe originate from binary systems. \cite{Torrey2019} presented an enhanced mass-loss scenario due to jets that the companion star might drive, and noted that such enhanced mass loss can cause the binary system to go through grazing envelope evolution (GEE) and generate a progenitor of a type $\rm \uppercase\expandafter{\romannumeral2}$b SN. They estimated that the binary evolution channel with GEE contributes about a quarter of all SNe IIb. The GEE channel is completely different from the RLOF scenario, and hence widens the binary parameter space that can account for type $\rm \uppercase\expandafter{\romannumeral2}$b SNe. The fatal common-envelope evolution scenario can also produce some $\rm \uppercase\expandafter{\romannumeral2}$b SNe.
A lower-mass main-sequence companion star spirals inside the giant envelope of the primary star and eliminates most of the giant envelope before it merges with the giant core. However, this channel is confronted with some uncertainties in the calculations \citep{Soker2017,Lohev2019}. Close binary stars are important in understanding the formation, evolution and death of massive stars. A high fraction of O-type stars ($70 \%$) at solar metallicity are expected to undergo a mass transfer episode during their lifetime \citep{Sana2012}. There is some evidence for two subclasses of SNe $\rm \uppercase\expandafter{\romannumeral2}$b, those with radially-extended hydrogen envelopes and those with compact envelopes \citep{Chevalier2010}. \cite{Yoon2017} investigated the formation of SNe $\rm \uppercase\expandafter{\romannumeral2}$b in the channel of mass transfer via RLOF while considering three groups of SNe $\rm \uppercase\expandafter{\romannumeral2}$b, namely, blue progenitors, yellow supergiants, and red supergiants. The more compact blue progenitors and yellow supergiants have hydrogen envelope masses of less than about $\rm 0.15M_{\odot}$, mostly resulting from early Case B mass transfer with relatively low initial masses and/or low metallicity. Red supergiants have hydrogen masses of $\rm M_{H}>0.15M_{\odot}$ at the explosion, which can be produced via late Case B mass transfers. In this paper, we intend to explore how the close binary evolution scenario can succeed in producing SNe $\rm \uppercase\expandafter{\romannumeral2}$b.
We aim to explore the following questions in the binary scenario: 1) how some initial physical parameters (i.e., rotational velocities, overshooting, metallicity, and orbital period) impact the formation of SNe $\rm \uppercase\expandafter{\romannumeral2}$b; 2) how the surface chemical abundance varies with these initial parameters; 3) what controls the mass of the H-rich envelope of the progenitor through mass transfer due to RLOF; 4) what is the relation between SNe $\rm \uppercase\expandafter{\romannumeral2}$b and other types of supernovae, such as SNe $\rm \uppercase\expandafter{\romannumeral2}$P, $\rm \uppercase\expandafter{\romannumeral2}$L and $\rm \uppercase\expandafter{\romannumeral1}$b/c; 5) how the internal structure of the deep core is influenced by these initial parameters. In Section 2, we describe the physical ingredients of the stellar models and the domain of initial conditions that we have explored in this work. In Section 3, the results of numerical calculations for the evolution of single stars and binary systems are presented in detail. {In Section 4, we discuss an unsolved, long-standing problem concerning the discrepancy between the observed ratio of type $\rm \uppercase\expandafter{\romannumeral2}$b SNe and the theoretical one.} Conclusions and summaries are given in Section 5. \section{The initial parameters and model descriptions} All models are calculated with the MESA code \citep{Paxton2011,Paxton2013,Paxton2015,Paxton2018}. We make use of the Schwarzschild criterion to determine the boundaries of the convective regions. The mixing length parameter is $\rm l_{m}=1.5 H_{\rm P}$, where $\rm H_{\rm P}$ is the pressure scale height at the outer boundary of the core. {We consider an overshooting parameter of 0.12 $\rm H_{\rm P}$ as the reference value. \cite{Sravan2020} have shown that SN2013df and SN1993J have rather low helium core masses (about 2.0-2.8 $M_{\odot}$).
This fact makes a smaller overshooting parameter a more appropriate choice, which is also supported by studies of intermediate-mass eclipsing binaries \citep{Stancliffe2015}. Normally, the standard value of the overshooting is 0.25 $\rm H_{\rm P}$. \cite{Brott2011} considered convective overshooting using a parameter of 0.335 $\rm H_{\rm P}$. This value results from their calibration using the observed $\rm v \sin i$ drop that is found in their data when they plot $\rm v \sin i$ against the surface gravity.} We assume that the helium abundance increases linearly from $Y = 0.2477$ \citep{Peimbert2007} at $Z = 0.0$ to $Y = 0.28$ at $Z = 0.02$ \citep{Brott2011}. We adopt the basic.net, coburn.net, and approx21.net nuclear networks in MESA. Our models comprise single stars or two zero-age main sequence (ZAMS) stars, and the various initial parameters are listed in Table 1. {In order to match the observed positions of the two component stars of SN 1993J in the HR diagram, the accretion efficiency $\rm \beta_{mt}$ (i.e., the fraction of transferred material that is accreted by the companion star) is chosen as $\rm \beta_{mt}=0.5$. The final hydrogen envelope mass increases if we employ a lower accretion efficiency. For a higher accretion efficiency, the secondary star tends to evolve towards an over-luminous O star.} The non-accreted matter is directly expelled from the system as a fast wind from the accretor and carries the specific orbital angular momentum of the mass gainer. We use the Dutch scheme in MESA for both hot and cool wind mass-loss rates, with a Dutch scaling factor of 1.0\footnote{The Dutch wind mass-loss scheme is a combination of the prescriptions of \cite{Vink2001} (when $\rm T_{eff} \geq 10^{4} $ K and $\rm X_{surf} \geq 0.4$), \cite{Nugis2000} (when $\rm T_{eff} \geq 10^{4} $ K and $\rm X_{surf} < 0.4$), and \cite{de1988} (when $\rm T_{eff} < 10^{4} $ K)}.
The wind of Wolf-Rayet stars is computed according to \cite{Nugis2000}. Radiative opacities are interpolated from the OPAL tables \citep{Iglesias1996}. The opacity increase due to Fe-group elements at $\rm T \sim 180 $ kK plays an important role in determining the envelope structure in our stellar models. We take into account various instabilities induced by rotation that result in the mixing of chemical elements: Eddington-Sweet circulation, dynamical and secular shear instability, and the Goldreich-Schubert-Fricke instability {\citep{Endal1978, Pinsonneault1989, Maeder2000a,Maede2012}}. The rotational mixing owing to these hydrodynamic instabilities is treated as a diffusion process following \cite{Heger2000}. The diffusion coefficients are adopted for the transport of both chemical species and angular momentum. The contribution of the rotationally induced instabilities to the total diffusion coefficient of the chemical species is reduced by the factor $\rm f_{c}=0.0228$. This factor has been calibrated to reproduce the observed nitrogen surface abundances as a function of the projected rotational velocities for stars in the Large Magellanic Cloud sample (NGC 2004) of the FLAMES survey \citep{Brott2011}. {The parameter $\rm f_{\mu}$ denotes the sensitivity of the rotationally induced mixing to mean molecular weight gradients (i.e., the mean molecular weight gradient $\rm \nabla_{\mu}$ is replaced by $\rm f_{\mu}\nabla_{\mu}$) \citep{Heger2000}.} We adopt a value $\rm f_{\mu}=0.1$ as in \cite{Yoon2006}, who calibrated this parameter to match the observed surface helium abundances in stellar models at solar metallicity. The upper mass limit of the hydrogen envelope for SNe $\rm \uppercase\expandafter{\romannumeral2}$b heavily depends on the supernova parameters, such as chemical composition, total mass of ejecta, and supernova energy.
We assume that models with a final H-rich envelope mass of more than $\rm 0.5 M_{\odot}$ explode as SNe $\rm \uppercase\expandafter{\romannumeral2}$P or SNe $\rm \uppercase\expandafter{\romannumeral2}$L, while those with an envelope mass of less {than $\rm 0.033 M_{\odot}$} explode as SNe Ib or Ic. Models whose envelope mass is {between $\rm 0.033 M_{\odot}$} and $\rm 0.5 M_{\odot}$ are adopted as SN $\rm \uppercase\expandafter{\romannumeral2}$b progenitors. The properties of the SN $\rm \uppercase\expandafter{\romannumeral2}$b progenitors are also shown in Table \ref{table1}. The final evolutionary positions in the HR diagram are classified according to their effective surface temperature and surface hydrogen mass fraction, as follows: red supergiant (RSG): $\rm T_{\rm eff} < 4.8kK, X_{s}\geq 0.01$; yellow supergiant (YSG): $\rm 4.8kK < T_{\rm eff} < 7.5kK, X_{s}\geq 0.01$; blue supergiant (BSG): $\rm 7.5 kK <T_{\rm eff} < 55 kK, X_{s}\geq 0.01$; hot helium giant (HeG): $\rm 15 kK < T_{\rm eff} < 55 kK, X_{s}< 0.01$; cool helium giant: $\rm T_{\rm eff} < 15 kK, X_{s}< 0.01$; Wolf-Rayet star (WR): $\rm 10 kK < T_{\rm eff} < 251 kK, X_{s} \leq 0.4$ \citep{Gilkis2022}. The initial parameters for single stars and the binary systems are listed in Table \ref{table1}. The binary orbit is assumed to be circular and the Roche lobe radius is given by the formula of \cite{Eggleton1983}. The mass ratio is set to $\rm q=0.882$ for all binary models. {In systems with $q=M_{2}/M_{1}< 0.7-0.8$, depending on the orbital period, the mass-transfer rate via RLOF is so large that the two component stars come into contact. Further common envelope evolution of these models requires complex considerations which are beyond the scope of this paper.
Actually, binary systems with lower mass ratios have difficulty explaining SNe $\rm \uppercase\expandafter{\romannumeral2}$b, in particular those with extended hydrogen envelopes having $M_{H}> 0.15 M_{\odot}$ \citep{Podsiadlowski1992}.} We choose several initial orbital periods corresponding to cases where the first mass transfer event occurs during the main sequence phase ($\rm P_{orb}=3.0$ days, Case A), after core H-exhaustion but before He-ignition in the core ($\rm P_{orb} =10.0$ days, Case B), or during core He-burning ($\rm P_{orb} \gtrsim 40.0$ days, Case C). \begin{deluxetable}{lccccccccccccc} \tablecaption{The parameters adopted in our calculations. \label{table1}} \tablewidth{0pt} \tablehead{ \colhead{Models} & \colhead{$M_{\rm 1,ini}$}& \colhead{$M_{\rm 2,ini}$} & \colhead{$V_{\rm 1,ini}$} & \colhead{$V_{\rm 2,ini}$} & \colhead{$P_{\rm orb,ini}$}& \colhead{$\rm \alpha_{over}$} & \colhead{Z} & \colhead{$\rm M_{He}$} &\colhead{$\rm M_{H}$} & \colhead{$\rm R/R_{\odot}$}&\colhead{ST}&\colhead{SP}\\\hline \colhead{} & \colhead{$M_{\odot}$} & \colhead{$M_{\odot}$} & \colhead{km/s} & \colhead{km/s} & \colhead{days} &\colhead{} &\colhead{} & \colhead{$M_{\odot}$} & \colhead{$M_{\odot}$} & \colhead{} &\colhead{} &\colhead{} } \startdata S1 &15 &.. &0 &.. &.. & 0.12& 0.02& 4.34& 8.60&741 &\uppercase\expandafter{\romannumeral2}P&RSG &\\ S2 &15 &.. &200 &.. &.. &0.12 & 0.02& 4.40& 7.62& 812&\uppercase\expandafter{\romannumeral2}P&RSG &\\ S3 &15 &.. &400 &.. &.. & 0.12 & 0.02&5.87 & 3.52& 1023&\uppercase\expandafter{\romannumeral2}P&RSG &\\ S4 &17 &.. &0 &.. &.. & 0.12& 0.02& 5.19&8.16 & 950&\uppercase\expandafter{\romannumeral2}P&RSG &\\ S5 &17 &.. &200 &.. &.. &0.12 & 0.02& 5.21& 8.02& 948&\uppercase\expandafter{\romannumeral2}P&RSG &\\ S6 &17 &.. &400 &.. &.. & 0.12 & 0.02&6.38& 3.67& 891&\uppercase\expandafter{\romannumeral2}P&RSG &\\ S7 &19 &.. &0 &.. &.. & 0.12& 0.02&6.00 & 9.00& 1047&\uppercase\expandafter{\romannumeral2}P&RSG &\\ S8 &19 &.. &200 &.. &..
&0.12 & 0.02&5.92 & 8.11& 1061&\uppercase\expandafter{\romannumeral2}P&RSG &\\ S9 &19 &.. &400 &.. &.. & 0.12 & 0.02&7.30 & 3.13& 851&\uppercase\expandafter{\romannumeral2}P&RSG &\\ B1 &17 &15 &0 &0 &300.00 &0.12 &0.02 & 4.81& 0.35& 478&\uppercase\expandafter{\romannumeral2}b&RSG &\\ B2 &17 &15 &0 &0 &300.00 &0.25 &0.02 &5.61& 0.3& 407&\uppercase\expandafter{\romannumeral2}b&YSG &\\ B3 &17 &15 &0 &0 &300.00 &0.35 &0.02 &5.44& 0.0& 2&\uppercase\expandafter{\romannumeral1}b&WR &\\ B4 &17 &15 &0 &0 &300.00 &0.12&0.008 &5.54& 0.46& 512&\uppercase\expandafter{\romannumeral2}b&RSG&\\ B5 &17 &15 &0 &0 &300.00 &0.12 &0.03& 5.00& 0.27& 436&\uppercase\expandafter{\romannumeral2}b&YSG&\\ B6 &17 &15 &0 &0 &3.00 &0.12 &0.02 &3.34& 0.0& 6&\uppercase\expandafter{\romannumeral1}b&WR &\\ B7 &17 &15 &0 &0 &10.00 &0.12 &0.02 &4.60&0.14 & 14&\uppercase\expandafter{\romannumeral2}b&BSG &\\ B8 &17 &15 &0 &0 &700.00 &0.12 &0.02 &4.94& 0.49& 549&\uppercase\expandafter{\romannumeral2}b&RSG &\\ B9 &17 &15 &0 &0 &1600.00 &0.12 &0.02 &4.95& 1.88& 891&\uppercase\expandafter{\romannumeral2}P&RSG&\\ B10 &17 &15 &200 &200 &300.00 &0.12 &0.02 & 4.81& 0.35& 331&\uppercase\expandafter{\romannumeral2}b&YSG &\\ B11 &17 &15 &400 &400 &300.00 &0.12 &0.02 & 4.81& 0.35& 2&\uppercase\expandafter{\romannumeral1}b& WR &\\ B12 &17 &15 &0 &0 &50.00 &0.12 &0.02 & 4.63& 0.14& 166&\uppercase\expandafter{\romannumeral2}b& BSG &\\ B13 &16 &15 &0 &0 &1100.00 &0.12 &0.04 & 5.01& 0.39& 565&\uppercase\expandafter{\romannumeral2}b& RSG &\\ \enddata \tablecomments{ \\ The meaning of each column is as follows. The symbol S denotes single stars whereas the symbol B denotes the binary systems. 
$M_{\rm 1,\rm ini}$: the initial mass of the primary star; $M_{\rm 2,\rm ini}$: the initial mass of the secondary star; $\rm V_{\rm 1,\rm ini}$: the initial equatorial velocity of the primary star; $V_{\rm 2, \rm ini}$: the initial equatorial velocity of the secondary star; $P_{\rm orb,\rm ini}$: the initial orbital period; $\rm \alpha_{over}$: the convective overshooting parameter; $\rm Z$: metallicity; $\rm M_{He}$: the mass of the helium core at core carbon exhaustion; $\rm M_{H}$: the mass of the hydrogen envelope at core carbon exhaustion; ST: supernova type; SP: type of supernova progenitor.} \end{deluxetable} \setlength{\tabcolsep}{5mm}{ \begin{deluxetable}{ccc} \tablecaption{The observations of SN 1993J and the theoretical values in models B1 and B13. \label{table2}} \tablewidth{0pt} \tablehead{ \colhead{Observations$^{1)}$} & \colhead{Model B1} & \colhead{Model B13} } \startdata $\rm \log T_{1, eff}=3.63 \pm 0.05$ &3.661 & 3.63\\ $\rm \log L_{1}/L_{\odot}=5.1 \pm 0.3$ & 4.968 &4.98\\ $\rm \log T_{2, eff}=4.3 \pm 0.1$ &4.54 &4.39\\ $\rm \log L_{2}/L_{\odot}=5.0 \pm 0.3$ & 4.87& 4.74\\ $\rm R/R_{\odot}\sim 600$ & 484&565\\ $\rm \dot{M} \sim 2-6 \times 10^{-6} M_{\odot}/yr$ &$\sim 2.58\times 10^{-6} M_{\odot}/yr$& $\sim 3.0 \times 10^{-6} M_{\odot}/yr$\\ $\rm M_{H}=0.15-0.4 M_{\odot}$&$\rm M_{H}=0.35 M_{\odot}$ &$\rm M_{H}=0.39 M_{\odot}$\\ $\rm M_{He}=2.8-6.0 M_{\odot}$&$\rm M_{He}=4.81 M_{\odot}$ &$\rm M_{He}=5.01 M_{\odot}$\\ \enddata \tablecomments{ \\ ${}^{1)}$ The observational data of the progenitor of SN 1993J are taken from \cite{Maund2004,Sravan2020}} \end{deluxetable}} \section{Results of numerical calculations} We present non-rotating and rotating single star models and compare them with binary models with various initial parameters.
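Throughout the binary calculations, the Roche lobe radius follows the \cite{Eggleton1983} fit mentioned in Section 2. As a quick reference, that fit can be written as a short function (a sketch; the function name is ours):

```python
import math

def roche_lobe_fraction(q):
    """Eggleton (1983) fit for R_L / a, the Roche lobe radius of a star
    in units of the orbital separation, where q = M_star / M_companion.
    Accurate to about 1% over all mass ratios."""
    q23 = q ** (2.0 / 3.0)
    return 0.49 * q23 / (0.6 * q23 + math.log(1.0 + q ** (1.0 / 3.0)))
```

For an equal-mass binary the fit gives $R_{L}/a \approx 0.38$; for the primary of our reference system ($q = 17/15$) the lobe is slightly larger than for the secondary ($q = 15/17$).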
We focus our investigation on the evolution of the primary star and explore whether close binary evolution with different initial orbital periods (i.e., Case A, Case B or Case C mass transfer), overshooting parameters, and metallicities could give rise to diverse supernovae $\rm \uppercase\expandafter{\romannumeral2}$b in terms of the amount of the removed hydrogen or helium envelope. The evolution of the close binary system composed of a 17$M_{\odot}$ primary star and a 15$M_{\odot}$ companion star is computed. In all models, we calculate the evolution at least to the end of central neon burning. Properties of the single stars and of the primary star in binaries are presented in Table \ref{table3}: evolutionary age, actual mass, radius, effective temperature, luminosity, central temperature and central density, the ratio of the surface nitrogen to its initial value, equatorial velocity, the surface mass fractions of hydrogen and helium, the logarithmic surface mass fractions of carbon, nitrogen and oxygen, and the surface nitrogen-to-carbon mass ratio. \subsection{The evolution of the hydrogen envelope mass and surface nitrogen enrichments} \subsubsection{Rotation effect} Panel (a) in Fig. \ref{fig:general 1} shows the mass of the hydrogen envelope for single stars with $Z=0.02$ but with different initial masses and rotational velocities as a function of the evolutionary age. It can be seen that the more massive star has a thicker hydrogen envelope at the end of the core H-burning phase. For example, the mass of the hydrogen envelope for model S1 with $\rm 15.0 M_{\odot}$ is $\rm 11.38 M_{\odot}$ whereas it is $\rm 13.38 M_{\odot}$ for model S7 with an initial mass of $\rm 19.0 M_{\odot}$. Indeed, stellar winds are stronger for more massive stars. The $\rm 19.0 M_{\odot}$ star has lost more than 6 $M_{\odot}$ while the $\rm 15.0 M_{\odot}$ counterpart has lost less than 4 $M_{\odot}$.
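This contrast between the 15 and 19 $M_{\odot}$ models is roughly consistent with the approximate scaling of the total main-sequence wind loss of an O-type star with stellar mass, $\Delta M \propto M^{2.8}$; a quick illustrative estimate (the function name is ours, and this is not MESA's actual rate):

```python
# Illustrative check of the quoted scaling Delta M ∝ M^2.8 for the total
# main-sequence wind mass loss of O-type stars. Not MESA's prescription.

def wind_loss_ratio(m1, m2, exponent=2.8):
    """Ratio of total main-sequence wind mass loss between stars of mass
    m1 and m2 (solar masses), assuming Delta M ∝ M**exponent."""
    return (m1 / m2) ** exponent

# (19/15)**2.8 is roughly 1.9, i.e. the 19 Msun star loses about twice
# as much mass in winds as the 15 Msun star.
ratio = wind_loss_ratio(19.0, 15.0)
```

The factor of $\sim$1.9 agrees in order of magnitude with the $>6\,M_{\odot}$ versus $<4\,M_{\odot}$ losses quoted above.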
In massive stars, mass loss via stellar winds is mainly a consequence of radiation pressure on atoms during the main sequence and giant star stages. During central hydrogen burning, the mass loss via stellar winds is proportional to the luminosity of the star and inversely proportional to its effective temperature. When massive stars evolve towards higher luminosities and lower temperatures, a large fraction of the hydrogen envelope is lost through line-driven winds once the central hydrogen has been substantially converted into helium. The mass loss rates increase when the luminosity increases and hence when the initial mass increases. For an O-type star, the total mass lost by winds during the main sequence scales approximately as the stellar mass to the power 2.8 (i.e., $\Delta M \propto M^{2.8}$). These single models undergo dramatic loss of mass on the verge of core hydrogen exhaustion. This is due to the fact that higher luminosity triggers strong mass loss. Moreover, when a star crosses a certain limit in effective temperature, there is an important change in the ionization structure of the stellar envelope that may produce a great boost of the mass loss rate due to the bi-stability jump \citep{Vink2001}. One can also note that most of the mass is lost during the red supergiant phase of evolution, when the single star burns helium in its core. After the core helium is exhausted, the mass of the envelope changes very little. This is because the evolution proceeds too fast to give rise to significant mass loss. Comparing model S4 with $\rm v_{ini}=0$ km/s and model S6 with $\rm v_{ini}=400$ km/s, one can note that the mass loss is higher for rapidly rotating stars during the main sequence. There are three reasons. Firstly, mass loss via stellar winds can be enhanced by the centrifugal force \citep{Langer1998}. The gravitational acceleration can be significantly reduced by the centrifugal force and the star becomes more expanded.
Rotation can also reduce the depth of the gravitational potential well, from which stellar winds escape more easily, and therefore increases the possibility of forming type $\rm \uppercase\expandafter{\romannumeral2}$b supernovae at the expense of red supergiants. However, note that line-driven stellar winds are powered by the radiative flux. This radiative flux is proportional to the effective gravity, which decreases when the star is a rapid rotator. This behaviour favors the formation of polar winds. But these effects become important only at velocities near the critical limit and are likely not important for the models computed here. Secondly, rotation increases the main sequence lifetime and thus allows more time for mass to be lost by stellar winds. Thirdly, rotating stars are more luminous than non-rotating or slower rotating ones and thus undergo more mass loss by stellar winds. Actually, the rotational mixing is the determining factor here. It changes the track in the HR diagram and makes the star follow a different mass loss history. Therefore, the rotationally enhanced stellar winds can decrease the minimum mass required for a single star to remove its hydrogen envelope \citep{Meynet2003}, thus increasing the generation rate of SN $\rm \uppercase\expandafter{\romannumeral2}$b progenitors from single massive stars. At the end of the evolution, the mass of the hydrogen envelope is smaller in the model with a higher initial velocity. This can be explained by three reasons. Firstly, the convective core can be enlarged greatly by the rotational mixing, resulting in a thinner envelope (cf. Fig. \ref{fig:general 3}). Secondly, rotating stars are more luminous (also in the RSG phase). This enhances the stellar winds during that phase. As a result the star may evolve away from the red supergiant region in the HRD and become a yellow or even a blue supergiant.
Finally, the angular momentum transport efficiency is maximal in the convective region owing to the largest convective diffusion coefficient. Figure \ref{fig:general 2}(a) displays the surface mass fraction ratio of nitrogen to carbon for the single stars as a function of evolutionary age. There is no surface nitrogen enrichment in the non-rotating models S1, S4, and S7 until the first dredge-up appears. The outer convective region can span the mass coordinates from 16.5 $M_{\odot}$ to 4.6 $M_{\odot}$ during the first dredge-up and may move toward the position of the hydrogen-burning shell. After that, the outer convective envelope shrinks rapidly and develops again from the mass coordinate 13.45 $M_{\odot}$ to 5.39 $M_{\odot}$. It may approach the mass position of the hydrogen-burning shell ($ \sim 5.37 M_{\odot}$). The CNO products in the hydrogen-burning shell are mixed outward by convective dilution. As a result, the surface ratio of nitrogen to carbon increases during the red supergiant stage. The nitrogen enrichment for S1, S4, and S7 can also be ascribed to the removal of the hydrogen envelope via stellar winds after the main sequence. \cite{Markova2018} noted that the envelope is efficiently stripped in the most luminous supergiants or red supergiants by the strong winds ($\rm \log L/L_{\odot} \geq 5.8 $ and $ \log \dot{M} [M_{\odot}/yr]\geq -5.4$ ). Mass loss may reveal nitrogen-enriched matter as the surface layers of the star are peeled away. For example, the stellar mass in model S4 decreases from 16.51 $M_{\odot}$ at the end of hydrogen core burning to 13.45 $M_{\odot}$ at the end of helium core burning due to the strong RSG stellar winds. A higher ratio of nitrogen to carbon can appear at the surface of the more massive stars. This indicates that these two processes are more efficient in more massive stars.
In comparison with the non-rotating counterparts, a significantly higher surface ratio of nitrogen to carbon can be produced by a higher degree of rotational mixing during the main sequence \citep[e.g.,][]{Meynet2000,Maeder2014,Chieffi2013,Limongi2018,Song2018}. The main effect of rotational mixing is to smooth the internal chemical gradients and to facilitate a more progressive arrival of internal nuclear products at the surface \citep{Georgy2012,Ekstrom2008}. In our MESA models of massive stars, Eddington-Sweet circulation dominates the other rotation-induced instabilities during the main sequence and keeps the whole star in nearly rigid rotation. In the subsequent evolution, dynamical shear dominates the other instabilities in the stellar interior. \cite{Maeder2009} presented results suggesting that the surface excess of nitrogen of a single rotating star is a multivariate function of stellar mass, evolutionary age, projected rotational velocity, and metallicity. As expected, nitrogen enrichment increases with initial rotational velocity, initial mass, and evolutionary age during the main sequence, owing to a higher velocity of the meridional circulation (cf. panel a in Fig. \ref{fig:general 2}). Therefore, rapid rotation can help explain the nitrogen-rich circumstellar material with a ratio of $\rm N/C \approx 12.4$ in SN 1993J. Nitrogen enrichment can be aided by two extra factors. First, strong stellar winds which are enhanced by rotation can remove the hydrogen envelope and expose the hydrogen-burning shell, which is richer in nitrogen. Second, rapid expansion during the post-main-sequence phase results in larger differential rotation, which can strengthen the shear instability. Thus the transport of spin angular momentum from the core to the envelope becomes more efficient, meaning that the outer layer can attain a higher rotational velocity, which favors efficient rotational mixing and enhanced mass loss through a higher luminosity.
For instance, one can notice that the equatorial velocity of model S6 can attain 144.64 $\rm km/s$ at the onset of central helium burning (cf. Table \ref{table3}). More importantly, the nitrogen enrichment factor $\rm \frac{N}{N_{ini}}$ goes up from 8.49 to 10.02 in model S6 while it rises from 1.0 to 4.34 in model S4 during the first dredge-up. This implies that rotational mixing might reduce the efficiency of the dredge-up due to the decrease in the opacity of the outer envelope. The lower opacity implies a smaller convective envelope, thus reducing the depth of convective dredge-up. \subsubsection{Convective overshooting effect} A larger convective overshooting parameter can cause the star to evolve further redward in the Hertzsprung-Russell diagram. The reason is that the overshooting leads to a larger convective core and extends the main sequence to lower effective temperature and higher luminosity. Therefore, these effects can also give rise to larger mass loss via stellar winds and a thinner hydrogen envelope after the zero age main sequence in panel (b) of Fig. \ref{fig:general 1}. For instance, at the end of the core H-burning phase, the stellar mass of model B1 with $\rm \alpha_{over}=0.12$ is $\rm 16.46 M_{\odot}$ whereas it is $\rm 16.07 M_{\odot}$ for model B3 with $\rm \alpha_{over}=0.35$. {Note that the overshooting parameter of 0.35 $\rm H_{p}$ is close to the value that \cite{Brott2011} deduced (i.e., $\rm \alpha_{over}=0.335$).} This implies that mass loss at this stage is closely related to the increased luminosity due to the convective overshooting and, to a lesser extent, to the effective temperature. However, the evolutionary track of the star computed with overshooting is much more extended toward lower effective temperatures at core hydrogen exhaustion. This implies a higher mass loss in model B3, and thus its mass is lower than that of model B1 (cf. Table \ref{table1}).
The hydrogen envelope decreases rapidly for a star with a larger overshooting parameter at core hydrogen exhaustion. Actually, the mass loss via stellar winds strongly depends on the stellar luminosity when the effective temperature decreases below 22500 K \citep{Vink2001}. For example, the envelope mass for model B3 with $\rm \alpha_{over}=0.35$ is $\rm 10.66 M_{\odot}$ whereas it is $\rm 12.15 M_{\odot}$ with $\rm \alpha_{over}=0.12$. A larger core makes the star evolve more rapidly to the red part of the HR diagram after the MS phase. This means that a larger fraction of the core helium burning phase occurs during the RSG phase, where strong mass loss occurs. These strong mass losses favor a more rapid appearance of deep layers at the surface. During the first episode of Roche lobe overflow (hereafter, RLOF), a hydrogen envelope of $\rm 11.08 M_{\odot}$ in model B1 is transferred to the companion star, while $\rm 9.36 M_{\odot}$ of the hydrogen envelope is transferred to the companion star in B3. This implies that the more massive the hydrogen envelope remaining after the main sequence, the more matter can be removed by RLOF. Figure \ref{fig:general 2}(b) shows the surface mass fraction ratio of nitrogen to carbon for the primary star with the different overshooting parameters in the binary system as a function of the actual mass. The surface ratio of $\rm ^{14}N/^{12}C$ in the binary system B1 can attain a higher value of 93.370 compared to the value of 2.305 in its single-star counterpart S4 at core helium exhaustion (cf., Table \ref{table3}). This is mainly because the surface chemical composition can be changed by the mass removal via RLOF. After the first event of RLOF, the ratio of $\rm ^{14}N/^{12}C$ can attain a larger value of 112 in model B3 with a larger overshooting parameter. This is because the hydrogen-burning shell is located above the larger helium core and thus is buried at a shallow position in the hydrogen envelope.
The envelope can develop a smaller outer convective region in this model. Therefore, the duration of hydrogen-shell burning is very short, because the hydrogen-burning shell can be easily exposed by RLOF. The results also show that severe stripping of the hydrogen envelope usually gives rise to a higher surface effective temperature. \subsubsection{The effect of metallicity} Panel (c) of Fig. \ref{fig:general 1} shows that decreasing the metallicity leads to weaker mass loss via stellar winds and produces less stripped progenitors of type $\rm \uppercase\expandafter{\romannumeral2}$b supernovae. The mass loss via stellar winds scales as $\rm \dot{M}_{wind}\propto Z^{0.85}$ \citep{Vink2000,Vink2021}. Therefore, the total mass lost during the lifetime of the star is strongly correlated with the amount of metals present in the envelope. The star with low metallicity evolves essentially at almost constant mass during most of the main-sequence phase because of the absence of strong mass loss via stellar winds. For this reason, the star with a lower metallicity has a thicker hydrogen envelope in comparison with its counterpart of higher metallicity. For example, when the core hydrogen is exhausted, the mass of the hydrogen envelope of model B4 with $\rm Z=0.008$ is $\rm 13.09 M_{\odot}$ whereas it is $\rm 12.17 M_{\odot}$ for model B5 with $\rm Z=0.03$. During the first episode of RLOF, $\rm 11.43 M_{\odot}$ in model B4 with $\rm Z=0.008$ is transferred to the companion star while it is $\rm 9.81 M_{\odot}$ in model B5 with $\rm Z=0.03$. Therefore, we can infer that, for stars of the same mass, more mass can be transferred via RLOF at lower metallicity. Figure \ref{fig:general 2}(c) shows the surface mass fraction ratio of nitrogen to carbon for the primary star with different metallicities and rotational velocities as a function of the actual mass.
The surface nitrogen abundance $\rm \log ^{14}N$ has the same value of -3.394 as the initial one in model B4 with $Z= 0.008$ while it is -2.996 in model B1 with $Z=0.02$ before the onset of RLOF (cf., Table \ref{table3}). At lower Z, there are fewer CNO elements, and thus the nitrogen abundance is correspondingly lower (even that resulting from the CNO process). This implies that the surface nitrogen cannot be suddenly enhanced by the first dredge-up and that the bottom of the outer convection region does not touch the hydrogen-burning shell. It is shown that the surface $\rm ^{14}N$ in the binary system B4 with a lower metallicity can attain a higher value after RLOF compared to its counterpart B5 with higher metallicity. The reason is that the total amount of mass transferred via RLOF is higher in the binary system with low metallicity. The abundance of nitrogen is proportional to the initial metallicity prior to core helium burning. The reaction of nitrogen via $\rm ^{14}N(\alpha, \gamma)^{18}F(e^{+} \nu) ^{18}O$ is an important exoergic process during central helium burning. Low metallicity has an important impact on the energy generation during hydrogen shell burning via the CNO cycle, which also affects the boundary condition for the helium core. \subsubsection{The orbital period effect} Panel (d) in Fig. \ref{fig:general 1} shows that RLOF is very efficient at removing the hydrogen envelope in contrast to stellar winds. Beyond core hydrogen exhaustion, the mass loss via stellar winds is $\rm 4.3 M_{\odot}$ for the single star S4 whereas $\rm 11.08 M_{\odot}$ is removed during the first episode of mass transfer in model B1. This indicates that mass loss via stellar winds in less massive stars (i.e., $\rm M < 19 M_{\odot}$) is too weak to remove the thick hydrogen envelope and produce type $\rm \uppercase\expandafter{\romannumeral2}$b supernovae.
Such a single star will explode as a type $\rm \uppercase\expandafter{\romannumeral2}$P supernova (cf., Table \ref{table1}). The transferred mass of the hydrogen envelope is very sensitive to the initial orbital period of the binary system. We note that an initially tighter orbit can result in a deeper stripping of the hydrogen envelope via RLOF. For example, the transferred matter is $\rm 9.23 M_{\odot}$ for model B6 with $\rm P_{orb}=3.0$ days whereas it is $\rm 6.34 M_{\odot}$ for model B9 with $\rm P_{orb}=1600.0$ days. The B6 system with $\rm P_{orb}=3.0$ days undergoes strong Case A mass transfer, followed by a later episode of Case B mass transfer, with the last bit of remaining hydrogen removed by strong Wolf-Rayet winds during the later core-burning phases. This model produces a low-mass helium core with $\rm M=3.3 M_{\odot}$ at the time of explosion, with a radius of about 4.26 $R_{\odot}$. However, model B6 cannot be consistent with the observed hydrogen envelope masses of $0.03-0.5 M_{\odot}$ for type $\rm \uppercase\expandafter{\romannumeral2}$b because it completely loses even a thin hydrogen envelope. It is also possible to produce fully stripped type $\rm \uppercase\expandafter{\romannumeral1}$b progenitors in this way (cf., Table \ref{table1} and Fig. \ref{fig:general 6}) \citep{Yoon2015,Yoon2010}. Model B9 finally explodes as a $\rm \uppercase\expandafter{\romannumeral2}$P supernova because its hydrogen envelope remains sufficiently large to preserve the RSG structure. These facts imply that the relationship of SNe $\rm \uppercase\expandafter{\romannumeral2}$b with other types of supernovae, such as SNe $\rm \uppercase\expandafter{\romannumeral2}$P, $\rm \uppercase\expandafter{\romannumeral2}$L and Ib/c, is closely bound up with the hydrogen envelope mass. Moreover, the various sub-types of $\rm \uppercase\expandafter{\romannumeral2}$b supernovae are also related to the hydrogen envelope mass.
Systems in the range 300 days $\rm <P_{orb}<$ 700 days can give rise to RSG-type SN $\rm \uppercase\expandafter{\romannumeral2}$b progenitors. The initial period of 300 days roughly separates the RSG-type SN $\rm \uppercase\expandafter{\romannumeral2}$b progenitors from the YSG-type SN $\rm \uppercase\expandafter{\romannumeral2}$b progenitors. The binary system B7 with an initial $\rm P_{orb}=10$ days can produce a BSG-type SN $\rm \uppercase\expandafter{\romannumeral2}$b progenitor (cf., Fig. \ref{fig:general 6}). Thus binary models with orbital periods in the range $\rm \sim10$ days $\rm < P_{orb}<$ 700 days may turn into SNe $\rm \uppercase\expandafter{\romannumeral2}$b (cf., panel d in Fig. \ref{fig:general 5}). Most importantly, the amount of mass transferred via RLOF is closely related not only to the initial orbital period but also to the thickness of the hydrogen or helium envelopes. Beyond core hydrogen exhaustion, an envelope with a larger amount of residual hydrogen can expand to a larger radius during the late evolutionary stages, in contrast to a bare helium core. After core helium burning, the envelope expansion becomes more significant for a more compact stellar core. Panel (d) in Fig. \ref{fig:general 2} shows the surface mass fraction ratio of nitrogen to carbon as a function of the orbital period for the primary star in binary systems with different initial orbital periods. One can see that the donor star in binaries can experience envelope peeling, which can expose the inner layers of CNO-processed material at the surface, increasing the surface abundance of nitrogen. An initially tighter orbit results in more significant peeling of the hydrogen envelope, and vice versa. The system with the shortest orbital period has the highest surface N/C ratio in our grid. The main reason is that the hydrogen-burning shell is revealed early because much more of the hydrogen envelope can be removed by RLOF.
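The classification criteria adopted in Section 2 (envelope-mass thresholds for the supernova type, and effective-temperature ranges for the hydrogen-rich progenitor classes) can be summarised in a small sketch. The function names are ours, and the progenitor classes are simplified: the H-poor helium-giant and WR branches are omitted for brevity.

```python
# Sketch of the SN-type and H-rich progenitor classification of Section 2.
# Thresholds are those quoted in the text; function names are ours.

def sn_type(m_h_env):
    """Classify the explosion by the final H-rich envelope mass (Msun)."""
    if m_h_env > 0.5:
        return "IIP/IIL"
    if m_h_env >= 0.033:
        return "IIb"
    return "Ib/Ic"

def progenitor_class(teff_kK, x_surf):
    """Classify an H-rich (X_s >= 0.01) progenitor by its effective
    temperature in kK; the H-poor classes are not handled here."""
    assert x_surf >= 0.01
    if teff_kK < 4.8:
        return "RSG"
    if teff_kK < 7.5:
        return "YSG"
    return "BSG"
```

For example, model B1 ($\rm M_{H}=0.35\,M_{\odot}$, cool surface) classifies as a IIb from an RSG, while B6 ($\rm M_{H}=0$) yields a type Ib/Ic, consistent with Table \ref{table1}.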
\subsection{The evolution of the convective core and helium core} \subsubsection{The effect of rotation} Panel (a) in Fig. \ref{fig:general 3} shows the convective cores of the non-rotating and rotating single stars as a function of the evolutionary age. The mass of the convective core increases with the initial mass of the star because the size of the convective core is governed by radiation pressure, which is proportional to the fourth power of the core temperature, $\rm T^{4}$. Actually, at the onset of the evolution, the centrifugal force partially sustains the star against gravity, but most of the equilibrium is due to the pressure gradient. Therefore, the rotating star behaves like a less massive non-rotating one, and the mass of the convective core is smaller. This results in a lower luminosity in the HRD. The rotating star has a slightly denser and cooler core than the non-rotating one. As the evolution proceeds, the rotation-induced mixing starts to refuel the core with fresh hydrogen. The mass of the convective core is larger in models with higher initial rotational velocity because the rotation-induced mixing becomes very efficient in rapidly rotating stars. Actually, two main physical processes are responsible for such rotational mixing: meridional circulation and secular shear. Meridional circulation, which scales as the square of the rotational angular velocity, is mainly responsible for rotational mixing above the convective core \citep{Maeder2000b, Song2018}. Rotational mixing can slow down the decrease in mass of the convective core, similar to the behavior of overshooting. The larger core induced by rotational mixing leads to a higher central temperature and a lower opacity in the outer envelope. Rotating stars have a larger convective core than non-rotating ones. Fig. \ref{fig:general 4} displays the mass of the helium core as a function of the evolutionary age.
The mass of the helium core generally scales with the size of the hydrogen convective core and increases with the initial rotational velocity and stellar mass (cf., panel a in Fig. \ref{fig:general 4}). Generally, the larger the convective core, the larger the final helium core mass at the end of the main sequence. The maximum size of the hydrogen convective core generally increases with the mass of the star and the rotational velocity. Therefore, the helium core at core helium exhaustion increases with the mass of the star and the rotational velocity as well. Furthermore, the main consequence of rotational mixing is the increase of the lifetime of core hydrogen burning. The main reason is that fresh hydrogen in the outer envelope is transported into the central core by rotational mixing. This mixing process increases the hydrogen fuel supply in the stellar core. The wind mass-loss rate of single stars with $M \geq 30 M_{\odot}$ is strong enough to remove the hydrogen envelope, and these stars are expected to generate type $\rm \uppercase\expandafter{\romannumeral2}$b SNe \citep{Heger2003,Georgy2009}. This type of star has a helium core mass $\geq 8 M_{\odot}$ prior to the explosion. However, an $8 M_{\odot}$ helium core is too massive to produce the second maximum at $\sim 20$ days in the observed light curve of SN 2011dh, even assuming the most extreme $\rm ^{56}Ni$ mixing \citep{Bersten2012}. The helium core mass of single stars grows because of the creation of helium by the hydrogen-burning shell (cf. panel a in Fig. \ref{fig:general 4}). However, one can see that the mass of the helium core in model B6 can be greatly influenced by RLOF during the main sequence (cf. panel d in Fig. \ref{fig:general 4}). This is because the convective core can be greatly reduced by RLOF. 
This fact indicates that, for a given initial mass, the star in the binary system has a smaller core mass than its single counterpart. However, the corresponding radii increase with decreasing helium core mass \citep{Yoon2010,Yoon2017}. After the main sequence, the evolution of the helium core is almost unaffected by the mass transfer. The helium produced by the hydrogen-burning shell contributes little to the mass of the helium core. The binary-peeled star loses matter after the RLOF because of strong stellar winds, leading to a reduction in the helium core mass. Therefore, the amount of oxygen that can be produced from a specific exploding core is smaller in an initially tighter system. However, the mass fraction of the convective core in the slowly rotating massive stars with $\rm v_{ini}=200$ km/s is slightly smaller than in the non-rotating stars beyond core hydrogen exhaustion. Rotational mixing plays a minor role in this case. During core helium burning, the convective core grows substantially because the mass of the helium core itself grows despite the strong mass loss via stellar winds. As the mass of the core grows, its luminosity increases accordingly, but the radius of the convective core remains constant. Therefore, the mass-loss rate of stellar winds can grow rapidly. Meanwhile, the central temperature and density increase accordingly. This favors the formation of convection. The helium-burning convective core grows until close to the time of core helium exhaustion because the central temperature increases due to the decrease of the fuel supply. This actually increases the energy generation rate and thus the size of the convective core. This growth of the helium core can have a very important consequence. The addition of helium to the helium convection zone at late times increases the O/C ratio produced by helium burning. 
\subsubsection{Convective overshooting effect} Overshooting causes the convective core to grow in mass, and this leads to the mixing of fresh hydrogen from above the core into the central nuclear-burning zone. Therefore, overshooting has the effect of extending the main-sequence lifetime. The band of the main sequence extends to lower effective temperatures when convective core overshooting is larger. A massive star with initial $\rm M > 19 M_{\odot}$ has a helium core $\rm M_{He}> 6 M_{\odot}$ prior to the SN explosion (cf., panel a in Fig. \ref{fig:general 4}). This fact indicates that low-mass progenitors of SNe $\rm \uppercase\expandafter{\romannumeral1}$b can only be produced up to the maximum mass limit of $\sim 19 M_{\odot}$. The amount of helium in the envelope can also be convected into the helium core near central helium exhaustion by convective overshooting. At the stage of advanced nuclear burning, a higher overshooting parameter leads to slightly higher helium and carbon core masses (cf., panel c in Fig. \ref{fig:general 4}). However, the lifetime during helium burning might be reduced by convective overshooting because the central temperature can be increased. The efficiency of helium combustion becomes higher because of the enlarged helium core. For example, the lifetime of core helium burning in model B1 with $\rm \alpha_{over}=0.12$ is about 0.97 Myr, whereas it is about 0.76 Myr in model B3 with $\rm \alpha_{over}=0.35$ (cf., Table \ref{table3}). \subsubsection{The effect of metallicity} The effect of metallicity on the convective core mass is illustrated in panel (c) of Fig. \ref{fig:general 3}. At the beginning of the evolution, the convective core decreases with decreasing metallicity. The reason is that a lower initial metallicity also implies a reduction of the abundance of the CNO nuclei and therefore a decrease of the hydrogen-burning efficiency. 
However, in the second half of the main sequence the situation changes. In order to maintain a higher luminosity, the star has to contract more to increase the core temperature and the nuclear energy production. The convective core can be enlarged by the increased central temperature. Moreover, the initial metallicity has an important impact on the initial hydrogen and helium abundances. For lower metallicity, there is more hydrogen to burn in the central core. The lifetime of the main sequence can be extended by this additional fuel supply. The effect of metallicity on the helium core mass is shown in panel (c) of Fig. \ref{fig:general 4}. \cite{Limongi2018} showed that less massive stars (i.e., $\rm < 40 M_{\odot}$) develop helium core masses essentially independent of the initial metallicity in single stars. This implies that the mass of the helium convective core depends only weakly on the metallicity. However, we can see that a smaller helium convective core (or helium core) can form in a low-metallicity environment after core hydrogen depletion (cf., panel c in Fig. \ref{fig:general 3}). Moreover, in massive stars with low metallicity, the helium convective core can grow so much during central helium burning that it approaches the hydrogen-burning shell. Furthermore, the helium convective core never recedes until core helium exhaustion. The reason is that the conversion of helium to carbon and oxygen increases the opacity in the entire convective core, so that its outer border is continuously pushed outward rather than inward. Hence, a strong chemical discontinuity forms at the border of the helium convective core, where the helium abundance changes from roughly zero to roughly one at central He exhaustion. A helium convective shell starts to form outside the maximum extension of the convective core, whereas the chemical composition of this region is still the one left by hydrogen burning. 
\subsubsection{The effect of the orbital period} From panel (d) in Fig. \ref{fig:general 3}, one can also notice that the mass of the convective core drops from 4.44 $M_{\odot}$ to 2.73 $M_{\odot}$ during the first episode of RLOF for model B6 with initial $\rm P_{orb}=3.0$ days. The result shows that the primary star that loses its hydrogen envelope via RLOF will develop a smaller convective core than its single counterpart. This fact indicates that RLOF can accelerate the decrease in mass of the convective core during the main sequence. The reason is that the core temperature can be reduced significantly by RLOF. However, mass transfer via RLOF has the effect of extending the main-sequence lifetime of the primary star because the efficiency of hydrogen burning via the CNO cycle is extremely sensitive to the core temperature. The convective core after the main sequence is also influenced by the previous RLOF, because it displays a smaller value after the main sequence than the counterpart with initial $\rm P_{orb}=300$ days in model B1. The effect of the orbital period on the helium core mass is shown in panel (d) of Fig. \ref{fig:general 4}. The helium core formed in binary systems is less massive than in the single counterpart, as can be seen by comparing model S4 with B6. This can be explained by an early onset of Case A mass transfer in this system and a significant loss of matter during this process. Case A mass transfer takes place in the primary star of model B6 before the helium core is fully formed. Mass loss via RLOF is strong enough to substantially decrease the total mass and therefore to induce a reduction of the convective core during the main sequence. The primary star in this system loses about 12.8 $M_{\odot}$ during all binary interactions via RLOF. As a consequence, in this case, the helium core at core He depletion is smaller than it would be with Case B or Case C mass transfer. 
Actually, the evolution of the star after core hydrogen burning mainly depends on the mass of the helium core rather than on the total mass. This property implies that mass loss via RLOF globally affects the evolution during the main sequence because it is efficient enough to determine the reduction of the mass of the hydrogen convective core, which in turn is mainly responsible for the initial mass of the helium core. A small convective core develops in an initially tighter binary system. RLOF in Case B or Case C has little impact on the evolution of the convective core (cf., panel d in Fig. \ref{fig:general 4}). The growth of the helium core mass is driven mainly by the development of the hydrogen-burning shell. However, the shell can be removed or extinguished by RLOF, as shown in model B6 with an initial $\rm P_{orb}=3.0$ days. \subsection{The evolution of the rate of mass transfer via RLOF} \subsubsection{The effect of convective overshooting} When the primary star expands beyond its Roche lobe, mass is transferred to the companion. It is the expansion of the donor due to its own nuclear evolution that initiates mass transfer: the expansion of the stellar radius makes the primary star continue to fill its Roche lobe and triggers mass transfer via RLOF. From panel (a) in Fig. \ref{fig:general 5}, one can see that the maximum rate of mass transfer is closely related to the convective overshooting. The maximum rate of mass transfer via RLOF increases with the overshooting parameter. This can be understood by the fact that a larger convective overshooting parameter produces larger radii and the mass transfer rate has an exponential dependence on the stellar radius. The first maximum of the mass transfer rate depends heavily on the termination of the decrease of the orbital period at mass ratio $\rm q=\frac{M_{2}}{M_{1}}=1$. 
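The geometry behind the onset of RLOF can be made concrete with a short sketch (illustrative only, not the paper's code): the orbital separation follows from Kepler's third law, and the Roche-lobe radius from the standard Eggleton (1983) fit; the stellar masses and periods below are hypothetical.

```python
# Sketch: Roche-lobe radius of the donor from Kepler's third law plus the
# Eggleton (1983) fit R_L/a = 0.49 q^(2/3) / (0.6 q^(2/3) + ln(1+q^(1/3))).
import math

G = 6.674e-8          # gravitational constant, cgs
MSUN = 1.989e33       # solar mass, g
RSUN = 6.957e10       # solar radius, cm
DAY = 86400.0         # seconds per day

def separation(m1_msun, m2_msun, p_days):
    """Orbital separation a from Kepler's third law, in solar radii."""
    mtot = (m1_msun + m2_msun) * MSUN
    p = p_days * DAY
    a = (G * mtot * p**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0)
    return a / RSUN

def roche_lobe_radius(m1_msun, m2_msun, p_days):
    """Roche-lobe radius of star 1 (Eggleton 1983 fit), in solar radii."""
    q = m1_msun / m2_msun          # donor/accretor mass ratio
    a = separation(m1_msun, m2_msun, p_days)
    return a * 0.49 * q**(2.0/3.0) / (0.6 * q**(2.0/3.0) + math.log(1.0 + q**(1.0/3.0)))

# Hypothetical 16 + 14 Msun pair: the tighter orbit has the much smaller
# lobe, so the donor overfills it far earlier in its expansion.
tight = roche_lobe_radius(16.0, 14.0, 3.0)     # P = 3 d   -> ~10 Rsun
wide  = roche_lobe_radius(16.0, 14.0, 300.0)   # P = 300 d -> ~230 Rsun
```

This is why short-period systems undergo Case A transfer on the main sequence, while long-period systems only interact once the donor has expanded past the terminal main sequence.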
When the primary star has a radiative envelope in Case A and early Case B mass transfer, the mass transfer via RLOF happens on the Kelvin-Helmholtz timescale. The primary shrinks rapidly in response to mass loss (i.e., the adiabatic mass-radius exponent $\rm \zeta_{ad}=(\frac{d\log R}{d\log M})_{ad}\gg 0$) while the Roche lobe contracts in response to mass loss as well (cf., panel f in Fig. \ref{fig:general 6}). RLOF can proceed only if the Roche lobe is slightly smaller than the stellar radius. One can find that the number of mass-transfer episodes via RLOF might decrease with increasing overshooting parameter. It is closely related not only to the residual amount of the hydrogen envelope but also to the orbital period of the system. Moreover, the expansion of the helium envelope beyond core carbon exhaustion becomes more prominent for a more compact carbon-oxygen core via the mirror effect. In fact, the final orbital period of the binary system increases with the total amount of mass transferred via RLOF. The RLOF ceases when the donor has lost most (but not all) of its hydrogen-rich layers. The primary star still has some hydrogen left in its envelope (typically a few times $\rm 0.1 M_{\odot}$). Moreover, the mass transfer occurs earlier in the model with a smaller overshooting parameter. For example, model B1 with $\rm \alpha_{over}=0.12$ undergoes RLOF at $9.95$ Myr whereas model B3 with $\rm \alpha_{over}=0.35$ undergoes RLOF at $11.15$ Myr. The main reason is that the main-sequence lifetime is extended by the convective overshooting. The model with a higher overshooting is redder and more extended than the one with a smaller overshooting and is therefore more prone to go through a stronger mass-transfer episode during the RSG phase. Mass-transfer rates via RLOF can be very high (i.e., $\rm 10^{-3}-10^{-2} M_{\odot}/yr$), and can far exceed any mass-loss rate for a line-driven wind. 
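The order of magnitude of such thermal-timescale transfer rates can be checked with the textbook relations $t_{\rm KH} \sim GM^{2}/(RL)$ and $\dot{M} \sim M/t_{\rm KH}$; the sketch below (illustrative, not the paper's code, with hypothetical donor parameters) recovers rates of order $10^{-4}$-$10^{-3}\,M_{\odot}/{\rm yr}$:

```python
# Sketch (assumed standard relations): the Kelvin-Helmholtz timescale
# t_KH ~ G M^2 / (R L) and the thermal-timescale transfer rate M / t_KH.
G = 6.674e-8        # cgs gravitational constant
MSUN = 1.989e33     # g
RSUN = 6.957e10     # cm
LSUN = 3.828e33     # erg/s
YR = 3.156e7        # s

def t_kh_yr(m_msun, r_rsun, l_lsun):
    """Kelvin-Helmholtz (thermal) timescale in years."""
    m = m_msun * MSUN
    return G * m * m / (r_rsun * RSUN * l_lsun * LSUN) / YR

def mdot_thermal(m_msun, r_rsun, l_lsun):
    """Thermal-timescale mass-transfer rate estimate, Msun/yr."""
    return m_msun / t_kh_yr(m_msun, r_rsun, l_lsun)

# Hypothetical donor near the end of the main sequence:
# 16 Msun, 10 Rsun, 3e4 Lsun -> rate of a few 1e-4 Msun/yr.
rate = mdot_thermal(16.0, 10.0, 3.0e4)
```

A more extended donor has a shorter thermal timescale at fixed mass and luminosity, so `mdot_thermal` grows with radius, consistent with the higher rates found for the models with larger overshooting.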
The binary evolutionary channel supports the formation of type $\rm \uppercase\expandafter{\romannumeral2}$b supernovae. The observation of the binary companion in the case of the type $\rm \uppercase\expandafter{\romannumeral2}$b SN 1993J, and possibly also in the case of SN 2013df, supports the binary channel. Although the RLOF is the main physical process for mass removal, it cannot completely eliminate the whole hydrogen envelope from the primary star. Whether the SN progenitor retains a large amount of hydrogen is closely related to the subsequent mass loss via stellar winds. Stellar winds are stronger for larger overshooting and favor the formation of SN $\rm \uppercase\expandafter{\romannumeral2}$b (cf., panel b in Fig. \ref{fig:general 7}). \subsubsection{Metallicity effect} From panel (b) in Fig. \ref{fig:general 5}, one can see that the maximum rate of mass transfer via RLOF also depends heavily on the metallicity. The smaller the metallicity, the greater the maximum mass transfer rate. Actually, stellar winds from the two components of the binary system tend to widen the binary separation and reduce the total amount of mass lost by RLOF. Stars with low metallicity have weaker stellar winds and more of the hydrogen envelope available for mass transfer via RLOF. Thus, the size of the Roche lobe becomes smaller in the system with low metallicity. More hydrogen can be retained before the onset of RLOF in the model with the lower metallicity. Consequently, since more of this hydrogen is transferred, less hydrogen is retained after the RLOF phase in the lower-metallicity model (cf., panel c in Fig. \ref{fig:general 1}). The mass transfer rate depends very sensitively on the fractional radius excess of the donor, $\frac{\Delta R}{R_{L}}=\frac{R_{D}-R_{L}}{R_{L}}$, where $\rm R_{D}$ is the radius of the donor, $\rm R_{L}$ is the radius of the Roche lobe, and $\rm \Delta R$ is the radius excess. This implies that the quantity $\rm \Delta R$ is larger for the star with lower metallicity. 
One can also notice that the higher the metallicity, the earlier RLOF begins. For example, model B5 with $\rm Z=0.03$ begins RLOF at the age of 9.0 Myr whereas model B4 with $\rm Z=0.008$ begins RLOF at the age of 11.16 Myr. The main reason is that stars with higher metallicity have larger stellar radii. It can be noticed that the star with low metallicity is prone to generate more compact blue progenitors and to retain less hydrogen at the end of the RLOF, in contrast to the counterpart with high metallicity. However, stellar winds after the RLOF are stronger for the star with high metallicity, and this favors a higher effective temperature because the thin hydrogen envelope can be removed. \subsubsection{The orbital period effect} From panel (c) in Fig. \ref{fig:general 5}, one can see that the maximum rate of mass transfer via RLOF also depends heavily on the orbital period. One can notice that the mass transfer rate is larger for an initially wider binary system because the primary star is more evolved. Mass transfer becomes increasingly unstable and rapid from Case A mass transfer to Case C mass transfer. Case A mass transfer occurs in the system B6 with the shortest initial orbital period. The first phase of mass transfer can usually be divided into a rapid phase on the thermal timescale of the primary, followed by a slow phase on the much longer nuclear timescale. The fast phase of mass transfer proceeds until the primary regains thermal equilibrium, i.e. when its equilibrium radius becomes smaller than its Roche radius. For Case A mass transfer, the orbit widens and the mass transfer rate drops as the mass ratio $\rm q=\frac{M_{2}}{M_{1}}$ approaches unity. Beyond this time the donor star has become the less massive star in the system; in other words, the mass ratio has been reversed. 
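The shrink-then-widen behavior of the orbit around mass-ratio reversal follows directly from angular momentum conservation: for fully conservative transfer, $a\,(M_{1}M_{2})^{2} = {\rm const}$, so the separation is minimal when the two masses are equal. A minimal sketch (textbook relation, not the paper's code; masses are illustrative):

```python
# Sketch (assumed textbook relation): for conservative mass transfer,
# orbital angular momentum conservation gives a * (M1*M2)^2 = const,
# so the orbit shrinks while the donor is the more massive star and
# widens again after the mass ratio is reversed.
def separation_ratio(m1_0, m2_0, m1):
    """a/a0 after conservatively transferring mass from star 1 to star 2."""
    m2 = m2_0 + (m1_0 - m1)                  # total mass conserved
    return (m1_0 * m2_0 / (m1 * m2)) ** 2

# Donor starts at 16 Msun with a 14 Msun companion (hypothetical values).
shrink = separation_ratio(16.0, 14.0, 15.5)  # before q = 1: orbit shrinks
at_q1  = separation_ratio(16.0, 14.0, 15.0)  # equal masses: minimum a
widen  = separation_ratio(16.0, 14.0, 10.0)  # after reversal: orbit widens
```

Since the Roche-lobe radius scales with the separation, this also explains why the mass-transfer rate peaks near the termination of orbital shrinkage at $q = 1$ and then declines as the orbit widens.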
During the rapid mass-transfer phase, both stars are out of thermal equilibrium: the primary is somewhat less luminous due to mass removal, while the secondary is somewhat more luminous as a result of accretion. The mass transfer rate also depends sensitively on the mass ratio of the secondary star to the primary star, $q=\frac{M_{2}}{M_{1}}<1$, with increasing rates for lower values of the mass ratio q. The second RLOF occurs at 10.635 Myr, when the envelope of the primary star expands due to hydrogen shell burning during the helium core contraction phase. The resulting high mass-transfer rate (i.e., $\dot{M}_{R} \sim 1.9953\times 10^{-4} M_{\odot}/yr$) occurs again on the Kelvin-Helmholtz timescale. The primary star loses most of its hydrogen envelope, exposing its helium core of $3.5 M_{\odot}$ with a residual hydrogen envelope of $\rm M_{H} = 1.1 M_{\odot}$. The luminosity $\rm \log L/L_{\odot}$ increases from 4.52 to 4.72 accordingly. During this phase of mass transfer, the orbital separation increases significantly. Although the star remains compact ($R \sim 1.56 R_{\odot}$) at the end of the second mass transfer, helium shell burning can be activated after core helium exhaustion. This leads to the expansion of the envelope up to $\sim 6.3 R_{\odot}$ during core carbon burning. Case B mass transfer in the massive binary B7, with an orbital period of $\rm \sim 10$ days, is similar to Case A mass transfer in many respects. Because the envelope of the primary star is radiative, the mass transfer starts with a rapid, thermal-timescale phase during which the mass ratio is reversed. An important difference is that the donor star is more extended and therefore has a shorter thermal timescale than in the Case A mass transfer of model B6, so the mass transfer rates are correspondingly larger during this phase. Moreover, the primary star is itself in a rapid phase of evolution when it passes through the Hertzsprung gap. 
It is out of thermal equilibrium and expands on the timescale at which its core contracts. As a consequence, after the mass-ratio reversal, mass transfer continues on the expansion timescale of the primary, only slightly slower than the thermal timescale. Therefore, the slow phase of mass transfer seen in Case A is absent. Mass transfer continues at a fairly high rate until most of the envelope has been removed. The evolutionary track forms a loop in the HR diagram during the mass transfer phase, and the maximum transfer rate coincides with the minimum luminosity of the donor star (cf., Fig. \ref{fig:general 7}). The decrease of the luminosity during mass transfer is caused by the large thermal disequilibrium of the primary star. A primary star with a radiative envelope shrinks in response to mass loss and has to expand again to reach thermal equilibrium. This requires the absorption of thermal energy, so that the surface luminosity during thermal-timescale mass transfer is much smaller than the nuclear luminosity provided by the H-burning shell. When central helium is ignited in the core, the mass transfer via RLOF ceases. The primary star contracts and detaches from its Roche lobe. This occurs when the primary is almost a bare helium core with $\rm M_{He} \sim 4.15 M_{\odot}$ and a thin H-rich layer of $\rm \sim 0.75 M_{\odot}$. The primary shifts to a position close to the helium main sequence in the HR diagram. The remnants of Case B mass transfer are in a long-lived phase of evolution: these are binaries consisting of an almost bare helium-burning primary and a more massive main-sequence companion star. Mass transfer via RLOF will widen their orbits significantly. Observed counterparts of this evolutionary phase among massive systems are the WR+O binaries, consisting of a Wolf-Rayet star and a massive O star. 
A common prescription for the largest rates of RLOF is to assume that the mass transfer is limited by the thermal timescale of a massive star with a radiative envelope. This implies that the mass-transfer rate is likely to be highest for more massive and more luminous stars, which have short thermal timescales. The thermal timescale can also decrease as the star evolves. This indicates that the mass transfer rate via RLOF is higher for wider systems, in which the donor star is more evolved. Mass-loss rates can be of order $\rm 2.4 \times 10^{-2} M_{\odot} yr^{-1}$ for model B9 with an initial $\rm P_{orb} \approx 1600$ days. The main reason is that the adiabatic response of the donor is unable to keep it within its Roche lobe, leading to ever-increasing mass-transfer rates \citep{Ritter1988,Wellstein2001}. Mass transfer accelerates to a timescale in between the thermal and dynamical timescales of the donor. Stars with deep convective envelopes, i.e. giants or red supergiants, tend to expand or keep a roughly constant radius (i.e., the adiabatic mass-radius exponent $\rm \zeta_{ad}=(\frac{d\log R}{d\log M})_{ad} \leq 0$) when they lose mass adiabatically. This fact implies that the response to mass transfer of a convective envelope is very different from that of a radiative envelope. The Roche-lobe radius shrinks when mass is transferred from a more massive to a less massive star (cf., panel f in Fig. \ref{fig:general 6}). This response to mass loss via RLOF can cause the donor to overfill its Roche lobe by an ever larger amount and cause runaway mass transfer. The maximum mass transfer rate for Case C is the highest among the three cases of mass transfer. However, as the orbital period increases, the total amount of mass transferred via RLOF decreases. This indicates that the duration of RLOF becomes shorter in a system with an initially longer orbital period. The second episode of RLOF happens at 10.519 Myr, when the envelope of the primary star expands. 
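The runaway condition above amounts to comparing the donor's adiabatic mass-radius exponent $\zeta_{\rm ad}$ with the Roche-lobe exponent $\zeta_{\rm L} = d\ln R_{L}/d\ln M_{1}$: transfer runs away when the lobe shrinks faster than the star. The sketch below (standard stability criterion, not the paper's implementation; masses are illustrative) evaluates $\zeta_{\rm L}$ numerically for conservative transfer using the Eggleton Roche-lobe fit:

```python
# Sketch: numerical Roche-lobe mass-radius exponent zeta_L for
# conservative transfer (a*(M1 M2)^2 = const), Eggleton (1983) lobe fit.
# Runaway transfer occurs when the donor's zeta_ad < zeta_L.
import math

def rl_over_a(q):
    """Eggleton (1983) fit, q = M_donor / M_accretor."""
    return 0.49 * q**(2.0/3.0) / (0.6 * q**(2.0/3.0) + math.log(1.0 + q**(1.0/3.0)))

def zeta_L(m1, m2, eps=1e-5):
    """d ln R_L / d ln M1 at fixed total mass and orbital angular momentum."""
    def ln_rl(x1):
        x2 = (m1 + m2) - x1                    # total mass conserved
        a_rel = (m1 * m2 / (x1 * x2)) ** 2     # a/a0 from J_orb conservation
        return math.log(a_rel * rl_over_a(x1 / x2))
    dm = m1 * eps
    return (ln_rl(m1 + dm) - ln_rl(m1 - dm)) / (2.0 * eps)

# Hypothetical 16 Msun red-supergiant donor with a 14 Msun companion:
z = zeta_L(16.0, 14.0)            # positive for q > 1
# A deep convective envelope has zeta_ad <= 0 < zeta_L -> runaway RLOF,
# matching the very high Case C rates quoted for model B9.
runaway_for_convective = (0.0 < z)
```

For a radiative donor with $\zeta_{\rm ad} \gg 0$ the same comparison is stable, which is why Case A/early Case B transfer self-regulates on the thermal timescale instead.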
The radius cannot expand because it is restricted by the Roche lobe. Finally, the primary has a helium core mass of $\rm M_{He} \sim 4.95M_{\odot}$ and a thick H-rich layer of $\rm \sim 1.87 M_{\odot}$. Therefore, this kind of mass transfer cannot give rise to a type $\rm \uppercase\expandafter{\romannumeral2}$b SN due to the existence of a thick envelope. \subsubsection{Rotation effect} From panel (d) in Fig. \ref{fig:general 5}, one can see that the maximum rate of mass transfer via RLOF is also related to rotation. One can see that rapid rotation can increase the maximum mass transfer rate. However, the total amount of mass transferred via RLOF decreases with increasing rotational velocity. Rotation can delay the beginning of Case B mass transfer. There are three main reasons. First, rotational mixing can prolong the lifetime of the main sequence. Second, the orbital separation can be widened by angular momentum transfer because more spin angular momentum can be transferred to the orbit. Third, rotation-enhanced stellar winds can also widen the orbital separation. \subsection{Stellar radii and the evolution in the HR diagram} \subsubsection{Rotation effect} Panel (a) in Fig. \ref{fig:general 6} shows the photospheric radius for the non-rotating and rotating single stars as a function of evolutionary age. One can see that, from the beginning of the evolution, an increase in rotational speed leads to a decrease in radius. This can be understood since the star experiences a smaller gravitational acceleration because of the supporting effect of the centrifugal force and behaves like a non-rotating star with a slightly lower mass. The central temperature is smaller and the corresponding central density is larger. This results in a decrease in both luminosity and radius. Massive stars expand more as they cross the Hertzsprung gap and ignite helium as red supergiants, while their radii swell slightly. 
The rotation effects are almost unnoticeable in the evolution of the radius after the main sequence because of the rapid expansion of the hydrogen envelope. However, the rapidly rotating star S9 can experience very strong mass loss via stellar winds, and its hydrogen-rich envelope can be removed when core helium is ignited. The radius decreases and the star slightly shifts toward a Wolf-Rayet star. The massive single star ($\rm \sim 19 M_{\odot}$) with an initial velocity of 400 km/s can approach the inferred luminosity and effective temperature of the type $\rm \uppercase\expandafter{\romannumeral2}$b progenitor star. This star finally explodes as a type $\rm \uppercase\expandafter{\romannumeral2}$-P supernova. Panel (a) in Fig. \ref{fig:general 7} shows the evolutionary tracks in the HR diagram for the non-rotating and rotating single stars. The effective temperature behaves as $\rm T_{eff}\propto M^{0.5-0.6}$ at the onset of the evolution. Therefore, the effective temperature increases with the stellar mass. For the star with a moderate initial rotational velocity of $\rm v_{ini} < 200 km/s$, the increase of the stellar wind with respect to the non-rotating case is limited. Rotational mixing plays only a minor role in decreasing the opacity of the envelope. These models can maintain the star in the red supergiant state throughout the whole core helium burning phase. However, efficient rotational mixing in model S9 can increase the size of the helium core significantly, and the post-main-sequence luminosity of the fast rotating star is higher, by approximately a factor of $\sim 2.5$, than that of the non-rotating counterpart S7. Therefore, the post-MS luminosity of the rapidly rotating star is higher than that of a non-rotating counterpart. The enhanced mass loss by rotation in model S3 implies that the tracks for these stars do not extend as far to the right of the HR diagram as they do for the non-rotating star S1. 
Furthermore, the star S9 with higher rotational velocity shifts its color away from the red part of the HRD after core helium exhaustion earlier than model S7, which does not include rotation. The star terminates its evolution within the different regions of the HR diagram that have been defined for the various types of SN $\rm \uppercase\expandafter{\romannumeral2}$b progenitors. The rotating model S9 can approach the observed effective temperature of the RSG type SN $\rm \uppercase\expandafter{\romannumeral2}$b. However, the luminosities of SN $\rm \uppercase\expandafter{\romannumeral2}$b progenitors are in the range $\rm \log L/L_{\odot}=4.92-5.12$, and the final luminosity of model S9 is beyond the range of the type $\rm \uppercase\expandafter{\romannumeral2}$b observations. Therefore, less massive stars (i.e., $\rm M < 19 M_{\odot}$) with higher rotational velocity are prone to form type $\rm \uppercase\expandafter{\romannumeral2}$b supernovae. Furthermore, we also note that the blue-ward excursion can be increased by rotation in the binary system (cf. panel e in Fig. \ref{fig:general 7}). \subsubsection{Overshooting effect} One can see in panel (b) of Fig. \ref{fig:general 6} that convective overshooting from the core evidently provides a larger reservoir of available nuclear fuel during the main sequence, and consequently it produces an increase of the effective temperature and the luminosity after the RSG stage. Hence the radius of a star slightly decreases at a given luminosity for a larger overshooting. During helium burning, model B3 with the higher overshooting parameter has a smaller radius because a larger amount of the hydrogen envelope has been removed. For a small hydrogen envelope mass $\rm M_{H}< 0.1 M_{\odot}$, a higher luminosity is always associated with a smaller radius in model B3. The radius remains smaller than it is at the tip of the giant branch. 
Before core helium exhaustion, the radius of model B3 can display a significant expansion because of the existence of a thick helium envelope. A double-peaked light curve has been observed in SN 1993J. The light curve shows a rapid decline during one to three days after the initial peak, which is a consequence of cooling after the shock breaks out from the surface. The duration of the cooling phase is mainly dominated by the radius of the progenitor. More compact progenitors, like Wolf-Rayet stars, give rise to faster declines than extended progenitors, such as red supergiant stars. The main reason is that an extra amount of energy is needed to expand a more compact structure. Panel (b) in Fig. \ref{fig:general 7} illustrates the HR diagram of three models with different convective overshooting parameters in the binary system. The convective overshoot is important in increasing the luminosity on or after the main sequence, and thereby stellar winds can also be increased. When hydrogen starts to become exhausted in the core, a thin hydrogen-burning shell begins to take shape and moves outward. This leads to a subsequent expansion and cooling of the stellar envelope; this is the so-called mirror effect. The star B3 with a larger overshooting rapidly moves redward in the HR diagram and becomes a red supergiant in comparison with model B1 with a smaller overshooting. The low temperature resulting from the expansion significantly increases the envelope opacity at the RSG phase, which makes it easier for photons to be absorbed by the outer envelope, leading to increased radiation pressure and stronger line-driven winds. Moreover, the outer envelopes are also less tightly bound, since the radius is now several hundred solar radii or larger and the gravitational potential becomes shallow. Both of these effects greatly reinforce the mass loss and will result in the loss of a significant part of the outer envelope. 
During the mass transfer phase via RLOF, the stars lose most of their hydrogen-rich envelopes on a thermal timescale. The mass-transfer phase proceeds at similar effective temperatures in the three models. The main reason is that the size of the Roche lobe is approximately the same in the three binaries. We have noted that the blue loop behavior is almost absent in all single stars in comparison with the binary systems (cf., panel a in Fig. \ref{fig:general 7}). We attribute this to the presence of a thick hydrogen envelope in the single stars. This implies that there exists a threshold for the hydrogen envelope mass below which the blue loop can develop. The inner layers with higher temperature can be exposed by RLOF. Actually, an evolution to the blue after the red supergiant phase appears only if the ratio of the core mass to the total mass exceeds a given limit of around $70\%$. This implies that the yellow or blue supergiant will still have a hydrogen envelope that comprises at most about $30\%$ of the total mass. The hottest point of the blue loop in model B1 corresponds to the minimum in the stellar radius when the core helium drops to $\rm Y_{c, He} \sim 0.25$, after which the envelope starts to expand and the star approaches the red giant branch again when the core helium reaches $\rm Y_{c, He} \sim 0.02$. We also see that the blue excursion can be triggered earlier in B3 with a larger overshooting than in the B1 model with a smaller overshooting. The reason is that the higher luminosity induced by larger convective overshooting can trigger stronger stellar winds after the RLOF. The donor star loses a large amount of hydrogen during the blue excursion phase when the WR wind of \cite{Nugis2000} is adopted. The remaining hydrogen envelope can be completely eliminated by strong winds. The binary model B3 will explode as a type $\rm \uppercase\expandafter{\romannumeral1}$b supernova because the hydrogen envelope is less than $1 \%$ of the total mass. 
\subsubsection{Metallicity effect} Panel (c) in Fig. \ref{fig:general 6} shows the radius as a function of the evolutionary age at various metallicities. The radius is smaller at lower metallicity during the main sequence, for two main reasons. Firstly, because an extremely low metallicity implies a weaker CNO cycle, the star depends strongly on the pp-chains to produce its nuclear energy at the beginning of its evolution. Since the pp-chains are much less sensitive to temperature than the CNO cycle, the star has to contract more to reach a higher central temperature. Secondly, the opacity is lower and thus the envelope is more transparent at low metallicity. The radiative gradient $\rm \nabla_{rad}\propto\frac{\kappa P}{T^{4}}$ is lower and the star remains more compact. Before RLOF, we can see that the model with lower metallicity displays a larger expansion compared to the one with higher metallicity. The reason is that the residual hydrogen mass in the envelope is closely related to the stellar radius: the stellar radius generally decreases with decreasing leftover hydrogen mass in the envelope. The progenitor radius of type $\rm \uppercase\expandafter{\romannumeral2}$b SNe ranges from $\sim50 R_{\odot}$ for the BSG progenitor to $\sim 600 R_{\odot}$ for the RSG progenitor. One of the main parameters that determines the final radius and surface temperature of an SN IIb progenitor is the hydrogen envelope mass at the pre-SN stage. We note that the hydrogen envelope mass range is $\rm 0.35 M_{\odot} < M_{H} <0.5 M_{\odot}$ for RSG SNe $\rm \uppercase\expandafter{\romannumeral2}$b, $\rm 0.15 M_{\odot} < M_{H} <0.35 M_{\odot}$ for YSG SNe $\rm \uppercase\expandafter{\romannumeral2}$b, and $\rm \sim0.033 M_{\odot} < M_{H} <0.15 M_{\odot}$ for BSG SNe $\rm \uppercase\expandafter{\romannumeral2}$b (cf., Table \ref{table1}).
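These envelope-mass ranges amount to a simple lookup. A minimal sketch of the classification they imply (the function name and the handling of the range boundaries are our own; the mass intervals are those quoted above):

```python
# Hypothetical helper: classify an SN IIb progenitor from its residual
# hydrogen-envelope mass m_h (solar masses), using the ranges quoted
# in the text. Boundary handling is an arbitrary choice of this sketch.
def classify_iib_progenitor(m_h):
    if 0.35 < m_h < 0.5:
        return "RSG"        # red supergiant progenitor (e.g. SN 1993J-like)
    if 0.15 < m_h <= 0.35:
        return "YSG"        # yellow supergiant progenitor
    if 0.033 <= m_h <= 0.15:
        return "BSG"        # blue supergiant progenitor
    if m_h < 0.033:
        return "Ib"         # below the IIb/Ib hydrogen-mass threshold
    return "IIP-like"       # too much hydrogen left for a IIb

print(classify_iib_progenitor(0.4))   # RSG
```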
\cite{Gilkis2022} proposed that the hydrogen-mass threshold between types $\rm \uppercase\expandafter{\romannumeral1}$b and $\rm \uppercase\expandafter{\romannumeral2}$b is 0.033 $M_{\odot}$. This is approximately consistent with the previous estimates for SNe $\rm \uppercase\expandafter{\romannumeral2}$b: $\rm M_{H} = 0.2-0.4 M_{\odot}$ for SN 1993J and SN 2013df (RSG progenitors), $\rm M_{H} \simeq 0.1 M_{\odot}$ for SN 2011dh (YSG progenitor), and $\rm M_{H} \simeq 0.06 M_{\odot}$ for SN 2008ax (BSG progenitor). Stars with equal masses on the zero-age main sequence but with different initial metallicities have different pre-supernova structures for a variety of reasons. Most importantly, if the amount of mass loss is very low in single stars, the pre-supernova star, including its helium core, is larger. Consequently, it has a larger compactness \citep{Sukhbold2014}. We can see that the primary star in the binary system can deviate from this trend because the helium core can be reduced by RLOF. Moreover, low metallicity implies a smaller initial helium mass fraction and more hydrogen. The final helium core mass is sensitive to this and is reduced accordingly (cf. Panel b in Fig. \ref{fig:general 3}). Panel (c) in Fig. \ref{fig:general 7} illustrates the HR diagram of three models at various metallicities. Stars with low metallicity become bluer than their high-metallicity counterparts. The central temperature is slightly higher at low metallicity because of the lower abundance of the CNO elements that catalyse the nuclear reactions. When core hydrogen is exhausted, the star with low metallicity starts central helium burning on the blue side of the HR diagram compared with the one with high metallicity. Note that the primary star in all three binaries can experience a second RSG phase when helium becomes depleted in the core. Low metallicity reduces the energy generation of the CNO cycle.
The more active H-burning shell in the high-metallicity case reduces the pressure on the helium core. The metal-rich model is hotter and more compact than the metal-poor model at core carbon exhaustion because the metal-poor model retains more of its hydrogen envelope. Therefore, low-metallicity stars give rise to slightly stripped Type $\rm \uppercase\expandafter{\romannumeral2}$b SNe, while higher-metallicity stars are prone to producing Type $\rm \uppercase\expandafter{\romannumeral1}$b or $\rm \uppercase\expandafter{\romannumeral1}$c SNe. The hydrogen envelope in SN $\rm \uppercase\expandafter{\romannumeral2}$b progenitors becomes thicker for lower-metallicity stars, while the corresponding opacity becomes smaller. As these two effects compensate each other, the radii of the $\rm \uppercase\expandafter{\romannumeral2}$b progenitors for both the low- and high-metallicity models B4 and B5 are found to be very similar (cf., Table \ref{table1}). The more compact supergiant envelope in model B5 places greater pressure on the helium core and can affect its subsequent evolution. It can determine whether the progenitor is a red supergiant or a yellow one. For example, model B4 explodes as an RSG SN $\rm \uppercase\expandafter{\romannumeral2}$b progenitor whereas model B5 ends as a YSG SN $\rm \uppercase\expandafter{\romannumeral2}$b progenitor. \subsubsection{The orbital period effect} Panel (d) in Fig. \ref{fig:general 6} shows the radius as a function of the evolutionary age at various orbital periods. The radius evolution is very distinct: the radius of single stars increases monotonically until a maximum of $\rm \sim 10^{3} R_{\odot}$ is reached, whereas a non-monotonic increase of the radius occurs in the binary system because thermal equilibrium is broken by the onset of RLOF. The more hydrogen is stripped from the envelope, the more the radius shrinks during the RLOF.
When the mass transfer via RLOF subsides, the star gradually restores thermal equilibrium and its radius gradually increases. In fact, the radius can be greatly changed by the complications of binary interaction: the presence of a thin hydrogen envelope in some of the binary models leads to a more extended envelope than in the corresponding single helium-star models without a hydrogen envelope \citep{Yoon2017}. The composition of the envelope also has an important impact on the evolution of the effective temperature. The presence of hydrogen in the envelope of an extended progenitor can greatly reduce its effective temperature. One can notice that single-star progenitors tend to have larger radii than binary-star progenitors. For example, the radius of the binary-star SN progenitor model B1 is $\rm 851 R_{\odot}$ while the radius of its single-star counterpart S4 is $\rm 933 R_{\odot}$. There are two main reasons. Firstly, the stellar radius in binary systems is limited by the Roche lobe. Secondly, a smaller residual hydrogen envelope is retained in the binary system owing to mass transfer via RLOF. SN $\rm \uppercase\expandafter{\romannumeral2}$b progenitors with smaller (larger) envelope masses are more compact (extended) and hot (cool). Ultra-stripped primaries are believed to be very hot objects, emitting the majority of their photons in the extreme ultraviolet, and they can remain hidden by their companions. Slightly stripped donors would be significantly cooler and more visible during their evolution \citep{Gotberg2017}. A more thorough stripping process in an initially tighter system can cause the star to evolve toward a hydrogen-deficient WR star. The mass-loss rate just before the SN explosion contains important information about the evolutionary path, and the mass-loss history is reflected in the density of the circumstellar matter.
The radii of SN $\rm \uppercase\expandafter{\romannumeral2}$b progenitors cover the range from $\sim50 R_{\odot}$ for the BSG progenitor to $\sim 600 R_{\odot}$ for the RSG progenitor. \cite{Maeda2015} have shown that there is a close relationship between the mass-loss rate of stellar winds and the progenitor radius prior to the explosion. They found that more extended progenitors ($\sim 600 R_{\odot}$; e.g., SNe 1993J and 2013df) have a higher mass-loss rate of up to $\rm \sim 10^{-5} M_{\odot}/yr$, while less extended progenitors ($\sim 200 R_{\odot}$; e.g., SN 2011dh) have a moderate mass-loss rate ($\rm \sim3 \times 10^{-6} M_{\odot}/yr$). \cite{Ouchi2017} interpreted this as the less extended type $\rm \uppercase\expandafter{\romannumeral2}$b SN progenitor having not only a smaller remnant envelope mass to transfer but also a larger value of the equilibrium index $\xi_{eq}=(\frac{\partial \log R}{\partial \log M})_{eq}$, which denotes the variation in the primary radius in response to the reduction of the envelope mass, assuming that this mass loss proceeds slowly enough to keep the primary in thermal equilibrium. A larger value of $\rm \xi_{eq}$ indicates that the progenitor shrinks faster in response to mass loss. However, \cite{van2005} reported that the mass-loss rate via stellar winds can reach as high as $\sim 10^{-4}M_{\odot}/yr$ for some red supergiant stars. An alternative interpretation is that the strong mass loss of the more extended progenitor can be understood as a post-RLOF stellar wind of a red supergiant, as in model B1. The radius of the primary in model B1 attains a value of $478 R_{\odot}$ and its stellar wind has a value of $\rm 2.58\times 10^{-6} M_{\odot}/yr$ (RSG progenitor of SNe $\rm \uppercase\expandafter{\romannumeral2}$b). These theoretical results are approximately consistent with the observations of the type $\rm \uppercase\expandafter{\romannumeral2}$b SN 1993J (cf.
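Since $\xi_{eq}$ is a logarithmic derivative along a sequence of thermal-equilibrium models, it can be estimated by a simple finite difference. A minimal sketch, where the $(M, R)$ pairs are purely illustrative and not taken from our models:

```python
import math

# Finite-difference estimate of the equilibrium index
# xi_eq = (d log R / d log M)_eq, evaluated from two thermal-equilibrium
# models of the same star with slightly different total masses.
def xi_eq(m1, r1, m2, r2):
    return (math.log10(r2) - math.log10(r1)) / (math.log10(m2) - math.log10(m1))

# Illustrative numbers only: a star whose radius halves when it loses
# 10% of its mass has a large positive xi_eq, i.e. it shrinks quickly
# in response to slow mass loss.
print(xi_eq(10.0, 100.0, 9.0, 50.0))
```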
Table \ref{table2}) \citep{Sravan2020}. The thin envelope of an SN $\rm \uppercase\expandafter{\romannumeral2}$b has a very low mass compared to the envelope of an SN $\rm \uppercase\expandafter{\romannumeral2}$P, but it still soaks up a lot of energy from the SN shock, leading to the subsequent strong cooling emission. The large radius ($\rm \sim 600 R_{\odot}$) of SN 1993J is due to a thin extended hydrogen envelope around the progenitor and should have an important impact on the shock break-out and the bolometric light curve. The extended model gives rise to a pronounced peak at the early stage (about 5 days after the SN explosion) of the observed bolometric light curves of SNe $\rm \uppercase\expandafter{\romannumeral2}$b, while the compact progenitor shows a much smaller bump. The main difference is due to the extra amount of energy required to expand a more compact structure \citep{Bersten2012}. Panel (d) in Fig. \ref{fig:general 7} illustrates the evolution of several models with various initial orbital periods in the HR diagram. Models with different initial orbital periods can give rise to blue, yellow, and red supergiant SN $\rm \uppercase\expandafter{\romannumeral2}$b progenitors. Whether a star explodes as a red or blue supergiant, and how much hydrogen envelope remains, depends on the mass loss via stellar winds or RLOF in the binary system. The extended blue loop can be greatly influenced by the enhanced mass loss via RLOF. The effective temperature gradually decreases as the radius and the initial orbital period increase. The primary in an initially tighter system is hotter and more compact than the one in a wider system because an initially wider system can retain more hydrogen in its envelope, which allows it to expand greatly. The position in the HRD of the endpoint of the evolution heavily depends on the leftover mass of the hydrogen envelope.
If the hydrogen envelope contains more than about $5\%$ of the initial mass, the star evolves into a red supergiant with a lower effective temperature at the pre-supernova stage \citep{Meynet2015}. If the hydrogen envelope keeps less than $1 \%$ of the total mass, the star becomes a blue supergiant or a Wolf-Rayet star. The primary star in the wide system retains a significant hydrogen layer after RLOF which can sustain a hydrogen-burning shell, while the one in the tight system loses its hydrogen-rich envelope earlier because of ultra-stripping via RLOF. The hydrogen shell burning governs the nuclear luminosity at the time of helium exhaustion, and this results in a local maximum in the radius. The inflated hydrogen envelope of a supergiant progenitor has an important impact on the early-time light curves of SNe $\rm \uppercase\expandafter{\romannumeral2}$b during the shock-cooling phase: an inflated hydrogen envelope can sustain a high luminosity. The wind mass loss of a single star with mass $\rm< 20 M_{\odot}$ is not strong enough to completely expel the envelope (cf., panel a in Fig. \ref{fig:general 5}). Instead, for a less massive star ($\rm < 20 M_{\odot}$) to become a stripped-envelope supernova, significant mass transfer in a close binary system is needed. In this case, the stripped-envelope supernova has a lower mass. RLOF can prevent the redward evolution after the main sequence in the initially tighter systems B6 and B7. The pre-explosion photometry of the progenitor is critical for constraining the evolution and final appearance of an SN $\rm \uppercase\expandafter{\romannumeral2}$b and its companion star. To date, a total of five type $\rm \uppercase\expandafter{\romannumeral2}$b SN progenitors have been identified in pre-explosion imaging, from SN 1993J to the more recent SN 2016gkg. Some observational evidence suggests the presence of a companion near the progenitor stars of SN 1993J, SN 2001ig, and SN 2011dh.
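The two fractional thresholds above can be summarized as a small sketch (the function name and the intermediate branch are our own; the $5\%$-of-initial-mass and $1\%$-of-total-mass limits are those quoted above):

```python
# Hypothetical sketch of the pre-supernova appearance criterion discussed
# above: the leftover hydrogen-envelope mass, measured against the initial
# and the current total mass, sets whether the progenitor ends as a red
# supergiant or as a blue supergiant / Wolf-Rayet star.
def pre_sn_appearance(m_env, m_init, m_total):
    if m_env > 0.05 * m_init:    # more than ~5% of the initial mass
        return "red supergiant"
    if m_env < 0.01 * m_total:   # less than ~1% of the total mass
        return "blue supergiant or Wolf-Rayet star"
    return "intermediate (yellow or blue supergiant)"
```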
The observational positions of the two component stars in the HR diagram also provide a good target against which to construct the theoretical model and its evolutionary properties. At the time of the SN explosion, the primary star in model B1 fits well with the pre-explosion observations. However, the location of the companion star of SN 1993J in the HR diagram is only in approximate agreement with observations. The luminosity of the secondary practically matches the observations, but the star is too blue (cf., panel f in Fig. \ref{fig:general 7}). The secondary in our model is shifted to a higher effective temperature by just $\rm \Delta \log T_{eff}\simeq 0.14$. \cite{Stancliffe2009} claimed that it is extremely difficult for the secondary star to attain the observational position in the HR diagram. The position indicates that the secondary star appears as an overluminous B supergiant and is extremely close to (or just beyond) the right side of its main-sequence band. Its position predicted by the theoretical model is extremely sensitive to the value of the accretion efficiency $\rm \beta$ \citep{Benvenuto2013}. Both the effective temperature and luminosity of the companion star of SN 1993J can be reduced by a smaller accretion efficiency, although this can only be achieved in a very narrow range of initial masses and periods. \cite{Stancliffe2009} showed that if mass transfer is conservative, the correct location of the secondary in the HR diagram can be reproduced by a system consisting of a 15 $\rm M_{\odot}$ primary and a 14 $\rm M_{\odot}$ secondary in an orbit with an initial period of 2100 days. However, they used a high-metallicity model with $\rm Z=0.04$. We simulate the SN 1993J system as comprising a 16 $\rm M_{\odot}$ primary and a 15 $\rm M_{\odot}$ secondary with an initial orbital period of 1100 days. The metallicity is $\rm Z=0.04$ and the accretion efficiency is adjusted to a value of $\rm \beta=0.15$.
The final evolutionary positions of the two components in the HR diagram are in good agreement with the observations (cf., Table \ref{table2} and panel f in Fig. \ref{fig:general 7}). In model B12, the luminosity and the effective temperature of the primary have the values $\rm \log L/L_{\odot}=4.91$ and $\rm \log T_{eff}=3.88$, respectively. The observations of the SN 2008ax progenitor imply that the luminosity and the effective temperature of the primary are $\rm \log L/L_{\odot}=4.42-5.3$ and $\rm \log T_{eff}=3.88-4.3$, respectively \citep{Sravan2020}. These theoretical results are also consistent with the observations of SN 2008ax (BSG progenitor of SNe $\rm \uppercase\expandafter{\romannumeral2}$b). \subsection{Carbon profile at the core carbon exhaustion} \subsubsection{The overshooting effect} Panel (a) in Fig. \ref{fig:general 8} shows the mass fraction of carbon as a function of the Lagrangian mass for the non-rotating and rotating single stars at core carbon exhaustion. The carbon-oxygen core mass, which scales with the mass of the helium core, increases with the initial mass of the star at all metallicities. For example, the CO core mass is 2.853 $M_{\odot}$ for model S4 while it is 3.418 $M_{\odot}$ for model S7. One can see that the overshooting governs the final mass of the carbon-oxygen core. A higher overshooting parameter results in a larger final carbon-oxygen core mass. The growth of the helium-burning core by overshooting brings an additional supply of helium above the core that favours the formation of a larger CO core at core helium exhaustion. In fact, all single stars and the stripped primary stars in our calculations display similar density profiles in the iron core (i.e., $\rm M \lesssim 1.6 M_{\odot}$). However, the star shows a shallower drop of the density profile above the iron core for the more massive helium core. The local density tends to be higher while the central core has a higher temperature.
An important effect is that increased overshooting leads to a lower ratio of carbon to oxygen at the end of core helium burning, which has a strong impact on the strength of subsequent carbon burning and on the final size of the iron core. For a high carbon abundance in the model with a smaller overshooting parameter, the phase of convective carbon-shell burning may proceed longer, typically leading to smaller carbon-exhausted cores. This in turn produces smaller iron cores with steeper density gradients outside the iron core and results in a pre-SN structure that more easily produces a successful supernova \citep{Limongi2018}. Furthermore, the formation of a larger convective core induced by the overshooting slows down the contraction of the core as well as its heating. It can be noticed that at the mass coordinate of $1.0 M_{\odot}$ the carbon mass fraction decreases with increasing overshooting parameter. Generally, the mass fraction of $\rm ^{12}C$ left by core helium burning decreases with the CO core mass. This is because the star with a larger convective overshooting has a larger core mass, a condition that favours the rate of alpha captures onto carbon over the triple-alpha process. Therefore, carbon is destroyed more efficiently in the larger core induced by the convective overshooting and the mass fraction of oxygen is higher in the enlarged core. This contributes to a faster outward shift of the carbon-burning shell and a more compact core of the star \citep{Chieffi2013}. \subsubsection{The metallicity effect} Panel (b) in Fig. \ref{fig:general 8} shows the mass fraction of carbon as a function of the Lagrangian mass in non-rotating binary models with different metallicities at the end of the core carbon-burning phase. In fact, mass loss in the more massive star is strong enough to substantially reduce the total mass and therefore to reduce the hydrogen convective core during the main sequence.
As a consequence, the helium core mass is smaller than it would be in the case of weak mass loss. Therefore, strong mass loss in the single star with high metallicity will drive the evolution toward a smaller carbon-oxygen core in contrast to the model with low metallicity. However, RLOF can make the low-metallicity primary star B4 transfer more of its hydrogen envelope to the companion. Therefore, the carbon-oxygen core mass in the low-metallicity star becomes smaller due to RLOF. The star tends to behave as a lower-mass star and evolves toward lower central temperature and higher central density. Such an occurrence has five outcomes. First, the helium convective core shrinks progressively in mass and leaves a zone with variable chemical composition. Second, the lifetime of core helium burning increases accordingly. Third, the total luminosity progressively falls and the star shifts downward in the HR diagram. Fourth, the local density above the iron core tends to be higher in an initially high-metallicity environment while the central temperature becomes larger. Fifth, the mass fraction of $\rm ^{12}C$ at the time of core helium exhaustion becomes larger, and the CO core at core He exhaustion smaller, than they would be in the high-metallicity binary system. Generally speaking, the stellar compactness at the pre-supernova stage increases with the carbon-oxygen core mass. Therefore, it is found that the primary star with low metallicity explodes more easily. As a consequence, we expect these models to produce smaller remnant masses and, eventually, to give rise to faint or failed supernovae. \subsubsection{The orbital period effect} Panel (c) in Fig. \ref{fig:general 8} shows the carbon profiles for the primary star with different orbital periods in the binary system at the time of core carbon exhaustion. For a given initial mass, the final CO core mass may originate from different initial orbital periods.
As can be seen, a smaller carbon-oxygen core is induced by RLOF in the initially tighter system B6. The local density above the iron core tends to be smaller in this system while the central temperature becomes lower. We also find that a higher central carbon abundance can be reached in this system. The hydrogen-burning shell can be extinguished by RLOF. This leads to a smaller helium core and a higher ratio of carbon to oxygen at the end of core carbon burning, which has an important impact on the strength of subsequent carbon burning and on the final size of the iron core \citep{Brown2001}. The bolometric luminosity of the SN progenitor is mainly determined by helium shell burning, which is in turn largely determined by the mass of the CO core. However, the strength of the secondary carbon convective shell progressively weakens because the fraction of carbon left after core helium exhaustion scales inversely with the CO core mass. The final degree of core compactness of a star can be increased because it heavily depends on the formation and development of the various carbon convective episodes \citep{Chieffi2000,Limongi2009}. \subsubsection{The effect of rotation} Panel (d) in Fig. \ref{fig:general 8} illustrates the carbon profile of the single star and the primary star in the binary system for different rotational velocities at the time of core carbon exhaustion. Rapid rotation can increase the masses of the helium and carbon-oxygen cores in both single stars and binary systems. The greater the initial rotational velocity, the greater the helium and carbon-oxygen core masses, and the corresponding central density tends to be higher. Therefore, rotational mixing can significantly reduce the mass fraction of $\rm ^{12}C$ at core helium exhaustion. Rotation can also continuously mix helium into the region outside the helium-burning core and give rise to more $\rm ^{12}C$.
At a given initial rotational velocity, the mass of the carbon-oxygen core in single stars is larger than in a binary system. For example, the carbon-oxygen core mass is 3.16 $M_{\odot}$ for model S6 while it is 2.62 $M_{\odot}$ for model B11. The main reason is that RLOF can extinguish the hydrogen-burning shell, which would otherwise contribute an amount of helium to the helium core. For a given initial mass, rotating models behave like more massive stars and therefore they end their lives with more compact structures. \setlength{\tabcolsep}{0.5mm}{ % \setlength{\LTcapwidth}{17.0in} % \begin{center} \scriptsize{ \begin{longtable}{lcccccccccccccccccccccc} \caption{\label{table3}Major evolutionary parameters for nine models including single stars and the primary star in binaries.}\\ \hline \hline Sequence &Age & $M_1$& $\rm \log(\frac{R}{R_{\odot}})$ &$\log T_{\rm eff}$& $\log (\frac{L}{L_{\odot}})$&$\log T_{\rm c}$&$\log \rho_{\rm c}$& $\rm \frac{N}{N_{\rm ini}}$& $V_{\rm eq}$& $X_{\rm H}$& $Y_{\rm He}$ & $\log(^{12}C)$& $\log(^{14}N)$& $\log(^{16}O)$& $\rm \frac{^{14}N}{^{12}C}$\\\hline &Myr&$M_{\odot}$& & K& &K&$\rm g/cm^{3}$&&km/s \\ \hline \endfirsthead \hline Sequence&Age & $M_1$ & $\rm \log(\frac{R}{R_{\odot}})$ &$\log T_{\rm eff}$& $\log (\frac{L}{L_{\odot}})$&$\log T_{\rm c}$& $\log \rho_{\rm c}$& $\rm \frac{N}{N_{\rm ini}}$ & $V_{\rm eq}$& $X_{\rm H}$& $Y_{\rm He}$ & $\log(^{12}C)$& $\log(^{14}N)$& $\log(^{16}O)$& $\rm \frac{^{14}N}{^{12}C}$\\ \hline \endhead \hline \endfoot ZAMS&&&&&&&&&\\ S1&0.103&15.000&0.694&4.487&4.290&7.534&0.787&0.995&0.000&0.700&0.280&-2.463&-2.996&-2.029&0.293\\ S4&0.144&17.000&0.726&4.510&4.447&7.543&0.734&0.993&0.000&0.700&0.280&-2.463&-2.996&-2.029&0.293\\ S6&0.169&17.000&0.768&4.483&4.422&7.541&0.741&1.000&400.00&0.700&0.280&-2.463&-2.996&-2.029&0.293\\ B1&0.104&16.999&0.725&4.511&4.446&7.543&0.737&1.000&0.000&0.700&0.280&-2.463&-2.996&-2.029&0.293\\ 
B3&0.174&16.998&0.726&4.510&4.448&7.543&0.734&1.000&0.000&0.700&0.280&-2.463&-2.996&-2.029&0.293\\ B4&0.043&17.000&0.698&4.512&4.397&7.560&0.811&1.000&0.000&0.736&0.256&-2.861&-3.394&-2.427&0.293\\ B6&0.107&16.999&0.725&4.511&4.446&7.543&0.737&1.000&0.000&0.700&0.280&-2.463&-2.996&-2.029&0.293\\ B9&0.104&16.999&0.725&4.511&4.446&7.543&0.737&1.000&0.000&0.700&0.280&-2.463&-2.996&-2.029&0.293\\ B11&0.199&16.997&0.769&4.483&4.423&7.540&0.740&1.000&400.00&0.700&0.280&-2.463&-2.996&-2.029&0.293\\ \hline ECHB&&&&&&&&&\\ S1&11.642&14.619&1.075&4.391&4.670&7.732&1.301&1.000&0.000&0.700&0.280&-2.463&-2.996&-2.029&0.293\\ S4&9.900&16.506&1.124&4.405&4.820&7.743&1.257&1.000&0.000&0.700&0.280&-2.463&-2.996&-2.029&0.293\\ S6&12.841&15.993&1.105&4.445&4.943&7.751&1.224&8.488&131.725&0.596&0.384&-3.224&-2.068&-2.344&14.328\\ B1&9.923&16.455&1.119&4.406&4.816&7.744&1.265&1.000&0.000&0.700&0.280&-2.463&-2.996&-2.029&0.293\\ B3&11.136&16.069&1.315&4.332&4.913&7.759&1.271&1.000&0.000&0.700&0.280&-2.463&-2.996&-2.029&0.293\\ B4&11.133&16.797&1.075&4.424&4.800&7.864&1.776&1.000&0.000&0.736&0.256&-2.861&-3.394&-2.427&0.293\\ B6&10.629&7.479&1.148&4.307&4.478&7.746&1.457&5.712&0.000&0.691&0.288&-4.487&-2.240&-2.072&176.912\\ B9&9.923&16.455&1.119&4.406&4.816&7.744&1.265&1.000&0.000&0.700&0.280&-2.463&-2.996&-2.029&0.293\\ B11&11.808&15.774&1.158&4.402&4.877&7.746&1.238&6.529&72.147&0.654&0.326&-3.037&-2.182&-2.197&7.167\\ \hline BCHEB&&&&&&&&&\\ S1&11.642&14.619&1.075&4.391&4.670&7.732&1.301&1.000&0.000&0.700&0.280&-2.463&-2.996&-2.029&0.293\\ S4&9.905&16.504&1.102&4.422&4.845&7.816&1.525&1.000&0.000&0.700&0.280&-2.463&-2.996&-2.029&0.293\\ S6&12.846&15.993&1.075&4.467&4.971&7.839&1.534&8.488&144.644&0.596&0.384&-3.224&-2.068&-2.344&14.331\\ B1&9.941&16.449&1.385&4.286&4.866&8.038&2.548&1.000&0.000&0.700&0.280&-2.463&-2.996&-2.029&0.293\\ B3&11.148&16.061&1.586&4.202&4.934&8.054&2.384&1.000&0.000&0.700&0.280&-2.463&-2.996&-2.029&0.293\\ 
B4&11.145&16.795&1.329&4.302&4.820&8.053&2.633&1.000&0.000&0.736&0.256&-2.861&-3.394&-2.427&0.293\\ B6&10.659&5.554&1.392&4.240&4.696&8.042&2.902&12.422&0.000&0.300&0.681&-4.037&-1.902&-3.176&136.285\\ B9&9.941&16.449&1.385&4.285&4.866&8.038&2.549&1.000&0.000&0.700&0.280&-2.463&-2.996&-2.029&0.293\\ B11&11.827&15.763&1.663&4.159&4.917&8.063&2.536&6.530&31.258&0.654&0.326&-3.037&-2.182&-2.197&7.171\\ \hline ECHEB&&&&&&&&&\\ S1&12.788&12.926&2.881&3.541&4.878&8.905&5.430&3.943&0.000&0.645&0.334&-2.699&-2.401&-2.103&1.987\\ S4&10.988&13.447&2.880&3.548&4.907&8.478&3.587&4.334&0.000&0.632&0.348&-2.722&-2.360&-2.120&2.305\\ S6&13.761&9.950&2.956&3.582&5.192&8.911&5.297&10.065&0.077&0.467&0.513&-3.544&-1.994&-2.505&35.495\\ B1&10.924&5.177&2.687&3.665&4.987&8.906&5.307&9.619&0.000&0.478&0.502&-3.984&-2.013&-2.411&93.370\\ B3&11.918&5.442&0.194&4.943&5.113&8.922&5.232&12.493&0.000&0.000&0.981&-3.567&-1.900&-3.461&46.424\\ B4&12.153&5.011&2.712&3.639&4.933&8.902&5.343&9.985&0.000&0.526&0.466&-4.351&-2.395&-2.860&90.439\\ B6&12.000&3.347&0.626&4.627&4.715&8.885&5.486&12.546&0.000&0.000&0.981&-3.626&-1.898&-3.480&53.502\\ B9&10.909&6.848&2.955&3.533&4.993&8.905&5.318&5.874&0.000&0.584&0.396&-2.851&-2.228&-2.190&4.207\\ B11&12.668&5.202&0.234&4.913&5.074&8.918&5.257&12.353&95.589&0.000&0.975&-2.289&-1.905&-3.257&2.424\\ \hline BCCB&&&&&&&&&\\ S1&12.788&12.926&2.881&3.541&4.878&8.905&5.430&3.943&0.000&0.645&0.334&-2.699&-2.401&-2.103&1.987\\ S4&11.013&13.327&2.982&3.532&5.046&8.807&5.148&4.978&0.000&0.615&0.365&-2.780&-2.299&-2.146&3.023\\ S6&13.761&9.950&2.956&3.582&5.192&8.911&5.297&10.065&0.077&0.467&0.513&-3.544&-1.994&-2.505&35.495\\ B1&10.924&5.177&2.687&3.665&4.987&8.906&5.307&9.619&0.000&0.478&0.502&-3.984&-2.013&-2.411&93.370\\ B3&11.918&5.442&0.194&4.943&5.113&8.922&5.232&12.493&0.000&0.000&0.981&-3.567&-1.900&-3.461&46.424\\ B4&12.153&5.011&2.712&3.639&4.933&8.902&5.343&9.985&0.000&0.526&0.466&-4.351&-2.395&-2.860&90.439\\ 
B6&12.000&3.347&0.626&4.627&4.715&8.885&5.486&12.546&0.000&0.000&0.981&-3.626&-1.898&-3.480&53.502\\ B9&10.909&6.848&2.955&3.533&4.993&8.905&5.318&5.874&0.000&0.584&0.396&-2.851&-2.228&-2.190&4.207\\ B11&12.668&5.202&0.234&4.913&5.074&8.918&5.257&12.353&95.589&0.000&0.975&-2.289&-1.905&-3.257&2.424\\ \hline ECCB&&&&&&&&&\\ S1&12.789&12.922&2.885&3.540&4.884&8.933&5.941&3.977&0.000&0.644&0.336&-2.701&-2.397&-2.104&2.015\\ S4&11.015&13.316&2.976&3.533&5.038&8.944&5.974&4.991&0.000&0.614&0.366&-2.781&-2.298&-2.146&3.039\\ S6&13.761&9.947&2.953&3.581&5.182&8.949&5.851&10.083&0.082&0.465&0.516&-3.546&-1.993&-2.508&35.715\\ B1&10.925&5.169&2.688&3.667&4.997&8.938&5.850&9.630&0.000&0.477&0.503&-3.983&-2.013&-2.413&93.333\\ B3&11.919&5.441&0.224&4.930&5.119&8.917&5.749&12.493&0.000&0.000&0.981&-3.567&-1.900&-3.461&46.424\\ B4&12.154&5.002&2.713&3.641&4.944&8.938&5.866&10.013&0.000&0.525&0.467&-4.350&-2.394&-2.864&90.444\\ B6&12.003&3.341&0.720&4.581&4.717&8.928&6.012&12.546&0.000&0.000&0.981&-3.625&-1.898&-3.481&53.334\\ B9&10.910&6.833&2.956&3.535&5.006&8.937&5.846&5.899&0.000&0.583&0.397&-2.854&-2.226&-2.191&4.250\\ B11&12.669&5.200&0.268&4.898&5.081&8.923&5.760&12.348&83.027&0.000&0.975&-2.287&-1.905&-3.256&2.408\\ \hline BCNEB&&&&&&&&&\\ S1&12.790&12.919&2.875&3.542&4.870&9.178&7.165&3.984&0.000&0.644&0.336&-2.701&-2.396&-2.105&2.020\\ S4&11.015&13.316&2.977&3.533&5.038&9.181&6.849&4.991&0.000&0.614&0.366&-2.781&-2.298&-2.146&3.039\\ S6&13.761&9.946&2.951&3.583&5.187&9.178&6.852&10.093&0.083&0.463&0.517&-3.547&-1.992&-2.509&35.827\\ B1&10.925&5.165&2.682&3.664&4.974&9.179&7.071&9.636&0.000&0.477&0.503&-3.982&-2.013&-2.413&93.301\\ B3&11.919&5.440&0.245&4.917&5.113&9.180&6.679&12.493&0.000&0.000&0.981&-3.567&-1.900&-3.461&46.424\\ B4&12.155&4.998&2.711&3.639&4.931&9.182&7.128&10.032&0.000&0.525&0.467&-4.349&-2.393&-2.867&90.437\\ B6&..&..&..&..&..&..&..&..&..&..&..&..&..&..&..\\ 
B9&10.910&6.825&2.954&3.534&4.996&9.180&7.048&5.915&0.000&0.583&0.397&-2.856&-2.225&-2.192&4.277\\ B11&12.669&5.199&0.303&4.878&5.071&9.125&6.804&12.346&71.602&0.000&0.975&-2.285&-1.905&-3.256&2.401\\ \hline ECNEB&&&&&&&&&\\ S1&12.790&12.919&2.876&3.541&4.870&9.284&6.968&3.984&0.000&0.644&0.336&-2.701&-2.396&-2.105&2.020\\ S4&11.015&13.316&2.978&3.532&5.038&9.302&6.772&4.991&0.000&0.614&0.366&-2.781&-2.298&-2.146&3.039\\ S6&13.761&9.946&2.953&3.582&5.189&9.309&6.682&10.093&0.082&0.463&0.517&-3.547&-1.992&-2.509&35.827\\ B1&10.925&5.165&2.684&3.664&4.979&9.275&7.048&9.636&0.000&0.477&0.503&-3.982&-2.013&-2.413&93.301\\ B3&11.919&5.440&0.282&4.895&5.098&9.265&6.968&12.493&0.000&0.000&0.981&-3.567&-1.900&-3.461&46.424\\ B4&12.155&4.998&2.713&3.639&4.936&9.262&6.924&10.032&0.000&0.525&0.467&-4.349&-2.393&-2.867&90.437\\ B6&..&..&..&..&..&..&..&..&..&..&..&..&..&..&..\\ B9&10.910&6.825&2.954&3.534&4.996&9.180&7.048&5.915&0.000&0.583&0.397&-2.856&-2.225&-2.192&4.277\\ B11&12.669&5.199&0.326&4.866&5.070&9.310&7.705&12.346&71.202&0.000&0.975&-2.285&-1.905&-3.256&2.401\\ \hline BROLF1&&&&&&&&&\\ B1&9.955&16.442&2.368&3.789&4.843&8.195&2.928&1.000&0.000&0.700&0.280&-2.463&-2.996&-2.029&0.293\\ B3&11.152&16.058&2.371&3.794&4.869&8.168&2.761&1.000&0.000&0.700&0.280&-2.463&-2.996&-2.029&0.293\\ B4&11.164&16.789&2.363&3.775&4.778&8.218&3.024&1.000&0.000&0.736&0.256&-2.861&-3.394&-2.427&0.293\\ B6&8.534&16.835&1.037&4.424&4.724&7.585&0.768&1.000&0.000&0.700&0.280&-2.463&-2.996&-2.029&0.293\\ B9&9.966&16.423&2.840&3.531&4.757&8.211&2.961&2.176&0.000&0.685&0.294&-2.563&-2.659&-2.048&0.801\\ B11&11.832&15.760&2.378&3.783&4.843&8.164&2.857&6.530&5.519&0.654&0.326&-3.037&-2.182&-2.197&7.174\\ \hline EROLF1&&&&&&&&&\\ B1&9.964&5.594&2.671&3.618&4.768&8.206&2.945&9.575&0.000&0.479&0.501&-3.915&-2.015&-2.408&79.428\\ B3&11.159&6.820&2.574&3.751&5.104&8.204&2.807&9.441&0.000&0.514&0.466&-3.766&-2.021&-2.398&55.484\\ 
B4&11.174&5.372&2.688&3.598&4.720&8.224&3.025&9.866&0.000&0.528&0.464&-4.294&-2.400&-2.846&78.234\\ B6&10.493&7.500&1.215&4.260&4.425&7.636&1.118&5.712&0.000&0.691&0.288&-4.487&-2.240&-2.072&176.911\\ B9&9.997&10.171&2.887&3.510&4.766&8.230&2.999&4.372&0.000&0.626&0.354&-2.714&-2.356&-2.126&2.280\\ B11&11.846&6.866&2.614&3.718&5.054&8.208&2.893&11.019&0.385&0.375&0.605&-3.609&-1.954&-2.683&45.107\\ \hline BROLF2&&&&&&&&&\\ B1&10.916&5.241&2.665&3.664&4.940&8.706&4.682&9.589&0.000&0.479&0.502&-3.960&-2.015&-2.408&88.108\\ B3&..&..&..&..&..&..&..&..&..&..&..&..&..&..&..\\ B4&12.134&5.194&2.684&3.625&4.819&8.621&4.397&9.868&0.000&0.528&0.464&-4.295&-2.400&-2.846&78.485\\ B6&10.635&7.478&1.215&4.283&4.515&7.811&1.791&5.712&0.000&0.691&0.288&-4.487&-2.240&-2.072&176.912\\ B9&10.519&8.940&2.888&3.509&4.765&8.286&3.077&4.372&0.000&0.626&0.354&-2.714&-2.356&-2.126&2.280\\ B11&..&..&..&..&..&..&..&..&..&..&..&..&..&..&..\\ \hline EROLF2&&&&&&&&&\\ B1&..&..&..&..&..&..&..&..&..&..&..&..&..&..&..\\ B3&..&..&..&..&..&..&..&..&..&..&..&..&..&..&..\\ B4&..&..&..&..&..&..&..&..&..&..&..&..&..&..&..\\ B6&10.680&3.976&1.553&4.167&4.727&8.179&3.126&12.543&0.000&0.169&0.811&-3.927&-1.898&-3.309&106.831\\ B9&..&..&..&..&..&..&..&..&..&..&..&..&..&..&..\\ B11&..&..&..&..&..&..&..&..&..&..&..&..&..&..&..\\ \hline EOC&&&&&&&&&\\ S1&12.790&12.919&2.876&3.541&4.869&9.301&7.569&3.984&0.000&0.644&0.336&-2.701&-2.396&-2.105&2.020\\ S4&11.015&13.316&2.977&3.532&5.039&9.301&7.473&4.991&0.000&0.614&0.366&-2.781&-2.298&-2.146&3.039\\ S6&13.761&9.946&2.956&3.580&5.186&9.735&8.990&10.093&0.082&0.463&0.517&-3.547&-1.992&-2.509&35.827\\ B1&10.925&5.165&2.685&3.661&4.968&9.933&9.652&9.636&0.000&0.477&0.503&-3.982&-2.013&-2.413&93.301\\ B3&11.919&5.440&0.320&4.869&5.070&9.956&9.908&12.493&0.000&0.000&0.981&-3.567&-1.900&-3.461&46.424\\ B4&12.155&4.998&2.714&3.638&4.935&9.953&9.765&10.032&0.000&0.525&0.467&-4.349&-2.393&-2.867&90.437\\ 
B6&12.005&3.339&0.818&4.519&4.665&8.979&7.211&12.545&0.000&0.000&0.981&-3.624&-1.898&-3.481&53.243\\ B9&10.910&6.825&2.955&3.532&4.992&9.940&9.626&5.915&0.000&0.583&0.397&-2.856&-2.225&-2.192&4.277\\ B11&12.669&5.199&0.420&4.803&5.005&9.496&8.485&12.346&57.367&0.000&0.975&-2.285&-1.905&-3.256&2.401\\ & & & & & & & & & &\\ \hline\hline \end{longtable} \begin{tablenotes} \footnotesize \item[1)] Abbreviations: ZAMS-zero age main sequence; ECHB-the end of core hydrogen burning; ECHEB-the end of core helium burning; ECCB-the end of core carbon burning; BCNEB-the beginning of core neon burning; ECNEB-the end of core neon burning; BROLF1-the beginning of the first episode of RLOF; EROLF1-the end of the first episode of RLOF; BROLF2-the beginning of the second episode of RLOF; EROLF2-the end of the second episode of RLOF; BROLF3-the beginning of the third episode of RLOF; EROLF3-the end of the third episode of RLOF; EOC-the end of the calculation. \end{tablenotes} } \end{center} \section{Discussion} {There is an unsolved, long-standing problem that the observed rates of type $\rm\uppercase\expandafter{\romannumeral2}$b SNe seem to be much higher than those predicted by binary evolution, in particular for type $\rm\uppercase\expandafter{\romannumeral2}$b SNe with red/yellow supergiant progenitors. \cite{Claeys2011} showed that binary evolution predicts roughly $0.6 \%$ of all core collapse SNe to be type $\rm \uppercase\expandafter{\romannumeral2}$b SNe, about a factor of 5 lower than the observed rate. In fact, they restricted the parameter space at solar metallicity to an initial primary mass of 15 $\rm M_{\odot}$, initial secondary masses of 10-15 $\rm M_{\odot}$, and initial orbital periods of 800-2100 days. Because of their limited parameter space coverage, they were not able to obtain robust relative rates. Another limitation is that they restricted their analysis to progenitors that explode with 0.1-0.5 $\rm M_{\odot}$ of residual hydrogen envelope. 
The low mass limit of 0.1 $\rm M_{\odot}$ excludes the group of more compact SN $\rm \uppercase\expandafter{\romannumeral2}$b progenitors obtained from theoretical models, and recent observations indicate that this limit can be significantly reduced.} {In fact, this fraction increases if the companion star can accrete only a small fraction of the transferred mass via RLOF, and if the mass outflow carries relatively low angular momentum. If more material can escape from the binary system and take along its orbital angular momentum, the orbit will shrink faster, enlarging the parameter space for contact evolution or unstable mass transfer. This evolutionary pathway might increase the probability of the CEE channel producing type $\rm \uppercase\expandafter{\romannumeral2}$b SNe, and it was not included in the calculations of \cite{Claeys2011}. A smaller accretion efficiency can widen the orbit and therefore the Roche lobe. This results in smaller mass-transfer rates via RLOF and therefore larger hydrogen envelope masses of the donor at the time of explosion. A smaller accretion efficiency also tends to keep the mass transfer stable, leading to more type $\rm \uppercase\expandafter{\romannumeral2}$b SNe than type $\rm \uppercase\expandafter{\romannumeral1}$b SNe. Therefore, lower mass-transfer efficiencies are also favorable for the production of SNe $\rm \uppercase\expandafter{\romannumeral2}$b with red/yellow supergiant progenitors.} {Moreover, if the post-RLOF wind is weaker than usually assumed, the rates predicted by binary evolution can increase greatly. \cite{Gilkis2019} noted that adopting the wind mass-loss rate derived by \cite{Vink2017}, instead of the rate of \cite{Nugis2000}, greatly shifts binary progenitor models for core collapse SNe over a large initial parameter space from type $\rm \uppercase\expandafter{\romannumeral1}$b to type $\rm \uppercase\expandafter{\romannumeral2}$b. 
The mass-loss rate is expected to be smaller at lower metallicities, and \cite{Yoon2017} thus predicted that there would be even more type $\rm \uppercase\expandafter{\romannumeral2}$b SNe relative to type $\rm \uppercase\expandafter{\romannumeral1}$b SNe.} {In this paper, we intend to expand the parameter space for type $\rm \uppercase\expandafter{\romannumeral2}$b SN progenitors using detailed binary evolutionary calculations. With a smaller overshooting parameter, the stars have smaller core masses and hence are less luminous, with smaller radii. However, the final remnant masses tend to be similar to their counterparts with a larger overshooting parameter. As a result, these stars have much smaller final helium core masses and lose less of their envelopes accordingly, retaining larger amounts of hydrogen at the point of explosion. This evolution may favor the formation of SNe $\rm \uppercase\expandafter{\romannumeral2}$b with red/yellow supergiant progenitors.} {Rotation is thought to play a critical role in massive star evolution, and rapidly rotating stars are even predicted to evolve chemically homogeneously owing to the high efficiency of rotational mixing \citep{Maeder1987, de2009, Song2016}. Such a star evolves blueward without experiencing the RSG phase. This evolutionary pathway greatly reduces the mass of the hydrogen envelope and significantly increases the production rate of type $\rm \uppercase\expandafter{\romannumeral2}$b or $\rm \uppercase\expandafter{\romannumeral1}$b supernovae. Moreover, rotationally enhanced mass-loss rates can also reduce the minimum mass required for a single star to remove its hydrogen envelope \citep{Meynet2003}. Mass loss by line-driven winds is closely related to the chemical abundances and the luminosity. 
Therefore, rotational mixing can enhance the surface helium fraction and the luminosity, and this also increases the rate of SNe $\rm \uppercase\expandafter{\romannumeral2}$b with red/yellow progenitors from single stars.} {\cite{Sravan2018} have found that it is very difficult to account for the rate of type $\rm \uppercase\expandafter{\romannumeral2}$b SNe at solar metallicity. They adopt a type $\rm \uppercase\expandafter{\romannumeral2}$b SN fraction of about 10-12 $\%$ in high metallicity stellar populations and about 20 $\%$ in low metallicity populations. Therefore, the parameter space for binary SNe $\rm \uppercase\expandafter{\romannumeral2}$b rapidly increases with decreasing metallicity. This is because evolutionary channels to SNe $\rm \uppercase\expandafter{\romannumeral2}$b via Case early-B mass transfer are only viable at low metallicity. In brief, a new statistical investigation is needed in future work to compare binary models with the overall rates of different types of core collapse SNe.} \section{Conclusion and summary} In this paper, we investigate the evolution of stars that lost most of their hydrogen-rich envelope through interaction with their companions. We consider binary systems with various overshooting parameters, metallicities, initial orbital periods, and initial rotational velocities, physical factors that favor the formation of type $\rm \uppercase\expandafter{\romannumeral2}$b SNe. We study how the internal structure and the nucleosynthesis are connected with the evolution of the star. The main conclusions can be summarized as follows: \begin{enumerate} \item SNe $\rm \uppercase\expandafter{\romannumeral2}$b show hydrogen lines in early spectra, while later spectra show helium lines but no hydrogen lines. They are believed to originate from core collapse supernovae with very little hydrogen left in their outer envelopes. 
There are two stripping mechanisms for the formation of type $\rm \uppercase\expandafter{\romannumeral2}$b supernovae: strong line-driven winds from isolated massive stars and mass transfer via RLOF in binary systems. Stellar winds in less massive stars ($\rm M < 20 M_{\odot}$) are too weak to give rise to type $\rm \uppercase\expandafter{\romannumeral2}$b supernovae because a thick hydrogen envelope remains. Mass transfer via RLOF provides a promising channel for mass loss that is not solely regulated by stellar winds. Interacting binaries can therefore explain the existence of relatively low-mass progenitors of stripped-envelope type $\rm \uppercase\expandafter{\romannumeral2}$b supernovae. Some initial parameters, such as rotational velocity, metallicity, overshooting and orbital period, have important impacts on the RLOF and thus on the formation of type $\rm \uppercase\expandafter{\romannumeral2}$b supernovae. A larger hydrogen envelope mass implies a more extended radius and a lower effective temperature, and vice versa. \item The faster the initial rotation rate, the greater the mass-loss rate of the stellar winds. Rapid rotation can decrease the lower limit of the mass that can turn into a type $\rm \uppercase\expandafter{\romannumeral2}$b supernova due to rotationally enhanced helium cores and stellar winds. Initially, stellar luminosities are lower in rotating models because the effective gravitational acceleration is reduced by the centrifugal force. Later, the stellar luminosity is higher because the helium core is enlarged under the influence of Eddington-Sweet circulation. The models with rotation have higher core temperatures and lower central densities due to more massive convective helium cores. Relatively low-mass helium stars usually experience a rapid expansion of the envelope during the core carbon burning phase. 
Moreover, the opacity in the radiative envelope can be decreased by rotational mixing, and the corresponding efficiency of the convective dredge-up can be reduced. Rotational mixing can extend the main sequence lifetime. Rotational mixing is responsible for the transport of nuclear matter from the core to the surface. Surface chemical species of CNO processing can be changed in rotating models. Surface $\rm ^{4}He$ and $\rm ^{14}N$ are enhanced by rotational mixing while surface $\rm ^{12}C$ and $\rm ^{16}O$ can be decreased. Rotational mixing can affect the hydrogen abundance just outside the helium core, which in turn decreases energy generation from hydrogen shell burning. A rotating star can produce a larger carbon-oxygen core and a higher compactness than its non-rotating counterpart. \item Larger convective overshoot is very important for setting the size of the convective cores, especially for helium or carbon cores in the advanced evolution of massive stars. It can also increase the stellar luminosity, the stellar winds and the main sequence lifetime. Moreover, overshooting can also decrease the minimum mass for the formation of type $\rm \uppercase\expandafter{\romannumeral2}$b supernovae. Larger convective overshooting develops a larger carbon-oxygen core and higher compactness than models with smaller overshooting. Larger overshooting can restrict the development of the dredge-up, so a smaller convective dredge-up region appears after core hydrogen exhaustion. In binary systems, the hydrogen burning shell can be extinguished earlier in models with larger overshoot. Larger nitrogen enrichments can be reproduced by both RLOF and the subsequent strong WR winds. \item Stars with different initial metallicities have different pre-supernova structures. Most importantly, metallicity has an important impact on mass loss via stellar winds. If the amount of mass lost is very low, the helium core and the compactness of the presupernova star will be larger. 
Moreover, low metallicity implies a smaller initial helium mass fraction, so the final helium core mass can be reduced. Low metallicity decreases the energy generation of hydrogen shell burning via the CNO cycle, and this lowers the boundary of the helium core. The opacity decreases with decreasing metallicity. The combined effects of opacity, energy generation, and mass loss determine whether the progenitor ends up as a red supergiant or a blue supergiant. A primary star with lower metallicity is prone to generate more compact blue progenitors and can retain less hydrogen mass during RLOF. The more compact radiative structure of the blue supergiant envelope places greater boundary pressure on the helium core, which has an important effect on the subsequent evolution. A donor with higher metallicity tends to give rise to an SN $\rm \uppercase\expandafter{\romannumeral2}$b progenitor with higher effective temperature. \item Compared with single stars, the primary stars in binary systems develop less massive helium and carbon-oxygen cores. This is expected from the mass lost via mass transfer. As hydrogen is converted into helium in the core, the interior of the star becomes progressively less sensitive to variations in the total mass and increasingly sensitive to the mass of the helium core. Close binary evolution should lead to further stripping of the hydrogen envelope, and the formation of type $\rm \uppercase\expandafter{\romannumeral2}$b supernovae is extremely sensitive to the initial orbital period. The effective temperature gradually decreases with increasing initial orbital period while the radius grows. This can be explained by the fact that less hydrogen is removed in a binary with a wider orbit. Systems in the range $\rm \sim10$ days $\rm < P_{orb}<$ 700 days may turn into type $\rm \uppercase\expandafter{\romannumeral2}$b SNe. 
The model with $\rm P_{orb}=3.0$ days loses all of its hydrogen-rich envelope and becomes an SN Ib, whereas the system with initial $\rm P_{orb}>$ 700 days finally explodes as an RSG SN $\rm \uppercase\expandafter{\romannumeral2}$P. Systems with 300 days $\rm <P_{orb}<$ 700 days can give rise to RSG type SN $\rm \uppercase\expandafter{\romannumeral2}$b progenitors. The initial period of 300 days roughly separates the RSG type SN $\rm \uppercase\expandafter{\romannumeral2}$b progenitors from the YSG type progenitors. The binary system B7 with an initial $\rm P_{orb}=30$ days can produce a BSG type SN $\rm \uppercase\expandafter{\romannumeral2}$b progenitor. The primary star in a binary system may end its life as a compact BSG, depending on a larger overshooting parameter and the initial orbital period. \item The mass fraction of $\rm ^{12}C$ left in the core when core He is depleted can significantly affect the structure of the progenitor of supernovae. Generally, the mass fraction of $\rm ^{12}C$ left in the core after core He burning decreases with increasing CO core mass. This can contribute to a faster outward shift of the $\rm ^{12}C$ burning shell and make the core of the star more compact. The fraction of $\rm ^{12}C$ left depends heavily on the mass loss via RLOF, metallicity, overshooting and initial rotational velocity. The remaining mass fraction of $\rm ^{12}C$ is higher in an initially tighter binary system with smaller overshooting, lower initial rotational velocity and lower metallicity. \end{enumerate} \section*{acknowledgments} We are very grateful to an anonymous referee for his/her valuable suggestions and very insightful remarks, which have improved this paper greatly. This work was sponsored by the National Natural Science Foundation of China (Grant Nos. 11863003, 12173010), Swiss National Science Foundation (project number 200020-172505), Science and technology plan projects of Guizhou province (Grant No. [2018]5781). Dr. Y. 
Qin gratefully acknowledges the Science Foundation of University in Anhui Province (Grant No. KJ2021A0106). \bibliographystyle{aasjournal}
Title: A New Discrete Dynamical Friction Estimator Based on $N$-body Simulations
Abstract: A longstanding problem in galactic simulations is to resolve the dynamical friction (DF) force acting on massive black hole particles when their masses are comparable to or less than the background simulation particles. Many sub-grid models based on the traditional Chandrasekhar DF formula have been proposed, yet they suffer from fundamental ambiguities in the definition of some terms in Chandrasekhar's formula when applied to real galaxies, as well as difficulty in evaluating continuous quantities from (spatially) discrete simulation data. In this work we present a new sub-grid dynamical friction estimator based on the discrete nature of $N$-body simulations, which also avoids the ambiguously-defined quantities in Chandrasekhar's formula. We test our estimator in the GIZMO code and find that it agrees well with high-resolution simulations where DF is fully captured, with negligible additional computational cost. We also compare it with a Chandrasekhar estimator and discuss its applications in real galactic simulations.
https://export.arxiv.org/pdf/2208.12275
\label{firstpage} \pagerange{\pageref{firstpage}--\pageref{lastpage}} \begin{keywords} Galaxy: kinematics and dynamics -- methods: numerical -- black hole physics -- quasars: supermassive black holes \end{keywords} \section{Introduction} \label{sec:intro} An essential element in the study of galactic dynamics is the process of dynamical friction (DF, \citealt{chandrasekhar1943}), a statistical effect of numerous two-body scatterings which causes a massive particle to lose its momentum when it travels through a medium of much lighter background particles. DF is believed to be an important effect to drive massive black holes (BHs, from intermediate mass BHs to super-massive BHs) to galactic centers (see, e.g. \citealt{Ostriker1999,Weller2022,Chen2022}), and it plays an essential role in the evolution of globular clusters (see, e.g. \citealt{PortegiesZwart2002,Gurkan2004,Alessandrini2014,Shi2021}). Hence the evaluation of DF is important in studying the evolution of galaxies, globular clusters, and black holes in a wide variety of contexts. In numerical $N$-body simulations with sufficient resolution (such as in the limit in which all bodies such as stars, black holes, or even dark matter particles are represented by individual $N$-body particles), DF will be automatically captured. However, as DF is an accumulated effect of many weak encounters in the regime where the ``target'' mass is much larger than the masses of the ``background'' particles ($M_\mathrm{target\;particle}\gg M_\mathrm{background\;particle}$), it is often not possible to fully resolve this background. This is especially true in large-scale simulations on, e.g.\ galactic scales, where a typical ``$N$-body particle'' can easily have mass much larger than intermediate and super-massive black holes ($\gg 10^{4}\,M_{\odot}$), let alone the masses of individual stars, dark matter particles, or hydrogen ions. 
Specifically, when the $N$-body particle mass becomes comparable to or larger than the ``target'' mass, the explicit results of an $N$-body solver will not return the correct DF forces. For example, in the ``high-resolution'' simulations of high-redshift galaxies in \citet{xiangchengma2018,xiangchengma2017,xiangchengma2019}, the baryonic mass resolution is $\Delta m_i\sim7000\,M_\odot$ and the dark matter mass resolution is 5 times larger, which makes it impossible to resolve dynamical friction effects for BHs or other ``sink'' particles (e.g.\ particles which might represent unresolved massive, dense structures such as globular clusters, or hyper-dense exotic dark matter structures, etc.) less massive than $\sim10^5 M_\odot$. Hence, in these types of simulations, an additional ``sub-grid'' DF force must be added to these ``target'' particles to attempt to recover their real dynamics, to replace the lost information of individual two-body encounters in the smoothed-out gravity potential in simulations. Multiple sub-grid DF models have been proposed in the literature (e.g. \citealt{colpi:2007.binary.in.mgrs,dotti:bh.binary.inspiral,Tremmel2015,Pfister2019}) based on the classical Chandrasekhar dynamical friction formula (\citealt{chandrasekhar1943}, or C43 hereafter): \beq \label{eqn:chandra_original} {\bf a}_\mathrm{df}^\mathrm{C43}=-4\pi G^2Mm\ln{\Lambda}\int d^3{\bf v}_m f({\bf v}_m)\frac{{\bf v}_M-{\bf v}_m}{|{\bf v}_M-{\bf v}_m|^3} \eeq where $M$ and $m$ are the masses of the moving ``target'' particle and the background or field particles, respectively. Here ${\bf v}_M$ and ${\bf v}_m$ are their velocities, and $\Lambda$ is the Coulomb logarithm defined by $\Lambda\equiv b_\mathrm{max}/b_\mathrm{min}$ where $b_\mathrm{max}$ and $b_\mathrm{min}$ are the maximum and minimum impact parameters of scattered particles in weak encounters. 
$f({\bf v}_m)$ is the velocity distribution of field particles, and, with the usual assumption of a Maxwellian velocity distribution with dispersion $\sigma$, the formula reduces to \citep{BinneyTremaine}: \beq \label{eqn:chandra_maxwell} {\bf a}_\mathrm{df}^\mathrm{C43}=-\frac{4\pi G^2M\rho\ln{\Lambda}}{v_M^3}\bigg[\mathrm{erf}\bigg(\frac{v_M}{\sqrt{2}\sigma}\bigg)-\sqrt{\frac{2}{\pi}}\,\frac{v_M}{\sigma}\,e^{-v_M^2/2\sigma^2}\bigg]{\bf v}_M \eeq i.e. the DF acceleration is proportional to the local field particle density $\rho$ and is in the opposite direction of the particle velocity ${\bf v}_M$, effectively acting as a ``friction'' force. Despite its elegance and (often surprising) accuracy in estimating the DF, Chandrasekhar's formula suffers from the following shortcomings when applied as a sub-grid model: \begin{enumerate} \item In deriving the formula, C43 assumes an isotropic and homogeneous medium of field particles. This is generally not true for real galaxies. For example, it has been pointed out that high-redshift galaxies and low-redshift dwarf galaxies could be chaotic and clumpy (e.g., \citealt{Weisz2014,meng2020,Velazquez2021}). The existence of such systems makes the physical assumptions behind the C43 formula questionable. \item The Coulomb logarithm is ambiguously defined, and is often selected ad-hoc in practice, with a case-dependent selection of the minimum and maximum impact parameters (see, e.g., \citealt{Tremmel2015,Pfister2019}), which introduces a large systematic uncertainty in the sub-grid model. \item The formula has an explicit dependence on the local mass density, which must be evaluated from discrete $N$-body data for the collisionless fluid (stars or dark matter, often ``blended'' with gas for which the density is continuously defined, depending on the numerical hydrodynamic method). The choice of how to do so is arbitrary and has no defined ``preferred'' scale. 
Most commonly it is done with a local kernel density estimator at some multiple of the resolution scale (see, e.g. \citealt{Tremmel2015}), but this is known to be quite noisy, and is not consistent with the unique local gas density available from hydrodynamic calculations. \item The velocity integral and $f({\bf v}_m)$ must be estimated with some similar ad-hoc local estimator, which is also undefined, and different choices can lead to different {\em directions} for the dynamical friction acceleration. Usually the choice of a local kernel sampling amplifies numerical noise further here and means that $f({\bf v}_{m})$ must be assumed to be Maxwellian (since it cannot be fit to an arbitrary function given just a few local points). \item There is no self-consistent way to incorporate force softening, which is necessary in $N$-body simulations to avoid spurious divergences in the forces, as an $N$-body particle does not physically represent a point-mass particle. Failing to incorporate softening can produce inconsistent results between the (often softened) gravitational acceleration and the additional dynamical friction acceleration. \item As C43 depends on {\em local} continuous field parameters but represents long-range forces, there is no way to self-consistently implement it in a way that conserves momentum, while in reality dynamical friction should be exactly conservative since it is derived from an infinite superposition of pair-wise $N$-body encounters; \item Evaluating C43 numerically requires operations which are not algorithmically identical to the gravity solver in $N$-body equations, which introduces not only additional inconsistencies, but also substantial computational expense. This also means numerical convergence for C43 applied to $N$-body particles is undefined: there is no formal guarantee of convergence even on idealized, smooth problems. 
\end{enumerate} To tackle these problems, in this work we develop a new sub-grid DF estimator which can be efficiently embedded into discrete $N$-body calculations. The new estimator is based on a discrete version of the DF formula which can be applied to an arbitrary phase-space distribution of field particles, and avoids the fundamental ambiguity in the definitions of some terms in Chandrasekhar's formula. It also naturally embeds force softening and momentum conservation. It can also easily be generalized to assumptions beyond those of C43 for the nature of DF-like forces. We test our estimator in both on-the-fly simulations and in post-processing, and compare our results to those from a Chandrasekhar DF estimator. The paper is organized as follows: in \S~\ref{sec:formula} we derive our discrete DF formula. In \S~\ref{sec:methods} we describe the methods we use to test the estimator. In \S~\ref{sec:results} and \ref{sec:discussion} we present and discuss the results. \section{Derivation of Our DF Formula} \label{sec:formula} Here we present the derivation of our discrete DF formula, and general comments on its application in $N$-body methods. \subsection{Derivation} \label{sec:derivation} In C43, the classical DF formula is derived as follows: assume a test particle with mass $M$ travels through an infinite, homogeneous and isotropic medium (filled with background particles with mass $m\ll M$), and experiences a number of individual two-body encounters. 
During each encounter, the test particle velocity component parallel to the initial relative velocity is altered by (after integrating along the encounter path $ds$ from $s\rightarrow-\infty$ to $s\rightarrow+\infty$) \beq \label{eqn:one_encounter} \begin{split} \Delta {\bf v}_{\|} &= \frac{2\,m\,{\bf V}}{M+m}\,\left[1 + \frac{b^{2}\,V^{4}}{G^{2}\,(M+m)^{2}} \right]^{-1}\\ &= \frac{2\,m\,{\bf V}}{(M+m)\,(1+\alpha^{2})} \end{split} \eeq where ${\bf V} \equiv {\bf v}_{m}-{\bf v}_{M}$ (i.e. the velocity of $m$ in the rest frame of $M$), $b$ is the impact parameter, and $\alpha \equiv b\,V^{2}/G\,(M+m)$ parameterizes the encounter strength. Note that the perpendicular deflection $\Delta {\bf v}_{\bot}$ will be cancelled by symmetry if the medium is homogeneous and isotropic so we neglect it for now, but we will return to this below. To account for the contributions of \textit{all} encounters, C43 then integrates Eq. \ref{eqn:one_encounter}, by noting that the encounter rate in a differential time $d t$ is the sum of encounters within a cylindrical slice, with surface area $dA$ in the plane perpendicular to the relative motion and height $V\,dt$, over all relative velocities and angles \beq \begin{split} \label{eqn:integral} {\bf a}_{\rm df} \equiv \frac{d {\bf v}_{M}}{d t} &= \int \Delta {\bf v}_{\|} \,V\,dA\,\mathcal{N}({\bf x},\,{\bf v})\,d^{3}{\bf v} \\ &= \int \frac{2\,\alpha}{1+\alpha^{2}}\,\frac{G\,m}{b}\,\hat{\bf V}\,\mathcal{N}({\bf x},\,{\bf v})\,dp\,dq\,d^{3}{\bf v} \end{split} \eeq where $\mathcal{N}({\bf x},\,{\bf v})=dN/d^{3}{\bf x}\,d^{3}{\bf v}$ is the phase space distribution function (by number) of the background particles; $\hat{\bf V} \equiv {\bf V}/V$, and $p$ and $q$ are the two spatial coordinates perpendicular to the path length $ds$, i.e. characterizing the surface $dA$ (so $ds\,dp\,dq=d^3{\bf x}$). 
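As a quick numerical illustration of the per-encounter deflection $\Delta {\bf v}_{\|}$ in Eq.~\ref{eqn:one_encounter}, the following short Python sketch evaluates the parallel velocity change and its suppression for weak (large-$b$) encounters. It assumes $G=1$ code units and is purely illustrative; it is not part of the paper's pipeline:

```python
import numpy as np

G = 1.0  # gravitational constant in code units (assumption: G = 1)

def delta_v_parallel(M, m, V, b):
    """Net change of the target's velocity parallel to the initial relative
    velocity V = v_m - v_M after one complete two-body encounter:
    dv = 2 m V / [(M + m) (1 + alpha^2)], with alpha = b V^2 / (G (M + m))."""
    V = np.asarray(V, dtype=float)
    alpha = b * np.dot(V, V) / (G * (M + m))
    return 2.0 * m * V / ((M + m) * (1.0 + alpha**2))
```

Strong (small-$b$) encounters approach the maximum $2mV/(M+m)$, while weak encounters are suppressed as $\alpha^{-2}$, consistent with the integrand of Eq.~\ref{eqn:integral}.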
The integral can be easily carried out for an isotropic and homogeneous distribution with $\mathcal{N}({\bf x},\,{\bf v})=nf_\text{M}(\bf{v})$, where $n$ is the number density (constant) and $f_\text{M}(\bf{v})$ is the Maxwellian velocity distribution, leading to the classical formula. To generalize the above formula to an arbitrary phase space distribution sampled by a discrete set of data points as in our simulations, one might naively attempt to directly insert the usual $N$-body approximation, replacing $\mathcal{N}({\bf x},\,{\bf v}) \rightarrow \sum_{i} (\Delta m_{i}/m)\,\delta({\bf x}-{\bf x}_{i},\ {\bf v}-{\bf v}_{i})$. This treats the distribution function as a sum of Dirac $\delta$-functions, i.e.\ point particles, each with $N$-body particle mass $\Delta m_{i}$, so representing $N=\Delta m_{i}/m$ ``background'' particles of mass $m$. However, the integral in Eq.~\ref{eqn:integral} only integrates over the two-dimensional surface ($d p d q$) as a slice of the full phase space, which makes it impossible to discretize directly. The missing integral parameter reflects the fundamental conceptual difficulty in deriving the DF formula for an \textit{arbitrary} phase space distribution. In deriving Eq.~\ref{eqn:integral}, we actually already performed the integral over the missing degree of freedom when calculating $\Delta {\bf v}_{\|}$, by integrating over path length $ds$ in each encounter from $-\infty$ to $\infty$, containing the full effect of one two-body encounter before we sum them up to get the final result. This is only correct if the background distribution is isotropic and homogeneous, since in principle, the DF process cannot be evaluated in this manner for any given instant of time, without knowing all the history and future of the full dynamics, unless the background profile is static (i.e. isotropic and homogeneous). 
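For reference, the classical closed-form result for the isotropic Maxwellian case can be evaluated directly. A minimal Python sketch follows, in $G=1$ code units; the function and argument names are illustrative choices, not the paper's implementation:

```python
import math
import numpy as np

G = 1.0  # gravitational constant in code units (assumption: G = 1)

def chandrasekhar_df(M, rho, ln_Lambda, v_M, sigma):
    """Classical Maxwellian-closure DF acceleration (cf. Binney & Tremaine):
    a = -(4 pi G^2 M rho lnL / v^3) [erf(X) - (2X/sqrt(pi)) e^{-X^2}] v_M,
    with X = v / (sqrt(2) sigma) and v = |v_M|."""
    v_M = np.asarray(v_M, dtype=float)
    v = float(np.linalg.norm(v_M))
    X = v / (math.sqrt(2.0) * sigma)
    bracket = math.erf(X) - 2.0 * X * math.exp(-X * X) / math.sqrt(math.pi)
    return -4.0 * math.pi * G**2 * M * rho * ln_Lambda * bracket / v**3 * v_M
```

The bracket vanishes as $X^{3}$ for slow motion, so the drag stays finite as $v\rightarrow0$; this closed form is only valid under the isotropy and homogeneity assumptions just discussed.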
Nevertheless, it is still instructive to consider what an inhomogeneous background particle distribution would contribute quantitatively, hence we offer an ad hoc derivation here. The key conceptual requirement to replace Eq.~\ref{eqn:integral} with one that can be discretized for an arbitrary $\mathcal{N}$ is to re-expand the integral that gave rise to $\Delta {\bf v}_{\|}$ (Eq.~\ref{eqn:one_encounter}) to explicitly account for the contributions of particles at different distances $s$ along their two-body encounter trajectory, i.e. taking $\Delta {\bf v}_{\|} \rightarrow \int \langle d\Delta {\bf v}_{\|}/ds \rangle_{\rm deflected}\,ds$. Recall, the entire point of our derivation is to develop a formula which can be applied where the {\em explicit} $N$-body evolution of the mass $M$ was not followed. Since DF fundamentally arises from the ``back-reaction'' of the medium (i.e.\ the deflection of mass $m$ as it feels gravity from $M$ creating a net ``drag''), we need to identify the {\em difference} between the contribution to $d {\bf v}_{M}/dt$ which $m$ {\em would have} at a distance $r$ along its encounter trajectory with $M$ if it had indeed been deflected by $M$, relative to the acceleration $M$ would feel if it saw $m$ on an ``un-deflected'' trajectory. The latter is, of course, just the ``normal'' gravitational acceleration on $M$.\footnote{This contribution will differ depending on the sign of $s$ at a given $r$, i.e.\ depending on whether $m$ is ``approaching'' or ``receding'' from $M$; however in our application to $N$-body simulations, the sign of ${\bf V}$ for distant $m$ will change frequently, so there is no way to uniquely identify ``approaching'' or ``receding'' elements without actually performing the full time integral of every encounter (i.e.\ doing the full ``live'' $N$-body calculation with $M$, exactly what we wish to avoid). 
We therefore simply average between the two, giving $\langle d\Delta {\bf v}_{\|}/ds \rangle_{\rm deflected} \equiv (1/2\,|ds|)\,[\int_{-s-ds}^{-s}\,({\bf a}^{\prime}-{\bf a}^{0})\,dt+\int_{s}^{s+ds}\,({\bf a}^{\prime}-{\bf a}^{0})\,dt]$, where ${\bf a}^{\prime}\equiv {\bf a}_{Mm} [ {\bf x}_{M}^{\rm deflected}(t),\,{\bf x}_{m}^{\rm deflected}(t) ]$ and ${\bf a}^{0} \equiv {\bf a}_{Mm} [{\bf x}_{M}^{m-{\rm undeflected}}(t),\,{\bf x}_{m}^{\rm undeflected}(t)]$ are the two-body accelerations assuming $m$ follows the deflected and un-deflected trajectories, respectively (note $M$ still ``sees'' $m$ in its un-deflected trajectory, but $m$ does not ``see'' $M$ in that case)} The full expressions for this are quite cumbersome and cannot be analytically closed; but they are still, in any case, approximate (as we still ignore many effects such as other influences on the orbit of $m$ during each stage of its 2-body encounter), so we can safely approximate them to the same order of accuracy by noting that asymptotically $\langle d\Delta {\bf v}_{\|}/ds \rangle_{\rm deflected} \rightarrow \Delta {\bf v}_{\|}\,b^{2}/2\,(s^{2}+b^{2})^{3/2}$ at large $r\gg b$ (noting $r^{2}\equiv s^{2}+b^{2}$), and (for weak encounters, the only case where our derivation is meaningful) near pericenter ($r=b\,(1+\epsilon)$ with $\epsilon \ll 1$) $\langle d\Delta {\bf v}_{\|}/ds \rangle_{\rm deflected} \rightarrow \Delta {\bf v}_{\|}\,(1/2\,b)$. 
Together with the identity $1 = (b/2)\,\int_{-\infty}^{+\infty}\,b\,ds/(s^{2}+b^{2})^{3/2}$, we can replace $\Delta {\bf v}_{\|}$ in Eq.~\ref{eqn:integral} with this expression, giving: \begin{align} \nonumber {\bf a}_{\rm df} &= \int \frac{2\,\alpha\,G\,m\,\mathcal{N}({\bf x},\,{\bf v})}{b\,(1+\alpha^{2})}\,\hat{\bf V}\,dp\,dq\,d^{3}{\bf v}\,\frac{b}{2}\int_{s} \frac{b\,ds}{(s^{2}+b^{2})^{3/2}} \\ \nonumber &\approx \int \int_{s} \frac{2\,\alpha\,G\,m\,\mathcal{N}({\bf x},\,{\bf v})}{b\,(1+\alpha^{2})}\,\hat{\bf V}\,dp\,dq\,d^{3}{\bf v}\,\frac{b}{2} \frac{b\,ds}{(s^{2}+b^{2})^{3/2}}\\ \label{eqn:analytic.arbitrarydf} &= \int \frac{\alpha\,b\,G\,m}{(1+\alpha^{2})\,r^{3}}\,\hat{\bf V}\,\mathcal{N}({\bf x},\,{\bf v})\,d^{3}{\bf x}\,d^{3}{\bf v} \end{align} where we used $ds\,dp\,dq \equiv d^{3}{\bf x}$, and in the $\approx$ step, where we bring the $s$-integral inside the phase-space integral, we essentially make a much weaker version of the original \citet{chandrasekhar1943} approximation, assuming that quantities such as $\mathcal{N}$ do not vary strongly over the timescale during which most of the $\Delta {\bf v}_{\|}$ is imparted by each 2-body encounter. Now, we can insert the discrete $N$-body form of $\mathcal{N}$ as a sum of $\delta$ functions to trivially obtain: \beq \label{eqn:discrete_df} \begin{split} {\bf a}_{\rm df} &\rightarrow \sum_{i} \frac{\alpha_{i}\,b_{i}\,G\,\Delta m_{i}}{(1+\alpha_{i}^{2})\,r_{i}^{3}}\,\hat{\bf V}_{i} \\ &= \sum_{i} \left( \frac{\alpha_{i}}{1+\alpha_{i}^{2}} \right)\,\left( \frac{b_{i}}{r_{i}} \right)\,\left( \frac{G\,\Delta m_{i}}{r_{i}^{2}} \right)\,\hat{\bf V}_{i} \end{split} \eeq We have of course made a number of assumptions to derive Eq.~\ref{eqn:discrete_df}, and our final expression is not necessarily unique. However, it has many useful properties. 
(1) In a spatially homogeneous medium (i.e.\ anywhere we can write $\mathcal{N} = n\,f({\bf v})$), it is trivial to verify by inserting this in Eq.~\ref{eqn:analytic.arbitrarydf} that Eq.~\ref{eqn:discrete_df} reproduces {\em exactly} the expressions from \citet{chandrasekhar1943} for any $f({\bf v})$. (2) Eq.~\ref{eqn:discrete_df}, as intended, can be easily applied to an arbitrary collection of $N$-body particles of arbitrary types in a simulation (summing different components such as dark matter, gas, or stars simply involves carrying out the sum in Eq.~\ref{eqn:discrete_df} with the appropriate $\Delta m_{i}$ and $m$ for each ``species''). (3) Eq.~\ref{eqn:discrete_df} removes a number of ambiguities: the Coulomb logarithm is removed (it only ``re-appears'' if indeed the medium is infinite and homogeneous), and the ${\bf V}$ which appears is unambiguous (discussed further below). (4) Eq.~\ref{eqn:discrete_df} above can be trivially generalized for softened gravity (below). (5) Eq.~\ref{eqn:discrete_df} at least asymptotically captures the relative contributions of near versus far particles $m$ to the DF force, i.e.\ the dimensional scaling with $r$, e.g.\ correctly capturing the fact that most of the effect arises when particles are near pericenter. \subsection{Force Softening} \label{appendix:softening} To apply Eq.~\ref{eqn:discrete_df} to numerical simulations, we must account for force softening as in the simulations (since an $N$-body particle of mass $\Delta m_{i}$ represents many individual stars, collocating them at a specific ${\bf x}_{i}$, ${\bf v}_{i}$ would lead to spurious divergences in the forces). In Eq.~\ref{eqn:discrete_df}, note that all terms but one are well-behaved: $0 < \alpha_{i}/(1+\alpha_{i}^{2}) < 1/2$, $0 < b_{i}/r_{i} < 1$, and $|\hat{\bf V}|=1$, so numerical divergence entirely arises from the term $G\,\Delta m_{i}/r_{i}^{2}$. 
But this is just the Newtonian gravity from a point $N$-body particle -- i.e.\ {\em exactly} the same term that is force-softened in the simulations. Hence we insert the same softening kernel $S_{i}(r_{i})$ as used in the actual $N$-body simulation (taking $G\,\Delta m_{i}/r_{i}^{2} \rightarrow S_{i}(r_{i})\,G\,\Delta m_{i}/r_{i}^{2}$). For the specific simulations here, this follows from the adaptive gravitational softening scheme described in \cite{gizmo}, corresponding to a cubic spline mass distribution: \beq \label{eqn:force.softening}S_{i}(r_i)= \begin{cases} \frac{32}{3}q_i^3-\frac{192}{5}q_i^5+32q_i^6\qquad& 0\leq q_i<\frac12 \\ -\frac{1}{15}+\frac{64}{3}q_i^3-48q_i^4\\ \qquad+\frac{192}{5}q_i^5-\frac{32}{3}q_i^6\qquad& \frac12\leq q_i<1 \\ 1\qquad& q_i\geq1 \\ \end{cases} \eeq where $q_{i}\equiv r_{i}/H_{i}$ with $H_{i}\approx 2.8\,\epsilon_{i}$ the radius of compact support of the kernel and $\epsilon_{i}$ the equivalent Plummer softening. This removes the numerical divergence and gives the correct result for a uniform density distribution sampled by $N$-body particles. \footnote{Note that in principle this softening is not exactly self-consistent with our derivation, since if $\Delta m_{i}$ represents an extended spatial distribution of particles, each would be deflected slightly differently in Eq.~\ref{eqn:analytic.arbitrarydf}. However, this {\em is} consistent with the simulations: $N$-body softening for collisionless fluids simply features this ambiguity at a fundamental level, because an individual $N$-body particle cannot actually deform in a fully-Lagrangian manner.} \subsection{Perpendicular Force} \label{sec:df.perpendicular} In the above, we only included the parallel DF term ($\propto \hat{\bf V}_{i}$). However two-body encounters also produce a perpendicular deflection ${\bf a}_{\rm df,\,\bot}$; this only vanishes in the C43 derivation because of the assumption of a homogeneous $\mathcal{N}$ (giving exact cancellation). 
Because we do not assume homogeneous $\mathcal{N}$, we can (if desired) retain these terms, giving: \begin{align} \label{eqn:df.perp} {\bf a}_{\rm df,\,\bot} &= - \sum_{i} \left(\frac{1}{1+\alpha_{i}^{2}} \right)\, \left( \frac{b_{i}}{r_{i}} \right) \, \left( S_{i}(r_{i})\,\frac{G\,\Delta m_{i}}{r_{i}^{2}} \right)\, \hat{\bf b}_{i} \\ \nonumber {\bf b}_{i} &\equiv {\bf r}_{i} - ({\bf r}_{i}\cdot \hat{\bf V}_{i})\,\hat{\bf V}_{i} \end{align} This differs from the parallel ${\bf a}_{\rm df,\,\|}$ only by one power of $\alpha_{i}$ and, of course, the direction. The power of $\alpha_{i}$ means that the perpendicular deflection can be stronger (compared to the parallel term) in strong encounters (although $0<1/(1+\alpha_{i}^{2})<1$ so this term is still bounded and cannot produce spurious divergences or forces larger than the regular/external acceleration). But because the integrated force is always dominated by weak deflections (where $\alpha_{i} \gg 1$), even ignoring cancellations (which further reduce ${\bf a}_{\rm df,\,\bot}$ even in inhomogeneous $\mathcal{N}$), this term is generally smaller than the parallel $|{\bf a}_{\rm df,\,\|}|$ by one power of $\sim G\,M/r\,V^{2} \sim M/M_{\rm total,\,galaxy}(<r) \ll 1$. We show in an additional set of tests that this term is completely negligible for most galaxy simulation contexts, hence we do not include it in our final expression and tests below. But we emphasize that it is trivial to include and imposes no additional cost. \subsection{Final Expression} \label{appendix:expression} It is straightforward to generalize the above for a spectrum of masses $m$, i.e.\ integrating over the stellar initial mass function (IMF). However, for any $M \gtrsim 10\,M_{\odot}$, this makes a negligible difference to our results. 
Since we do not know the ``true'' dark matter particle mass, it is more straightforward to simply assume the limit $M \gg m$, in which case the species masses $m$ completely factor out of the salient expressions. This gives the expression we will use throughout: \begin{equation} \begin{split} \label{eqn:discrete_df_smoothing} {\bf a}_{\rm df} &= \sum_{i} \Delta {\bf a}_{\rm df}^{i} \\ \Delta {\bf a}_{\rm df}^{i} &\equiv \left( \frac{\alpha_{i}\,b_{i}}{(1+\alpha_{i}^{2})\,r_{i}} \right)\, \left( S_{i}(r_{i})\,\frac{G\,\Delta m_{i}}{r_{i}^{2}} \right)\, \hat{\bf V}_{i} \end{split} \end{equation} with $\alpha_{i} \approx b_{i}\,V_{i}^{2}/G\,M$. \subsection{Numerical Implementation} In the form of Eq.~\ref{eqn:discrete_df_smoothing}, it is particularly straightforward to implement our estimator. First, noting that $\alpha_{i}$ and $b_{i} \equiv r_{i}\,|\hat{\bf r}_{i} - (\hat{\bf r}_{i}\cdot \hat{\bf V}_{i})\,\hat{\bf V}_{i}|$ are functions only of ${\bf r}_{i}$ and ${\bf V}_{i}$, we see that the only piece of additional information needed to compute Eq.~\ref{eqn:discrete_df_smoothing}, alongside the usual gravity force, in an $N$-body solver is the velocity ${\bf V}$ (already known). In other words, we do not need to construct some estimator for values in the C43 formula, like $\rho$, $\Lambda$, $\langle {\bf V} \rangle$, which are not actually computed in standard $N$-body simulations. Second, we also immediately see that it is completely trivial to carry out this sum over any arbitrary set of species (e.g.\ stars+gas+dark matter+other BHs). 
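As an illustration of how little machinery this sum requires, here is a minimal (hypothetical) NumPy direct-sum sketch of the discrete estimator. The function name and units (kpc, km/s, $M_\odot$, with the corresponding $G$) are assumptions of the sketch; the softening kernel is omitted ($S_{i}=1$), and the relative velocity is defined so that the drag opposes the motion of the target through the background:

```python
import numpy as np

G = 4.300917270e-6  # gravitational constant in kpc (km/s)^2 / Msun (assumed units)

def discrete_df(x_bh, v_bh, x, v, dm, M_bh):
    """Direct-sum sketch of the discrete DF estimator: each background particle i
    contributes (alpha_i/(1+alpha_i^2)) * (b_i/r_i) * (G dm_i/r_i^2) along the
    relative-velocity direction.  Softening kernel omitted (S_i = 1)."""
    r_vec = x - x_bh                       # separation vectors r_i, shape (N, 3)
    r = np.linalg.norm(r_vec, axis=1)
    V_vec = v - v_bh                       # background velocity relative to the target
    V = np.linalg.norm(V_vec, axis=1)
    V_hat = V_vec / V[:, None]
    # impact parameter b_i = r_i |r_hat - (r_hat . V_hat) V_hat|
    r_par = np.einsum('ij,ij->i', r_vec, V_hat)
    b = np.sqrt(np.clip(r**2 - r_par**2, 0.0, None))
    alpha = b * V**2 / (G * M_bh)
    pref = alpha * b / ((1.0 + alpha**2) * r)   # bounded: 0 <= pref < 1/2
    return ((pref * G * dm / r**2)[:, None] * V_hat).sum(axis=0)
```

Because the pre-factor is bounded by $1/2$, the resulting $|{\bf a}_{\rm df}|$ can never exceed half the unsoftened Newtonian sum over the same particles.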
Comparing the form of Eq.~\ref{eqn:discrete_df_smoothing} and the ``regular'' gravitational acceleration ${\bf a}_{\rm ext}$: \begin{align} {\bf a}_{M} &= {\bf a}_{\rm ext} + {\bf a}_{\rm df} = \sum_{i}\Delta {\bf a}_{\rm ext}^{i} + \sum_{i}\Delta {\bf a}_{\rm df}^{i} \\ \Delta {\bf a}_{\rm ext}^{i} &\equiv \left( S_{i}(r_{i})\,\frac{G\,\Delta m_{i}}{r_{i}^{2}} \right)\, \hat{\bf r}_{i} \\ \Delta {\bf a}_{\rm df}^{i} &\equiv \left( \frac{\alpha_{i}\,b_{i}}{(1+\alpha_{i}^{2})\,r_{i}} \right)\, \left( S_{i}(r_{i})\,\frac{G\,\Delta m_{i}}{r_{i}^{2}} \right)\, \hat{\bf V}_{i} \end{align} we immediately see that the operation needed to compute ${\bf a}_{\rm df}$ is algorithmically identical to that needed to compute the normal gravitational forces. In Tree-gravity, Tree-PM, direct $N$-body, or many other methods, implementing exact evaluation of Eq.~\ref{eqn:discrete_df_smoothing} in a manifestly-conservative manner is especially trivial.\footnote{In PM and related methods, where long-range forces are evaluated via computing the potential from a Particle-Mesh Fourier method, implementing Eq.~\ref{eqn:discrete_df_smoothing} is less trivial: the issue is that the direction $\hat{\bf V}_{i}$ differs from $\hat{\bf r}_{i}$, so one cannot simply treat ${\bf a}_{\rm df}$ as a scalar correction to the regular external gravitational potential, but must compute a separate potential/field. 
However in hybrid Tree-PM methods, such as (optionally) implemented in {\small GIZMO}, the less-accurate PM forces are only used at large distances; given this, we find (consistent with Fig.~\ref{fig:lss_slice}) that the errors from simply truncating the sum for ${\bf a}_{\rm df}$ by including only the contributions from the tree-walk (ignoring the PM terms in ${\bf a}_{\rm df}$) are entirely negligible (below normal integration-error level).} In e.g.\ a tree-walk, as one sums up to compute ${\bf a}_{\rm ext}$, we simply sum the additional term $\Delta {\bf a}_{\rm df}^{i}$, which is exactly $| \Delta {\bf a}_{\rm ext}^{i} |$ multiplied by the numerical pre-factor $\alpha_{i}\,b_{i}/(1+\alpha_{i}^{2})\,r_{i}$, and oriented along $\hat{\bf V}_{i}$ rather than $\hat{\bf r}_{i}$. The gravitational force softening is also naturally embedded in Eq. \ref{eqn:discrete_df_smoothing}. Moreover, our Eq.~\ref{eqn:discrete_df_smoothing} is well-behaved when applied to tree nodes/leaves, not just individual particles: one simply treats each node as a ``super-particle'' with the appropriate total $\Delta m_{i}$ and mass-averaged ${\bf V}_{i}$, ${\bf r}_{i}$, in the same manner as done for the usual gravity calculation. It is trivial to verify from the form of Eq.~\ref{eqn:discrete_df_smoothing} that the order of the errors from this approach will always be equal to or better than the order of errors in ${\bf a}_{\rm ext}$ in the tree (i.e.\ convergence is equal or faster). To ensure manifest momentum conservation, we simply enforce equal and opposite forces, i.e.\ apply an acceleration $\Delta {\bf a}_{\rm M-to-i} = -(M/\Delta m_{i})\,\Delta {\bf a}_{\rm df}^{i}$ to each particle $i$. 
The scaling of the pre-factor in Eq.~\ref{eqn:discrete_df_smoothing} is such that it guarantees this ``back-reaction'' term is well behaved and does not produce any spurious numerical divergences in the accelerations of the particles $i$.\footnote{That behavior is {\em not} guaranteed if one attempts to conserve momentum by simply applying a C43-style formula to $M$ and then ad hoc ``redistributing'' the equal-and-opposite momentum change to the neighboring $i$ around $M$.} \section{Numerical Validation: Methodology} \label{sec:methods} To study the accuracy of our DF formula, we compare it to both direct high-resolution simulations and calibrated versions of the local Chandrasekhar DF formula, using both ``on-the-fly'' applications in simulations (\S~\ref{sec:methods:sim}) and post-processing methods (\S~\ref{sec:methods:post}). Here we detail those methods. In what follows, we refer to the ``target'' or ``sinking'' particle as a black hole (BH) of mass $M_{\rm BH}$, since this is a particularly relevant motivating case for our sub-grid model, but of course the ``target'' particle could in principle represent any sufficiently compact bound massive object. \subsection{On-the-fly Simulations} \label{sec:methods:sim} \subsubsection{Numerical Methods} We have implemented the ``discrete DF estimator'' Eq.~\ref{eqn:discrete_df_smoothing} in the {\small GIZMO} multi-physics code \citep{gizmo}, which uses a standard Barnes-Hut tree algorithm to solve the gravity equations \citep[an improved version of that in][]{springel:gadget}. {\small GIZMO} is well-tested in numerous applications of $N$-body dynamics problems involving dynamical friction, $N$-body resonances and wake problems (see, e.g. \citealt{Lokas2019, Collier2020, Grudic2020, bonetti2020, Morton2021, Bonetti2021, Bortolas2022}), to which we refer for more detailed descriptions of numerical methods, demonstrations of convergence, test problems, etc. 
As described above, we simply evaluate the DF force ${\bf a}_{\rm df}$ alongside the ``normal'' gravitational force (using the identical softening, etc.) in the tree-walk operation, imposing negligible CPU cost. \subsubsection{Initial Conditions} To test the estimator, we have run a series of test problems. In each, we initialize a steady-state ``halo'' of collisionless particles (e.g.\ ``dark matter'' or ``stars'') using the {\small GALIC} code \citep{galic}, with a target/BH particle on an initial orbit expected to decay owing to DF. We have experimented with several different choices for the initial halo density profile, whether the halo velocity distribution is anisotropic or isotropic, and other parameters of the halo and orbit (e.g.\ eccentric versus circular, and initial position/energy/angular momentum). Our qualitative conclusions and comparison of methods are identical in each case (and of course, this being a pure $N$-body problem, it is scale-free), so we focus on and show plots from one example with typical cosmological units for the sake of clarity. In our fiducial example, we adopt a \citet{hernquist1990}-profile halo with total mass $2\times10^{11}\,M_\odot$, a \citet{galic} concentration parameter of $4$, and spin parameter $0.04$ (consistent with typical dark matter halo parameters, \citealt{Bullock2001}, and sufficient to make the halo mildly anisotropic because of rotation), so that the \citet{hernquist1990} scale-length is $a=30.2\,{\rm kpc}$. The target/BH is placed $5\,\mathrm{kpc}$ away from the halo center and has a tangential velocity of $59\,\mathrm{km/s}$, which is the circular velocity of the halo at that radius. The black hole mass is $10^8\,M_\odot$, much less than the enclosed dark matter mass inside $5\,\mathrm{kpc}$ ($\sim 4\times10^9\,M_\odot$), to avoid disrupting the dynamical equilibrium of the galaxy. 
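The quoted numbers for this initial condition follow directly from the \citet{hernquist1990} profile; a quick Python sanity check (the value of $G$ in these galactic units is our assumption):

```python
import math

G = 4.300917270e-6   # kpc (km/s)^2 / Msun (assumed value of G in galactic units)
M_halo = 2e11        # Msun, total Hernquist halo mass
a = 30.2             # kpc, Hernquist scale length
r = 5.0              # kpc, initial BH orbital radius
m_dm = 1e7           # Msun, background particle mass in the sub-grid DF tests

# Hernquist (1990) enclosed mass: M(<r) = M_halo * r^2 / (r + a)^2
M_enc = M_halo * r**2 / (r + a)**2          # ~4e9 Msun, as quoted
v_circ = math.sqrt(G * M_enc / r)           # ~59 km/s, as quoted
N_enc = M_enc / m_dm                        # ~400 enclosed particles at m_dm = 1e7 Msun
```

These reproduce the $\sim4\times10^{9}\,M_\odot$ enclosed mass, the $59\,{\rm km/s}$ circular velocity, and the $\sim400$ enclosed particles quoted in the text.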
\subsubsection{Sub-Grid Versus Resolved Simulations} As DF should be fully resolved when the target/BH mass $M_\mathrm{BH}$ is much more than the background (``dark matter'' or DM) particle mass $M_\mathrm{DM}$, one would expect that a sub-grid treatment of DF is necessary only in a low-resolution simulation (i.e., $M_\mathrm{BH}\lesssim M_\mathrm{DM}$)\footnote{ When the resolution lies in between and DF is partially resolved, a sub-grid treatment may cause ``double counting'' when calculating DF. While this remains an open question in general, we find that it can be avoided by multiplying the DF formula by a field-mass-dependent function in our tests. See discussions in \ref{sec:double-count}.}. However, if the resolution is {\em too} low, the orbital semi-major axis of the BH particle will be smaller than the inter-particle spacing of the $N$-body simulation and the BH will have essentially ``sunk to the center'' already -- trivially, if there were just one background/DM particle inside the initial $5\,\mathrm{kpc}$, then there is no definable smaller-scale center towards which even a ``perfect'' sub-grid model could migrate the target/BH. We hence choose $M_\mathrm{DM}=10^7M_\odot$ in the tests with sub-grid DF, so $\sim 400$ dark matter particles are enclosed inside the initial $5\,\mathrm{kpc}$. We further run a set of 50 simulations with the same background halo, but with the BH particles placed randomly on a $5\,\mathrm{kpc}$-radius sphere with a random direction of velocity in the tangent plane. By taking the median over these runs, we can smooth out the chaotic motions intrinsic to the problem, as well as the effects of anisotropy (both real, from the halo rotation, and numerical, from $N$-body noise) generating eccentric orbits which produce larger oscillations in the instantaneous BH speed (making the results more difficult to read). 
To test our results, we compare a set of reference simulations at varying resolution which do not adopt any sub-grid DF, but with the same black hole initial conditions. At sufficiently high resolution, these simulations satisfy $M_\mathrm{BH} \gg M_\mathrm{DM}$ and so should directly capture the salient effects of DF on the target. \subsubsection{Simulations with a ``Fitted'' C43 Sub-Grid Model} Finally, we consider a third set of simulations where we again adopt a sub-grid DF estimator, but instead use the local Chandrasekhar DF estimator of Eq.~\ref{eqn:chandra_maxwell} as previously introduced in {\small GADGET} in e.g.\ \citet{cox:xray.gas}, updated to be essentially identical to that in \citet{Tremmel2015}. Here we assume a Maxwellian velocity distribution, estimate the mean velocity and dispersion as a kernel-and-cell-mass-weighted mean, and use the BH kernel density estimator from \citet{wellons:2022.smbh.growth} to estimate $\rho$. We previously noted intrinsic difficulties this method faces; however, for this particular test problem, the background halo is (by construction) smooth and nearly isotropic and single-component and nearly-Maxwellian, so this provides a ``best-case scenario'' for a C43-like estimator. But this still leaves un-resolved the question of how to estimate the Coulomb logarithm. We find that common choices (e.g. the ratio of virial radius to ``true'' inter-particle spacing) are not only impossible to predict a-priori in a completely general simulation (they must be put in ``by hand''), but also appear to give DF forces which differ systematically from the resolved solutions by tens of percent or up to a factor of two. Therefore, to give this model the best possible chance, we explicitly {\em fit} the Coulomb logarithm, varying it until we find a model which best matches the BH orbital decay seen in the explicit high-resolution $N$-body calculation. 
We use this essentially as a way of testing how our method compares to a ``best-case'' C43 estimator calibrated ahead of time to the {\em specific} problem being simulated. The simulation setups are summarized in Table \ref{tab:simulations}. \begin{table} \centering \begin{tabular}{ccccc} \\\hline set & $M_\mathrm{DM}/M_\mathrm{BH}$ & sub-grid DF model & $\epsilon$ criterion & num. of runs\\ \hline 1 & $10^{-1}$ & this paper & $\epsilon\sim\Delta x_i$ & 50\\ 2 & $10^{-1}$ & fitted C43 & $\epsilon\sim\Delta x_i$ & 50\\ 3 & $10^{0}$ & none & $\epsilon<b_\mathrm{min}$ & 50\\ 4 & $10^{-1}$ & none & $\epsilon<b_\mathrm{min}$ & 50\\ 5 & $10^{-2}$ & none & $\epsilon<b_\mathrm{min}$ & 50\\ 6 & $10^{-3}$ & none & $\epsilon<b_\mathrm{min}$ & 1\\ 7 & $10^{-4}$ & none & $\epsilon<b_\mathrm{min}$ & 1\\ 8 & $10^{0}$ & none & $\epsilon\sim\Delta x_i$ & 50\\ 9 & $10^{-1}$ & none & $\epsilon\sim\Delta x_i$ & 50\\ 10 & $10^{-2}$ & none & $\epsilon\sim\Delta x_i$ & 50\\ 11 & $10^{-3}$ & none & $\epsilon\sim\Delta x_i$ & 1\\ 12 & $10^{-4}$ & none & $\epsilon\sim\Delta x_i$ & 1\\ 13 & $10^{-5}$ & none & $\epsilon\sim\Delta x_i$ & 1\\ \hline \end{tabular} \caption{Representative simulation summary for our idealized tests. Different sets share the same setup of initial conditions: a $10^8\,M_\odot$ black hole particle placed randomly at a $5\,\mathrm{kpc}$ radius with a velocity of $59\,\mathrm{km/s}$ in a random tangent direction. The background particles form a Hernquist halo with $M_{\rm halo}=2\times10^{11}\,M_\odot$. 
The black hole speeds from these tests are shown in Figure \ref{fig:values_evo} (when multiple runs are present, only the median value is shown).} \label{tab:simulations} \end{table} \subsection{Post-Processing in Multi-Physics Galaxy Simulations} \label{sec:methods:post} While comparing our discrete estimator with the Chandrasekhar estimator in the idealized test problem above can help to test its accuracy, it is of course also important to apply it to some more ``realistic'' (or at least more complicated) galaxy simulations which involve multi-component (gas+star+DM) anisotropic, highly-inhomogeneous backgrounds. Full applications to such simulations on-the-fly can be used to make predictions for e.g.\ demographics of free-floating BHs, IMBHs, and rates of BH-BH coalescence in galaxy nuclei (e.g. predictions for LISA). However this is clearly beyond the scope of this work. Instead here we will select some snapshots of $N$-body information from high-redshift galaxies in the Feedback In Realistic Environments (FIRE; \citealt{fire,fire2}) project, and use these to make some simple post-processing comparisons in order to see how the full on-the-fly application of the estimator used here might differ (or not) from other approaches to including or ignoring DF in these kinds of systems. \section{Results and Discussions} \label{sec:results} \subsection{Validation in On-The-Fly Simulations} \label{sec:results_sim} Figs.~\ref{fig:trajectories}-\ref{fig:values_evo} show some representative results of our numerical validation tests in on-the-fly simulations, specifically focusing on an illustrative trajectory of the BHs as well as the BH velocity as a function of time. First, we examine the behavior of pure $N$-body calculations ({\em without} sub-grid DF) as a function of resolution. Not surprisingly, when the target mass is similar to the $N$-body particle mass (e.g. $m_{\rm DM} \gtrsim M_{\rm BH}$), no DF is captured. 
Most previous studies arguing for different ``sufficient'' resolutions to capture DF refer to this regime \citep[see e.g.][]{vandenbosch:no.orbital.circularization.due.to.dyn.frict,colpi:2007.binary.in.mgrs,boylankolchin:merger.time.calibration,fire2,Pfister2019,Barausse2020,2020MNRAS.495L..12B,Ma2021}, depending on the specific problem considered. In our case, at better resolution ($m_{\rm DM} \ll M_{\rm BH}$) we see DF but with an important dependence on how we treat the {\em spatial} force softening $\epsilon$. If we adopt a fixed Plummer-equivalent $\epsilon$ comparable to or smaller than the canonical minimum impact parameter for strong encounters $b_{\rm min} \sim G\,M_{\rm BH}/(2\,\sigma^{2}+V_{\rm bh}^{2})$ (here $\sim 60\,{\rm pc}$ at the initial BH position), we see excellent convergence once $m_{\rm DM} \ll 0.1\,M_{\rm BH}$ (Fig.~\ref{fig:values_evo}, left-panel). However, this is not how force softenings are typically set in $N$-body simulations which do not resolve the individual point masses: instead, to prevent spurious noise in {\em other} properties, the ``optimal'' softening is usually chosen to roughly match the inter-particle separation $\epsilon \sim \Delta x_{i} \sim (\Delta m_{i}/\rho_{i})^{1/3}$ \citep[Fig.~\ref{fig:values_evo}, right-panel;][]{merritt:1996.optimal.softening,romeo.1998:optimal.softening,athanassoula:2000.optimal.force.softening.collisionless.sims,dehnen:2001.optimal.softening,rodionov:2005.optimal.force.softening}. When we do this, we see notably worse convergence: in fact, the convergence is only logarithmic in $m_{\rm DM}$, because with $\epsilon > b_{\rm min}$ the effective Coulomb logarithm is artificially truncated (i.e.\ we artificially suppress close encounters). 
This is a known challenge for DF in softened gravity \citep[see e.g.][for more details and extended discussion]{karl:2015.direct.nbody.df.sims}, and it further emphasizes the importance of a sub-grid model like ours: achieving $\Delta x_{i} \ll b_{\rm min}$ requires $m_{\rm DM} \ll 10^{-5}\,M_{\rm BH}$, i.e.\ billions of $N$-body particles even for a simple, idealized halo like that here. We then compare our ``sub-grid'' DF model (Eq.~\ref{eqn:discrete_df_smoothing}) calculated on-the-fly to an extremely low-resolution IC with $m_{\rm DM}=0.1\,M_{\rm BH}$,\footnote{We find that the results of our sub-grid DF runs are robust and nearly independent of resolution so long as the dynamical mass of the target/BH particle is at least slightly larger (a factor of $\gtrsim2-3$) than the mass-weighted median of the ``background'' $N$-body particles. If the BH particle is lighter than the background particles, then either sub-grid DF model (C43 or Eq.~\ref{eqn:discrete_df_smoothing}) requires additional care, or else spurious $N$-body heating effects can become larger than the true DF forces. So for practical applications where one wishes to evolve the dynamics of targets with very small masses, it is useful to follow standard practice \citep{springel:multiphase,dimatteo:cosmo.bh.growth.sim.1,hopkins:lifetimes.letter} and assign a separate ``true target/BH mass'' used for the DF calculation and other physics to the $N$-body particle ``carrying'' the target/BH.} using $\epsilon \sim \Delta x_{i}$ as would be applied in typical cosmological simulations. For this low-resolution case, there is significant variation owing to different eccentric orbits and discreteness noise, so we show the median and $\pm1\sigma$ range of BH velocities. The median agrees remarkably well with the converged solution. We stress that Eq.~\ref{eqn:discrete_df_smoothing} contains {\em no other adjustable parameter} beyond the physically motivated $\epsilon$: this is an actual prediction. 
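The quoted resolution requirement can be checked by combining the \citet{hernquist1990} density at the initial radius with the inter-particle-spacing criterion $\Delta x_{i} \sim (m_{\rm DM}/\rho)^{1/3} \ll b_{\rm min}$. A Python sketch ($b_{\rm min}\approx 60\,$pc is taken from the text; the galactic units are our assumption):

```python
import math

M_halo, a = 2e11, 30.2     # Msun, kpc: halo parameters of the test problem
M_bh = 1e8                 # Msun, target BH mass
r = 5.0                    # kpc, initial BH radius
b_min = 0.06               # kpc (~60 pc), as quoted in the text

# Hernquist (1990) local density: rho(r) = M_halo * a / (2 pi r (r + a)^3)
rho = M_halo * a / (2.0 * math.pi * r * (r + a)**3)   # Msun / kpc^3

# resolving b_min needs inter-particle spacing (m_dm/rho)^(1/3) << b_min,
# i.e. background particle masses m_dm << rho * b_min^3:
m_dm_max = rho * b_min**3
mass_ratio = m_dm_max / M_bh   # ~1e-5, as quoted
N_min = M_halo / m_dm_max      # >~2e8, so m_dm << m_dm_max means billions of particles
```

This reproduces the quoted $m_{\rm DM} \ll 10^{-5}\,M_{\rm BH}$ threshold at the initial BH radius.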
Next we compare the ``fitted'' C43 model Eq.~\ref{eqn:chandra_maxwell}: as described above, in addition to the arbitrary choice of kernel estimator size and shape (which we set to the smallest size that reduces noise acceptably), we freely vary the numerical pre-factor (``effective Coulomb log'') $\ln{\Lambda}$ in Eq.~\ref{eqn:chandra_maxwell} until we find a value which best matches our high-resolution simulations. For the best-fit value, the result is strikingly similar to our Eq.~\ref{eqn:discrete_df_smoothing} (perhaps not surprising, given that we start from the same assumptions) -- but we stress that even small, $\sim 10\%$ differences in $\ln{\Lambda}$ produce significant disagreement with the high-resolution simulations. Moreover, we have considered a dozen ``standard'' estimates of $\ln{\Lambda}$ widely used in the literature \citep[see references above and][]{hashimoto03:varying.culomb.log.in.dynfric,just:2011.dyn.fric.coulomb.calib,antonini:2012.df.faster.star.contrib,Dosopoulou:2017.df.modeling}, e.g.\ $\Lambda \sim |\rho/\nabla \rho|/b_{\rm min}$, and find that {\em none} of them correctly predicts the best-fit $\Lambda$ (usually discrepant by factors $\sim1.3-2$). This probably owes at least in part to the fact that the central \citet{hernquist1990} distribution function is appreciably non-Maxwellian, as discussed in \citet{karl:2015.direct.nbody.df.sims}, so the fitted $\ln{\Lambda}$ is essentially compensating for this error (the ``${\rm erf(...)-...}$'' term in Eq.~\ref{eqn:chandra_maxwell}). As noted above, these conclusions are robust to the parameters of the initial halo and orbit, mass profile of the halo assumed, amount of angular momentum (anisotropy in the distribution function), and other choices of the problem setup: however, we find as expected that the C43 ``effective Coulomb logarithm'' must be re-calibrated in many cases to fit high-resolution simulations. 
We have also tested other numerical aspects of the method including e.g.\ the tree opening criteria \citep{power:2003.nfw.models.convergence,springel:gadget}, timestep size/integration accuracy \citep{fire2,Grudic2020}, and inclusion/exclusion of the perpendicular force (Eq.~\ref{eqn:df.perp}): none of these has a significant effect \citep[consistent with previous studies, see e.g.][]{just:2011.dyn.fric.coulomb.calib,karl:2015.direct.nbody.df.sims,mikherjee:2012.fmm.dynfric.tests}. Given the close agreement between the discrete DF and explicitly calibrated-Chandrasekhar DF models, it is likely that more detailed differences in orbit shape we see comparing {\em either} of these models and the true, high-resolution simulation owes not to anything we can simply ``further calibrate'' (like a Coulomb logarithm), but rather to fundamental resolution effects (e.g.\ more accurately recovering the shape of the background potential itself, hence the ``correct'' elliptical orbit structure; or the treatment of subparsec-scale physics around SMBHs, a known issue as discussed in, e.g. \citealt{Rantala2017,Mannerkoski2021,Mannerkoski2022}), as well as assumptions of the Chandrasekhar-like derivation which our DF derivation also implicitly assumes. For example, the assumption of linearity (that the net effect on the BH can be approximated via the sum of many independent two-body encounters) or forward/backward asymmetry in the distribution function (implicit in a stronger assumption like homogeneity but present in a weaker form in our derivation as well). \subsection{Post-Processing in Multi-Physics Galaxy Simulations} \label{sec:results_post} While the idealized experiments above are important for validation, their simplicity means that it is difficult to gain insight into possible differences between our Eq.~\ref{eqn:discrete_df_smoothing} and the fitted C43 model. We therefore briefly consider this in post-processing of a multi-physics galaxy formation simulation. 
The specific (arbitrary) simulation and time we select is the ``{\bf z5m12b}'' galaxy at redshift $7.0$ described in \cite{Ma2021}, illustrated in Figure \ref{fig:galaxy}. The simulation is multi-component, containing dark matter, stars, multi-phase gas, and black holes, with complicated cooling, star formation and ``feedback'' physics all included on-the-fly. This particular snapshot is chosen because it is dynamically unstable, asymmetric, gas-rich and star-forming, and contains several giant star clusters and molecular cloud complexes, all of which complicate the dynamics. We compare the results from the discrete estimator with the Chandrasekhar estimator and discuss their differences and implications. In Fig.~\ref{fig:df_compare_amp} we compare the acceleration amplitude $a_\mathrm{df} \equiv |{\bf a}_{\rm df}|$ calculated from different formulae for a test particle of mass $10^5\,M_\odot$ in the representative snapshot. The test particle is placed along an arbitrary $x$-axis passing through the galaxy center with a simulation-frame velocity of ${\bf V}_{M}=200\,{\rm km\,s^{-1}}\,\hat{y}$ (Figure \ref{fig:galaxy}, red dashed line and arrow). We compare the results from our ``full'' expression (Eq.~\ref{eqn:discrete_df_smoothing}), our expression ignoring force softening (Eq.~\ref{eqn:discrete_df}), and the classical C43 expression (Eq.~\ref{eqn:chandra_original}). Eqs.~\ref{eqn:discrete_df_smoothing} \&\ \ref{eqn:discrete_df} can be directly applied to the simulations without any processing. To apply Eq.~\ref{eqn:chandra_original}, we estimate the continuous $\rho$ at each position ${\bf x}_{M}$ using a kernel density estimator by averaging over the $0.4\,\mathrm{kpc}$ cubic box around ${\bf x}_{M}$; we calculate the local velocity integral by converting it into the usual discrete sum in this box, and we take $\ln{\Lambda}=5$ to be constant, once again fitting it so that the median/mean acceleration is essentially identical. 
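This kernel-based C43 evaluation can be illustrated with a short sketch (not the code used in the paper: the particle arrays, the top-hat kernel, the choice of local frame, and the helper name are all placeholders; the drag is the standard Chandrasekhar form with the velocity integral replaced by a discrete sum over particles slower than the test mass):

```python
import numpy as np

G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun

def chandrasekhar_df_kernel(x_M, v_M, M, pos, vel, mass,
                            box=0.4, ln_Lambda=5.0):
    """Classical C43 drag estimated from N-body particle data:
    rho from a top-hat kernel (cubic box of side `box` kpc around
    x_M), and the velocity integral as a discrete sum over particles
    slower than |V| in the local frame."""
    # select particles inside the cube centered on the test mass
    inside = np.all(np.abs(pos - x_M) < box / 2.0, axis=1)
    m_in = mass[inside]
    # relative velocity w.r.t. the local mean background velocity
    # (one of several possible choices; see the discussion in the text)
    v_bg = np.average(vel[inside], weights=m_in, axis=0)
    V = v_M - v_bg
    Vmag = np.linalg.norm(V)
    vol = box**3
    # discrete form of the C43 velocity integral: only particles
    # moving slower than |V| (in the local frame) contribute
    slow = np.linalg.norm(vel[inside] - v_bg, axis=1) < Vmag
    rho_slow = m_in[slow].sum() / vol
    # a_df = -4 pi G^2 M ln(Lambda) rho(<V) V / |V|^3
    return -4.0 * np.pi * G**2 * M * ln_Lambda * rho_slow * V / Vmag**3
```

Here `pos`, `vel`, `mass` stand in for snapshot particle arrays; the returned drag always opposes the test particle's motion relative to the local background.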
The agreement between Eq.~\ref{eqn:discrete_df_smoothing} and Eq.~\ref{eqn:chandra_original} is reasonable, but again this requires choosing $\Lambda$ specific for the problem and snapshot (we note, for example, that the effective $\Lambda$ here differs by almost a factor of two from the value fit to the idealized \citet{hernquist1990} profile sphere tests in the previous section). Eq.~\ref{eqn:discrete_df}, which ignores force softening, is also quite similar, except for occasional ``spikes'' arising from close proximity to $N$-body particles producing a spurious large force which is not actually present in the simulations (accounted for correctly in our Eq.~\ref{eqn:discrete_df_smoothing}). Fig.~\ref{fig:df_compare_dir} similarly compares the direction $\hat{\bf a}_{\rm df}$. Because C43 assume homogeneity, and their ${\bf a}_{\rm df}^{\rm C43}$ has equal contributions from all scales, a major ambiguity in Eq.~\ref{eqn:chandra_original} -- even after we fit out the Coulomb logarithm -- is where/how to evaluate ${\bf V}={\bf V}_M-{\bf V}_m$. Should we interpolate to the local value at ${\bf x}_{M}$, weight by contribution to $\Lambda$, or weight by mass (dominated by distant particles)? If we follow the same procedure above to obtain a ``local'' ${\bf V}$, then we see that usually, the direction of $\hat{\bf a}_{\rm df}$ from Eq.~\ref{eqn:discrete_df_smoothing} and from Eq.~\ref{eqn:chandra_original} agree, especially if we assume a test particle $M$ with large lab-frame $|{\bf V}_{M}|$ (since then ${\bf V} \approx {\bf V}_{M}$, independent of the background ${\bf v}_{m}$). But when ${\bf V}_{M}$ is small (the case of interest for sinking), Eq.~\ref{eqn:chandra_original} can occasionally ``flip'' to point in an unphysical direction in a noisy velocity field. Our Eq.~\ref{eqn:discrete_df_smoothing} allows us to easily quantify the contributions to the total $a_{\rm df}$ from all the mass in radial shells. 
Fig.~\ref{fig:lss_slice} shows this (specifically $d a_{\rm df} / d\ln{r}$, integrating the contributions from all particles in logarithmically-spaced shells of distance $r$ from $M$) again for a representative example (with $M$ at $|{\bf x}_{M}| = 1\,$kpc from the origin on the $x$-axis) in the same snapshot. At small scales ($r\lesssim 1\,$kpc) around $M$, where the density field is {\em statistically} homogeneous (there are local fluctuations, but there is not a strong systematic dependence of density on distance $r$ from $M$), we see the expected Coulomb log behavior ($d a_{\rm df} / d\ln{r} \sim $\,constant). At larger radii, the contribution falls rapidly. We can, for example, truncate the sum in Eq.~\ref{eqn:discrete_df_smoothing} at the virial radius (labeled) with negligible loss of accuracy. This is expected for any realistic density profile: in e.g.\ an isothermal sphere the density is not constant, but at $r \gg |{\bf x}_{M}|$ it falls rapidly ($\propto r^{-2}$), giving rapid convergence. As expected, the behavior at larger $r$ does motivate the value of $\ln{\Lambda}$ we fit: if we take $\Lambda=b_{\rm max}/b_{\rm min}$, with $b_{\rm min} \sim {\rm max}[(m/\rho)^{1/3},\,G\,M/V^{2}] \sim 1\,$pc, and $b_{\rm max} \sim 1\,$kpc, we obtain $\ln{\Lambda} \sim 7$, similar to our fitted value. The discussion above is closely related to cases where the background field particles have a non-negligible physical bulk motion, like a wandering BH in a rotating disc-galaxy setup. While studying such simulations in detail is beyond the scope of this work, we comment that the rotation of star particles in the disc could strongly affect the strength and direction of DF, since their phase space distribution departs significantly from homogeneity and isotropy. 
In an extremely dense galactic environment, we may expect local disc particles with similar circular velocities to contribute most to the BH's DF, such that the BH is boosted by the field particles around it, similar to the case we already studied. For a less dense setup, non-local (halo) particles with different circular velocities could be important, and their contribution, combined with that of the local disc particles, could make the BH dynamics more complicated. Our DF estimator, which applies to an arbitrary phase space distribution and counts the DF contribution from each individual field particle, would be ideal for studying such problems. Such topics will be studied in future work. Briefly, one might wonder whether on sufficiently large scales, where the Universe becomes homogeneous and isotropic, $a_{\rm df}$ might begin to grow logarithmically again. However, even if we ignore finite speed-of-gravity effects (i.e.\ consider pure Newtonian gravity), on these scales the velocity must include the Hubble flow, so ${\bf v}_{\rm physical} = {\bf v}_{\rm peculiar} + H(z)\,{\bf r}$. In an isotropic pure Hubble-flow medium, the DF is identically zero, as there are always equal-and-opposite contributions to ${\bf a}_{\rm df}$ from the fact that ${\bf V} \propto {\bf r}$ (i.e.\ because $\langle {\bf V}\rangle=0$ on all scales). If we consider a Hubble flow plus peculiar velocities, then expanding Eq.~\ref{eqn:discrete_df_smoothing} in the limit of large $r$, where $\langle \rho(r) \rangle \sim $\,constant and $H\,r \gg \langle |{\bf v}_{\rm peculiar}(r)|^{2} \rangle^{1/2}$, the contributions to the sum take the form $\sum\,G^{2}\,M\,\langle |{\bf v}_{\rm peculiar}(r)|^{2} \rangle^{1/2}\,\Delta m_{i} / (H^{3}\,r^{6}) \propto \int \rho\,r^{-6}\,d^{3}{\bf x}$, which converges rapidly as $r\rightarrow \infty$. 
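To make the convergence explicit (a one-line estimate, with $\langle \rho \rangle \approx$ constant on large scales and shell volume element $d^{3}{\bf x} \propto r^{2}\,dr$):
\begin{equation}
\int \frac{\rho}{r^{6}}\,d^{3}{\bf x} \;\propto\; \int_{R}^{\infty} \frac{r^{2}\,dr}{r^{6}} \;=\; \frac{1}{3\,R^{3}}\,,
\end{equation}
so the contribution from all scales beyond any radius $R$ falls off as $R^{-3}$.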
\subsection{Interpolating the Sub-Grid Model in Simulations With Variable Masses} \label{sec:double-count} Finally, one can easily imagine situations such as cosmological simulations with a range of BH masses where the DF forces are well-resolved for some targets (e.g.\ supermassive BHs with $M_{\rm BH}\sim 10^{10}\,M_{\odot}$) but not others (e.g.\ lower-mass BHs). In these cases applying Eq.~\ref{eqn:discrete_df_smoothing} to all BHs would ``double count'' for some. A simple (albeit ad-hoc) approach to avoid double-counting is to multiply $\Delta {\bf a}_{\rm df}^{i}$ by a sigmoid or ``switch''-like function $g(\Delta m_{i}/M_{j},...)$ which has the property $g(x,...)\rightarrow 0$ for $x\rightarrow 0$ and $g(x,...) \rightarrow 1$ for $x\rightarrow \infty$. It is beyond the scope of our paper here to develop and test such models, and from Fig.~\ref{fig:values_evo} we see one complication is that this should depend on how one treats the force softening (not just particle masses $\Delta m_{i}$), but a quick examination of the idealized tests in \S~\ref{sec:results_sim} with different $M_{\rm BH}$ suggests (if we assume $\epsilon \sim \Delta x$, as usually adopted in such simulations) a simple function like $g = \mathrm{min} (1.0, \mathrm{max}(0, (3/\log(M_{{\rm BH},\,j}/\Delta m_i)-1)/1.6))$ works reasonably well. Another advantage of our Eq.~\ref{eqn:discrete_df_smoothing} is that because it operates in pairwise fashion, it can naturally deal with simulations with a wide range of $\Delta m_{i}$ (a common situation), while attempting to apply such a correction factor ``locally'' to Eq.~\ref{eqn:chandra_original} leaves it ill-defined which value of $\Delta m_{i}$ to use. 
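The quoted switch can be written out explicitly (a sketch only: we assume a base-10 logarithm, which the expression leaves unspecified, and the function name is ours):

```python
import numpy as np

def df_switch(M_bh, dm):
    """Sigmoid-like 'switch' g quoted in the text: g -> 0 when
    M_bh >> dm (DF already resolved by the N-body integration, so
    no sub-grid force is applied) and g -> 1 when the particle mass
    dm is comparable to M_bh (DF unresolved)."""
    y = np.log10(M_bh / dm)  # assumed base 10; the text writes 'log'
    return float(np.clip((3.0 / y - 1.0) / 1.6, 0.0, 1.0))
```

With this (assumed) base, a BH only ten times heavier than the background particles receives the full sub-grid force, while one $10^{6}$ times heavier receives none.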
\section{Conclusions} \label{sec:discussion} In numerical simulations, especially of star and galaxy formation, it is common to encounter the limit where DF {\em should} be experienced ($M\gtrsim m$) by some explicitly-evolved objects $M$ (e.g.\ black holes, massive stars), but it cannot be {\em numerically} resolved ($\Delta m_{i} \gtrsim M$). As a result, there have been several attempts to develop and apply ``on the fly'' sub-grid DF models. Almost all of these amount to some attempt to calculate and apply the traditional C43 formula (Eq.~\ref{eqn:chandra_original}) to the masses $M$ at each time \citep[see e.g.][]{colpi:2007.binary.in.mgrs,dotti:bh.binary.inspiral,Tremmel2015,Pfister2019}. However, this can introduce a number of problems in practice, namely the ambiguity of kernel-dependent locally-defined quantities, inconsistency in applying force softening and momentum conservation, the semi-arbitrary choice of Coulomb logarithm, the necessity of assuming Maxwellian velocity distribution functions, and additional computational expenses for kernel estimates. In this manuscript, we derive a new discrete expression for the DF force, ${\bf a}_{\rm df}$, given in Eq.~\ref{eqn:discrete_df_smoothing}. This formula is specifically designed for application to numerical simulations, either in post-processing, or ``on the fly'' when the DF forces cannot be resolved (e.g.\ when $N$-body particle masses are comparable to the BH mass $M$, as a ``sub-grid'' DF model). While still approximate, this has a large number of advantages compared to the traditional \citet{chandrasekhar1943} (C43) analytic expression, including (1) it allows for an arbitrary distribution function, without requiring an infinite homogeneous time-invariant medium with constant density, Maxwellian velocity distribution, etc. 
(but it does reduce {\em identically} to a discrete form of the C43 expression, when these assumptions are actually satisfied); (2) it is designed specifically for simulations so it is represented only as a sum over quantities which are always well-defined in the simulation for all $N$-body particles (e.g.\ positions, velocities, masses), and does not require the expensive and fundamentally ill-defined evaluation of quantities like a ``smoothed'' density, background mean velocity/dispersion/distribution function, Coulomb logarithm, etc.; (3) it trivially incorporates force softening exactly consistent with how it is treated in-code, and generalizes to arbitrary multi-component $N$-body simulations with different species and an arbitrary range of particle masses; (4) it manifestly conserves total momentum, unlike $N$-body implementations of C43; (5) it can be evaluated directly alongside the normal gravitational forces with negligible cost, and automatically inherits all of the desired convergence and accuracy properties of the $N$-body solver. We have implemented this ``live'' evaluation of Eq.~\ref{eqn:discrete_df_smoothing} in {\small GIZMO}, and verified that all of the properties above apply, that it agrees well with our $N$-body simulations, and that the computational overhead of evaluating it alongside gravity in the tree is immeasurably small. There are still uncertainties in our work. In our derivation of the discrete formula, we inserted an approximate integral kernel, which is not necessarily unique or best-behaved. We found that even though our discrete estimator closely agrees with the calibrated-Chandrasekhar DF estimator in our test problems, it still differs from the high-resolution simulation results in terms of the detailed particle trajectories, which might be related to the fundamental Chandrasekhar-like assumptions we have made in our formula. 
We also note that it remains an open question how to accurately avoid ``double counting'' when some of the DF may be captured self-consistently by the $N$-body code while additional DF is modeled using our sub-grid model. This is especially the case when the system evolves (such as when supermassive black holes grow) and the fraction of ``resolved'' dynamical friction changes with time. Future work will be needed to make improvements on these points. \section{Acknowledgements} We thank Sophia Taylor for early contributions to the development of the discrete dynamical friction estimator, and Daniel Angl\'{e}s-Alc\'{a}zar for useful discussions. Support for LM \&\ PFH was provided by NSF Collaborative Research Grants 1715847 \&\ 1911233, NSF CAREER grant 1455342, NASA grants 80NSSC18K0562, JPL 1589742. LZK was supported by NSF-AAG-1910209, and by the Research Corporation for Science Advancement through a Cottrell Fellowship Award. CAFG was supported by NSF through grants AST-1715216, AST-2108230, and CAREER award AST-1652522; by NASA through grants 17-ATP17-0067 and 21-ATP21-0036; by STScI through grants HST-AR-16124.001-A and HST-GO-16730.016-A; by CXO through grant TM2-23005X; and by the Research Corporation for Science Advancement through a Cottrell Scholar Award. Numerical calculations were run on the Caltech compute cluster ``Wheeler,'' allocations FTA-Hopkins supported by the NSF and TACC, and NASA HEC SMD-16-7592. \section*{Data Availability} The data and source code supporting the plots within this article are available on reasonable request to the corresponding author. \bibliographystyle{mnras} \bibliography{df.bib} \bsp \label{lastpage}
Title: Coherent Time-Domain Canceling of Interference for Radio Astronomy
Abstract: Radio astronomy is vulnerable to interference from a variety of anthropogenic sources. Among the many strategies for mitigation of this interference is coherent time-domain canceling (CTC), which ideally allows one to "look through" interference, as opposed to avoiding the interference or deleting the afflicted data. However, CTC is difficult to implement, not well understood, and at present this strategy is not in regular use at any major radio telescope. This paper presents a review of CTC including a new comprehensive study of the capabilities and limitations of CTC using metrics relevant to radio astronomy, including fraction of interference power removed and increase in noise. This work is motivated by the emergence of a new generation of communications systems which pose a significantly increased threat to radio astronomy and which may overwhelm mitigation methods now in place.
https://export.arxiv.org/pdf/2208.04256
command. \newcommand{\vdag}{(v)^\dagger} \newcommand\aastex{AAS\TeX} \newcommand\latex{La\TeX} \newcommand{\mike}[1]{\textcolor{purple}{(RMB: #1)}} \newcommand{\steve}[1]{\textcolor{red}{(SWE: #1)}} \newcommand{\rdel}[1]{\textcolor{blue}{\sout{#1}}} \newcommand{\radd}[1]{\textcolor{blue}{#1}} \begin{document} \title{Coherent Time-Domain Canceling of Interference for Radio Astronomy} \correspondingauthor{S.W.\ Ellingson} \email{ellingson@vt.edu} \author[0000-0001-8622-7377]{S.W.\ Ellingson} \author{R.M.\ Buehrer} \affiliation{Bradley Dept.\ of Electrical \& Computer Engineering \\ Virginia Tech\\ Blacksburg, VA 24061, USA} \keywords{instrumentation : detectors -- methods : analytical} \section{Introduction} \label{sIntro} Interference of anthropogenic origin is an old but growing problem for radio astronomy. While certain frequency bands are in some sense set aside for exclusively passive uses, less than 2.1\% of the spectrum below 3~GHz is protected in this manner \citep{NRC-SMS}. In fact, most of the spectrum that is necessary and commonly used for radio astronomy is in frequency bands in which radio astronomy receives little or no regulatory protection. Astronomy in bands not explicitly protected for radio astronomy is possible only because large swaths of the time-frequency plane remain fallow, and because astronomers have become expert at editing data in order to remove interference from sparsely-used regions of the time-frequency plane. An overview of these techniques appears in \citet{ITU-RA.2126-1}. By far, the most commonly-used category of interference mitigation techniques consists of detecting time-frequency pixels that are corrupted by interference, and then eliminating those pixels from subsequent processing, typically after the observation. In this paper, we refer to this as \emph{incoherent time-frequency editing} (ITFE). 
ITFE is effective because modern instruments typically reduce Nyquist-rate time-domain signals to time-frequency ``dynamic spectrum'' representations having resolutions ranging from microseconds to seconds in the time domain, and kHz to MHz in the frequency domain. This is necessary in order to accommodate the limited bandwidth and capacity of modern data storage systems. This intermediate form of the data is useful for identification of interference and provides a convenient opportunity to excise the affected time-frequency pixels prior to a subsequent reduction to science products such as averaged spectrum for spectroscopy, and dedispersed and averaged pulse profiles for pulsar processing. ITFE algorithms have been refined and fine-tuned over time, culminating in sophisticated and highly-effective software such as ``flagdata'' in the interferometer data analysis software CASA,\footnote{\url{https://ascl.net/1107.013}} ``rfifind'' in the pulsar analysis software PRESTO,\footnote{\url{https://ascl.net/1107.017}} and ``AOFlagger'' \citep{OVR12}. This state of affairs may not be sustainable. Strong societal, economic, and political pressures exist to increase the utilization of spectrum, including in remote areas where radio telescopes tend to be located. A particularly ominous development in this regard is the dramatic increase in the use of satellites to deliver world-wide continuous broadband communications. Whereas previous generations of systems consisted of a few satellites in geosynchronous orbit (e.g., INMARSAT), or tens of satellites in low-earth orbit (LEO; e.g., Iridium), emerging and planned systems consist of tens of thousands of satellites in LEO, transmitting in L-band and X-band \citep{CEPT-ECC-271, kodheli2020satellite,UNOOSA-IAU-2021}. Soon there will be no location on Earth which is not within view of many such satellites simultaneously. 
Interference from terrestrial communications is also expected to worsen with the deployment of new generations of wireless communications systems and navigation and positioning systems using radio frequencies. Compounding the problem is the fact that future generations of radio telescopes will consist of 100s of antennas deployed over areas 100s of km in extent, and will therefore be geographically commingled with interference sources that previously could be avoided simply by siting in remote locations. Therefore, it is uncertain whether ITFE will continue to be sufficient; at some point the amount of data that must be excised renders the remainder unsuitable for scientific interpretation; and even if this is not the case, the still-formidable amount of manual effort required to process data using ITFE may become intractable. One possible solution lies in spatial processing. Telescopes with array feeds, or telescopes which are themselves arrays, have in principle the ability to form pattern nulls in the directions from which interference arrives. While this strategy has been well-studied, it is not in regular use in any major radio telescope. Reasons include (1) high system cost/complexity and (2) undesirable dynamic modification of main lobe gain and overall pattern characteristics which are difficult to know or correct in subsequent processing. An alternative strategy, and the topic of this paper, is coherent time-domain canceling (CTC), illustrated in Figure~\ref{fCanceling}. (The particular form shown in this figure is the ``feedforward'' architecture. An alternative ``feedback'' architecture is shown in Figure~\ref{fCanceling-Feedback} (Section~\ref{sFA})). In Figure~\ref{fCanceling}, the signal $x(t)$ from the instrument is the sum of the astronomical signal of interest (SOI) $s(t)$, interfering signal $z(t)$, and noise $n(t)$. The signal $x(t)$ is compared to a ``reference signal'' $d(t)$ which represents the best available information about $z(t)$. 
The reference signal may be obtained either from an external input, such as a separate antenna pointed at the source of the interference; or internally synthesized; e.g., based on \emph{a priori} information about $z(t)$. The result of the comparison is used to create the interference estimate $\hat{z}(t)$, which is subsequently subtracted from $x(t)$, yielding the output $y(t)=s(t)+\left[z(t)-\hat{z}(t)\right]+n(t)$. Ideally, this operation completely removes the interference (i.e., $z(t)-\hat{z}(t)=0$) while preserving $s(t)$ and (importantly in radio astronomy) $n(t)$ with negligible distortion. Thus, CTC potentially allows an instrument to ``look through'' interference and, unlike spatial processing, is applicable also to single-feed instruments and instruments employing fixed analog beamforming, such as certain kinds of focal plane arrays and radio cameras. Note that the ``look through'' capability is not merely deleting interference, but (unlike ITFE) is potentially restoring the use of the afflicted spectrum for astronomy. Despite these compelling features, and like spatial processing, no major radio telescope regularly employs CTC. The reasons are somewhat similar: Increased system cost/complexity, and the potential for increased noise and signal distortion that may be difficult to know or correct in subsequent processing. The purpose of this paper is to provide a review of CTC for radio astronomy, provide new information about capabilities and limitations, and provide a new starting point for those interested in revisiting this technology. This paper is organized as follows. Section~\ref{sHMCR} addresses the important preliminary question of how effective CTC needs to be in order to achieve the desired ``look through'' capability; and also the distinction between CTC for radio astronomy and CTC for communications, radar, and other active radio frequency applications. 
Section~\ref{sOTDC} presents the theory of optimal CTC design, and what constitutes ``optimal'' in this application. In Section~\ref{sHMCP} we provide a new and comprehensive analysis of the performance of optimal canceling including an example using real-world data. Section~\ref{sRC} presents a canceler with reduced complexity, but similar performance. Whereas Sections~\ref{sOTDC} through \ref{sRC} address the ``feedforward'' architecture depicted in Figure~\ref{fCanceling}, Section~\ref{sFA} addresses the alternative ``feedback'' architecture, which exhibits similar performance in certain conditions, but which may be less well-suited to radio astronomy. Section~\ref{sPC} addresses practical considerations that apply to the implementation of CTC in radio astronomy. Section~\ref{sSurvey} presents a brief review of past work on CTC for radio astronomy. We have made the unconventional choice of presenting this review at the end so that past work can be understood in the context of the theory and concepts presented in this paper. \section{\label{sHMCR}How Much Canceling is Required?} A fundamental difference between CTC and ITFE is that CTC cannot \emph{completely} remove interference. Whereas ITFE removes 100\% of the interference that is detected, CTC is limited by estimation error even if the interference is reliably detected. This raises the question of how much canceling is required, which in turn raises the question of how much interference is detrimental. The answers depend on the application: See \citet{NRC-SMS} for a general overview and \citet{ITU-RA.769,ITU-RA.1513} for levels that have traditionally been considered detrimental to radio astronomy. What follows is a generic analysis that provides context for the performance levels reported later in this paper. Consider the system model shown in Figure~\ref{fCancelingSystem}. 
Here, INR$_x$ is the interference-to-noise ratio (INR) at the input of the canceler, INR$_y$ is the INR at the output of the canceler, and INR$_{post}$ is the INR following whatever averaging is subsequently applied. For simplicity and with no loss of generality, let us assume INR$_x$, INR$_y$, and INR$_{post}$ are each evaluated for the same bandwidth $B$. To quantify the amount of canceling, let us define ``interference rejection ratio'' (IRR) to be the ratio of the time-average power of interference in the input to time-average power of interference at the output. (This definition is formalized in Section~\ref{sOTDC}. For the purposes of this section, the definition as stated suffices.) Note IRR $=1$ for no canceling and IRR$\rightarrow\infty$ with improving performance. Averaging increases the SOI signal-to-noise ratio as well as INR$_{post}$ in proportion to $\sqrt{B\Delta t}$, where $\Delta t$ is the averaging time. Normally $\Delta t$ is selected to make the SOI signal-to-noise ratio $\gg 1$, whereas INR$_{post}$ is ideally $\ll 1$ so as to have negligible effect on the observation. Therefore the amount of canceling required to effectively mitigate an interferer, assuming the noise is unaffected by the canceler, is \begin{equation} \mbox{IRR} \gg \mbox{INR}_x\cdot\sqrt{B\Delta t}~\mbox{,} \end{equation} and this is necessary even if $\mbox{INR}_x\ll 1$. It will be useful later in this paper to have this condition in the form of a specific numerical threshold that can be compared to results. For this purpose we define \begin{equation} \mbox{IRR}_{req} = 10 \cdot \mbox{INR}_x\cdot\sqrt{B\Delta t} \label{eIRRreq} \end{equation} where the constant 10 is arbitrary but reasonable in light of the preceding discussion. To clearly see the implications, consider an observation with $\sqrt{B\Delta t}=100$; for example, $B=10$~kHz and $\Delta t=1$~s. First, a strong interferer appears, having INR$_x = 10^3$. 
Without CTC, INR$_{post}=10^5$ after averaging; thus IRR$_{req} = 10^6$ (60~dB). As will be demonstrated in Section~\ref{sHMCP}, this is on the high end of plausible values of IRR and requires that INR$_d$, the interference-to-noise ratio in the reference channel $d(t)$, be very high. This level of performance also requires that no implementation issues (addressed in Section~\ref{sPC}) significantly degrade IRR. It should also be noted that this is a regime which has been well-explored in the literature on communications, radar, and other active radio frequency systems (see e.g.\ \citet{Ghose96}). However an even more challenging scenario emerges when averaging converts weak interference into strong interference. Continuing the example: An interferer having INR$_x=10^{-1}$ emerges with INR$_{post}=10$ without CTC, and so becomes detrimental despite being very weak. Here, IRR$_{req} = 10^2$ (20~dB). Although this IRR is relatively modest, it must be achieved for a much lower INR$_x$, and perhaps also with a much lower INR$_d$. As we shall see later in this paper, low INR$_d$ also increases the risk that significant additional noise is injected into the output. Thus, ironically, this weak interference may be more difficult to mitigate than the strong interference considered in the previous paragraph. This is a regime which has \emph{not} been well-explored in the communications, radar, and navigation literature because INR$_{post}$ is typically not much greater than INR$_y$ in these applications, and furthermore there is typically no advantage in driving INR$_y$ or INR$_{post}$ below 1 in these applications. For these reasons, CTC techniques which are effective for communications, navigation, and radar are not necessarily suitable for radio astronomy. 
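The numbers in these two scenarios follow directly from Equation~\ref{eIRRreq} (a quick check; the helper name is ours):

```python
import math

def irr_req(inr_x, B, dt):
    """IRR_req = 10 * INR_x * sqrt(B * dt), as defined in the text."""
    return 10.0 * inr_x * math.sqrt(B * dt)

B, dt = 10e3, 1.0                # 10 kHz, 1 s  ->  sqrt(B*dt) = 100
strong = irr_req(1e3, B, dt)     # strong interferer: 1e6, i.e. 60 dB
weak = irr_req(1e-1, B, dt)      # weak interferer:   1e2, i.e. 20 dB
```

The weak interferer thus requires 40~dB less rejection, but, as discussed above, that rejection must be achieved at a far lower INR$_x$ and possibly a far lower INR$_d$.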
\section{\label{sOTDC}Optimal Time-Domain Canceling} \subsection{Derivation \& Implementation} We now describe the optimal implementation of the ``estimate interference waveform'' block in Figure~\ref{fCanceling}, which somehow computes the estimate $\hat{z}(t)$ using $x(t)$ and $d(t)$. A broad class of relevant applications is addressed by assuming $d(t)$ has the form \begin{equation} d(t) = f(\tau) * s(t) + g(\tau) * z(t) + u(t) \label{eRefChModel} \end{equation} where $f(\tau)$ and $g(\tau)$ are impulse responses describing the difference between how $s(t)$ and $z(t)$, respectively, appear in the reference channel relative to the system input, $u(t)$ is the noise in the reference channel, and ``$*$'' denotes convolution. This suggests implementation of the ``estimate interference waveform'' block as a filter having impulse response $h(\tau)$; i.e., \begin{equation} \hat{z}(t) = h(\tau) * d(t) \end{equation} To determine $h(\tau)$, we first note \begin{equation} \hat{z}(t) = h(\tau) * f(\tau) * s(t) + h(\tau) * g(\tau) * z(t) + h(\tau) * u(t) \label{ezhd} \end{equation} so ideally $h(\tau)*f(\tau)=0$, $h(\tau)*g(\tau)=1$, and the time-average power associated with the third term (noise) is minimized. A general solution meeting these criteria is not possible because $f(\tau)$ and $g(\tau)$ are not precisely known \emph{a priori}. To make progress, we assume that the time-average power associated with the first term is much less than the time average power associated with the second term; i.e., \begin{equation} \left<\left|h(\tau) * f(\tau) * s(t)\right|^2\right> \ll \left<\left|h(\tau) * g(\tau) * z(t)\right|^2\right> \label{eA1} \end{equation} where the angle brackets denote mean over time. This condition is not hard to meet since the magnitude of $f(\tau)*s(t)$ can be made sufficiently small compared to that of $g(\tau)*z(t)$ in a properly-designed canceling system. 
For example, if $d(t)$ is obtained using a separate antenna or beam, that antenna or beam would be designed to have low gain in the direction of the SOI and relatively high gain in the direction of the interference. For the parametric estimation and subtraction (PES) strategy described in Sec.~\ref{sSurvey}, $d(t)$ is generated internally and so for these methods $f(\tau)*s(t)$ is effectively zero. Assuming Equation~\ref{eA1} applies, it is possible to design $h(\tau)$ to minimize the mean square error (MSE) defined as follows: \begin{equation} \mbox{MSE} = < | x(t-t_p) - h(\tau) * d(t) |^2 > \label{eMSE1} \end{equation} where $t_p$ is the delay indicated in Figure~\ref{fCanceling}, and is now seen to be the ``pipeline delay'' associated with filtering. Although minimizing MSE is not necessarily equivalent to forcing $h(\tau)*g(\tau)=1$, minimizing MSE does maximize the interference-to-noise ratio in $\hat{z}(t)$, and is in this sense optimal. At this point it is convenient to switch to discrete time notation. Let {\bf d}[k] be $M$ consecutive samples of $d(t)$ organized as an $M\times 1$ vector as follows: \begin{equation} {\bf d}[k] = \left[ d((k-M+1)T_S) ~ d((k-M+2)T_S) ~...~ d(kT_S) \right]^T \end{equation} where $T_S$ is the sample period and $k$ is an integer. Also, we define ${\bf w}^*$ (``$^*$'' denoting the conjugate) to be the $M\times 1$ vector representing $h(\tau)$.\footnote{The use of ${\bf w}^*$ as opposed to ${\bf w}$ is arbitrary, but is customary and simplifies notation later.} Equation~\ref{eMSE1} may now be written in discrete complex baseband form as follows: \begin{equation} \mbox{MSE} = < | x(kT_S-t_p) - {\bf w}^H {\bf d}[k] |^2 > \label{eMSE2} \end{equation} where ``$^H$'' denotes the conjugate transpose and ``$<\cdot>$'' now operates over $k$. It is well known (see e.g. 
\cite{Haykin2001}) that the filter ${\bf w}$ which minimizes MSE is the solution to \begin{equation} {\bf R}{\bf w} = {\bf r} \label{eDTWH} \end{equation} where {\bf R} is the $M\times M$ covariance matrix \begin{equation} {\bf R} = < {\bf d}[k] ~ {\bf d}^H[k] > \label{eCorr-R} \end{equation} and {\bf r} is the $M\times 1$ reference correlation vector \begin{equation} {\bf r} = < x^*(kT_S) ~ {\bf d}[k] > \label{eCorr-r} \end{equation} Finally, the filter output is \begin{equation} \hat{z}(kT_S) = {\bf w}^H {\bf d}[k] \end{equation} This method is commonly known as ``minimum MSE'' (MMSE), and we refer to this specific implementation as ``feedforward MMSE.'' There are three important things to know about feedforward MMSE in this application. First: To the extent that the inequality in Equation~\ref{eA1} is not satisfied, ${\bf w}$ will be biased and the canceling of $z(t)$ will be degraded. Second: The same problem will result in the term $h(\tau)*f(\tau)*s(t)$ being non-zero in $\hat{z}(t)$, which will distort the SOI in the output of the canceler. Third: The term $h(\tau) * u(t)$ will be injected into the output of the canceler, which will decrease INR$_y$ and color the noise in $y(t)$, so it is important that INR$_d$ be as large as possible. The second and third items are aspects of what we refer to as ``toxicity,'' and are particularly important considerations for radio astronomy. This is because achieving the necessary IRR may be for naught if the SOI $s(t)$ or the primary channel noise $n(t)$ are distorted in a manner that impedes scientific interpretation. The toxicity issue is addressed further in Section~\ref{ssTox}. To implement MMSE one must choose (1) the number of samples $L$ used for ``training;'' i.e., used to compute ${\bf R}$ and ${\bf r}$; and (2) the filter length in samples, $M$. The training length $L$ determines the accuracy to which ${\bf w}$ is computed, which normally improves with increasing $L$. Thus, IRR normally increases with $L$. 
However, $L$ should be small enough that the change in the impulse response $g(\tau)$ over the interval $LT_S$ from which the canceler determines ${\bf w}$ is negligible. The filter length $M$ also entails a tradeoff. The filter must be long enough to equalize the frequency response corresponding to $g(\tau)$ with sufficient accuracy. However, increasing $M$ increases the effective duration of $h(\tau)$, which limits the ability of the filter to adapt to changing conditions. Thus, $M$ should be small enough that the change in $g(\tau)$ over the time $MT_S$ spanned by the filter is negligible. Making $M$ larger than is required to equalize the interference component of the reference signal may decrease IRR and is not recommended; see e.g. Table~\ref{tHighINRM}. Finally, note that $L$ should be $\gg M$ to ensure that ${\bf R}$ is numerically well-conditioned (i.e., not nearly singular) and to ensure a low-variance estimate of ${\bf r}$.
\subsection{Theoretical Performance} \label{ssTP}
A complete rigorous derivation of the theoretical performance of the feedforward MMSE canceler is, to the best of our knowledge, not available. In Section~\ref{asMMSE} we derive expressions for performance for the special case of $M=1$. These are Equations~\ref{IRR2}--\ref{eaIRR2l} and \ref{IRR1}--\ref{eaIRR1l}. These expressions are validated by comparison to the simulation results in Section~\ref{sHMCP} (Figures~\ref{fLargeINRr}, \ref{fSmallINRr1}, and \ref{fSmallINRr2} and associated text), where the agreement is found to be excellent. Derivation of expressions for IRR for $M>1$ is much more difficult and has not been completed. However Section~\ref{asMMSEM} presents empirical expressions for $M>1$ (Equations~\ref{IRR2M}--\ref{eaIRR1lM}) which are again shown to be in excellent agreement with the simulation results. This paper considers two similar but distinct definitions of IRR.
``IRR$_1$'' is defined as the ratio of time-average power of the interference in the input to time-average power of the interference in the output; i.e., \begin{equation} \mbox{IRR}_1 = \frac{\left< |z(t) |^2\right>} {\left< |z(t)-h(\tau)*g(\tau)*z(t)|^2 \right>} \label{eIRR1Def} \end{equation} This is arguably the ``natural'' definition of IRR. However this definition does not account for noise injected by the canceler into the output that could be interpreted as new interference. Furthermore, this metric may be difficult to measure experimentally. Therefore we define an alternative metric ``IRR$_2$'' to be the ratio of time-average power of the interference in the input to time-average power of the difference between $z(t)$ and the interference estimate $\hat{z}(t)$ in the output; i.e., \begin{equation} \mbox{IRR}_2 = \frac{\left< |z(t) |^2\right>} {\left< |z(t)-\hat{z}(t)|^2 \right>} \label{eIRR2Def} \end{equation} As noted in Appendix~\ref{aIRR} and demonstrated in the results presented in the following sections, IRR$_1$ and IRR$_2$ are usually equal when INR$_d$ is large, but are significantly different otherwise. Our impression is that IRR$_2$ is probably most appropriate where the spectrum of the output is less important than the total power of the output; e.g., continuum and most pulsar observations. On the other hand, IRR$_1$ is perhaps more appropriate if the spectrum is the primary concern -- in particular, in spectroscopy -- since IRR$_1$ does not conflate canceler noise injection with interference suppression. \section{\label{sHMCP}How Much Canceling is Possible?} In this section we quantify the performance of feedforward MMSE CTC using a combination of simulations, derived expressions, empirical expressions, and an example using real-world data. \subsection{\label{ssED}Experiment Design} In each simulation, the interference consists of a single signal which is either a sinusoid or zero-mean white Gaussian noise. 
The sinusoidal interference waveform is representative of interference which is narrowband in the sense that the bandwidth of $z(t)$ cannot be spectrally resolved. When the interference is noise, it fills the Nyquist bandwidth, and can be viewed as the limiting case where the bandwidth of $z(t)$ exceeds the bandwidth of the observation. When the interference is sinusoidal, the frequency is varied from trial to trial according to a uniform random distribution from $-\pi/2$ to $+\pi/2$ radians/sample. The primary-to-reference channel response for the SOI, $f(\tau)$, is zero; i.e., there is no astronomy ingress into the reference channel. The primary-to-reference channel response for the interference, $g(\tau)$, is a constant with magnitude determined by the specified INR$_d$ and with phase varied from trial to trial according to a uniform random distribution from $-\pi$ to $+\pi$ radians. The primary and reference channel noise waveforms ($n(t)$ and $u(t)$, respectively) are uncorrelated zero-mean white Gaussian noise, and $n(t)$ and $u(t)$ are uncorrelated with $z(t)$ in scenarios where $z(t)$ is a noise waveform. In any given trial, IRR$_1$ and IRR$_2$ are computed over $10^6$ samples. Statistics of IRR$_1$ and IRR$_2$ are computed over 100 trials. Care is required in computing these statistics. 
The mean of these quantities over trials is not an appropriate statistic, because IRR can be intermittently very high for a sinusoid having constant magnitude, phase, and frequency over the duration of the experiment.\footnote{This is especially important to know for hardware testing using synthesized interference signals.} We solve this problem by reporting the mean over trials of the numerator of Equations~\ref{eIRR1Def} and \ref{eIRR2Def} divided by the mean over trials of the denominator of the same equations.\footnote{This problem can also be avoided using median statistics, but the results will be slightly different, most notably in the high-INR$_d$ regime. In this regime, the median over trials of IRR$_1$ is $\sqrt{2}\cdot\overline{\mbox{IRR}}_1$, and similarly the median over trials of IRR$_2$ is $\sqrt{2}\cdot\overline{\mbox{IRR}}_2$.} We refer to the statistics of IRR computed in this specific way as $\overline{\mbox{IRR}}_1$ and $\overline{\mbox{IRR}}_2$, respectively. We also calculate the noise ingress ratio (NIR), defined as the ratio of the time-average power of $n(t)-h(\tau)*u(t)$ (the total noise in the output) to the time-average power of $n(t)$, again computed over $10^6$ samples and averaged over 100 trials. The minimum and ideal value of NIR is 1 (0~dB), and a greater value indicates an increase in the effective system temperature.
\subsection{High INR$_d$ -- Narrowband Interferer} \label{ssHINRdN}
We begin with the special case of sinusoidal interference and high INR$_d$. Figure~\ref{fLargeINRr} shows the results for $M=1$, INR$_d=+70$~dB, varying INR$_x$ and $L$. We find that the simulations are in excellent agreement with the analysis in Appendix~\ref{aIRR} (Section~\ref{asMMSE}); that is: $\overline{\mbox{IRR}}_1 = \overline{\mbox{IRR}}_2 = L \cdot \mbox{INR}_x$. Note that IRR is proportional to both $L$ and INR$_x$, even for $L \cdot \mbox{INR}_x < 1$.
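The $M=1$ scaling $\overline{\mbox{IRR}} = L \cdot \mbox{INR}_x$ in the high-INR$_d$ regime is straightforward to reproduce numerically. The following pure-Python sketch (ours, not from any released code; the amplitude, frequency, and channel response are hypothetical) forms the scalar versions of ${\bf R}$ and ${\bf r}$ from $L$ training samples and applies the resulting one-tap canceler:

```python
import cmath
import random

random.seed(0)

def cgauss(n_samp, rms=1.0):
    """n_samp samples of zero-mean complex white Gaussian noise with the given RMS."""
    s = rms / 2 ** 0.5
    return [complex(random.gauss(0, s), random.gauss(0, s)) for _ in range(n_samp)]

N, L = 20000, 2000            # total samples; training samples
A = 10.0                      # interferer amplitude -> INR_x = +20 dB (hypothetical)
omega = 0.3                   # interferer frequency, rad/sample (hypothetical)
g = cmath.exp(1.1j)           # flat (M = 1) primary-to-reference response

z = [A * cmath.exp(1j * omega * k) for k in range(N)]  # sinusoidal interferer
n = cgauss(N)                                          # primary channel noise
u = cgauss(N, rms=1e-3)                                # reference noise (high INR_d)

x = [zk + nk for zk, nk in zip(z, n)]        # primary channel (SOI omitted)
d = [g * zk + uk for zk, uk in zip(z, u)]    # reference channel

# M = 1: R and r are scalars, so solving R w = r reduces to w = r / R
R = sum(abs(dk) ** 2 for dk in d[:L]) / L
r = sum(xk.conjugate() * dk for xk, dk in zip(x[:L], d[:L])) / L
w = r / R

zhat = [w.conjugate() * dk for dk in d]      # interference estimate w^H d
y = [xk - zhk for xk, zhk in zip(x, zhat)]   # canceler output

irr2 = (sum(abs(zk) ** 2 for zk in z)
        / sum(abs(zk - zhk) ** 2 for zk, zhk in zip(z, zhat)))
```

For these (hypothetical) parameters the measured IRR$_2$ should land within an order of magnitude of the predicted $L \cdot \mbox{INR}_x = 2\times 10^5$; a single trial exhibits the large scatter that motivates the ratio-of-means statistic described above.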
Results for $M\ge1$ are shown in the first row of Table~\ref{tHighINRM}. While it is not surprising that $\overline{\mbox{IRR}}_1$ is independent of $M$, the finding that $\overline{\mbox{IRR}}_2$ is inversely proportional to $M$ is counter-intuitive.\footnote{This phenomenon is also apparent by comparing Figures~\ref{fSmallINRr1} and \ref{fSmallINRr2}.} Clearly it is not safe to make $M$ larger than necessary. \begin{table}[] \centering \begin{tabular}{l|ll} interferer & $\overline{\mbox{IRR}}_1$ & $\overline{\mbox{IRR}}_2$ \\ \hline sinusoid & $L \cdot \mbox{INR}_x$ & $L \cdot \mbox{INR}_x / M$ \\ noise & $L \cdot \mbox{INR}_x/M$ & $L \cdot \mbox{INR}_x / M$ \end{tabular} \caption{Performance in the high INR$_d$ regime and $L\gg M$, summarized from results of simulations. $M=1$ sinusoidal interferer results are verified by theory (Equations~\ref{eaIRR2h} and \ref{eaIRR1h}). $M>1$ sinusoidal interferer results agree with the empirical equations \ref{eaIRR2hM} and \ref{eaIRR1hM}. } \label{tHighINRM} \end{table} Using the definition from Section~\ref{sHMCR}, IRR is judged to be sufficient if it is greater than $\mbox{IRR}_{req}$. Using the worst case from Table~\ref{tHighINRM}, $L \cdot \mbox{INR}_x/M \gtrsim 10 \cdot \mbox{INR}_x \cdot \sqrt{B\Delta t}$. Solving for $L$, we find \begin{equation} L \gtrsim 10 \sqrt{B \Delta t} \cdot M \label{eGuidanceHigh} \end{equation} For example: Using the value of $\sqrt{B\Delta t}=100$ from Section~\ref{sHMCR}, we find $L \gtrsim 1000M$ is required for confidence that the interferer will be reduced to a negligible level in the output, and this does not depend on INR$_x$. Thus, one can plausibly achieve sufficient levels of canceling using feedforward MMSE when INR$_d$ is high. Now we consider NIR. For NIR we have only simulation results, but the findings are unambiguous. NIR does not depend on INR$_x$ in this case. NIR \emph{does} depend on the extent to which $L> M$, but this can easily be accommodated. 
For example: For $L=1000$ and $M=8$, NIR is merely $0.03$~dB. NIR is decreased by increasing $L$ or decreasing $M$, and is too small to be reliably measured when $L\ge 100$ and $M \le 4$. Examples of high NIR due to \emph{inappropriate} choices of $L$ and $M$: for $M=8$, NIR is $0.3$~dB with $L=100$ and $5$~dB with $L=10$. Summarizing: The aspect of toxicity measured by NIR is best managed by minimizing $M$ and making $L\gg M$, and can be made negligible for reasonable values of $M$ and $L$.
\subsection{High INR$_d$ -- Wideband Interferer} \label{ssHINRdW}
The second row of Table~\ref{tHighINRM} summarizes IRR for noise interference and high INR$_d$. It is not surprising that IRR is proportional to $L\cdot\mbox{INR}_x$, as in the case of sinusoidal interference (Section~\ref{ssHINRdN}). However, in this case we find that \emph{both} $\overline{\mbox{IRR}}_1$ and $\overline{\mbox{IRR}}_2$ are inversely proportional to $M$. The reason for this peculiar dependence on $M$ is unclear and we continue to investigate. In contrast to the sinusoidal interferer scenario, NIR in the wideband interferer scenario is always too small to measure reliably (here, $< 0.01$~dB), independent of $M$ and $L$. The reason for the surprisingly good NIR performance in this case is that the canceler's ``estimate interference waveform'' block converges to approximately flat magnitude response when the interference is spectrally-white noise, but is constrained only at one frequency -- i.e., not necessarily flat and intermittently large -- when the interference is sinusoidal. The latter facilitates increased injection of reference channel noise into the canceler output.
\subsection{\label{ssRINRdN}Reduced INR$_d$ -- Narrowband Interferer}
Next, we consider the effect of reducing INR$_d$. Figures~\ref{fSmallINRr1} and \ref{fSmallINRr2} show $\overline{\mbox{IRR}}_1$ and $\overline{\mbox{IRR}}_2$, respectively, for the sinusoidal interferer, varying INR$_d$, INR$_x$ and $M$.
Considering first $M=1$, note that the agreement between simulation and theory is excellent for both IRR metrics. As expected, the overall behavior depends on INR$_d$ relative to INR$_x L$. The high INR$_d$ regime is discussed in Section~\ref{ssHINRdN}. For the low INR$_d$ regime, the results are summarized in the first row of Table~\ref{tLowINRM}. \begin{table}[] \centering \begin{tabular}{l|ll} interferer & $\overline{\mbox{IRR}}_1$ & $\overline{\mbox{IRR}}_2$ \\ \hline sinusoid & $\left(M\cdot\mbox{INR}_d+1\right)^2$ & $M\cdot\mbox{INR}_d+1$ \\ noise & ~~~~~$\left(\mbox{INR}_d+1\right)^2$ & ~~~~~$\mbox{INR}_d+1$ \end{tabular} \caption{Performance in the low INR$_d$ regime and $L\gg M$, summarized from results of simulations. $M=1$ sinusoidal interferer results are verified by theory (Equations~\ref{eaIRR2l} and \ref{eaIRR1l}). $M>1$ sinusoidal interferer results agree with the empirical equations \ref{eaIRR2lM} and \ref{eaIRR1lM}. } \label{tLowINRM} \end{table} Note that in this regime, IRR depends only on INR$_d$ and $M$, but not on INR$_x$, and not on $L$ as long as $L\gg M$. The reason for the difference in dependence on INR$_d$ between $\overline{\mbox{IRR}}_1$ and $\overline{\mbox{IRR}}_2$ is simply that the latter considers noise injected by the canceler to be interference, whereas the former does not. It is interesting to note that increasing $M$ in the low-INR$_d$ regime is beneficial, whereas this was found to be detrimental in the high-INR$_d$ regime. The fact that IRR improves with increasing $M$ in the low-INR$_d$ regime indicates that the estimation filter is exhibiting spectral selectivity in this case. Repeating the procedure in Section~\ref{ssHINRdN}, we judge the canceling is sufficient if $\left(M\cdot\mbox{INR}_d+1\right)^n \gtrsim \mbox{IRR}_{req}$, where $n=2$ for $\overline{\mbox{IRR}}_1$ and $n=1$ for $\overline{\mbox{IRR}}_2$. 
Solving for INR$_d$, we find
\begin{equation}
\mbox{INR}_d \gtrsim 10^{1/n}~\mbox{INR}_x^{1/n} \left(B \Delta t\right)^{1/2n} M^{-1}
\label{eINRdrS}
\end{equation}
Let us consider the implications for $\overline{\mbox{IRR}}_1$ ($n=2$). Using the value of $\sqrt{B\Delta t}=100$ from Section~\ref{sHMCR}, INR$_x$=10~dB, and $M=1$, we find INR$_d \gtrsim 20$~dB is required to have high confidence that the interferer will be reduced to a negligible level in the output. While this seems encouraging at first glance, consider what is required for a weak interferer: For INR$_x=-10$~dB, INR$_d \gtrsim 10$~dB is required. While this value of INR$_d$ is much lower, it must be achieved for an interferer which is much weaker. Specifically, the required ratio INR$_d$/INR$_x$ has increased from 10~dB to 20~dB. This does not bode well for CTC implementations in which the reference signal $d(t)$ is obtained from an auxiliary antenna. Because INR$_d$ is not necessarily high (as it was in Section~\ref{ssHINRdN}), the potential for NIR to be significant is much greater. Figure~\ref{fSmallINRr_NIR_sin} shows the situation for the sinusoidal interferer. Note that NIR can be devastatingly large for INR$_d < 40$~dB or so. Also note that the NIR catastrophe can be forestalled somewhat by increasing $M$.
\subsection{\label{ssRINRdW}Reduced INR$_d$ -- Wideband Interferer}
IRR for the noise interferer in the low INR$_d$ regime is summarized in the second row of Table~\ref{tLowINRM}. The single difference is that IRR does not depend on $M$, which is expected since the estimation filter is unable to exhibit spectral selectivity in this case. As noted in Figure~\ref{fSmallINRr_NIR_sin}, the NIR performance for the noise interferer is the same as that for the sinusoidal interferer, except NIR does not depend on $M$. Again this is attributable to the inability of the estimation filter to exhibit spectral selectivity in this case.
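The low-INR$_d$ rows of Table~\ref{tLowINRM} can likewise be checked numerically. The following pure-Python sketch (ours; all parameters are hypothetical) uses a wideband (noise) interferer with INR$_d = +10$~dB and a long training block, and computes both IRR metrics for the resulting $M=1$ filter; IRR$_1$ should come out near $(\mbox{INR}_d+1)^2 \approx 21$~dB while IRR$_2$ comes out near $\mbox{INR}_d+1 \approx 10$~dB:

```python
import random

random.seed(2)

def cgauss(n_samp, power=1.0):
    """Zero-mean complex white Gaussian noise with the given average power."""
    s = (power / 2) ** 0.5
    return [complex(random.gauss(0, s), random.gauss(0, s)) for _ in range(n_samp)]

N = 100000                    # training length L = N (L >> M)
g = 1.0 + 0j                  # flat M = 1 primary-to-reference response
z = cgauss(N, power=10.0)     # wideband interferer: INR_x = +10 dB (hypothetical)
nn = cgauss(N, power=1.0)     # primary channel noise
u = cgauss(N, power=1.0)      # reference noise -> INR_d = +10 dB

x = [zk + nk for zk, nk in zip(z, nn)]       # primary channel (SOI omitted)
d = [g * zk + uk for zk, uk in zip(z, u)]    # reference channel

R = sum(abs(dk) ** 2 for dk in d) / N
r = sum(xk.conjugate() * dk for xk, dk in zip(x, d)) / N
w = r / R                     # M = 1 MMSE filter, now biased by reference noise

pz = sum(abs(zk) ** 2 for zk in z)
# IRR_1: counts only the residual interference z - w^H g z
irr1 = pz / sum(abs(zk - w.conjugate() * g * zk) ** 2 for zk in z)
# IRR_2: counts the full difference z - zhat, including injected reference noise
irr2 = pz / sum(abs(zk - w.conjugate() * dk) ** 2 for zk, dk in zip(z, d))
```

The gap between the two metrics in this regime is exactly the injected reference-channel noise that IRR$_2$ charges against the canceler and IRR$_1$ does not.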
Before moving on, recall that the impulse response $g(\tau)$ for the results in Section~\ref{sHMCP} is a complex-valued constant and therefore represents a flat frequency response. To the extent that $g(\tau)$ represents a non-flat response and the resulting variation is significant over the spectrum of $z(t)$, $M$ must necessarily be increased.
\subsection{\label{ssRWE} Real-World Example }
In Appendix~\ref{aWX} we provide an example of the use of $M=1$ feedforward MMSE to cancel a \emph{bona fide} interference signal in a scenario representative of a typical radio astronomical observation. The interferer is an analog frequency modulation broadcast signal with bandwidth that dynamically varies from near zero to nearly the full bandwidth of the channel. The results are consistent with the results of the preceding sections, which confirms that the performance of $M=1$ feedforward MMSE is not sensitive to the details of the interference waveform. Further, this example demonstrates good performance even in a case where $g(\tau)$ is demonstrably non-stationary.
\section{\label{sRC}Reduced-Complexity Feedforward Canceler}
In the MMSE approach of Sections~\ref{sOTDC} and \ref{sHMCP}, the filter ${\bf w}$ is the solution to ${\bf R}{\bf w} = {\bf r}$ (Equation~\ref{eDTWH}). A simplified approach may be necessary or desirable. As we shall see in Section~\ref{ssRC-P}, simplifying the canceler does not necessarily result in a significant performance reduction.
\subsection{Description}
First, note that the covariance matrix ${\bf R}$ depends only on the reference signal $d(t)$ and not at all on the input $x(t)$. So, we replace ${\bf R}$ with a matrix that describes in some sense the time-average power of $d(t)$, but which facilitates a simple solution for ${\bf w}$. Such a matrix is $\|{\bf R}\|_2 {\bf I}$, where ${\bf I}$ is the identity matrix and $\|{\bf R}\|_2$ is the induced 2-norm (largest singular value) of ${\bf R}$.
A variety of computationally-efficient algorithms exist for accurate estimation of the largest singular value of a covariance matrix directly from samples (i.e., $d(kT_S)$ for a set of values of $k$). Subspace tracking (see e.g., \cite{DeGroat+10} and in particular \cite{Yang95}) is well-suited to this task. The solution of ${\bf R}{\bf w} = {\bf r}$ with this simplification is:
\begin{equation}
{\bf w} = {\bf r}/\|{\bf R}\|_2
\label{eRC1}
\end{equation}
Note that this approach will entail some important disadvantages with respect to MMSE. First: Performance will be degraded if $M$ is greater than 1 and the signal subspace of ${\bf R}$ has rank greater than 1; i.e., has more than one significant singular value. Thus, degradation is expected if the interference has significant fractional bandwidth. A full-bandwidth noise interferer represents the worst case in this respect, since in that scenario the rank of the signal subspace of ${\bf R}$ is $M$. Second: To the extent that $d(t)$ contains signals other than the intended interference component and noise, these will not be mitigated by the resulting filter and will pass through to the canceler output. In contrast, MMSE will, to the extent that the degrees of freedom provided by $M$ allow, attempt to mitigate these signals.
\subsection{Performance} \label{ssRC-P}
The experiments reported in Section~\ref{sHMCP} were repeated using Equation~\ref{eRC1} (in lieu of MMSE) to generate ${\bf w}$. The results for the high INR$_d$ regime are summarized in Table~\ref{tHighINRM-RC}.
\begin{table}[] \centering \begin{tabular}{l|l} interferer & $\overline{\mbox{IRR}}_1$ = $\overline{\mbox{IRR}}_2$ \\ \hline sinusoid, any $M$ & ~~~$L \cdot \mbox{INR}_x$ \\ noise, $M=1$ & ~~~$L \cdot \mbox{INR}_x$ \\ noise, $M>1$ & $< L \cdot \mbox{INR}_x$ (see e.g. Fig.~\ref{fLargeINRrRC}) \end{tabular} \caption{IRR of the reduced complexity feedforward MMSE method in the high INR$_d$ regime and $L\gg M$, summarized from simulations.
(Compare to Table~\ref{tHighINRM}.) } \label{tHighINRM-RC} \end{table} Comparison to the results of the MMSE implementation (Table~\ref{tHighINRM}) reveals the following differences. First, and as expected, performance is degraded for the noise interferer when $M>1$. An example is shown in Figure~\ref{fLargeINRrRC} ($M=8$), which shows that IRR saturates at some threshold value of INR$_x$ which decreases with increasing $M$. Second, $\overline{\mbox{IRR}}_1=\overline{\mbox{IRR}}_2$ in all cases considered. Specifically, $\overline{\mbox{IRR}}_2$ no longer depends on $M$. This is significant: If $\overline{\mbox{IRR}}_2$ is the metric that best describes performance in a particular application, and the interference is narrowband, and $M>1$, then the reduced complexity method actually \emph{outperforms} MMSE. It is important to keep in mind, however, that $g(\tau)$ models a zero-length impulse response channel in these experiments; should the true impulse response have significant length such that $M>1$ is required for equalization, then this advantage of the reduced complexity method will be diminished. In the low INR$_d$ regime, the IRR performance of the reduced complexity method is the same as that of MMSE. NIR performance is also somewhat different for the reduced complexity method relative to MMSE. In the high INR$_d$ regime, NIR is always negligible. This is true even for sinusoidal interference in the $M>1$ case; whereas for MMSE, NIR can become significant with increasing $M$. In the low INR$_d$ regime, the NIR performance of the reduced complexity method is the same as that of MMSE. 
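For the reduced-complexity filter of Equation~\ref{eRC1}, the only new machinery is an estimate of $\|{\bf R}\|_2$. The sketch below (ours; a toy $M=2$ sinusoidal scenario with hypothetical parameters, using plain power iteration rather than a formal subspace tracker such as that of \cite{Yang95}) forms the sample ${\bf R}$ and ${\bf r}$, substitutes ${\bf w} = {\bf r}/\|{\bf R}\|_2$, and confirms that canceling of a narrowband interferer is not degraded:

```python
import cmath
import random

random.seed(3)

def cgauss(n_samp, rms=1.0):
    """Zero-mean complex white Gaussian noise with the given RMS."""
    s = rms / 2 ** 0.5
    return [complex(random.gauss(0, s), random.gauss(0, s)) for _ in range(n_samp)]

N, L, M = 20000, 5000, 2
A, omega = 10.0, 0.7                 # sinusoidal interferer (hypothetical)
z = [A * cmath.exp(1j * omega * k) for k in range(N)]
n = cgauss(N)                        # primary channel noise
u = cgauss(N, rms=0.01)              # reference noise (high INR_d)

x = [zk + nk for zk, nk in zip(z, n)]
d = [zk + uk for zk, uk in zip(z, u)]    # g(tau) flat and equal to 1 here

# Sample covariance R (M x M) and correlation vector r over L training samples
R = [[0j] * M for _ in range(M)]
r = [0j] * M
for k in range(M - 1, L):
    vec = [d[k - 1], d[k]]               # d[k] per the vector ordering, M = 2
    for i in range(M):
        r[i] += x[k].conjugate() * vec[i] / L
        for j in range(M):
            R[i][j] += vec[i] * vec[j].conjugate() / L

def norm2_largest(R, iters=100):
    """Power iteration for the largest eigenvalue of Hermitian R (= ||R||_2)."""
    v = [1.0 + 0j, 0.5 + 0j]
    lam = 1.0
    for _ in range(iters):
        Rv = [sum(R[i][j] * v[j] for j in range(M)) for i in range(M)]
        lam = sum(abs(c) ** 2 for c in Rv) ** 0.5
        v = [c / lam for c in Rv]
    return lam

lam = norm2_largest(R)
w = [ri / lam for ri in r]               # reduced-complexity filter w = r / ||R||_2

zhat = [w[0].conjugate() * d[k - 1] + w[1].conjugate() * d[k] for k in range(1, N)]
irr2 = (sum(abs(zk) ** 2 for zk in z[1:])
        / sum(abs(zk - zh) ** 2 for zk, zh in zip(z[1:], zhat)))
```

For a pure sinusoid the sample covariance is effectively rank 1 with $\|{\bf R}\|_2 \approx 2A^2$, so the substitution costs essentially nothing, consistent with the first row of Table~\ref{tHighINRM-RC}.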
Summarizing: The reduced-complexity method is probably an acceptable alternative to MMSE unless one of the following is true: (1) The interference has significant fractional bandwidth \emph{and} $M$ must be greater than 1; or (2) The reference channel $d(t)$ contains significant signals other than a single well-correlated version of $z(t)$, since these signals will not be mitigated in the reduced-complexity canceler as they are in the MMSE-based canceler, and therefore will be injected into the output at a significantly greater level. Item (2) is of particular concern for implementations in which $d(t)$ is obtained using a reference antenna. \section{\label{sFA}Feedback Architecture} The CTC architecture shown in Figure~\ref{fCanceling} is ``feedforward'' because the interference input to the ``estimate interference waveform'' block is from the input of the canceler. The alternative is ``feedback'' architecture, shown in Figure~\ref{fCanceling-Feedback}, in which the interference input is from the \emph{output} of the canceler. Although this architecture is not the primary topic of this paper, we address it here because it appears in several seminal papers on CTC in radio astronomy, notably \citet{BarnbaumBradley1998} and \citet{Kesteven+2005}, and is also explored in \citet{Poulson03}, an interesting experiment at the Green Bank Telescope; so a comparison is warranted. Before considering canceling performance, we identify two distinct and important characteristics of feedback architecture. First: The fact that ${\bf w}$ depends on the output means that ${\bf w}$ must be continuously updated during training. This is in contrast to feedforward architecture, where ${\bf w}$ is determined at the end of a training period of length $LT_S$, and held constant until the end of the next training period. This may be either an advantage or disadvantage depending on the nature of the interference. 
Second: Whereas the output of feedforward CTC is determined entirely by inputs (namely, $x(t)$ and $d(t)$), the output of feedback CTC depends also on past output. In signal processing terms, feedforward CTC has finite impulse response (FIR), whereas feedback CTC has infinite impulse response (IIR). In this section we address specifically the least mean squares (LMS) algorithm (see e.g., \cite{Haykin2001}), as it is relatively easy to study and is specifically the method used in \citet{BarnbaumBradley1998} and \citet{Poulson03}. LMS is identical to feedforward MMSE with the exception that the filter is updated iteratively according to
\begin{equation}
{\bf w}((k+1)T_S) = {\bf w}(kT_S) + 2\mu \, y^*(kT_S) \, {\bf d}(kT_S)
\label{eLMSUpdate}
\end{equation}
The parameter $\mu$ controls the tradeoff between rapid convergence and agile tracking (requiring large $\mu$) and low ``jitter'' following convergence (requiring small $\mu$). It is a well-known rule-of-thumb that $\mu$ should be less than the reciprocal of the largest eigenvalue of ${\bf R}$; i.e., $\mu < 1/(\mbox{INR}_d+1)$ \citep{Widrow76}. In practice, the optimal value of $\mu$ is typically not apparent without experimentation and tuning, and may of course also vary with circumstances. This is a disadvantage of LMS relative to feedforward architecture. In the ideal (but unlikely) case that the jitter associated with $\mu$ is negligible, the IRR achieved by LMS after convergence is the same as that of feedforward MMSE. The effect of jitter is to degrade performance in the high INR$_d$ regime. We have provided a derivation for $M=1$ (analogous to the derivation provided for $M=1$ feedforward MMSE) in Section~\ref{asLMS}. In the high-INR$_d$ regime,
\begin{equation}
\overline{\mbox{IRR}}_1 = \overline{\mbox{IRR}}_2 = \frac{1}{\mu}\frac{\mbox{INR}_x}{\mbox{INR}_d}
\label{eIRR-LMS-HighINRd}
\end{equation}
At first glance, the finding that IRR is inversely proportional to INR$_d$ is counter-intuitive.
The explanation for this is that the principal impairment in the high-INR$_d$ regime is jitter, which is the net effect of the change in ${\bf w}$ over the updates, and the magnitude of this change for any single update increases with increasing INR$_d$ as is apparent from Equation~\ref{eLMSUpdate}. Comparing Equation~\ref{eIRR-LMS-HighINRd} to the corresponding feedforward MMSE result (INR$_x L$), we see that the IRR achieved by LMS compared to feedforward MMSE depends on $1/(\mu\,\mbox{INR}_d)$ relative to the number of training samples $L$ used in feedforward MMSE. For example: At INR$_d=+70$~dB, LMS with $\mu\le 10^{-9}$ would outperform feedforward MMSE with $L=100$. On the other hand, decreasing $\mu$ comes at the expense of increasing convergence time for LMS, whereas increasing $L$ comes at no analogous penalty for feedforward MMSE, assuming $g(\tau)$ is stationary in both cases. Returning to feedback architecture in general, it should be noted that the impact of ${\bf w}$ jitter is not merely a reduction in IRR in the high INR$_d$ regime. The jitter exists regardless of INR$_d$, and is potentially toxic for radio astronomy. Feedforward architecture, on the other hand, is not subject to ${\bf w}$-jitter, since in that architecture ${\bf w}$ is obtained from a block of $L$ samples and can be held utterly constant for as long as the scenario remains stationary.\footnote{It is perhaps more accurate to say that feedforward architecture \emph{is} vulnerable to ${\bf w}$-jitter, but over time scales of $LT_S$ as opposed to $T_S$.} Finally, it should be noted that LMS is a ``rank 1'' algorithm in the same sense as the reduced complexity feedforward MMSE canceler of Section~\ref{sRC}, and will have the associated limitations. While one might consider an MMSE implementation of the feedback architecture to address wideband interference, this has an extraordinarily greater computational burden relative to feedforward MMSE.
This is because feedforward architecture requires correlation (Equations~\ref{eCorr-R} and \ref{eCorr-r}) only while training is in progress, and requires a solution to Equation~\ref{eDTWH} only when a new value of ${\bf w}$ is needed. In contrast, the analogous implementation of feedback architecture estimates interference in the output, and therefore requires a new solution of Equation~\ref{eDTWH} for every sample processed. \section{\label{sPC}Practical Considerations} In this section we address some particular issues that emerge in practical implementations of CTC. \subsection{Non-Stationarity Between the Primary and Reference Channels} \label{sNS} MMSE-based CTC is potentially sensitive to the variations in the impulse response $g(\tau)$, defined in Equation~\ref{eRefChModel}, which describes the channel response applied to the interference signal in the reference channel relative to the channel response applied to the interference signal in the primary channel.\footnote{Since the interference waveform $z(t)$ appears in both the primary and reference channels, MMSE is not affected by the non-stationarity of $z(t)$ itself; e.g., by changes in carrier magnitude, carrier phase, and so on. It is also not affected by the non-stationarity of the propagation channels through which the interference waveform is received, as long as $g(\tau)$ remains constant. The problem emerges when $g(\tau)$ changes with time, and is a problem only because the process of filter synthesis in MMSE presumes this to be constant.} The derivation of MMSE-based CTC as well as the results presented in Sections~\ref{sHMCP}--\ref{sFA} presume $g(\tau)$ to be perfectly stationary; i.e., independent of $t$. In feedforward CTC this means ${\bf w}$ is assumed to be valid between updates, and in feedback CTC this means ${\bf w}$ is assumed to be able to follow changes with negligible latency. This raises the question of the effect of non-stationarity on IRR. 
In Appendix~\ref{aIRRS} we derive expressions for IRR for $M=1$ feedforward MMSE, generalized from those in Section~\ref{asMMSE}. The non-stationarity is described as $g(\tau,t)$, which simplifies to $g(t)$ (i.e., a constant with respect to $\tau$) for $M=1$. It is found that the effect of time-varying $g(t)$ on IRR is negligible if
\begin{equation}
\epsilon^2 \ll \left( \left.\mbox{IRR}\right|_{\epsilon=0} \right)^{-1}
\label{IRRSs_main}
\end{equation}
where $\epsilon^2$ is the mean-squared variation of $g(t)$, and $\left.\mbox{IRR}\right|_{\epsilon=0}$ is the associated IRR for stationary conditions. Thus, the impact of non-stationarity is greatest when IRR is high, decreases with decreasing IRR, and is negligible when Equation~\ref{IRRSs_main} is satisfied. For example, mean-square variation of 0.4~dB in the magnitude of $g(t)$ is significant if the IRR in stationary conditions would otherwise have been $+40$~dB, but is negligible if the IRR in stationary conditions is $+20$~dB. Experiments using \emph{bona fide} interference signals, such as those reported in Section~\ref{sSurvey} and Appendix~\ref{aWX}, provide evidence that non-stationarity exists but is not necessarily a show-stopper, especially if care is taken to use an appropriately short update interval. Nevertheless, potential adopters would be well-advised to carefully consider this issue in the design of CTC algorithms.
\subsection{\label{ssIS}Intermittent Signals}
While MMSE-based CTC is robust to the details of the interference waveform, there is a distinct and important type of waveform non-stationarity which can potentially cause problems: This is intermittency; i.e., signals which are not continuously present. One form of intermittency is burst modulation; examples being ground-based aviation radar and the Iridium user downlink. Other forms of intermittency include temporarily-strong reflections from aircraft and interference from sources which transmit according to some indiscernible schedule.
CTC is certainly applicable in each of these cases; the problem is ensuring that the interference is present in the samples used to calculate the estimation filter. Furthermore, it is preferable that the canceler operate only when the interference is present, and do nothing when the interferer is absent. This is an important consideration since a canceler operating in the absence of an interferer is prone to introduce spurious signals (more on this in Section~\ref{ssTox}). Thus, one encounters a problem in interference \emph{detection}. Reliable interference detection is typically very difficult in the radio astronomy application since it is necessary to detect very weak interference as soon as it appears. In the case of burst modulations, individual bursts may not be present long enough to be reliably detected; see e.g. \cite{EH03} for an example where the performance of detection, and not the performance of CTC \emph{per se}, limits overall IRR performance. \subsection{\label{ssTox}Toxicity} All forms of CTC entail adding the signal $\hat{z}(t)$ to the signal $x(t)$ from the telescope. Ideally $\hat{z}(t)=z(t)$, the interference component in $x(t)$. In practice, $\hat{z}(t)$ is the sum of (1) A waveform which is not quite equal to $z(t)$, (2) Spectrally-colored versions of signals that also appeared in $d(t)$ (noise and the astronomical SOI in particular), and (3) Internally-generated spurious content associated with the operation of the canceler. The presence of these other signals in $\hat{z}(t)$ has a potentially deleterious effect on the processing and scientific interpretation of the data. This is what we refer to as ``toxicity''. Three aspects of the toxicity problem already addressed include reference signal noise injection (quantified as NIR), ${\bf w}$-jitter, and spurious operation due to false detection (addressed in Section~\ref{ssIS}).
Additional aspects of the toxicity problem include leakage of $s(t)+n(t)$ into the reference signal path, which is a problem particularly with auxiliary antennas; and spurious spectral content associated with block-wise updating of ${\bf w}$ (see e.g. \citet{E20-STSA}). \subsection{Inadequate Reference Signal-to-Noise Ratio} A recurring theme in this paper has been the importance of a high-quality reference signal $d(t)$ with the highest possible INR$_d$. This poses a challenge in radio astronomy applications, since (as pointed out in Section~\ref{sHMCR}) even interference which is much weaker than noise is potentially damaging. The solution employed in early studies of CTC for radio astronomy (see Section~\ref{sSurvey}) was to acquire the reference signal through a separate high-gain antenna (variously referred to as a ``reference'' or ``auxiliary'' antenna) directed at the source of the interference. This is certainly effective, but entails considerable additional complexity since the antenna must be pointed, and if the source is moving, the antenna must track accordingly. This is not only awkward to implement, but requires \emph{a priori} or at least real-time knowledge of the presence and direction of sources. It should be noted that some existing and emerging radio telescope arrays employ architectures which provide multiple narrow steerable beams within the wider beam of a single element of the array. In principle these beams could be used in lieu of CTC auxiliary antennas, but only for interference which arrives from within the element pattern. Another strategy for increasing INR$_d$ is narrowband filtering with adaptive tuning. This scheme exploits the fact that interferers of interest often occupy only a small fraction of the bandwidth being processed.
Thus, applying a relatively narrow filter at the center frequency of the interferer prior to the ``estimate interference waveform'' block in Figures~\ref{fCanceling} and \ref{fCanceling-Feedback} can dramatically increase INR$_d$, and has the additional benefit of excluding signals unrelated to the interference. Specifically, this technique excludes spectrally-disjoint portions of the astronomical SOI from the ``estimate interference waveform'' block, which provides further mitigation against toxicity. Note that this scheme is essentially the one employed in the example presented in Appendix~\ref{aWX}. Yet another tool for improving INR$_d$ and mitigating toxicity is parametric estimation and subtraction (PES), addressed in Section~\ref{sSurvey}. \subsection{\label{ssNRI}Nyquist-Rate Implementation} A barrier to adoption of CTC has been hardware implementation. Unlike ITFE, CTC requires access to a Nyquist-rate data stream. This presents a potential challenge in modern radio telescope implementations. Due to limitations in technology, storage cost, and logistics, existing instruments are typically limited to recording only the averaged spectrum. Therefore a practical operational CTC system must operate in real time, in the sense that any latency associated with CTC must be less than the time during which Nyquist-rate data is available. This in turn requires large amounts of high bandwidth memory and computing resources with low-latency access to this data. Thus, CTC is difficult to implement as an ``add on'' to existing instruments, and may require co-design and low-level integration with instrument electronics. \subsection{Separability} \label{ssSep} For the reasons cited in previous sections, it is not certain that CTC can be a fully ``hands off, always on'' capability for radio astronomy, and it is likely that astronomers will want the ability to enable, disable, or ``tune'' CTC as needed.
Of course this is complicated by the issue noted in Section~\ref{ssNRI}: CTC, unlike ITFE, must normally occur in real-time as the observation is running. Unless interference is certain to ruin an observation, astronomers might understandably prefer to keep CTC turned off, rather than to take the chance that data that might be salvageable using ITFE is instead ruined by CTC toxicity. A possible remedy is \emph{separability}, which might consist of any of the following techniques: (1) Record two versions of the observation: one with CTC, and the other without. (2) Record only the CTC-processed observation, but also $\hat{z}(t)$ so that it is possible to know precisely how CTC affected the data, and thereby retain some ability to perform remedial post-observation processing. This is feasible since the bandwidth of the interference is normally much less than the bandwidth of the observation. (3) Record only the observation without CTC, but also $\hat{z}(t)$. This retains the option to perform enhanced post-observation interference mitigation, although probably not truly coherent time-domain canceling. (4) If the observation can be recorded at the Nyquist rate, then full separability is possible simply by also recording $\hat{z}(t)$. (5) Record the observation at the Nyquist rate and also record $d(t)$ (as opposed to $\hat{z}(t)$), allowing full CTC to be implemented as a post-processing operation. Options (4) and (5) have the benefit that CTC can be optimized after the observation, in the same manner as present-day ITFE processing. \section{\label{sSurvey}A Brief History of CTC in Radio Astronomy} We now present a brief review of the history of CTC in radio astronomy. We have chosen not to attempt a numerical comparison between the findings of these studies and findings presented in this paper. 
This is partially due to the difficulty of extracting and presenting the relevant data from each paper in a consistent way, but also because experimental results are limited by practical factors in the implementation (typically well documented in the papers) that have a large effect on the outcomes. We strongly encourage readers to instead consult these papers directly; our paper may aid the reader by providing context. Seminal work on canceling for radio astronomy appears in \citet{BarnbaumBradley1998}. This work addresses interference from radio stations in the 88--108~MHz~FM broadcast band. Their approach is feedback CTC using LMS with a reference signal obtained from a directional antenna pointed toward the source of the interference. They show theoretically that $\overline{\mbox{IRR}}_1 \approx (\mbox{INR}_d+1)^2$, which is consistent with the low-INR$_d$ regime result obtained in this paper (Equation~\ref{eaLMS1l}). The authors present experimental results that are consistent with their theoretical analysis. \citet{Ellingson2002} reports experiments using feedforward MMSE to mitigate interference from the L-band satellite navigation system GLONASS in the main beam of a 3~m dish using the orthogonal linear polarization as a reference signal (thus, INR$_d$=INR$_x$; this configuration precludes use for astronomy, since $f(\tau)$ is significant). Results indicated IRR$>$IRR$_{req}$. Also, this work identifies the high-INR$_d$ relationship $\mbox{IRR}= L\cdot\mbox{INR}_d$, obtained as a special case in this paper (Equations~\ref{eaIRR2h} and \ref{eaIRR1h}). \citet{Poulson03} reports experiments using LMS to mitigate GLONASS received through sidelobes of the Green Bank Telescope using a reference signal obtained from a separate 3.6-m reflector antenna tracking the interferer. IRR$>$IRR$_{req}$ is apparent despite challenges in setting the LMS step gain $\mu$ and mitigating non-stationarity in $g(\tau)$.
\citet{Kesteven+2005} demonstrate that interference from a digital TV station at 675~MHz can be sufficiently suppressed to facilitate productive pulsar observations. Their work also employs feedback architecture with an auxiliary antenna, but they use a different method for computing ${\bf w}$ that is similar to the reduced-complexity method of Section~\ref{sRC}. They include theoretical analysis showing $\overline{\mbox{IRR}}_2 \approx \mbox{INR}_d+1$, which again is consistent with the low-INR$_d$ regime result obtained in this paper (Equation~\ref{eaLMS2l}). The fact that \citet{BarnbaumBradley1998} perform analysis in terms of $\overline{\mbox{IRR}}_1$ and \citet{Kesteven+2005} perform analysis in terms of $\overline{\mbox{IRR}}_2$ explains why these two similar techniques should yield such dramatically different IRR performance: We now see that the issue is simply that they used different performance metrics. Also, we note that neither work identifies the fact that their analysis is limited to the low INR$_d$ regime, and that IRR in the high INR$_d$ regime is significantly different, as explained in Section~\ref{sFA}. An important finding in these studies, and a recurring theme in this paper, is the need for large INR$_d$ in order to effectively suppress weak interference. An approach that addresses this problem is \emph{parametric estimation and subtraction} (PES). PES takes advantage of the fact that essentially all communications, radar, and navigation signals are comprised of modulated sinusoidal carriers which can be modeled as \begin{equation} z(t) = A(t) \cos\left[ \omega_c t + \omega_{\Delta}(t)\cdot t + \theta(t) \right] \label{ePESmodel} \end{equation} where $A(t)$, $\omega_{\Delta}(t)$, and $\theta(t)$ are parameters that vary slowly relative to the period of the carrier $2\pi/\omega_c$. This makes it possible to estimate these parameters; in fact, the process of estimating these parameters is essentially demodulation. 
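To make this concrete, the following sketch applies the estimate-and-subtract idea to the simplest instance of Equation~\ref{ePESmodel}: a constant-parameter carrier in noise, represented at complex baseband. The carrier frequency is assumed known here (in practice it would itself be estimated, e.g., from a spectral peak), and all parameter values are illustrative assumptions rather than values from any experiment discussed in this paper.

```python
import numpy as np

# Illustrative PES sketch: a constant-parameter carrier in noise, at complex
# baseband. Carrier frequency is assumed known; amplitude and phase are the
# "waveform parameters" to be estimated (i.e., demodulation).
rng = np.random.default_rng(0)
L = 4096
k = np.arange(L)
omega = 2 * np.pi * 37 / L                 # carrier placed on an exact DFT bin
A, theta = 1.0, 0.7                        # true (unknown) amplitude and phase
z = A * np.exp(1j * (omega * k + theta))   # interference waveform
noise = 0.1 * (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2)
x = z + noise                              # primary channel, INR = +20 dB

# "Demodulation": project onto the known carrier to estimate A*exp(j*theta).
c_hat = np.mean(x * np.exp(-1j * omega * k))

# Synthesize a noise-free interference estimate and subtract it.
z_hat = c_hat * np.exp(1j * omega * k)
residual = x - z_hat

irr = np.mean(np.abs(z) ** 2) / np.mean(np.abs(z - z_hat) ** 2)
print(f"IRR = {10 * np.log10(irr):.1f} dB")
```

Because the synthesized $\hat{z}$ is noise-free, the subtraction injects no reference-channel noise; the residual is limited only by the parameter estimation error, which shrinks with increasing $L$.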
Once waveform parameters are estimated, it is possible to synthesize a noise-free interference estimate using Equation~\ref{ePESmodel}, which may then serve as $\hat{z}(t)$ directly, or be used as $d(t)$ in a feedforward canceler if correction for additional effects (e.g., $g(\tau)$) is required. PES is particularly effective against interference from modern communications systems, where the ``finite alphabet'' property of digital modulations greatly aids in waveform parameter estimation. When applicable, PES has three compelling advantages: First, INR$_d$ is not directly limited by the received strength of the interference. Second, there is no ingress of astronomy into the reference channel; i.e., $f(\tau)=0$, thereby ameliorating a primary toxicity concern. Third, an external reference signal (i.e., from an auxiliary antenna) is not required. The principal disadvantage of PES is that the technique is sensitive to the details of the waveform, including the stationarity of the waveform parameters, unlike techniques in which the reference signal is obtained from a reference antenna. In \citet{EBB00}, a feedforward canceler using PES is used to mitigate interference from GLONASS from Australia Telescope Compact Array observations of a spectral line at $1612.15$~MHz. Despite INR$_x\ll 1$, IRR in the range 20~dB to 25~dB is achieved. Other studies involving similar PES-type cancelers include \citet{Roshi02} for analog (NTSC) broadcast television; \citet{EH03}, for L-band air surveillance radar; \citet{Lee08}, addressing a wide variety of analog and digital interference waveforms; \citet{Nigra+2010} for the US Global Positioning System (GPS); and \citet{E20-STSA} for VHF-band US weather radio. \section{\label{sConc}Conclusions} The studies cited in the previous section reach essentially the same top-level conclusion: CTC shows promise, but work is incomplete and there are a variety of problems remaining to be solved.
These problems fall in two broad categories: (1) Algorithm design (What is the appropriate algorithm, and how to anticipate levels of performance); and (2) Implementation issues remaining to be understood, quantified, and solved. This paper is an attempt to gain a comprehensive understanding of the first category of problems, and has identified some key elements in the second category of problems. We have identified feedforward MMSE, including the reduced complexity version of Section~\ref{sRC}, as a good starting point for development of an operational CTC capability for radio astronomy, and we have demonstrated that this strategy can plausibly meet the requirements for the ``look through'' capability envisioned in Sections~\ref{sIntro} and \ref{sHMCR}. Along the way we have defined the relevant and useful performance metrics $\overline{\mbox{IRR}}_1$, $\overline{\mbox{IRR}}_2$, and NIR. Finally, we have confirmed and quantified the importance of high INR$_d$ for effective CTC, and identified several strategies by which this can be achieved even in scenarios where INR$_x$ is low. \acknowledgments This paper is based upon work supported in part by the National Science Foundation under Grant ECCS-2029948. \appendix \section{Interference Rejection Ratio} \label{aIRR} In this appendix we present expressions for the interference rejection ratios $\overline{\mbox{IRR}}_1$ and $\overline{\mbox{IRR}}_2$, defined in Section~\ref{ssTP}. In Section~\ref{asMMSE}, these expressions are derived for feedforward MMSE for the special case of a length-1 filter ($M=1$) and a single narrowband interferer. In Section~\ref{asMMSEM}, empirical expressions are proposed for the $M>1$ case. In Section~\ref{asLMS}, expressions are derived for LMS with $M=1$ and a single narrowband interferer. ~\\ \subsection{Feedforward MMSE, $M=1$} \label{asMMSE} For notational convenience let us define $x[k]=x(kT_S)$, $z[k]=z(kT_S)$, and so on. 
As in Section~\ref{sOTDC}, we assume $s(kT_S)$ is negligible in this analysis; i.e., \begin{equation} x[k] = z[k] + n[k] \end{equation} For convenience and without loss of generality, $z[k]$ is assumed to have unit time-average power and $n[k]$ is assumed to be complex white Gaussian noise (WGN) with variance $\sigma_n^2 = 1/\mbox{INR}_x$. We further assume $g(\tau)$ is a complex-valued constant with phase $\theta$ such that \begin{equation} d[k] = \sqrt{\mbox{INR}_d}~e^{j\theta} z[k] + u[k] \end{equation} where $j=\sqrt{-1}$ and $u[k]$ is unit power complex WGN. In feedforward MMSE, we have \begin{equation} \hat{z}[k] = {\bf w}^H {\bf d}[k] \label{eAppAzhat} \end{equation} where ${\bf w}$ is the solution to \begin{equation} {\bf R}{\bf w} = {\bf r} \end{equation} In the context of stochastic analysis, time averages are more appropriately expressed as expectations over $k$. Thus, Equations~\ref{eCorr-R} and \ref{eCorr-r} of Section~\ref{sOTDC} become \begin{equation} {\bf R}=E\{{\bf d}[k]{\bf d}^H[k]\} \end{equation} \begin{equation} {\bf r}=E\{x^*[k]{\bf d}[k]\} \end{equation} respectively. In the special case of $M=1$ and asymptotically large $L$, we have: \begin{equation} {\bf R} = E\{d[k]d^*[k]\} = \mbox{INR}_d + 1 \end{equation} \begin{equation} {\bf r} = E\{x^*[k] d[k] \} = E\{(z^*[k]+n^*[k])(\sqrt{\mbox{INR}_d}~e^{j\theta}z[k] + u[k]) \} = \sqrt{\mbox{INR}_d}~e^{j\theta} \end{equation} which, being $1\times 1$, we shall henceforth refer to simply as ``$R$'' and ``$r$'', respectively. Subsequently, \begin{equation} {\bf w} = \frac{\sqrt{\mbox{INR}_d}~e^{j\theta}}{\mbox{INR}_d + 1} \end{equation} which, also being $1\times 1$, we henceforth refer to simply as ``$w$''. 
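As a numerical spot check (not part of the derivation), the closed form for $w$ can be compared against $w=r/R$ computed from sample correlations; the parameter values below are arbitrary:

```python
import numpy as np

# Spot check of the closed-form w for M = 1 feedforward MMSE (a sketch;
# parameter values are arbitrary). R and r are the scalar sample correlations.
rng = np.random.default_rng(1)
L = 200_000
INR_d, INR_x, theta = 5.0, 2.0, 0.9

def cn(size):
    """Unit-power circular complex white Gaussian noise."""
    return (rng.standard_normal(size) + 1j * rng.standard_normal(size)) / np.sqrt(2)

z = cn(L)                                             # unit-power interference
x = z + cn(L) / np.sqrt(INR_x)                        # primary channel
d = np.sqrt(INR_d) * np.exp(1j * theta) * z + cn(L)   # reference channel

R = np.mean(np.abs(d) ** 2)       # sample estimate of E{d d*}
r = np.mean(np.conj(x) * d)       # sample estimate of E{x* d}
w = r / R

w_theory = np.sqrt(INR_d) * np.exp(1j * theta) / (INR_d + 1)
print(abs(w - w_theory))          # small for large L
```

For large $L$ the sample-based $w$ agrees closely with the closed form, and $R$ approaches $\mbox{INR}_d+1$ as derived above.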
Using these findings in Equation~\ref{eAppAzhat}, we obtain: \begin{eqnarray} \hat{z}[k] & = & w^*d[k] \\ & = & \frac{\sqrt{\mbox{INR}_d}~e^{-j\theta}}{\mbox{INR}_d + 1} \left ( \sqrt{\mbox{INR}_d}~e^{j\theta} z[k] + u[k] \right ) \\ & = & \frac{\mbox{INR}_d}{\mbox{INR}_d + 1} z[k] + \frac{\sqrt{\mbox{INR}_d}}{\mbox{INR}_d + 1} \tilde{u}[k] \end{eqnarray} where $\tilde{u}[k]$ has been defined as $u[k]~e^{-j\theta}$ for notational convenience. Section~\ref{ssTP} describes two possible definitions of interference rejection ratio; namely IRR$_1$ (Equation~\ref{eIRR1Def}) and IRR$_2$ (Equation~\ref{eIRR2Def}). Let us begin with IRR$_2$. In this case we define: \begin{equation} \overline{\mbox{IRR}}_2 = \frac{E\{|z[k]|^2\}}{E\{|z[k]-\hat{z}[k]|^2 \}} \label{eIRR2BarDef} \end{equation} The distinction between IRR$_2$ and $\overline{\mbox{IRR}}_2$ is important: IRR$_2$ is a measurable outcome from a single trial, whereas $\overline{\mbox{IRR}}_2$ is a statistic determined from all trials. The latter can be defined in multiple ways; we choose Equation~\ref{eIRR2BarDef} because it facilitates the simple derivation below (alternative definitions lead to much more difficult analysis), and also because Equation~\ref{eIRR2BarDef} is not significantly biased by the intermittent spuriously large values of IRR that are encountered in experiments in which the interferer is a deterministic signal with slowly-varying waveform parameters. Since we earlier specified $z[k]$ to have unit time-average power, the numerator of Equation~\ref{eIRR2BarDef} is 1.
In the denominator, we find: \begin{eqnarray} E\{|z[k]-\hat{z}[k]|^2 \} & = & E\left \{ \left | z[k]- \frac{\mbox{INR}_d}{\mbox{INR}_d + 1} z[k] - \frac{\sqrt{\mbox{INR}_d}}{\mbox{INR}_d + 1} \tilde{u}[k] \right |^2 \right \} \\ & = & \left ( 1-\frac{\mbox{INR}_d}{\mbox{INR}_d + 1} \right )^2 + \frac{{\mbox{INR}_d}}{(\mbox{INR}_d + 1)^2} \\ & = & \frac{1}{(\mbox{INR}_d+1)^2} + \frac{\mbox{INR}_d}{(\mbox{INR}_d+1)^2} \\ & = & \frac{1}{\mbox{INR}_d+1} \end{eqnarray} Therefore \begin{equation} \overline{\mbox{IRR}}_2 = {\mbox{INR}_d + 1}~~~\mbox{(asymptotically large $L$)} \label{eq:IRR_approx} \end{equation} As expected, $\overline{\mbox{IRR}}_2$=1 for INR$_d=0$, and $\overline{\mbox{IRR}}_2\rightarrow\infty$ for INR$_d\rightarrow\infty$. Note also that $\overline{\mbox{IRR}}_2$ under these assumptions is independent of INR$_x$, since any limitation due to finite INR$_x$ is made irrelevant by the unlimited observation time ($L$). Now we wish to account for the fact that $R$ and $r$ must be estimated from a limited number of samples; i.e., potentially small $L$. To begin, note that the quality of the estimate of $r$ depends on both INR$_d$ and INR$_x$, whereas the quality of the estimate of $R$ depends only on INR$_d$. With this in mind, let us assume that INR$_d$ is large enough that performance is limited primarily by the quality of the estimate of $r$; i.e., that the quality of estimation of $R$ has negligible effect in comparison.
The quantity $r=E\{x^*[k] d[k]\}$ is estimated from $L$ samples as follows: \begin{equation} r = \frac{1}{L} \sum_{k=1}^L x^*[k]d[k] \end{equation} Substituting for $x[k]$ and $d[k]$ we find: \begin{eqnarray} r & = & \frac{1}{L} \sum_{k=1}^L (z^*[k] + n^*[k])(\sqrt{\mbox{INR}_d}~e^{j\theta}z[k] + u[k] ) \\ & = & \frac{1}{L} \sum_{k=1}^L \sqrt{\mbox{INR}_d}~e^{j\theta} |z[k]|^2 + \frac{1}{L} \sum_{k=1}^L n^*[k]u[k] + \frac{1}{L} \sum_{k=1}^L \sqrt{\mbox{INR}_d}~ e^{j\theta}z[k] n^*[k] + \frac{1}{L} \sum_{k=1}^L z^*[k]u[k] \end{eqnarray} Since we previously set the variance of $z[k]$ to one, the first term reduces to $\sqrt{\mbox{INR}_d}~e^{j\theta}$. The second term is negligible since $n[k]$ and $u[k]$ are uncorrelated. The last two terms can be approximated as statistically-independent Gaussian random variables with zero mean and variances $\mbox{INR}_d/(L \cdot \mbox{INR}_x)$ and $1/L$, respectively. Therefore, the sum of the last two terms can be approximated as a Gaussian random variable $\tilde{v}$ with zero mean and variance $\mbox{INR}_d/(L \cdot \mbox{INR}_x) + 1/L$. 
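These variance claims can be spot-checked by Monte Carlo; a minimal sketch with arbitrary parameter values:

```python
import numpy as np

# Monte Carlo spot check of the variances of the last two (cross) terms in the
# expansion of the sample estimate of r. Parameter values are arbitrary.
rng = np.random.default_rng(2)
trials, L = 4000, 64
INR_d, INR_x, theta = 10.0, 2.0, 0.3

def cn(size):
    """Unit-power circular complex white Gaussian noise."""
    return (rng.standard_normal(size) + 1j * rng.standard_normal(size)) / np.sqrt(2)

t_zn = np.empty(trials, dtype=complex)   # (1/L) sum sqrt(INR_d) e^{j theta} z[k] n*[k]
t_zu = np.empty(trials, dtype=complex)   # (1/L) sum z*[k] u[k]
for i in range(trials):
    z, u = cn(L), cn(L)
    n = cn(L) / np.sqrt(INR_x)
    t_zn[i] = np.mean(np.sqrt(INR_d) * np.exp(1j * theta) * z * np.conj(n))
    t_zu[i] = np.mean(np.conj(z) * u)

print(np.var(t_zn), INR_d / (L * INR_x))   # these should agree
print(np.var(t_zu), 1.0 / L)               # and these
```

The empirical variances agree with $\mbox{INR}_d/(L\cdot\mbox{INR}_x)$ and $1/L$ to within sampling error.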
Thus, we may interpret $r$ as a random variable: \begin{equation} r = \sqrt{\mbox{INR}_d}~e^{j\theta} + \tilde{v} \end{equation} Subsequently, the revised expression for $w$ as a random variable which accounts for limited number of samples $L$ is \begin{equation} w = \frac{r}{R} = \frac{\sqrt{\mbox{INR}_d}~e^{j\theta} + \tilde{v}}{\mbox{INR}_d + 1} \end{equation} and the associated expression for the interference estimate is \begin{equation} \hat{z}[k] = w^* d[k] = \frac{\sqrt{\mbox{INR}_d}~e^{-j\theta} + \tilde{\nu}}{\mbox{INR}_d + 1} \left ( \sqrt{\mbox{INR}_d}~e^{j\theta} z[k] + u[k] \right ) \end{equation} and the denominator of $\overline{\mbox{IRR}}_2$ becomes \begin{eqnarray} E \{ |z[k] - \hat{z}[k] |^2\} & = & E \left \{ \left |z[k] - \frac{\sqrt{\mbox{INR}_d}~e^{-j\theta} + \tilde{\nu}}{\mbox{INR}_d + 1} \left ( \sqrt{\mbox{INR}_d}~e^{j\theta} z[k] + u[k] \right ) \right |^2 \right \} \label{ea1IRR2d} \\ & = & E \left \{ \left |z[k] \left ( 1- \frac{\mbox{INR}_d}{\mbox{INR}_d + 1} \right ) - \frac{\sqrt{\mbox{INR}_d}}{\mbox{INR}_d + 1}~e^{j\theta} z[k]~\tilde{\nu} \ldots \right. \right .\nonumber \\ & & ~~~~~\left . \left . 
-\frac{\sqrt{\mbox{INR}_d}}{\mbox{INR}_d + 1}~e^{-j\theta} u[k] - \frac{\tilde{\nu}}{\mbox{INR}_d + 1}u[k] \right |^2 \right \} \end{eqnarray} Neglecting terms corresponding to correlations between uncorrelated noise waveforms, we obtain \begin{equation} E \{ |z[k] - \hat{z}[k] |^2\} = \left ( 1- \frac{\mbox{INR}_d}{\mbox{INR}_d + 1} \right )^2 + \left (\frac{\mbox{INR}_d}{\mbox{INR}_d + 1}\right)^2 \frac{1}{L\cdot \mbox{INR}_x} + \frac{\mbox{INR}_d}{L(\mbox{INR}_d+1)^2} +\frac{\mbox{INR}_d}{(\mbox{INR}_d+1)^2} \label{aIRR_eIRR2s1} \end{equation} Thus we obtain \begin{equation} \overline{\mbox{IRR}}_2 = \left[ \left ( 1- \frac{\mbox{INR}_d}{\mbox{INR}_d + 1} \right )^2 + \left (\frac{\mbox{INR}_d}{\mbox{INR}_d + 1}\right)^2 \frac{1}{L\cdot \mbox{INR}_x} + \frac{\mbox{INR}_d}{L(\mbox{INR}_d+1)^2} +\frac{\mbox{INR}_d}{(\mbox{INR}_d+1)^2} \right ]^{-1} \end{equation} which simplifies to \begin{equation} \overline{\mbox{IRR}}_2 = \frac{\mbox{INR}_x L (\mbox{INR}_d+1)^2} {\mbox{INR}_x L (\mbox{INR}_d+1) + \mbox{INR}_d(\mbox{INR}_d+\mbox{INR}_x)} \label{IRR2} \end{equation} This yields Equation~\ref{eq:IRR_approx} as expected when either $L\rightarrow \infty$ or INR$_x\rightarrow \infty$. Of particular interest is the result in the high- and low-INR$_d$ regimes. Note: \begin{equation} \overline{\mbox{IRR}}_2 \rightarrow \mbox{INR}_x L ~~~\mbox{for INR$_d \gg$ INR$_x L$} \label{eaIRR2h} \end{equation} \begin{equation} \overline{\mbox{IRR}}_2 \rightarrow \mbox{INR}_d +1 ~~~\mbox{for INR$_d \ll$ INR$_x L$} \label{eaIRR2l} \end{equation} Now we consider the alternative definition $\overline{\mbox{IRR}}_1$, which for $M=1$ is \begin{equation} \overline{\mbox{IRR}}_1 = \frac{E\{|z[k]|^2\}}{E\{|z[k]-w^* g\, z[k]|^2 \}} \end{equation} where $g$ is the complex gain of the interference in the reference channel; i.e., $d[k] = g z[k] + u[k]$.
From previous work we see that we may represent this quantity as $g=\sqrt{\mbox{INR}_d}~e^{j\phi}$ where $\phi$ is an independent random variable analogous to $\theta$. Assuming for the moment that $z[k]$ is narrowband, $z[k]$ may be factored from the expression yielding: \begin{equation} \overline{\mbox{IRR}}_1 = \frac{1}{E\{|1-w^* g|^2 \}} \label{eaIRR1a} \end{equation} Following the same analysis as before, we obtain: \begin{equation} \overline{\mbox{IRR}}_1 = \frac{\mbox{INR}_x L(\mbox{INR}_d+1)^2}{\mbox{INR}_x L + \mbox{INR}_d(\mbox{INR}_d+\mbox{INR}_x)} \label{IRR1} \end{equation} Like $\overline{\mbox{IRR}}_2$, this converges to a finite limit when either $L\rightarrow \infty$ or INR$_x\rightarrow \infty$; in this case the limit is the square of Equation~\ref{eq:IRR_approx}. Also \begin{equation} \overline{\mbox{IRR}}_1 \rightarrow \mbox{INR}_x L ~~~\mbox{for INR$_d \gg \sqrt{\mbox{INR}_x L}$} \label{eaIRR1h} \end{equation} However, \begin{equation} \overline{\mbox{IRR}}_1 \rightarrow (\mbox{INR}_d +1)^2 ~~~\mbox{for INR$_d \ll \sqrt{\mbox{INR}_x L}$} \label{eaIRR1l} \end{equation} The dramatically larger value of $\overline{\mbox{IRR}}_1$ relative to $\overline{\mbox{IRR}}_2$ in the small INR$_d$ regime is due to the fact that $\overline{\mbox{IRR}}_1$ considers only the change in the interference component of the output signal, whereas $\overline{\mbox{IRR}}_2$ interprets noise injection by the canceler as an additional increase in the interference in the output signal. Approximations made in the above derivation are validated by the agreement with simulation results shown in Section~\ref{sHMCP}. \subsection{Feedforward MMSE, $M>1$} \label{asMMSEM} Derivations for expressions valid for $M>1$ are not available. Instead we propose the following empirical expressions, which are informed by the $M=1$ analysis in the previous section. These expressions show excellent agreement with the $M>1$ simulation data (see e.g.
Figures~\ref{fSmallINRr1} and \ref{fSmallINRr2}) and reduce as expected to the $M=1$ expressions. For $\overline{\mbox{IRR}}_2$: \begin{equation} \overline{\mbox{IRR}}_2 \approx \frac{\mbox{INR}_x (L/M) (M\cdot\mbox{INR}_d+1)^2} {\mbox{INR}_x (L/M) (M\cdot\mbox{INR}_d+1) + M\cdot\mbox{INR}_d(M\cdot\mbox{INR}_d+\mbox{INR}_x)} \label{IRR2M} \end{equation} \begin{equation} \overline{\mbox{IRR}}_2 \rightarrow \mbox{INR}_x L/M ~~~\mbox{for INR$_d \gg$ INR$_x L$} \label{eaIRR2hM} \end{equation} \begin{equation} \overline{\mbox{IRR}}_2 \rightarrow M\cdot\mbox{INR}_d +1 ~~~\mbox{for INR$_d \ll$ INR$_x L/M^2$} \label{eaIRR2lM} \end{equation} For $\overline{\mbox{IRR}}_1$: \begin{equation} \overline{\mbox{IRR}}_1 \approx \frac{\mbox{INR}_x L(M\cdot\mbox{INR}_d+1)^2}{\mbox{INR}_x L + M^2\cdot\mbox{INR}_d(\mbox{INR}_d+\mbox{INR}_x)} \label{IRR1M} \end{equation} \begin{equation} \overline{\mbox{IRR}}_1 \rightarrow \mbox{INR}_x L ~~~\mbox{for INR$_d \gg$ INR$_x L$} \label{eaIRR1hM} \end{equation} \begin{equation} \overline{\mbox{IRR}}_1 \rightarrow (M\cdot\mbox{INR}_d +1)^2 ~~~\mbox{for INR$_d \ll$ INR$_x L$ and INR$_d \ll L/M^2$} \label{eaIRR1lM} \end{equation} \subsection{LMS, $M=1$} \label{asLMS} The difference between the theoretical IRR of $M=1$ LMS and $M=1$ feedforward MMSE is due to jitter in $w$ caused by the iterative update controlled by $\mu$. For this analysis, we represent $w$ as the random variable $\frac{\sqrt{\mbox{INR}_d}~e^{j\theta}}{\mbox{INR}_d+1} + e$, where $e$ represents the noise in the update after convergence, and is well-modeled as WGN. \cite{Widrow76} have shown that once the algorithm has converged, the power $\sigma_e^2$ of this noise is $\mu \mbox{MSE}_{min}$ where $\mbox{MSE}_{min}$ is the minimum mean square error associated with the ideal noise-free solution $w=r/R$.
Furthermore, \begin{equation} \mbox{MSE}_{min} = E\left \{ \left | x[k] - w^* d[k]\right |^2 \right \} = \frac{1}{\mbox{INR}_d+1} + \frac{1}{\mbox{INR}_x} = \frac{\mbox{INR}_d+1+\mbox{INR}_x}{(\mbox{INR}_d+1)\mbox{INR}_x} \label{eq:MSE} \end{equation} In this case we have for the denominator of $\overline{\mbox{IRR}}_2$, in lieu of Equation~\ref{ea1IRR2d}, \begin{eqnarray} E\{ |z[k]-\hat{z}[k]|^2\} & = & E \left \{ \left | z[k]-\left ( \frac{\sqrt{\mbox{INR}_d}~ e^{-j\theta}}{\mbox{INR}_d+1} + e^*\right) \left( \sqrt{\mbox{INR}_d}~e^{j\theta}z[k]+u[k] \right ) \right|^2\right \} \nonumber \\ & = & E\left \{ \left |z[k]-\frac{{\mbox{INR}_d}}{\mbox{INR}_d+1}z[k] -\frac{\sqrt{\mbox{INR}_d}~e^{-j\theta}}{\mbox{INR}_d+1}u[k] - \sqrt{\mbox{INR}_d}~e^{j\theta}z[k]e^* + e^*u[k]\right |^2 \right \} \nonumber \\ & = & \frac{1}{\mbox{INR}_d+1} + (\mbox{INR}_d+1)\sigma_e^2 \label{eq:lms_err1} \end{eqnarray} where we have ignored the term representing the product of uncorrelated noise sources. Next we substitute $\sigma_e^2=\mu \mbox{MSE}_{min}$ with $\mbox{MSE}_{min}$ coming from Equation~\ref{eq:MSE}: \begin{eqnarray} E\{ |z[k]-\hat{z}[k]|^2\} & = & \frac{1}{\mbox{INR}_d+1} + (\mbox{INR}_d+1) \mu \frac{\mbox{INR}_d+1+\mbox{INR}_x}{(\mbox{INR}_d+1)\mbox{INR}_x} \nonumber \\ & = & \frac{1}{\mbox{INR}_d+1} + \mu \frac{\mbox{INR}_d+1+\mbox{INR}_x}{\mbox{INR}_x} \label{eq:lms_err2} \end{eqnarray} Inserting Equation~\ref{eq:lms_err2} into the definition of $\overline{\mbox{IRR}}_2$ we have: \begin{equation} \overline{\mbox{IRR}}_2 = \left ( \frac{1}{\mbox{INR}_d+1} + \mu \frac{\mbox{INR}_d+1+\mbox{INR}_x}{\mbox{INR}_x} \right )^{-1} \end{equation} In the large- and small-INR$_d$ regimes, we find \begin{equation} \overline{\mbox{IRR}}_2 \rightarrow \frac{1}{\mu}\frac{\mbox{INR}_x}{\mbox{INR}_d} ~~~\mbox{for INR$_d \gg \sqrt{\mbox{INR}_x/\mu}$} \end{equation} \begin{equation} \overline{\mbox{IRR}}_2 \rightarrow \mbox{INR}_d+1 ~~~\mbox{for INR$_d \ll \sqrt{\mbox{INR}_x/\mu}$} 
\label{eaLMS2l} \end{equation} Now we consider the alternative definition $\overline{\mbox{IRR}}_1$. The analysis for $M=1$ is the same as in Section~\ref{asMMSE} up to Equation~\ref{eaIRR1a}. Following the same procedure, we find in the case of LMS: \begin{equation} \overline{\mbox{IRR}}_1 = \left ( \frac{1}{(\mbox{INR}_d+1)^2} + \mu\mbox{INR}_d \frac{\mbox{INR}_d+1+\mbox{INR}_x}{(\mbox{INR}_d+1)\mbox{INR}_x} \right ) ^{-1} \end{equation} In the large- and small-INR$_d$ regimes, we find \begin{equation} \overline{\mbox{IRR}}_1 \rightarrow \frac{1}{\mu}\frac{\mbox{INR}_x}{\mbox{INR}_d} ~~~\mbox{for INR$_d \gg \left(\mbox{INR}_x/\mu\right)^{1/3}$} \end{equation} \begin{equation} \overline{\mbox{IRR}}_1 \rightarrow (\mbox{INR}_d+1)^2 ~~~\mbox{for INR$_d \ll \left(\mbox{INR}_x/\mu\right)^{1/3}$} \label{eaLMS1l} \end{equation} We have verified these expressions using simulations under the same conditions as our feedforward MMSE experiments, and have found similarly excellent agreement. \section{Demonstration Using Real-World Data} \label{aWX} In this appendix, we demonstrate feedforward MMSE CTC for the mitigation of interference from a terrestrial radio broadcast signal. In this demonstration, we consider the weather radio service of the U.S. National Oceanic and Atmospheric Administration (NOAA). This service is provided by broadcast stations transmitting in 25~kHz channels with center frequencies 162.400~MHz through 162.550~MHz, as shown in Figure~\ref{awx_fFullband}. Each signal is analog narrowband frequency-modulated voice. Each signal is continuously present, and the instantaneous occupied bandwidth varies dynamically from nearly zero (effectively, a sinusoid) to most of the channel on millisecond timescales. These signals are representative of a great number of sources of terrestrial interference throughout the HF, VHF, and UHF wavebands.
Data was collected from the vicinity of Blacksburg, Virginia, USA using half-wavelength dipoles horizontal to the ground and separated by about 5~m (about 2.7 wavelengths). The signal from each dipole was converted to baseband and sampled at 2.4 million samples per second (MSPS) with 8 bits for ``I'' and 8 bits for ``Q'' using a software defined radio with coherent channels. One dipole was aligned in azimuth so as to maximize the 162.450~MHz signal, resulting in the spectrum shown in Figure~\ref{awx_fFullband}. This signal was filtered (as described below) and served as the reference channel. The other dipole was aligned in azimuth so as to \emph{minimize} the 162.450~MHz signal. This served as the primary channel input, simulating the signal received through a far sidelobe of a radio telescope.\footnote{The typical level for the far sidelobes of a large reflector is less than 0~dBi; see e.g., \cite{ITU-SA509}.} The sensitivity of the receivers is dominated by internal noise, so the noise in the primary and reference channels is uncorrelated. Raw samples were recorded and all subsequent processing was done off-line. First, the 162.450~MHz channel was extracted from the primary and reference channel inputs using a length-2048 Hamming-windowed filter having a total bandwidth of 25~kHz. The signals were not downsampled. The resulting primary channel input is shown in the leftmost panel of Figure~\ref{awx_fExample1}; note that the orientation of the dipoles has resulted in the primary channel INR (INR$_x$) being much lower than the reference channel INR (INR$_d$), as would normally be the case in an operational CTC system. In order to estimate INR, we estimated noise baselines for the primary and reference channels by log-linear fitting to the noise in the unoccupied 162.3125--162.3875~MHz and 162.5625--162.6375~MHz regions of the spectrum.
Using the extrapolated noise baseline to estimate noise power $N$ in the channel, interference power may then be estimated as the difference between total power in the channel $I+N$ and $N$. Using this method, we estimate INR$_x=+7.96$~dB and INR$_d=+27.32$~dB within the 25~kHz channel of interest. The primary channel is processed using $M=1$ feedforward MMSE. A single training period is used, but the length of the training period is varied. Training period lengths of $3\times 10^3$, $1\times 10^4$, and $1\times 10^5$ samples are considered, corresponding to 1.25~ms, 4.17~ms, and 41.7~ms, respectively. Because the signals were not downsampled after filtering, the corresponding values of $L$ are smaller by the factor (25~kHz)/(2.4~MSPS); i.e., $L=31$, 104, and 1042, respectively. The estimation filter is calculated once and held constant for the entire duration of the experiment. The resulting spectra are shown in Figure~\ref{awx_fExample1}. Clearly, CTC is highly effective in this scenario; we see in fact that $L=1042$ is sufficient to render the interference essentially undetectable. Also note that the noise is neither noticeably increased nor noticeably modified. Thus, NIR is too small to be reliably measured, which is consistent with the findings of Section~\ref{sHMCP}. Now we consider IRR relative to predictions using the theory presented in Appendix~\ref{aIRR}. IRR$_2$ is the relevant metric for this experiment since only the total power can be measured directly, and the interference power must be estimated using the extrapolated noise baseline as described earlier. Table~\ref{awx_tResults} summarizes the results. First, note that the three values of $L$ considered correspond to INR$_d$ which is high, moderate, and low relative to the INR$_x L$ criterion identified in Section~\ref{sHMCP} and Appendix~\ref{aIRR}. The second row of Table~\ref{awx_tResults} shows IRR$_2$ calculated using Equation~\ref{IRR2}, which is valid in all three cases. 
The remaining rows show IRR calculated from total power measurements as described previously. This works sufficiently well for $L=31$ and $L=104$, but fails for $L=1042$, as the difference between interference$+$noise and noise alone is then too small to measure reliably. In order to assess the effect of any non-stationarity of the difference channel ($g(\tau)$ in Equation~\ref{eRefChModel}), the experiment is performed with varying dataset lengths ranging from 11.52~s (the entire dataset) down to 1.44~s (the first one-eighth of the dataset), as indicated in the left column of Table~\ref{awx_tResults}. When the entire dataset is used, we see that the observed IRR is a few dB less than the theoretical value. However, the observed IRR increases monotonically with decreasing dataset length, suggesting a significant time-varying difference between the propagation channels experienced by the two dipoles over these time scales. Therefore, in this scenario, stationarity considerations require retraining on periods of less than a few seconds in order to approach the theoretical limit of Equation~\ref{IRR2}. This is quite reasonable, since even the $L=1042$ case corresponds to only 41.7~ms of training. Finally, we conducted an experiment to confirm the ``look through'' capability of CTC in this scenario, and to assess toxicity. This experiment is summarized in Figure~\ref{awx_fExample2}. The panel labeled ``(b)'' shows a simulated astrophysical spectral feature that was generated by filtering uncorrelated noise. This spectral feature is added to the original signal, with the result shown in the panel labeled ``(a)$+$(b)''. The rightmost panel shows the result after CTC with $L=1042$, using the entire dataset. Note that the spectral feature is recovered with no apparent distortion. 
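As a minimal simulation sketch of the $M=1$ feedforward MMSE processing used above (this is illustrative code, not the paper's processing chain; the INR values and sample counts are invented, linear-unit examples), one complex tap $w$ can be trained over $L$ samples of the reference channel and the resulting mean IRR$_2$ compared against the closed-form prediction of Equation~\ref{IRR2}:

```python
import numpy as np

# Illustrative M=1 feedforward MMSE cancellation (not the paper's code):
# train a single complex tap w over L samples of the reference channel and
# compare the mean IRR_2 against the closed-form prediction. INRs are in
# linear units and are invented example values.
rng = np.random.default_rng(7)
inr_x, inr_d, L, trials = 10.0, 100.0, 100, 2000

def cgauss(n):
    """Unit-power circular complex Gaussian samples."""
    return (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

res_pow = 0.0
for _ in range(trials):
    z = cgauss(L)                        # unit-power interference waveform
    x = np.sqrt(inr_x) * z + cgauss(L)   # primary channel, training segment
    d = np.sqrt(inr_d) * z + cgauss(L)   # reference channel, training segment
    w = np.vdot(d, x) / np.vdot(d, d)    # least-squares single tap
    # Expected post-cancellation interference plus leaked reference-noise
    # power for this trained tap:
    res_pow += abs(np.sqrt(inr_x) - w * np.sqrt(inr_d)) ** 2 + abs(w) ** 2
res_pow /= trials

irr2_sim = inr_x / res_pow
irr2_theory = (inr_x * L * (inr_d + 1) ** 2) / (
    inr_x * L * (inr_d + 1) + inr_d * (inr_d + inr_x))
```

Averaged over many training realizations, `irr2_sim` agrees with `irr2_theory` to within a few percent for these parameters.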
\begin{table} \centering \begin{tabular}{|l|rr|rr|r|} \hline & $L=31$ & $\Delta$ & $L=104$ & $\Delta$ & $L=1042$ \\ \hline INR$_d$/INR$_x$$L$ & $+4.45$~dB & ~ & $-0.81$~dB & ~ & $-10.82$~dB \\ IRR$_2$ (Eq.~\ref{IRR2}) & $+21.52$~dB & ~ & $+24.68$~dB & ~ & $+26.98$~dB \\ IRR (Obs.), $11.52$~s & $+17.32$~dB & $-4.20$~dB & $+22.01$~dB & $-2.67$~dB & $>+25.4$~~dB \\ IRR (Obs.), ~~$5.76$~s & $+17.76$~dB & $-3.76$~dB & $+23.13$~dB & $-1.55$~dB & $>+25.4$~~dB \\ IRR (Obs.), ~~$2.88$~s & $+17.91$~dB & $-3.61$~dB & $+23.24$~dB & $-1.44$~dB & $>+25.4$~~dB \\ IRR (Obs.), ~~$1.44$~s & $+18.44$~dB & $-3.08$~dB & $+24.62$~dB & $-0.06$~dB & $>+25.4$~~dB \\ \hline \end{tabular} \caption{Summary of predicted and observed CTC performance. ``$\Delta$'' is the ratio of observed performance to the predicted value from Equation~\ref{IRR2}.} \label{awx_tResults} \end{table} \section{Effect of Non-Stationarity Between the Primary and Reference Channels} \label{aIRRS} As noted in Section~\ref{sNS}, the performance of MMSE-based CTC is degraded if the impulse response $g(\tau)$, as defined in Equation~\ref{eRefChModel}, is not constant; i.e., non-stationary. In this case we have $g(\tau,t)$; i.e., the function $g(\tau)$ is itself a function of time. When this form of non-stationarity becomes significant, performance will depend on the specific way in which $g(\tau,t)$ is changing with $t$, and also on what fraction of the time between updates of the filter is used for training. However, it is possible to calculate the impact on IRR for $M=1$, as we shall now show. For this analysis we assume that the period between filter updates is equal to the $L$-sample training period. When $M=1$, $g(\tau,t)$ reduces in the data model to a single time-varying complex coefficient $g_0(t)$. 
Assuming the same normalization of signals as in Section~\ref{asMMSE}, the mean square variation due to non-stationarity will be \begin{equation} \epsilon^2 = E\left\{ \left| g_0(t)-\overline{g_0} \right|^2 \right\} \end{equation} where the expectation is taken over $L$ samples, and $\overline{g_0}$ is the mean of $g_0(t)$ over these samples. Considering first IRR$_2$, Equation~\ref{aIRR_eIRR2s1} becomes \begin{equation} E \{ |z[k] - \hat{z}[k] |^2\} = \left ( 1- \frac{\mbox{INR}_d}{\mbox{INR}_d + 1} \right )^2 + \left (\frac{\mbox{INR}_d}{\mbox{INR}_d + 1}\right)^2 \frac{1}{L\cdot \mbox{INR}_x} + \frac{\mbox{INR}_d}{L(\mbox{INR}_d+1)^2} +\frac{\mbox{INR}_d}{(\mbox{INR}_d+1)^2} +\epsilon^2 \end{equation} Subsequently Equation~\ref{IRR2} becomes: \begin{equation} \overline{\mbox{IRR}}_2 = \frac{\mbox{INR}_x L (\mbox{INR}_d+1)^2} {\mbox{INR}_x L (\mbox{INR}_d+1) + \mbox{INR}_d(\mbox{INR}_d+\mbox{INR}_x) + \epsilon^2\mbox{INR}_x L (\mbox{INR}_d+1)^2 } \label{eaSIRR2} \end{equation} Similarly, for IRR$_1$, Equation~\ref{IRR1} becomes \begin{equation} \overline{\mbox{IRR}}_1 = \frac{\mbox{INR}_x L(\mbox{INR}_d+1)^2}{\mbox{INR}_x L + \mbox{INR}_d(\mbox{INR}_d+\mbox{INR}_x) + \epsilon^2\mbox{INR}_x L (\mbox{INR}_d+1)^2 } \end{equation} We have tested these expressions against simulations in which $g_0(t)$ varies linearly in magnitude with $t$, and have found excellent agreement. Finally, let us consider how bad the non-stationarity must be to have a significant effect on IRR. For $\overline{\mbox{IRR}}_2$, the effect of the term containing $\epsilon$ in the denominator of Equation~\ref{eaSIRR2} is negligible if \begin{equation} \epsilon^2 \ll \frac{\mbox{INR}_x L (\mbox{INR}_d+1) + \mbox{INR}_d(\mbox{INR}_d+\mbox{INR}_x) }{\mbox{INR}_x L(\mbox{INR}_d+1)^2} \end{equation} The right side of this inequality is simply $1/\overline{\mbox{IRR}}_2$ evaluated for $\epsilon=0$. 
Therefore the effect of non-stationarity is negligible if \begin{equation} \epsilon^2 \ll \left( \left.\overline{\mbox{IRR}}_2\right|_{\epsilon=0} \right)^{-1} \end{equation} The same result is obtained for $\overline{\mbox{IRR}}_1$, so we may say generally that the effect of non-stationarity is negligible if \begin{equation} \epsilon^2 \ll \left( \left.\mbox{IRR}\right|_{\epsilon=0} \right)^{-1} \label{eIRRSs} \end{equation} Summarizing, the impact of non-stationarity is greatest when IRR is high, decreases with decreasing IRR, and is negligible when Inequality~\ref{eIRRSs} is satisfied. \bibliographystyle{aasjournal} \bibliography{main} %
Title: WALOP-South: A Four-Camera One-Shot Imaging Polarimeter for PASIPHAE Survey. Paper II -- Polarimetric Modelling and Calibration
Abstract: The Wide-Area Linear Optical Polarimeter (WALOP)-South instrument is an upcoming wide-field and high-accuracy optical polarimeter to be used as a survey instrument for carrying out the Polar-Areas Stellar Imaging in Polarization High Accuracy Experiment (PASIPHAE) program. Designed to operate as a one-shot four-channel and four-camera imaging polarimeter, it will have a field of view of $35\times 35$ arcminutes and will measure the Stokes parameters $I$, $q$, and $u$ in a single exposure in the SDSS-r broadband filter. The design goal for the instrument is to achieve an overall polarimetric measurement accuracy of 0.1 % over the entire field of view. We present here the complete polarimetric modeling of the instrument, characterizing the amount and sources of instrumental polarization. To accurately retrieve the real Stokes parameters of a source from the measured values, we have developed a calibration method for the instrument. Using this calibration method and simulated data, we demonstrate how to correct instrumental polarization and obtain 0.1 % accuracy in the degree of polarization, $p$. Additionally, we tested and validated the calibration method by implementing it on a table-top WALOP-like test-bed polarimeter in the laboratory.
https://export.arxiv.org/pdf/2208.12441
\begin{spacing}{1} % \keywords{polarization, polarimetric modeling, polarimetric calibration, linear polarimetry, optical polarization, wide-field polarimeter, one-shot polarimetry} {\noindent \footnotesize\textbf{*}Siddharth Maharana, \linkable{sidh@iucaa.in, sidh345@gmail.com} } \section{Introduction} Using the two upcoming Wide-Area Linear Optical Polarimeters\cite{walop_s_spie_2020} (WALOPs) as survey instruments, the Polar-Areas Stellar Imaging in Polarization High Accuracy Experiment (\textsc{pasiphae}) program aims to create the first large swath ($>$~4000 square degrees) optopolarimetric map of the sky, towards the Galactic polar regions. The main objectives of the \textsc{pasiphae} program include: (i) determining the 3D structure of the dust distribution along a large number of lines of sight with sub-degree plane-of-sky angular resolution, (ii) determining the plane-of-sky orientation of the magnetic fields associated with the multiple dust clouds that are generally seen along each line of sight\cite{Panopoulou_2019, vincent_decorrelation_paper, vincent_tomography, gina_lenz}, (iii) testing the physics of interstellar dust, especially concerning grain alignment and size distribution, and (iv) tracing the paths traversed by ultra-high-energy cosmic rays through the Galaxy. For an extensive description of the motivation and scientific objectives of the \textsc{pasiphae} program, we refer the reader to the \textsc{pasiphae} white paper by Tassis et al.\cite{tassis2018pasiphae}. The survey will be concurrently executed from the southern and northern hemispheres, using the WALOP-South instrument mounted on the 1~m telescope at the South African Astronomical Observatory’s (SAAO) Sutherland Observatory and WALOP-North mounted on the 1.3~m telescope at Skinakas Observatory, Greece, respectively. 
Table~\ref{techtable} captures the design goals of the WALOP instruments, which were decided based on the scientific goals of the \textsc{pasiphae} program as well as the current state of the art in polarimetric instrumentation. This has been discussed in the optical design paper of the WALOP-South instrument by Maharana et al.\cite{WALOP_South_Optical_Design_Paper}, hereafter referred to as Paper I. \begin{table} \centering \begin{tabular}{ccc} \hline \textbf{Sl. No}. & \textbf{Parameter} & \textbf{Technical Goal} \\ \hline 1 & Polarimetric Accuracy ($a$) & 0.1~\%\\ 2 & Polarimeter Type & Four Channel One-Shot Linear Polarimetry \\ 3 & Number of Cameras & 4 (One Camera for Each Arm)\\ 4 & Field of View & $30\times30$~arcminutes\\ 5 & Detector Size & $4k\times4k$ (Pixel Size = $15~{\mu}m$) \\ 6 & No. of Detectors & 4 \\ 7 & Primary Filter & SDSS-r \\ 8 & Imaging Performance & Close to seeing limited PSF \\ \hline \end{tabular} \caption{Design goals for WALOP instruments.} \label{techtable} \end{table} Both WALOPs are currently under development at the Inter-University Centre for Astronomy and Astrophysics (IUCAA), Pune, India. Of the two WALOP instruments, WALOP-South is scheduled first for commissioning in 2022. Paper I gives a complete description of the optical model of the WALOP-South instrument. The optical model of WALOP-North is very similar to that of WALOP-South; the differences are due to differences in the telescope optics as well as optomechanical constraints. Consequently, the polarimetric behavior of the two instruments is qualitatively identical. In this paper, we focus on WALOP-South to illustrate the polarimetric modeling and calibration approach and its validation. Integrated (and unresolved) light from most stars is unpolarized at optical wavelengths. 
However, as light from the stars passes through the intervening interstellar medium, dichroic extinction along the line of sight and anisotropic scattering by elongated grains aligned with the magnetic field in dust clouds introduce linear polarization ($p$) at the level of a few percent or less, depending on the dust column density and the geometry of the magnetic field\cite{andersson_review, Hensley_2021, Heiles_2000}. As the \textsc{pasiphae} survey is aiming to cover the Galactic polar regions, where the interstellar dust is very sparse\cite{Skalidis}, the expected amount of polarization is even smaller ($<0.3$~\%). We define polarimetric sensitivity ($s$) as the smallest value of, and change in, linear polarization that the instrument can measure, without correction for the instrumental polarization. $s$ is a measure of the internal noise and random systematics of the instrument. Polarimetric accuracy ($a$) is a measure of how close the predicted polarization of a source is to the real value after correcting for the instrument-induced polarization using calibration techniques. The technical goal for the WALOP instruments is to obtain polarimetric sensitivity ($s$) and accuracy ($a$) of 0.05~\% and 0.1~\%, respectively. To determine the intrinsic (on-sky) linear Stokes parameters of a source, $q$ and $u$, from the instrument-measured Stokes parameters $q_{m}$ and $u_{m}$, polarimetric calibrations are required to estimate and correct for instrument-induced polarimetric effects.% While polarimeters in astronomy achieving polarimetric sensitivity and accuracy of around $10^{-3}~\%$ in $p$ have been built, e.g., HIPPI-2 \cite{Bailey_2020} and DIPOL-2 \cite{DIPOL2}, they have very narrow fields of view (FoV), effectively capable only of point-source polarimetry. In narrow-FoV polarimeters, the rays are incident at nearly normal and/or azimuthally symmetric angles on all the optical elements, leading to very small levels of net instrumental polarization. 
In wide-field polarimeters like WALOPs, by contrast, rays from off-axis field points propagate through the optical system at oblique angles. As these angles become larger, the polarization effects due to the optics become stronger. All optical elements can introduce instrumental polarization in the following ways: (i) oblique angles of incidence on an optical surface lead to the preferential transmission of one orthogonal polarization over the other; this can be described through Fresnel coefficients, and (ii) retardance and consequent polarimetric cross-talk due to stress birefringence, as a result of thermal and mechanical stresses on the optics. Over and above these, the main source of instrumental polarization in WALOPs arises from the angle-of-incidence-dependent retardance of the half-wave retarder plates (HWP) in the Wollaston Prism Assembly (WPA) of the instruments. To estimate and correct for these effects so as to accurately recover the real Stokes parameters, we have developed a detailed method for calibration of the instruments. Temperature change and instrument flexure are known sources of variable instrumental polarization \cite{Tinyanont2018}. The optomechanical design of the instrument (Maharana et al., in preparation) has been created to ensure time-independent optical and polarimetric behavior of the instrument. For example, irrespective of the variation of these two factors, the optics holders and overall instrument model have been designed to (a) maintain the overall instrument optical alignment to tight tolerance levels and (b) minimize the stress birefringence on the lenses to levels below the instrument sensitivity (Anche et al., in preparation). % \par In this paper, we present the polarization modeling and calibration method for the WALOP-South instrument, which are described in Sections~\ref{WALOP_modelling_section} and \ref{Calibration_model_theortical}, respectively. 
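Mechanism (i) above, the growth of transmission diattenuation with angle of incidence, can be illustrated for a single uncoated air-to-glass surface using the Fresnel intensity transmittances; the refractive index below is an assumed illustrative value, not a WALOP design parameter:

```python
import math

# Illustrative only: Fresnel intensity transmittances T_s, T_p for an
# uncoated air-to-glass surface (n = 1.52 assumed), and the resulting
# diattenuation (T_p - T_s)/(T_p + T_s), which grows with incidence angle.
def diattenuation(theta_deg, n=1.52):
    ti = math.radians(theta_deg)
    tt = math.asin(math.sin(ti) / n)   # Snell's law: refraction angle
    rs = (math.cos(ti) - n * math.cos(tt)) / (math.cos(ti) + n * math.cos(tt))
    rp = (n * math.cos(ti) - math.cos(tt)) / (n * math.cos(ti) + math.cos(tt))
    Ts, Tp = 1.0 - rs ** 2, 1.0 - rp ** 2   # intensity transmittances
    return (Tp - Ts) / (Tp + Ts)

d10 = diattenuation(10.0)   # ~0.2 %: small near normal incidence
d30 = diattenuation(30.0)   # ~1.8 %: an order of magnitude larger
```

A single surface thus polarizes at the 0.1 to 1 percent level for the angle range relevant to wide-field systems, which is why many surfaces acting together produce the percent-level zero offsets discussed later.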
The instrument polarization model was developed using the instrument's optical model with the aid of polarization analysis features of the \href{https://www.zemax.com/}{Zemax}\textsuperscript{\textregistered} software, which was used previously in polarization modeling work for other telescopes and instruments such as the Thirty Meter Telescope by Anche et al.\cite{Ramya_TMT_polarization} and Daniel K. Inouye Solar Telescope by Harrington et al.\cite{dkist_polarization_modelling}. We have tested and validated the calibration method on the computer-based Zemax optical model of the WALOP-South instrument and have achieved better than 0.1~\% accuracy across most of the FoV. In this model, various on-sky effects such as photon noise, the variable transmission of sky as well as non-idealities in calibration optics were incorporated. Further, to test the calibration model on real WALOP-like polarimeter systems, a table-top test-bed polarimeter system was developed and assembled in the lab at IUCAA. Section~\ref{calibration_model_lab_test} contains details of the lab set-up and results obtained from it. Finally, Section~\ref{calibration_conclusion} contains conclusions and further discussions. \section{Polarimetric Modelling of WALOP-South}\label{WALOP_modelling_section} \subsection{Optical Model of the WALOP-South Instrument} Here we present an overview of the optical model of the WALOP-South instrument; we refer the reader to Paper I for a detailed description. Fig~\ref{WALOP-S_shaded_model} shows the 3D optical layout of the instrument. The optical system consists of the following assemblies: a collimator, a polarizer assembly, and four cameras (one for each channel). The collimator assembly begins from the telescope focal plane. Aligned along the z-axis, it creates a pupil image that is fed to the polarizer assembly. 
The polarizer assembly acts as the polarization analyzer system of the instrument and splits the pupil beam into four channels corresponding to $0^{\circ}$, $45^{\circ}$, $90^{\circ}$ and $135^{\circ}$ polarization angles, which are referred to as O1, O2, E1, and E2 beams, respectively. Additionally, this assembly folds and steers the O beams along the +y and -y directions and the E beams along the +x and -x directions. Each channel has its own camera to image the entire FoV on a $4k\times4k$ CCD detector. The obtained FoV of the instrument is $34.8\times34.8$~arcminutes, surpassing the design goal of $30\times30$~arcminutes. Table~\ref{op_design_summary} lists the key design parameters of the instrument's optical system. % \iffalse The complete optical design of the instrument is described in elaborate detail in the detail in Paper I. Fig~\ref{WALOP-S} shows the optical model of the WALOP-South instrument, designed in the Zemax software. The instrument's optics begins at the telescope focal plane where the collimator is the first optical sub-assembly, followed by the polarizer assembly which analyzes the light, and then there are four camera assemblies that image the four channels into four detectors. Fig~\ref{wpa_cartoon} shows the working of the polarizer assembly, and how the polarization state of the light is getting changed. \fi \begin{table}[htbp!] \centering \begin{tabular}{cc} \hline \textbf{Parameter} & \textbf{Design Value/Choice}\\ \hline Filter & SDSS-r \\ Telescope f/number & 16.0 \\ Camera f/number & 6.1 \\ Collimator Length & 700~mm\\ Camera Length & 340~mm \\ No of lenses in Collimator & 6 \\ No of lenses in Each Camera & 7 \\ Detector Size & $4096\times4096$, $15~{\mu}m$ pixel. 
\\ Plate scale at detector & 0.51"/pixel\\ \hline \end{tabular} \caption{Values of the key parameters of the WALOP-South optical design.} \label{op_design_summary} \end{table} The Polarizer Assembly consists of four sub-assemblies: (i) Wollaston Prism Assembly (WPA), (ii) Wire-Grid Polarization Beam-Splitter (PBS), (iii) Dispersion Corrector Prisms (DC Prisms), and (iv) Fold Mirrors. The WPA consists of two identical calcite Wollaston Prisms (WP), with a half-wave plate (HWP) and a BK7 glass wedge in front of each WP (Fig~\ref{pol_ass_cartoon}). The WPs have an aperture of $45\times80~mm$ and a wedge angle of $30^{\circ}$, resulting in a split angle of $11.4^{\circ}$ at $0.6~{\mu}m$ wavelength. The left WP has a HWP with fast axis at $0^{\circ}$ with respect to the instrument coordinate system (ICS) to separate the $0^{\circ}$ and $90^{\circ}$ polarizations, while the right WP has a HWP with fast axis at $22.5^{\circ}$ to split the $45^{\circ}$ and $135^{\circ}$ polarizations. The BK7 wedges at the beginning of the WPA, which share the incoming pupil beam equally, ensure that rays from off-axis objects in the FoV entering at oblique angles of incidence do not hit the interface between the WPs, which would lead to throughput loss as well as instrumental polarization from scattering at that surface. The PBSs act as beam selectors, allowing both the O1 and O2 beams to pass through while folding the E1 and E2 beams along the -x and +x directions. Fig~\ref{pol_ass_cartoon} shows the overall working idea of the WPA and PBS components of the polarizer assembly. The Dispersion Corrector (DC) Prisms are a pair of glass prisms present in the path of each of the four beams after the PBSs to correct for the spectral dispersion introduced by the WPA (Paper I). Additionally, mirrors placed at $\pm45^{\circ}$ in the y-z plane fold the O beams into the +y and -y directions to limit the total length of the instrument to 1.1~m from the telescope focal plane. 
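The quoted split angle can be roughly cross-checked (this check is ours, not from the paper) with the standard small-angle Wollaston approximation, split $\approx 2(n_o - n_e)\tan\alpha$, using assumed calcite indices near $0.6~{\mu}m$:

```python
import math

# Rough cross-check using the small-angle Wollaston-prism approximation,
# split ~ 2*(n_o - n_e)*tan(alpha). The calcite indices are assumed values
# near 0.6 um, so the result is indicative only.
n_o, n_e = 1.655, 1.485        # calcite ordinary/extraordinary indices (assumed)
alpha_deg = 30.0               # wedge angle quoted in the text
split_deg = math.degrees(2.0 * (n_o - n_e) * math.tan(math.radians(alpha_deg)))
# ~11.2 degrees, consistent with the quoted 11.4-degree split at 0.6 um
```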
\subsection{Mueller Matrix Modelling} The entire instrument's polarization behavior can be mathematically modeled as an Instrument Matrix ($M_{inst}$), similar to a Mueller matrix, as shown in Equation~\ref{mueller_matrix_instrument}. A brief description of $M_{inst}$ and its relation to the Mueller matrices of the instrument's four-camera optical system is presented in Appendix~\ref{instrument_matrix_appendix}. As we are interested in the normalized Stokes vectors, it can be rewritten in the form of Equation~\ref{mueller_matrix_instrument_normalized}. The first row of $m_{inst}$ has been ignored as it does not affect the measured Stokes parameters. The $m_{22}$, $m_{33}$ and $m_{44}$ terms are the \textit{polarimetric efficiencies} of the instrument which capture how each of the measured Stokes parameters scale with respect to their corresponding input values. The $m_{21}$, $m_{31}$ and $m_{41}$ terms are referred to as the polarimetric \textit{zero offsets} as they represent the measured Stokes parameters when the input is unpolarized. The terms $m_{23}$ and $m_{32}$ capture the instrument \textit{cross-talk} between linear Stokes parameters, i.e., how much of $u$ is converted into $q_{m}$, and $q$ into $u_{m}$, respectively. Likewise, the terms $m_{24}$ and $m_{34}$, and $m_{42}$ and $m_{43}$ capture the \textit{cross-talk} between the circular and linear polarizations. For an ideal polarimeter, the diagonal terms will be 1 and all other terms will be zero. For linear polarimeters, the fourth row concerned with $v_{m}$ is ignored. For most celestial objects, $v$ is usually at least an order of magnitude smaller than $q$ and $u$ at optical wavelengths and hence the fourth column can be ignored for linear polarimeters. 
% \begin{gather}\label{mueller_matrix_instrument} S_{m} = I_{m} \times \begin{bmatrix} 1 \\ q_{m} \\ u_{m} \\ v_{m} \end{bmatrix} = M_{inst}\times S = \begin{bmatrix} M_{11} & M_{12} & M_{13} & M_{14} \\ M_{21} & M_{22} & M_{23} & M_{24} \\ M_{31} & M_{32} & M_{33} & M_{34} \\ M_{41} & M_{42} & M_{43} & M_{44} \end{bmatrix} \times I \begin{bmatrix} 1 \\ q \\ u \\ v \end{bmatrix} \end{gather} \begin{gather} s_{m} = \begin{bmatrix} 1 \\ q_{m} \\ u_{m} \\ v_{m} \end{bmatrix} = m_{inst}\times s = \begin{bmatrix} m_{11} & m_{12} & m_{13} & m_{14} \\ m_{21} & m_{22} & m_{23} & m_{24} \\ m_{31} & m_{32} & m_{33} & m_{34} \\ m_{41} & m_{42} & m_{43} & m_{44} \end{bmatrix} \times \begin{bmatrix} 1 \\ q \\ u \\ v \end{bmatrix} \nonumber \\ = \begin{bmatrix} - & - & - & - \\ 1 \,\to\,q_{m} & q\,\to\,q_{m} & u\,\to\,q_{m} & v\,\to\,q_{m} \\ 1 \,\to\,u_{m} & q\,\to\,u_{m} & u\,\to\,u_{m} & v\,\to\,u_{m} \\ 1 \,\to\,v_{m} & q\,\to\,v_{m} & u\,\to\,v_{m} & v\,\to\,v_{m} \end{bmatrix} \times \begin{bmatrix} 1 \\ q \\ u \\ v \end{bmatrix} \label{mueller_matrix_instrument_normalized} \end{gather} \par For polarimeters with narrow FoV (few arcminutes or less), e.g. RoboPol\cite{robopol} and Impol\cite{impol}, the values of cross-talk terms are $\simeq$~0 and the polarimetric efficiency is $\simeq$~1. The main sources of instrumental polarimetric effect are the polarimetric zero offset terms $m_{21}$ and $m_{31}$. On sky, both these terms can be found by observing zero and partially-polarized standard stars. For polarimeters with non-negligible cross-talk terms, a more detailed calibration method is needed. A major source of cross-talk is the presence of mirror systems in the optical path before the polarization analyzer system, as is often the case with instruments mounted on side-ports of telescopes or on a Nasmyth focus. 
For such instruments, an accurate Instrument Matrix can be created by placing a calibration linear polarizer before any polarimetric effects are introduced (usually at the beginning of the instrument) \cite{instrumentation_book_springer,IRDIS_Calibration}. \subsection{Polarimetric Cross-talk and Zero-offset}\label{WALOP_modelling_section_overall} The measured Stokes parameters can be written in the form of Equations~\ref{calibration_eqn_q} and \ref{calibration_eqn_u}. In general, these can depend on all the intrinsic Stokes parameters of the source. Additionally, the coefficients in these equations are a function of the field position. To estimate the coefficients, the instrument can be made to observe sources with known states of polarization. \begin{equation}\label{calibration_eqn_q} q_{m} = m_{21} + m_{22}q + m_{23}u \end{equation} \begin{equation}\label{calibration_eqn_u} u_{m} = m_{31} + m_{32}q + m_{33}u \end{equation} \par Fig~\ref{WALOP-South Instrument Zero Polarization Map} plots the maps of polarimetric zero offsets for the $q_{m}$ and $u_{m}$ measurements, i.e., these are the measured Stokes parameters when the input to the system is unpolarized light. As expected, these resemble hyperbolic functions, since the differential Fresnel coefficients for orthogonal polarizations for curved surfaces such as lenses lead to similar spatial variation across the FoV \cite{moonlight_calibration_VLT}. The zero offsets are seen to be as large as a few percent in some parts of the field. Such relatively large values are expected, as the E and O beams follow different optical paths (Fig~\ref{WALOP-S_shaded_model}). 
In particular, (i) unlike the E beams, the O beams have a fold mirror in the path just after the PBS, and (ii) the transmission of the PBS for the O beams ($\sim$90~\%) is lower than the near-100~\% reflectivity for the E beams.% \par Fig~\ref{WALOP-South Instrument Polarization} shows the polarimetric efficiency and cross-talk maps for the $q_{m}$ and $u_{m}$ measurements. Figs~\ref{WALOP-South Instrument Polarization}~(a) and (c) are the measured Stokes parameter maps (zero-offset corrected) when the input polarization is $q = 1$, i.e., these are the maps of the $m_{22}$ and $m_{32}$ terms. For an ideal polarimeter, these should be 1 and 0 across the FoV. Figs~\ref{WALOP-South Instrument Polarization}~(b) and (d) are the measured Stokes parameter maps (zero-offset corrected) when the input polarization is $u = 1$, so these are maps of the $m_{23}$ and $m_{33}$ terms. For an ideal polarimeter, these should be 0 and 1 across the FoV. % \par As can be inferred from the plots, while the $m_{22}$ and $m_{23}$ terms are close to their ideal values of 1 and 0 across the whole field, the same is not true for the $m_{32}$ and $m_{33}$ terms. Both the $m_{32}$ and $m_{33}$ terms are well behaved, near 0 and 1, respectively, in the lower-left half of the FoV, but they start deviating rapidly in the upper-right half. In fact, in some places $m_{32}$ tends to 1 and $m_{33}$ to 0, i.e., all of $u_{m}$ is actually derived from $q$ with very little contribution from $u$. Consequently, we find that while $q_{m}$ depends mostly on $q$ and only weakly on $u$, $u_m$ has a strong dependence on both $q$ and $u$; i.e., the cross-talk terms are very significant for $u_m$. This dependence changes significantly across the FoV. Hence, we need to develop a calibration model that can create accurate mapping functions between the measured and real Stokes parameters across the FoV. 
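The behavior described above can be made concrete with a toy instance of Equations~\ref{calibration_eqn_q} and \ref{calibration_eqn_u}; the coefficient values below are invented purely for illustration, chosen to mimic a field point with significant $u_m$ cross-talk:

```python
# Toy instance of the measurement model q_m = m21 + m22*q + m23*u and
# u_m = m31 + m32*q + m33*u. All coefficient values are invented.
def measure(q, u,
            m21=0.02, m22=0.98, m23=0.05,    # q_m row: offset, efficiency, cross-talk
            m31=-0.01, m32=0.60, m33=0.45):  # u_m row: strong q -> u_m cross-talk
    q_m = m21 + m22 * q + m23 * u
    u_m = m31 + m32 * q + m33 * u
    return q_m, u_m

# Unpolarized input (q = u = 0) returns the zero offsets themselves:
offsets = measure(0.0, 0.0)
# A purely-q input leaks into u_m through the cross-talk term m32:
q_m, u_m = measure(0.01, 0.0)
```

With such coefficients, a source that is polarized only in $q$ still produces a sizable $u_m$, which is exactly why a field-dependent calibration model is required.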
The detailed working of this model is presented in Section~\ref{Calibration_model_theortical}. Due to the introduction of the two BK7 wedges just before the HWPs in the WPA (Fig~\ref{pol_ass_cartoon}), rays from the entire field enter the HWPs at oblique angles of incidence. This leads to non-half-wave retardance in the outgoing beam. In addition, owing to the difficulty of manufacturing the relatively large-aperture HWPs needed here, first-order HWPs have been used instead of zero-order HWPs. First-order HWPs are much more sensitive to oblique angles of incidence and show a sharper deviation from half-wave retardance. Appendix~\ref{HWP_modelling_section} contains the analytical polarimetric modeling, based on ray-tracing programs written in Python, of the HWPs in the WPA, which are the source of the observed cross-talk in the instrument. % \section{Theoretical Calibration Model of WALOP-South}\label{Calibration_model_theortical} Both night-time and solar astronomers have previously calibrated many instruments with high polarimetric cross-talk \cite{IRDIS_Calibration,Harrington_solar_Calibration}. The standard procedure involves creating a mapping function between the instrument-measured and real Stokes parameters of a source, often in the form of a matrix. For this, known polarized sources are needed to find coefficients such as those in Equations~\ref{calibration_eqn_qi} and \ref{calibration_eqn_ui}. We define $q_{i}$ and $u_{i}$ as the predicted Stokes parameters after applying the calibration model correction to the measured Stokes parameters. Due to the scarcity of on-sky calibration sources, a calibration linear polarizer (CLP) has been provided in the instrument before any polarization-inducing optics, including the lenses. Mounted on a motorized rotary stage, the CLP will be inserted into the optical path during calibration observations, and removed during the main science observations. 
The CLP will provide as input to the instrument linearly polarized light with different Electric Vector Position Angles (EVPA), i.e., $p \sim 1$ with different $q-u$ values. It is well known that the polarization introduced by a Cassegrain telescope optics system at its direct focus is at least an order of magnitude smaller than our target accuracy level of 0.1~\%, so the telescope can be treated as a non-polarizing component\cite{Sen_telescope_polarization}. Below we provide a prescription for developing a calibration model for WALOP-South which allows us to determine the coefficients of polynomial equations of the form given in Equations~\ref{calibration_eqn_qi} and \ref{calibration_eqn_ui}. We find that a second-order polynomial, without the ${q_m}{u_m}$ cross-term, provides a better fit to the data and consequently better polarimetric accuracy than linear equations. % \begin{equation}\label{calibration_eqn_qi} q_{i} = a_{1} + b_{1}q_{m} + c_{1}u_{m} + d_{1}q_{m}^{2} + e_{1}u_{m}^{2} \end{equation} \begin{equation}\label{calibration_eqn_ui} u_{i} = a_{2} + b_{2}q_{m} + c_{2}u_{m} + d_{2}q_{m}^{2} + e_{2}u_{m}^{2} \end{equation} \subsection{Calibration model} The WALOP-South FoV is divided into a grid of 12$\times$12 = 144 points, as shown in Fig~\ref{grid_points_pol_input}. For each grid point, the calibration model is developed independently. For this work, the \textit{polarization transmission} tool of the Zemax software was used to find the transmission at each of the four detectors as a function of polarization. As input, (i) unpolarized light and (ii) 15 states of 100\% linearly polarized light with different EVPA ($\theta$) and the same intensity were fed to the telescope+instrument, as shown in Fig~\ref{grid_points_pol_input}, and the corresponding transmissions at the four detectors were obtained. 
Further, during processing of the data, we introduce the following effects on all the obtained transmissions: (i) random sky transparency variation between 0.7 and 1, (ii) assuming a star magnitude of 12 and a sky brightness of 20 magnitudes per square arcsecond in the R band, we add the expected photometric noise (star + sky) to these transmissions, and (iii) the effect of the non-ideal nature of the CLP (in the WALOP instruments, we are using a Thorlabs polarizer whose extinction ratio is $>~5\times10^{3}$ in the R band: \url{https://www.thorlabs.com/thorproduct.cfm?partnumber=LPVISE2X2}). From the measured intensities at the four detectors for the 15 input states of fully polarized light, $q_{m}$ and $u_{m}$ are found, as given by Equations~\ref{q_definition_cal} and \ref{u_definition_cal}. We use least-squares curve-fitting tools in Python to fit Equations~\ref{calibration_eqn_qi} and \ref{calibration_eqn_ui} and obtain the coefficients. Fig~\ref{model_qu_fitting} shows the fit for grid point 1. To test the accuracy of the obtained calibration model, we need to verify that this model can predict the real Stokes parameters for partially polarized and unpolarized sources. A crucial prerequisite is the \textit{transmission model} of the instrument (described below in Section~\ref{transmission_model_appendix}), which accurately predicts the transmission at the detectors for any input Stokes vector to the telescope and instrument system. For each grid point, 100 known $q$ and $u$ states of partially polarized light, with $p$ up to 5~\% and random $\theta$, are given as input. Using the instrument \textit{transmission model}, we obtain $N_{0}$, $N_{1}$, $N_{2}$ and $N_{3}$, and subsequently $q_m$ and $u_m$, for these $q$ and $u$ values. Thereafter, we employ the calibration model to predict the input Stokes parameters, $q_i$ and $u_i$, from $q_m$ and $u_m$.
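The per-grid-point fit described above can be sketched with a simple least-squares solve. The sketch below is illustrative only: the cross-talk factors and noise levels are our own stand-ins for the Zemax-derived transmissions, not the actual WALOP-South values.

```python
import numpy as np

# Toy version of the per-grid-point calibration fit of Equations
# (calibration_eqn_qi/ui): q_i = a1 + b1*q_m + c1*u_m + d1*q_m**2 + e1*u_m**2.
# The 15 fully polarized CLP input states, as in the text.
theta = np.deg2rad(np.linspace(0.0, 168.0, 15))
q_in, u_in = np.cos(2 * theta), np.sin(2 * theta)

# Stand-in "measured" Stokes parameters with cross-talk and noise
# (hypothetical values, not the instrument's).
rng = np.random.default_rng(0)
q_m = 0.95 * q_in + rng.normal(0.0, 1e-3, theta.size)
u_m = 0.40 * q_in + 0.55 * u_in + rng.normal(0.0, 1e-3, theta.size)

# Design matrix of the second-order polynomial (no q_m*u_m cross-term).
A = np.column_stack([np.ones_like(q_m), q_m, u_m, q_m**2, u_m**2])
coef_q, *_ = np.linalg.lstsq(A, q_in, rcond=None)  # a1, b1, c1, d1, e1
coef_u, *_ = np.linalg.lstsq(A, u_in, rcond=None)  # a2, b2, c2, d2, e2

q_i, u_i = A @ coef_q, A @ coef_u  # calibration-model predictions
```

With the noise levels assumed here, the fitted polynomial recovers the input states to a few parts in a thousand.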
Although we have used 100 test sources, in real observations, 5-10 sources (these can be standard polarized stars on sky) with a near-uniform spread in the $q-u$ plane should be sufficient for testing the accuracy of the model. For each grid point, we find a constant difference between the input and calibration model predicted Stokes parameters: between $q$ and $q_i$, and between $u$ and $u_i$. That is, the calibration model created by using only fully polarized light is able to accurately predict all the coefficients in Equations~\ref{calibration_eqn_qi} and \ref{calibration_eqn_ui}, except for the $a_i$ parameters that are associated with unpolarized light. This is most likely due to over-fitting of Equations~\ref{calibration_eqn_qi} and \ref{calibration_eqn_ui}, since only fully polarized light is used to find the coefficients. This is corrected by calculating the mean difference between $q$ and $q_i$, and $u$ and $u_i$, of the 100 stars (to be referred to as $\Delta_q$ and $\Delta_u$ from here on) and then applying this correction to their predicted $q_i$ and $u_i$ values. $\Delta_q$ and $\Delta_u$ are field dependent and can be up to 1~\% at some field points. The model performance for $q$ and $u$ is then estimated for these 100 sources after correction. The performance of the calibration model at each grid point can be characterized by the following four parameters: ${\mu}_q$, ${\mu}_u$, ${\sigma}_q$ and ${\sigma}_u$. These are the mean offset (difference with respect to real values) and spread in the recovery of the $q$ and $u$ parameters of the sources. The parameters ${\sigma}_q$ and ${\sigma}_u$ can be considered as the accuracy of the model. Fig~\ref{quhist_corrected} shows histograms of the difference between the input and calibration model predicted Stokes parameters for grid point 1. The mean offset and spread associated with the recovery of each Stokes parameter are marked in the corresponding subplots.
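The zero-offset correction and the performance metrics described above amount to the following (a minimal sketch with hypothetical recovered-vs-true values for one grid point; the offset and scatter magnitudes are assumptions, not measured ones):

```python
import numpy as np

# Hypothetical recovered vs. true q for 100 partially polarized test sources
# at one grid point: a constant zero-offset plus small scatter.
rng = np.random.default_rng(1)
q_true = rng.uniform(-0.05, 0.05, 100)
q_pred = q_true + 0.008 + rng.normal(0.0, 2e-4, 100)

delta_q = np.mean(q_pred - q_true)   # Delta_q, the constant offset
q_corr = q_pred - delta_q            # offset-corrected predictions

mu_q = np.mean(q_corr - q_true)      # mean offset after correction
sigma_q = np.std(q_corr - q_true)    # spread, i.e. the calibration accuracy
```

By construction the mean offset vanishes after the correction, leaving ${\sigma}_q$ as the residual accuracy figure; the same computation applies to $u$.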
For grid point 1, using the calibration model, the real Stokes parameters $q$ and $u$ can be recovered with an accuracy of 0.01~\%. % \iffalse \begin{equation}\label{qm_model} q_m = \frac{I_{0} - I_{90}}{I_{0} + I_{90}} \end{equation} \begin{equation}\label{um_model} u_m = \frac{I_{45} - I_{135}}{I_{45} + I_{135}} \end{equation} \fi \subsection{Instrument Transmission Model} \label{transmission_model_appendix} The \textit{polarization transmission} tool in the Zemax software can only predict transmissions for fully polarized and unpolarized light. We therefore created a \textit{transmission model} for all four beams to accurately predict the fractional transmission value, $T_{k}$ ($k$ = 1, 2, 3, 4), of each camera for any state of input polarized light to the system. The transmission model is created using transmission data obtained from the Zemax software for the WALOP-South optical model for various states of polarized and unpolarized light, similar to those used for creating the calibration model. $T_{k}$ for any of the four beams can be written as a second-order polynomial function of the intrinsic $q$ and $u$ values of the source, as given by Equation~\ref{t_eqn}. % \begin{equation}\label{t_eqn} T_{k} = s_{k1} + s_{k2}q + s_{k3}u + s_{k4}q^{2} + s_{k5}u^{2} \end{equation} \par While the transmission model has been developed for all the 144 grid points (Fig~\ref{grid_points_pol_input}), we present here the results for grid point 1, which is representative of all other grid points. Fig~\ref{modelling1to4} shows the results (fits and residuals) of the transmission functions for the four beams. As can be seen, the fit is very good for both the polarized and unpolarized light beams, with a residual of 0.02~\% of the measured intensity. \subsection{Calibration Model Results} The calibration model's results for grid point 1 shown in Fig~\ref{quhist_corrected} are representative of the model's performance across the FoV.
We can recover the real Stokes parameters from the measured Stokes vector with high accuracy. The histograms and contour plots of ${\sigma}_q$ and ${\sigma}_u$ for all the 144 grid points over the entire FoV are shown in Fig~\ref{m12recovery}. As can be seen from the results, the accuracy of the model for both $q$ and $u$ is better than 0.1~\% across the FoV, barring the strip where the cross-talk from $q$ into $u_m$ approaches 1 and $u_{m}$ contains almost no contribution from $u$. This constitutes around 15~\% of the FoV area. Even in these regions, the accuracy of recovery is about 0.2~\% at worst, which is comparable to the expected photon-noise uncertainties of the faintest stars in the field. Figs~\ref{qm_coefficients} and \ref{um_coefficients} show the values of the various coefficients of the mapping functions (Equations~\ref{calibration_eqn_qi} and \ref{calibration_eqn_ui}) across the FoV. As can be seen, there are steep changes in the regions of high cross-talk.% \subsubsection{Discussion} The results from the previous section demonstrate that we can carry out high-accuracy linear polarimetry with the WALOP instruments across the FoV using the proposed calibration method. As Equations~\ref{calibration_eqn_qi} and \ref{calibration_eqn_ui} are second-order polynomial functions of ${q}_{m}$ and ${u}_{m}$, there is no degeneracy in the mapping of values from the measured ${q}_{m}$-${u}_{m}$ plane to the $q_i$-$u_i$ plane. In general, there is a larger error in the recovery of the $u$ parameter than in that of $q$ across the entire field. This is expected due to the field-dependent cross-talk introduced in the $u$ measurement by the HWP in the WPA. During the commissioning as well as the operational lifetime of the instruments, the calibration model will be created and updated at frequent intervals.
For creating the calibration model, a source/star of any polarization can be used, as the CLP in the optical path ensures that the input Stokes vector depends only on the CLP orientation angle. On the other hand, testing the accuracy of the calibration model requires multiple sources of known polarization, referred to as standard stars. As Figs~\ref{qm_coefficients} and \ref{um_coefficients} indicate, and as further analysis confirms, a calibration model based on 144 grid points cannot be used to interpolate the model parameters across the entire FoV of WALOP-South. There is a steep change in the value of the coefficients in regions associated with high cross-talk, leading to inaccurate interpolated values for the coefficients in Equations~\ref{calibration_eqn_qi} and \ref{calibration_eqn_ui}. Therefore, a finer spatial sampling of the FoV is required by increasing the number of grid points. This can be easily implemented using stellar fields with a high spatial density, such as star clusters that fill the entire FoV, e.g., as implemented by Clemens et al. for calibrating the Mimir instrument for the Galactic Plane Infrared Polarization Survey\cite{clemens_GPIPS_Calibration}. However, making observations of 5-10 individual standard stars for each grid point would require a substantial amount of telescope time. \par The calibration model described above assumes a flat spectrum in the SDSS-r filter for the calibrating source as well as the science targets. In the sky, stars can have significant differences in their spectral shape within the SDSS-r filter, owing to their different effective temperatures. The instrument performance for different source spectra and mitigation strategies are described in Appendix~\ref{specpol_appendix}.% \iffalse \begin{enumerate} \item \textit{Step 1}: Using observations through the CLP, parameters of the Equations~\ref{calibration_eqn_qi} and ~\ref{calibration_eqn_ui} are estimated.
This step provides accurate estimate of all the cross-talk and efficiency coefficients, but not the zero-offset terms ($a_i$). \item \textit{Step 2}: Observe 5-10 sources of known partial polarization or zero polarization, and accurately obtain the zero-offset terms ($a_i$) using the difference between the model predicted and real Stokes vector of the sources. \end{enumerate} \fi \subsection{On Sky Implementation of the Calibration Scheme} \par The main challenge in calibrating the WALOP polarimeters comes from their unprecedentedly large FoV. To calibrate extended-FoV instruments, a raster-scanning method is most often used, in which the calibration model is created for a grid of points and interpolated for the entire FoV; for example, this approach was used for RoboPol \cite{robopol}. Such an approach can work for a polarimeter requiring only a few grid points, i.e., if (i) it has a relatively small FoV of only a few arcminutes, and (ii) its polarimetric behaviour across the FoV is simple enough that interpolation suffices. For the WALOP polarimeters, due to their very large FoV and the relatively low noise floor requirements, the raster-scanning method becomes unfeasible. Hence, we need a method that can calibrate the entire FoV with spatial continuity and minimal observation time. Gonz\'alez-Gait\'an et al. \cite{moonlight_calibration_VLT} used the bright sky adjacent to the full Moon as an extended source for calibrating the FORS2 polarimeter on the 8~m Very Large Telescope. Light entering the atmosphere from the Moon on a full-Moon night is unpolarized; polarization is subsequently introduced by scattering, with the amount determined by the scattering geometry between the observer/telescope and the Moon's position. The amount and angle of polarization can be modelled using single Rayleigh scattering equations \cite{Harrington_solar_Calibration, Strutt_moonlight_polarization}, as given by Equation~\ref{moonlight_p_eqn}.
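As a quick illustration, the single-scattering prediction of Equation~\ref{moonlight_p_eqn} can be evaluated as follows (a minimal sketch; the function name and sample angles are our own, with $\delta = 0.8$, the empirical clear-sky value quoted in the text):

```python
import math

def moon_sky_polarization(gamma_deg, delta=0.8):
    """Degree of linear polarization of the full-Moon sky from single
    Rayleigh scattering: p = delta * sin^2(gamma) / (1 + cos^2(gamma)),
    where gamma is the angular distance from the Moon."""
    g = math.radians(gamma_deg)
    return delta * math.sin(g) ** 2 / (1.0 + math.cos(g) ** 2)

p_20 = moon_sky_polarization(20.0)  # a patch 20 degrees from the Moon, ~5%
p_90 = moon_sky_polarization(90.0)  # the maximum, equal to delta
```

The polarization fraction rises smoothly with angular distance from the Moon, which is why a small sky patch at a fixed separation serves as a stable standard polarized source.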
The polarization value depends on the angular distance of the region from the Moon ($\gamma$), and the polarization angle is set by the orientation of the scattering plane, to which it is perpendicular. $\delta$ is an empirical parameter whose value depends on the sky conditions; for clear, cloudless nights it is found to be around 0.8\cite{moonlight_calibration_VLT}. Equation~\ref{moonlight_p_eqn} predicts that within an area of a few arcminutes ($10\times10$~arcminutes) at sky positions up to 15-20 degrees away from the Moon, the polarization fraction will remain constant to a level of a few hundredths of a per cent; this has been verified through polarimetric observations with RoboPol (Maharana et al., in preparation). While second-order effects lead to deviations of the measured polarization value from that predicted by theory, the polarization is expected to remain constant in that patch. Fig~\ref{sky_pol_plot} shows the expected linear polarization plot for a sky patch of size 7 arcminutes at 10 degrees away from the Moon. Gonz\'alez-Gait\'an et al. used the full-Moon sky to calibrate the FORS2 polarimeter, which has a FoV of 7 arcminutes, to an accuracy better than 0.05~\%. We plan to use the full-Moon sky for carrying out both steps of the calibration model development. % \begin{equation}\label{moonlight_p_eqn} p = \delta\frac{sin^{2}\gamma}{1 + cos^{2}\gamma} \end{equation} \iffalse \begin{equation}\label{moonlight_theta_eqn} \theta = cos^{-1}\frac{cos\gamma - cos\phi cos\psi}{sin\phi sin\psi} \end{equation} \fi The on-sky calibration of the WALOP instruments will be carried out in the following steps to obtain a spatially continuous calibration model. Firstly, the preliminary calibration model will be created for the entire FoV by observing bright wide-field/extended sources such as star clusters, the twilight sky or the full-Moon sky through the CLP.
The only requirement is that the polarization of the source should not change within the exposure time, as any change would lead to uncertainty in the polarization of the beam passing through the CLP. Using standard polarized stars on sky, the determination of and correction for $\Delta_q$ and $\Delta_u$, and the subsequent estimation of the calibration accuracy, will be done at the centers of the $5\times5$ grid into which the entire FoV will be divided, as shown in Fig~\ref{calibration_binning}. This will provide an accurate calibration model at these points. Subsequently, observations of full-Moon sky patches at different (5-10) combinations of separation and azimuth angles with respect to the Moon will be used as standard polarized patches to carry out the zero-offset correction as well as establish the calibration accuracy for the entire FoV. The uncertainty in the value of the absolute polarization in each box is overcome by using the already developed calibration model at the central point of each box. \par From the calibration modelling results, we find a higher error ($> 0.1~\%$) in the calibration model performance for the $u$ parameter in a narrow patch where the cross-talk level is high. To compensate for this, between exposures, the calibration HWP inside the instrument (refer to Paper I) can be placed alternately at position angles of $0^{\circ}$ and $45^{\circ}$. This will effectively interchange the $q$ and $u$ channels of the instrument. Consequently, by averaging the measurements from the two calibration HWP orientations, the modelling errors in regions of high cross-talk can be reduced. % \section{Lab Set Up Tests}\label{calibration_model_lab_test} To test the calibration model on real WALOP-like polarimeter systems, a table-top test-bed polarimeter was set up in the lab at IUCAA. The schematic of the set-up along with its three modes of operation is shown in Fig~\ref{lab_setup_overall} and the actual lab setup is shown in Fig~\ref{Set_up_pic}.
It is a dual-channel polarimeter consisting of a rotating HWP and a WP as the analyzer system. An LED-fed fiber source was used to simulate a star-like point source for the experiment. The HWP is placed with its normal tilted with respect to the optic axis of the system so that the rays are incident on the HWP at oblique angles, creating cross-talk similar to that expected in the WALOPs. For measuring $q$, the HWP's fast axis is aligned with the x-axis of the Instrument Coordinate System (ICS) and for $u$, it is oriented at $22.5^{\circ}$. Combining these two measurements, we are able to simulate both the left and right halves of the WPA of the WALOP instruments, which is where all the polarimetric cross-talk originates. Various levels of cross-talk between 16~\% and 84~\% were simulated by varying the HWP tilt angle, as noted in Table~\ref{Calibration_lab_Results}. The percentile area corresponding to each cross-talk level in the WALOP-South instrument is noted in the same table. The operation of the rotation stages and the camera was fully automated through an instrument control software written in Python and run through the Setup Control Computer (shown in Fig~\ref{Set_up_pic}). An SBIG-ST9 CCD camera was used as the detector. The photometric and subsequent polarimetric analysis of the data was carried out through a data reduction pipeline written in Python, with careful emphasis on error estimation and propagation in each step. For photometry, the Photutils package\cite{photutils} was used to implement aperture photometry. % \iffalse \begin{subfigure}{0.99\textwidth} \centering \frame{\includegraphics[scale = 0.4]{Lab_setup_measured_polarization.png}} \caption{ High cross-talk polarimeter system to create polarimetric cross-talk expected in WALOPs. The tilt angle of the HWP is controlled through a motorized rotary stage to introduce various cross-talk levels.
Another rotation stage controls the orientation of the HWP fast-axis (on the X-Y plane).} \label{lab_setup} \end{subfigure} \centering \begin{subfigure}{0.99\textwidth} \centering \frame{\includegraphics[scale = 0.4]{Lab_setup_calibration.png}} \caption{Calibration linear polarizer, mounted on a motorized rotation stage, is inserted in the polarimeter to carry out calibration measurements. } \label{lab_setup_calibration} \end{subfigure} \centering \begin{subfigure}{0.99\textwidth} \centering \frame{\includegraphics[scale = 0.4]{Lab_setup_intrinsic_polarization.png}} \caption{To measure real linear polarization values of partially polarized sources used in the calibration model, HWP is oriented normal to optical axis and standard two-channel polarimetry is carried out by rotating it around the optical axis. } \label{lab_setup_intrinsic} \end{subfigure} \fi \iffalse The calibration of this polarimeter was done in the same way as proposed in the calibration model for WALOPs. Calibration \textit{step 1} is done by inserting a linear polarizer mounted on a motorized rotation stage in the optical path as shown in Fig~\ref{lab_setup_calibration}. Once the \textit{step 1} of the model is executed, we need to carry out \textit{step 2} to correct for the zero off-set as well as test the accuracy of the model. For this, multiple partially polarized sources were created in the setup by placing an old film-based polarizer sheet at an intermediate focus position (Fig~\ref{lab_setup}) whose coating had worn off and effectively became a partial polarizer sheet. Using this sheet, polarization values between 0 and 5~\% were generated, depending on the spatial position of incident light on the sheet. Fig~\ref{calibration_test_sources} shows the $q-u$ values generated using this trick. To measure the intrinsic polarization of these sources, we removed the tilt of the HWP by making it normal to the optical axis, as shown in Fig~\ref{lab_setup_intrinsic}. 
Using conventional two-channel polarimetry with this polarimeter system, the intrinsic polarization of the sources was found. The $q_m$ and $u_m$ measured with the tilted HWP systems were then used in the calibration model to predict the $q_i$ and $u_i$ for each of the tilted polarimeters. The mean offset in $q$ and $u$ between the calibration model predicted and real Stokes parameters were corrected and the calibration accuracy was estimated. % \fi \par The calibration of this polarimeter was done in the same way as proposed in the calibration model for the WALOPs, through its three modes of operation. At the entrance of the system, a calibration linear polarizer mounted on a motorized rotation stage is placed in the optical path during calibrations, as shown in Fig~\ref{lab_setup_overall}~(b), and removed thereafter. The polarimeters at different HWP tilts were treated as independent systems, and the calibration was therefore done independently for each. Calibration \textit{step 1} is done by inserting the calibration linear polarizer in the optical path and feeding 15 states of fully polarized light as input, for which the corresponding $q_m$ and $u_m$ were measured. Using these, the calibration equations were fitted and the coefficients were obtained. Once \textit{step 1} of the model is executed, we need to carry out \textit{step 2} to test the accuracy of the model. For this, multiple partially polarized beams were created in the setup by placing an old polarizer film sheet at an intermediate focus position before the aperture of the instrument (Figs~\ref{Set_up_pic} and \ref{lab_setup_overall}~(a)). The polarizer's coating had worn off, so it effectively became a partial polarizer. Using this sheet, polarization values between 0 and 5~\% were generated, depending on the spatial position of the incident light on the sheet. We checked the stability of the sheet by making repeated polarization measurements over time.
Fig~\ref{calibration_test_sources} shows the $q-u$ values generated using this method. To measure the intrinsic polarization of these beams, we removed the tilt of the HWP by making it normal to the optical axis using the rotary stage on which it is mounted, as shown in Fig~\ref{lab_setup_overall}~(c). Using conventional two-channel polarimetry with this polarimeter system, the intrinsic polarization of the sources ($q$, $u$, and $p$) was found, with accuracies better than 0.03~\%. The $q_m$ and $u_m$ measured with the tilted HWP systems were then used in the calibration model to predict the $q_i$ and $u_i$ (and $p_i$ from these) for each of the tilted polarimeters. As found in the case of the theoretical calibration model for the WALOP-South instrument, just by using fully polarized light in \textit{step 1}, all the coefficients could be accurately estimated, barring the zero-offset terms ($a_i$ in Equations~\ref{calibration_eqn_qi} and \ref{calibration_eqn_ui}). The mean offsets in $q$ and $u$ between the calibration model predicted and real Stokes parameters ($\Delta_q$ and $\Delta_u$) were corrected and the calibration accuracy was estimated. Fig~\ref{calibration_results_iqu} plots the results for one of the HWP tilts, leading to a cross-talk value of 16~\%, corresponding to around the 50~\% area percentile of the WALOP-South FoV. The black and blue points correspond to the real and model-predicted $q-u$ values, respectively, while the red points are the instrument-measured Stokes parameters ($q_m$ and $u_m$). The gray point shows the estimated $q-u$ prediction after correction for the zero-offset only, which is the standard practice in most polarimeters since their cross-talk is nearly zero. As can be seen, without polarimetric cross-talk correction, the predicted polarization is very inaccurate.
Once the measured $q-u$ values are corrected with the calibration model, the model-predicted and intrinsic polarization values match to better than 0.04~\%. For estimating the accuracy in $p$, the $p_i$ derived from the calibration model predicted $q_i$ and $u_i$ is compared to the intrinsic $p$ of the source. As the error in $p$ is a weighted average of the errors in $q$ and $u$ ($\sigma_{p} = \sqrt{\frac{q^{2}{\sigma_{q}^{2}}+ u^{2}{\sigma_{u}^{2}}}{q^{2} + u^{2}}}$), the overall polarimetric accuracy in $p$ lies between the accuracies of $q$ and $u$. Likewise, there is an asymmetric contribution from the errors in $q$ and $u$ to the EVPA ($\theta$) measurement errors. Other cross-talk values (created using different tilt angles of the HWP) and the corresponding performance of the calibration model are shown in Table~\ref{Calibration_lab_Results}. Fig~\ref{lab_results_WALOP_FoV} shows the expected polarimetric accuracy across the WALOP-South FoV based on the instrument's cross-talk map. In summary, we are able to calibrate more than 75~\% of the WALOP-South instrument's FoV to better than 0.1~\% accuracy in $p$ and the remaining area to better than 0.2~\%.
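The error propagation for $p$ quoted above can be sketched numerically (a minimal illustration; the input values are arbitrary assumptions, not measured instrument numbers):

```python
import math

def sigma_p(q, u, sq, su):
    """Uncertainty on p = sqrt(q**2 + u**2): the weighted average of the
    q and u uncertainties given in the text,
    sigma_p = sqrt((q^2*sq^2 + u^2*su^2) / (q^2 + u^2))."""
    return math.sqrt((q**2 * sq**2 + u**2 * su**2) / (q**2 + u**2))

# With equal q and u contributions, sigma_p falls between sq and su.
s = sigma_p(q=0.02, u=0.02, sq=0.0003, su=0.0009)
```

In the limiting case $u = 0$, the expression reduces to $\sigma_p = \sigma_q$, consistent with $p$ then being carried entirely by the $q$ channel.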
\begin{table}[]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\begin{tabular}[c]{@{}c@{}}Position\\ No\end{tabular} & \begin{tabular}[c]{@{}c@{}}Tilt\\ Angle\end{tabular} & \begin{tabular}[c]{@{}c@{}}Cross-talk\\ Level\end{tabular} & \begin{tabular}[c]{@{}c@{}}Calibration\\ Accuracy\\ $q$\end{tabular} & \begin{tabular}[c]{@{}c@{}}Calibration\\ Accuracy\\ $u$\end{tabular} & \begin{tabular}[c]{@{}c@{}}Calibration\\ Accuracy\\ $p$\end{tabular} & \begin{tabular}[c]{@{}c@{}}Percentile Area\\ of WALOP\\ field of view\end{tabular} \\ \hline
1 & 12 & 16~\% & 0.03~\% & 0.09~\% & 0.04~\% & 48.5~\% \\ \hline
2 & 14 & 27~\% & 0.04~\% & 0.14~\% & 0.08~\% & 58~\% \\ \hline
3 & 16 & 43~\% & 0.04~\% & 0.16~\% & 0.07~\% & 67~\% \\ \hline
4 & 18 & 62~\% & 0.04~\% & 0.17~\% & 0.07~\% & 75.6~\% \\ \hline
5 & 20 & 81~\% & 0.03~\% & 0.29~\% & 0.13~\% & 83.9~\% \\ \hline
\end{tabular}
\end{center}
\caption{Results from the lab calibration tests of the WALOP calibration model.
For more than 75~\% of the WALOP-South instrument's field, we obtain accuracies better than the target accuracy of 0.1~\% in $p$ (degree of polarization).} \label{Calibration_lab_Results} \end{table} \section{Conclusions}\label{calibration_conclusion} We have carried out a complete polarimetric analysis of the WALOP-South instrument's optical system to estimate the polarimetric cross-talk in the measurements of the Stokes parameters. We find significant cross-talk in the measured Stokes parameters. Almost all the cross-talk originates from the HWPs in the WPA, owing to the large ($> 5^\circ$) oblique angles of incidence. The effect is very well reproduced using ray tracing principles applied to the HWPs. To correct for this, we have developed a complete on-sky calibration method, enabling us to obtain $0.1~\%$ accuracy across most of the FoV. The implementation of the calibration model on sky is relatively straightforward. This work was done using the polarization analysis tools available in the Zemax software, as well as ray tracing programs developed in Python. Before fabrication and assembly of an instrument, the expected instrumental polarization can be estimated using the polarization analysis features in optical design software like \href{https://www.zemax.com/}{Zemax}\textsuperscript{\textregistered} or \href{https://www.breault.com/software/asap-capabilities}{Advanced Systems Analysis Program (ASAP)}\textsuperscript{\textregistered}. The tools developed for the polarization modelling and calibration of the WALOPs can be used for any other polarimeter to understand the various polarimetric systematic effects induced by the optics and prepare the required calibration plan. Additionally, we tested the calibration method on a test-bed WALOP-like polarimeter in the lab and find that we can carry out high-accuracy polarimetric calibration at various cross-talk levels.
The calibration method will be implemented during the on-sky commissioning of the instrument, and subsequent results will be published as part of the instrument commissioning paper. % \appendix \section{Measured Stokes Parameters and Instrument Matrix}\label{instrument_matrix_appendix} The incident intensity at the detector for any polarization channel of the instrument ($0^\circ$, $45^\circ$, $90^\circ$ and $135^\circ$ polarizations) can be written as: \begin{equation} I_{\theta} = a_{\theta} + b_{\theta}q + c_{\theta}u \end{equation} where $a_{\theta}$, $b_{\theta}$ and $c_{\theta}$ correspond to the elements of the first row of the Mueller matrix for the optical path corresponding to that polarization angle/channel. The normalized difference between the intensities corresponding to two orthogonal polarization angles/channels yields $q$ or $u$, as given by the following equation. \begin{equation} r_{i} = \frac{I_{\theta1} - I_{\theta2}}{I_{\theta1} + I_{\theta2}} = \frac{(a_{\theta1} + b_{\theta1}q + c_{\theta1}u) - (a_{\theta2} + b_{\theta2}q + c_{\theta2}u)} {(a_{\theta1} + b_{\theta1}q + c_{\theta1}u) + (a_{\theta2} + b_{\theta2}q + c_{\theta2}u)} \end{equation} Using the binomial expansion $(1+x)^{-1} = 1 - x + x^{2} - x^{3} + \cdots$, the generalized normalized difference can be written as a polynomial equation in $q$ and $u$. \begin{equation} r_{i} = A_{i} + B_{i}q + C_{i}u + D_{i}q^2 + E_{i}u^2 + F_{i}qu + ... \end{equation} In general, for most simple polarimeters, the second-order terms are zero, and the instrument-measured Stokes parameters can be written as a set of linear equations, together forming a Mueller-matrix-like \textit{Instrument Matrix}. The first row is inconsequential to polarimetric measurements and hence can be ignored.
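The claim that the normalized difference reduces to a low-order polynomial in $q$ and $u$ can be checked numerically. The sketch below evaluates $r_{i}$ from illustrative first-row Mueller elements (all coefficient values are our own assumptions), fits the six-term polynomial, and confirms the truncation residual is negligible for small $q$ and $u$:

```python
import numpy as np

# Illustrative first-row Mueller elements (a, b, c) for two orthogonal
# channels; these values are arbitrary, chosen only for the demonstration.
a1, b1, c1 = 0.55, 0.40, 0.10
a2, b2, c2 = 0.45, -0.40, 0.05

# Evaluate r on a grid of small (q, u) values.
q, u = np.meshgrid(np.linspace(-0.05, 0.05, 21), np.linspace(-0.05, 0.05, 21))
q, u = q.ravel(), u.ravel()
I1 = a1 + b1 * q + c1 * u
I2 = a2 + b2 * q + c2 * u
r = (I1 - I2) / (I1 + I2)

# Fit the six-term polynomial A + B*q + C*u + D*q^2 + E*u^2 + F*q*u.
A = np.column_stack([np.ones_like(q), q, u, q**2, u**2, q * u])
coef, *_ = np.linalg.lstsq(A, r, rcond=None)
resid = np.max(np.abs(A @ coef - r))  # only third- and higher-order terms remain
```

The constant term of the fit agrees with $r$ at $q = u = 0$, i.e., $(a_{1}-a_{2})/(a_{1}+a_{2})$, and the residual is orders of magnitude below the 0.1~\% target, supporting the truncation at second order.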
\begin{gather}% s_{m} = \begin{bmatrix} 1 \\ q_{m} \\ u_{m} \\ v_{m} \end{bmatrix} = m_{inst}\times s = \begin{bmatrix} m_{11} & m_{12} & m_{13} & m_{14} \\ m_{21} & m_{22} & m_{23} & m_{24} \\ m_{31} & m_{32} & m_{33} & m_{34} \\ m_{41} & m_{42} & m_{43} & m_{44} \end{bmatrix} \times \begin{bmatrix} 1 \\ q \\ u \\ v \end{bmatrix} \\ = \begin{bmatrix} - & - & - & - \\ 1 \,\to\,q_{m} & q\,\to\,q_{m} & u\,\to\,q_{m} & v\,\to\,q_{m} \\ 1 \,\to\,u_{m} & q\,\to\,u_{m} & u\,\to\,u_{m} & v\,\to\,u_{m} \\ 1 \,\to\,v_{m} & q\,\to\,v_{m} & u\,\to\,v_{m} & v\,\to\,v_{m} \end{bmatrix} \times \begin{bmatrix} 1 \\ q \\ u \\ v \end{bmatrix} \end{gather} \section{Polarimetric Modelling of HWPs}\label{HWP_modelling_section} The Mueller matrix of a retarder plate with retardance $\delta$ oriented at an angle $\alpha$ with respect to the Instrument Coordinate System (ICS) is given by Equation~\ref{genera_retarder_matrix}.% \begin{gather} M_{\alpha, \delta} = M_{rot}(-\alpha)\times M_{\delta} \times M_{rot}(\alpha) \nonumber \\ = \centering \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & cos^{2}2\alpha + sin^{2}2\alpha cos\delta & cos2\alpha sin2\alpha(1- cos\delta) & -sin2\alpha sin\delta \\ 0 & cos2\alpha sin2\alpha(1- cos\delta) & cos^{2}2\alpha cos\delta + sin^{2}2\alpha & cos2\alpha sin\delta \\ 0 & sin2\alpha sin\delta & -cos2\alpha sin\delta & cos\delta \end{bmatrix} \label{genera_retarder_matrix} \end{gather} Thus the Mueller matrices for the HWPs used in the WPA, to be referred to as HWP1 (oriented at $0^{\circ}$ with respect to the ICS) and HWP2 (oriented at $22.5^{\circ}$ with respect to the ICS) from here on, are given by Equations~\ref{retarder_matrix_non_ideal_LHWP} and \ref{retarder_matrix_non_ideal_RHWP}.
\begin{gather}\label{retarder_matrix_non_ideal_LHWP} M_{0, \delta} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & cos\delta & sin\delta \\ 0 & 0 & -sin\delta & cos\delta \end{bmatrix} \end{gather} \begin{gather}\label{retarder_matrix_non_ideal_RHWP} M_{22.5, \delta} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \frac{1}{2}(1 + cos\delta) & \frac{1}{2}(1-cos\delta) & -\frac{1}{\sqrt{2}} sin\delta \\ 0 & \frac{1}{2}(1-cos\delta) &\frac{1}{2}(1 + cos\delta) & \frac{1}{\sqrt{2}} sin\delta \\ 0 & \frac{1}{\sqrt{2}} sin\delta & -\frac{1}{\sqrt{2}} sin\delta & cos\delta \end{bmatrix} \end{gather} Hence the Stokes vectors of a beam after passing through the two HWPs are as given in Equations~\ref{Svector_nonideal_LHWP} and \ref{Svector_nonideal_RHWP}. \iffalse \begin{gather}\label{Svector_general} S_{\alpha, \delta} = \frac{1}{2} \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & M_{22} & M_{23} & M_{24} \\ 0 & M_{32} & M_{33} & M_{34} \\ 0 & M_{42} & M_{43} & M_{44} \end{bmatrix} \times \begin{bmatrix} 1 \\ q \\ u \\ v \end{bmatrix} = \begin{bmatrix} 1 \\ M_{22}{q} + M_{23}{u} + M_{24}{v} \\ M_{32}{q} + M_{33}{u} + M_{34}{v} \\ M_{42}{q} + M_{43}{u} + M_{44}{v} \end{bmatrix} \end{gather} \fi \begin{gather}\label{Svector_nonideal_LHWP} s_{0, \delta} = \begin{bmatrix} 1 \\ q \\ u cos\delta + v sin\delta \\ -u sin\delta + v cos\delta \end{bmatrix} \end{gather} \begin{gather}\label{Svector_nonideal_RHWP} s_{22.5, \delta} = \begin{bmatrix} 1 \\ \frac{q}{2}(1 + cos\delta) +\frac{u}{2}(1-cos\delta) -\frac{v}{\sqrt{2}} sin\delta \\ \frac{q}{2}(1-cos\delta) + \frac{u}{2}(1 + cos\delta) + \frac{v}{\sqrt{2}} sin\delta \\ \frac{q}{\sqrt{2}} sin\delta -\frac{u}{\sqrt{2}} sin\delta + v cos\delta \end{bmatrix} \end{gather} For a two-channel or four-channel polarimeter using a Wollaston Prism that splits the $0^{\circ}$ and $90^{\circ}$ polarizations, the measured Stokes parameter corresponds to the $q$-element of the Stokes vector, as given by Equation~\ref{r_non_ideal}, from
which $q_{m}$ and $u_{m}$ can be found using Equations~\ref{q_non_ideal} and \ref{u_non_ideal}. As can be seen, $q_{m}$ is unaffected by non-half-wave retardance and depends only on the $q$ of the source, while $u_{m}$ can depend strongly on all the intrinsic Stokes parameters $q$, $u$ and $v$ through the Mueller matrix elements. Only for a retardance of exactly $\lambda/2$ does $u_{m}$ equal $u$. As an extreme example, if the retardance is $\lambda/4$ instead of $\lambda/2$, $u_{m}$ depends equally on $q$, $u$ and $v$, as shown by Equation~\ref{u_non_ideal_quarter_wave}.
\begin{equation}\label{r_non_ideal}
r_{m} = m_{22}{q} + m_{23}{u} + m_{24}{v}
\end{equation}
\begin{equation}\label{q_non_ideal}
q_{m} = r(0, \delta) = q
\end{equation}
\begin{equation}\label{u_non_ideal}
u_{m} = r(22.5, \delta) = \frac{q}{2}(1 + \cos\delta) +\frac{u}{2}(1-\cos\delta) -\frac{v}{\sqrt{2}} \sin\delta
\end{equation}
\begin{equation}\label{u_non_ideal_quarter_wave}
u^{'}_{m} = r(22.5, \lambda/4) = \frac{q}{2} +\frac{u}{2} -\frac{v}{\sqrt{2}}
\end{equation}
Assuming zero circular polarization, the formula for $u_{m}$ can be written as Equation~\ref{u_non_ideal2}. The $m_{22}$ term captures the cross-talk from $q$ to $u_{m}$, and the $m_{23}$ term captures the fraction of $u$ converted into $u_{m}$. Apart from the input Stokes parameters, $u_{m}$ depends only on the retardance. For a light ray, the retardance depends on the incident angle (the angle with the normal) and the azimuth angle with respect to the HWP. For any point in the WALOP-South FoV, both these angles can be found using the Lagrange invariant equation (refer to Paper I) at the pupil, and from there on using Snell's-law-based ray-tracing equations through the BK7 wedges onto the HWPs.
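As a numerical cross-check of Equations~\ref{q_non_ideal}--\ref{u_non_ideal_quarter_wave}, the short Python sketch below (an illustrative sketch, not the instrument modelling code mentioned later) builds the rotated-retarder Mueller matrix of Equation~\ref{genera_retarder_matrix} and reads off the measured Stokes parameter; the input Stokes vector is an arbitrary example.

```python
import numpy as np

def rot(alpha):
    """Mueller rotation matrix for a coordinate rotation by alpha (radians)."""
    c, s = np.cos(2 * alpha), np.sin(2 * alpha)
    return np.array([[1, 0, 0, 0],
                     [0, c, s, 0],
                     [0, -s, c, 0],
                     [0, 0, 0, 1.0]])

def retarder(alpha, delta):
    """Mueller matrix of a retarder with retardance delta, fast axis at alpha."""
    m_delta = np.array([[1, 0, 0, 0],
                        [0, 1, 0, 0],
                        [0, 0, np.cos(delta), np.sin(delta)],
                        [0, 0, -np.sin(delta), np.cos(delta)]])
    return rot(-alpha) @ m_delta @ rot(alpha)

def measured_r(alpha, delta, s):
    """q-element of the Stokes vector after the retarder, i.e. the quantity
    measured through a Wollaston prism splitting the 0 and 90 degree beams."""
    return (retarder(alpha, delta) @ s)[1]

s = np.array([1.0, 0.3, 0.2, 0.05])            # arbitrary example input Stokes vector
q_m = measured_r(0.0, np.pi, s)                # HWP1: q_m = q for any delta
u_m = measured_r(np.deg2rad(22.5), np.pi, s)   # HWP2, ideal delta = pi: u_m = u
```

For $\delta = \pi/2$ (quarter-wave retardance) the same call reproduces Equation~\ref{u_non_ideal_quarter_wave}, i.e.\ equal leakage of $q$, $u$ and $v$ into $u_{m}$.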
\begin{equation}\label{u_non_ideal2}
u_{m} = m_{22}{q} + m_{23}{u} = \frac{q}{2}(1 + \cos\delta) +\frac{u}{2}(1-\cos\delta)
\end{equation}
The WALOP HWPs are made of one quartz and one MgF$_{2}$ plate aligned orthogonally to each other, and the overall fast axis of the HWP is along the fast axis of the MgF$_{2}$ plate (the details of the HWP can be provided upon request). The relative fast-axis orientation of the quartz and MgF$_{2}$ plates is the same in HWP2 as in HWP1, but both plates are rotated by $22.5^{\circ}$ with respect to the x-axis. The prescription for finding the retardance of a wave plate for any given incident and azimuth angle is described in the paper by Gu et al.\ \cite{HWP_retardance}, and elaborated below. We used their method in conjunction with ray-tracing equations to model the retardance and the Mueller matrices for HWP1 and HWP2 for the entire WALOP-South FoV. The generalized Mueller matrix $M$ for a retarder given by Equation~\ref{genera_retarder_matrix} is always in normalized form, i.e., $M_{\alpha, \delta}$ = $m_{\alpha, \delta}$. Fig~\ref{walop_LHWP} shows the retardance, $M_{22}$, $M_{33}$ and $M_{23}$ maps for HWP1 and Fig~\ref{walop_RHWP} shows the corresponding maps for HWP2. As can be seen, the patterns of polarimetric cross-talk and efficiency in the HWPs are identical to those obtained for the entire instrument in Section~\ref{WALOP_modelling_section_overall}.
For example, if we look at the cross-talk term for $u_{m}$ ($M_{22}$ in Fig~\ref{walop_RHWP}), it is identical to the cross-talk term for $u_{m}$ for the entire instrument in Fig~\ref{WALOP-South Instrument Polarization}.% \iffalse \begin{table}[] \centering \begin{tabular}{|c|c|c|c|c|} \hline Material & Thickness ($mm$)& $n_{o}$ (0.66~${\mu}m$) & $n_{e}$(0.66~${\mu}m$) & ${\Delta}n$~(0.66~${\mu}m$)\\ \hline $Quartz$ & 2.2499 & 1.542 & 1.551 & 0.009 \\ \hline $MgF_{2}$ & 1.8093 & 1.377 & 1.388 & 0.011 \\ \hline \end{tabular} \caption{Details of the quartz and MgF$_{2}$ plates used in the HWPs used in the Wollaston Prism Assembly of WALOP instruments.} \label{HWP_details} \end{table} \fi \iffalse wewant to model the retardance of the WALOP Wollaston Prism HWPs aligned at O and 22.5 deg wrt to the x-axis. Before the rays enter the HWP, the beams from the entire field are refracted at large angles by the BK7 wedge in front of the HWPs. This refraction leads to rays leaving the wedge at oblique angles of incidence to the instrument coordinate system and to the HWP. Following are the effects that we need to take into account: 1. The rays from any point in the field of view arrive at angle, lets call it the slant angle. This angle can be found using the Lagrange invariant equation, where D and p are the telescope and the pupil diameter and the $\alpha$ is the position of the object on the sky with respect to the center. $$\theta = \frac{D\times\alpha}{p}$$ \fi \iffalse \fi \iffalse WALOP-South\cite{WALOP_South_Optical_Design_Paper, walop_s_spie_2020} has a large FoV. two aspects of instrumental polarization effects: \begin{itemize} \item Cross-talk. \item Instrumental polarization. \end{itemize} Most polarimeters are simple on-axis polarimeters like RoboPol and ImPol with limited FoV. This limits the angles of incidence of rays falling on the optics- lenses etc. 
the problem of instrument induced polarization can be subdivied into \textit{instrumental polarization} (IP) and \textit{crosstalk}. IP is the constant zero offset of polarizition introduced into the beams due to differential transmission of the O and E beams. So this is zero offset for the system. On the other hand, \textit{crosstalk} refers to the intermixing of the Stokes vectors amongst themrselves. This leads to lower polarimetric efficiency of the system. The main problematic feature of instrumental cross-talk is that the measured Stokes values depend on the input Stokes values to the instrument. This can be understood in terms of the instrumental Mueller Matrix. The Mueller matrix for a perfect polarizer along $\pm~x-axis$(to measure q) is given by Eqn~\ref{mpolq}, where the + and - signs stand for polarizer along +~x and -~x axis respectively. \begin{gather}\label{mpolq} M_{polq} = \begin{bmatrix} 1 & \pm 1 & 0 & 0 \\ \pm 1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ \end{bmatrix} \end{gather} Similarly, the Mueller matrix for a perfect polarizer along $\pm~45^{\circ}-axis$(to measure u) is given by Eqn~\ref{mpolu}, where the + and - signs stand for polarizer along +~$45^{\circ}$ and ~$135^{\circ}$ axis respectively. 
\begin{gather}\label{mpolu} M_{polu} = \begin{bmatrix} 1 & 0 & \pm 1 & 0 \\ 0 & 0 & 0 & 0 \\ \pm 1 & 0 &1 & 0 \\ 0 & 0 & 0 & 0 \\ \end{bmatrix} \end{gather} \begin{gather}\label{inst_matrix} S_{inst} = \begin{bmatrix} 1 \\ q_{m} \\ u_{m} \\ v_{m} \end{bmatrix} = \begin{bmatrix} M_{11} & M_{12} & M_{13} & M_{14} \\ M_{21} & M_{22} & M_{23} & M_{24} \\ M_{31} & M_{32} & M_{33} & M_{34} \\ M_{41} & M_{42} & M_{43} & M_{44} \end{bmatrix} \begin{bmatrix} 1 \\ q_{i} \\ u_{i} \\ v_{i} \end{bmatrix} \end{gather} \begin{gather}\label{mmatrix} M_{inst} = \begin{bmatrix} M_{11} & M_{12} & M_{13} & M_{14} \\ M_{21} & M_{22} & M_{23} & M_{24} \\ M_{31} & M_{32} & M_{33} & M_{34} \\ M_{41} & M_{42} & M_{43} & M_{44} \end{bmatrix} = \begin{bmatrix} I\,\to\,I & Q\,\to\,I & U\,\to\,I & V\,\to\,I \\ I \,\to\,Q & Q\,\to\,Q & U\,\to\,Q & V\,\to\,Q \\ I \,\to\,U & Q\,\to\,U & U\,\to\,U & V\,\to\,U \\ I \,\to\,V & Q\,\to\,V & U\,\to\,V & V\,\to\,V \end{bmatrix} \end{gather} \begin{gather}\label{inst_matrix2} S_{inst} = \begin{bmatrix} 1 \\ q_{i} \\ u_{i} \\ v_{i} \end{bmatrix} = \begin{bmatrix} M_{11i} & M_{12i} & M_{13i} & M_{14i} \\ M_{21i} & M_{22i} & M_{23i} & M_{24i} \\ M_{31i} & M_{32i} & M_{33i} & M_{34i} \\ M_{41i} & M_{42i} & M_{43i} & M_{44i} \end{bmatrix} \begin{bmatrix} 1 \\ q_{m} \\ u_{m} \\ v_{m} \end{bmatrix} \end{gather} \begin{equation}\label{fqm} q_{m} = M_{21} + M_{22}q_{i} + M_{23}u_{i} + M_{24}v_{i} \end{equation} \begin{equation}\label{fum} u_{m} = M_{31} + M_{32}q_{i} + M_{33}u_{i} + M_{34}v_{i} \end{equation} \begin{equation}\label{fqi} q_{i} = M_{21i} + M_{22i}q_{m} + M_{23i}u_{m} + M_{24i}v_{m} \end{equation} \begin{equation}\label{fui} u_{i} = M_{31i} + M_{32i}q_{m} + M_{33i}u_{m} + M_{34i}v_{m} \end{equation} For WALOP instruments and \textsc{pasiphae} survey, as we aim to measure stellar polarimetry, there is negligible circular polarizations, so the mapping functions can be written as: \begin{equation}\label{fqm} q_{m} = M_{21} + M_{22}q_{i} + M_{23}u_{i} 
\end{equation} \begin{equation}\label{fum} u_{m} = M_{31} + M_{32}q_{i} + M_{33}u_{i} \end{equation} \begin{equation}\label{fqi} q_{i} = M_{21i} + M_{22i}q_{m} + M_{23i}u_{m} \end{equation} \begin{equation}\label{fui} u_{i} = M_{31i} + M_{32i}q_{m} + M_{33i}u_{m} \end{equation} Eqns~\ref{fqm} and \ref{fum} are the liear equations mapping the input Stokes parameters to the instrument measured Stokes values.Eqns~\ref{fqi} and \ref{fui} are the linear equations mapping the instrument measured Stokes values to the real on-sky values. Finding the coefficients of Eqns~\ref{fqi} and \ref{fui} is the task under calibration model for the WALOP instruments. \fi \iffalse \subsection{Mueller Matrix of Polarizer} The Mueller matrix for a ideal polarizer oriented along the instrument coordinate system's x-axis is given by Equation~\ref{ideal_polarizer_matrix}: \begin{gather}\label{ideal_polarizer_matrix} M_{pol}(x) = \frac{1}{2} \begin{bmatrix} 1 & 1 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \end{gather} The Stokes vector after passing through the polarizer is given by Equation~\ref{+qvector}. The quantity measured by the detector is the intensity component of the Stokes vector, which in this case is $I_{+x} = \frac{1 + q}{2}$. \begin{gather}\label{+qvector} S_{+x} = \frac{1}{2} \begin{bmatrix} 1 & 1 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \times \begin{bmatrix} 1 \\ q \\ u \\ v \end{bmatrix} = \frac{1}{2} \begin{bmatrix} 1 + q \\ 1 + q \\ 0 \\ 0 \end{bmatrix} \end{gather} The Mueller Matrix for a coordinate rotation is given by Equation~\ref{rotation_matrix}. 
\begin{gather}\label{rotation_matrix} M_{rot}(\alpha) = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & cos2\alpha & sin2\alpha & 0 \\ 0 & -sin2\alpha & cos2\alpha & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} \end{gather} Using Equations~\ref{ideal_polarizer_matrix} and \ref{rotation_matrix}, the Muller Matrix for a polarizer at a different orientation along any angle $\alpha$ can be found by Equation~\ref{mmatrix_rotation_algebra}. \begin{equation}\label{mmatrix_rotation_algebra} M^{'}_{\alpha} = M_{rot}(-\alpha)\times M\times M_{rot}(\alpha) \end{equation} Using Equations~\ref{ideal_polarizer_matrix}, \ref{rotation_matrix} and \ref{mmatrix_rotation_algebra}, t A Wollaston Prism or similar polarization beamsplitters enable measurement of intensities at $0^{\circ}$ and $90^{\circ}$ polarizations, i.e, $I_{x}$ and $I_{-x}$ in a single measurement to yield $q$ or $u$ depending on the HWP orientation. The Mueller matrices for a polarizer at an angle of $0^{\circ}$ and $90^{\circ}$ come out be as given by Equations~\ref{ideal_polarizer_matrix_0} and \ref{ideal_polarizer_matrix_90}. 
The intensities measured at the detector for the polarizer orientation angles of $0^{\circ}$ and $90^{\circ}$ correspond to $I_{x} = \frac{1 + q}{2}$ and $I_{-x} = \frac{1 - q}{2}$.% \fi \iffalse \begin{gather}\label{ideal_polarizer_matrix_alpha} M_{pol}(\alpha) = \begin{bmatrix} 1 & cos2\alpha & sin2\alpha & 0 \\ cos2\alpha & cos^{2}2\alpha & cos2\alpha sin2\alpha & 0 \\ sin2\alpha & cos2\alpha sin2\alpha & sin^{2}2\alpha & 0 \\ 0 & 0 & 0 & 0 \\ \end{bmatrix} \end{gather} \begin{gather}\label{ideal_polarizer_matrix_0} M_{pol}(0^{\circ}) = \frac{1}{2} \begin{bmatrix} 1 & 1 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \end{gather} \begin{gather}\label{ideal_polarizer_matrix_90} M_{pol}(90^{\circ}) = \frac{1}{2} \begin{bmatrix} 1 & -1 & 0 & 0 \\ -1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \end{gather} \fi \iffalse \begin{gather}\label{-qvector} S_{-x} = \frac{1}{2} \begin{bmatrix} 1 & -1 & 0 & 0 \\ -1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \times \begin{bmatrix} 1 \\ q \\ u \\ v \end{bmatrix} = \frac{1}{2} \begin{bmatrix} 1 - q \\ -1 + q \\ 0 \\ 0 \end{bmatrix} \end{gather} \fi \iffalse \begin{gather}\label{ideal_polarizer_matrix_45} M_{pol}(45^{\circ}) = \frac{1}{2} \begin{bmatrix} 1 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \end{gather} \begin{gather}\label{ideal_polarizer_matrix_135} M_{pol}(135^{\circ}) = \frac{1}{2} \begin{bmatrix} 1 & 0 & -1 & 0 \\ 0 & 0 & 0 & 0 \\ -1 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \end{gather} \begin{gather}\label{+uvector} S_{+xy} = \frac{1}{2} \begin{bmatrix} 1 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \times \begin{bmatrix} 1 \\ q \\ u \\ v \end{bmatrix} = \frac{1}{2} \begin{bmatrix} 1 + u \\ 1 + u \\ 0 \\ 0 \end{bmatrix} \end{gather} \begin{gather}\label{-uvector} S_{-xy} = \frac{1}{2} \begin{bmatrix} 1 & 0 & -1 & 0 \\ 0 & 0 & 0 & 0 \\ -1 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \times 
\begin{bmatrix} 1 \\ q \\ u \\ v \end{bmatrix} = \frac{1}{2} \begin{bmatrix} 1 - u \\ -1 + u \\ 0 \\ 0 \end{bmatrix} \end{gather} \fi \iffalse \begin{gather}\label{Wollaston_prism_matrix} M_{WP}(x) = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \end{gather} \fi \iffalse \subsection{Mueller Matrix of Half-Wave Retarder} The Mueller matrix of a wave plate with retardance $\delta$ with its fast axis aligned along the x-axis is given by given by Equation~\ref{retarder_matrix_nominal}. \begin{gather}\label{retarder_matrix_nominal} M_{\delta} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & cos\delta & sin\delta \\ 0 & 0 & -sin\delta & cos\delta \end{bmatrix} \end{gather} The Mueller Matrix of a retarder oriented at an angel $\alpha$ can be found by using Equations~\ref{mmatrix_rotation_algebra} and \ref{retarder_matrix_nominal}, and is given by Equation~\ref{retarder_matrix_rotated}. \begin{gather}\label{retarder_matrix_rotated} M_{\theta, \alpha} =% \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & cos^{2}2\alpha+ sin^{2}2\alpha cos\delta & cos2\alpha sin2\alpha(1- cos\delta) & -sin2\alpha sin\delta \\ 0 & cos2\alpha sin2\alpha(1- cos\delta) & cos^{2}2\alpha cos\delta + sin^{2}2\alpha& cos2\alpha sin\delta \\ 0 & sin2\alpha sin\delta & -cos2\alpha sin\delta & cos\delta \end{bmatrix} \end{gather} For an ideal half-wave retarder plate (HWP), $\delta = \lambda/2$. So the Mueller matrix of an ideal HWP at an angle $\alpha$ is given by Equation~\ref{retarder_matrix_theta_half_wave} and the output Stokes vector is given by Equation~\ref{Svector_HWP}. We note that the matrix $M_{\alpha, \lambda/2}$ is in fact a rotation matrix (Equation~\ref{rotation_matrix}) with double the rotation rate and $u$ is changed to $-u$- if the HWP is orientated at angle $\alpha$, it rotates the coordinate system by angle $2\alpha$. 
\begin{gather}\label{retarder_matrix_theta_half_wave} M_{\alpha, \lambda/2} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & cos^{2}2\alpha - sin^{2}2\alpha & 2cos2\alpha sin2\alpha & 0 \\ 0 & 2cos2\alpha sin2\alpha& -1(cos^{2}2\alpha- sin^{2}2\alpha) & 0 \\ 0 & 0 & 0 & -1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & cos4\alpha & sin4\alpha & 0 \\ 0 & sin4\alpha& -cos4\alpha & 0 \\ 0 & 0 & 0 & -1 \end{bmatrix} \end{gather} \begin{gather}\label{Svector_HWP} S_{\alpha, \lambda/2} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & cos4\alpha & sin4\alpha & 0 \\ 0 & sin4\alpha& -cos4\alpha & 0 \\ 0 & 0 & 0 & -1 \end{bmatrix} \times \begin{bmatrix} 1 \\ q \\ u \\ v \end{bmatrix} = \begin{bmatrix} 1 \\ q cos4\alpha+ u sin4\alpha \\ q sin4\alpha - u cos4\alpha \\ -v \end{bmatrix} \end{gather} \begin{gather}\label{Svector_WP} S_{\pm x, \alpha} = \frac{1}{2} \begin{bmatrix} 1 & \pm 1 & 0 & 0 \\ \pm 1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \times \begin{bmatrix} 1 \\ q cos4\alpha + u sin4\alpha \\ q sin4\alpha - u cos4\alpha \\ -v \end{bmatrix} = \frac{1}{2} \begin{bmatrix} 1 \pm (q cos4\alpha + u sin4\alpha) \\ \pm 1 + (q cos4\alpha + u sin4\alpha)\\ 0 \\ 0 \end{bmatrix} \end{gather} The output Stokes vector of the two beams coming out of a Wollaston Prism when the HWP is at angle $\alpha$ is given by Equation~\ref{Svector_WP}. The normalized difference between the two beams can be represented by Equation~\ref{R_eqn}. By orienting the HWP at $0^{\circ}$ and $22.5^{\circ}$ angles, the Stokes parameters can be obtained without the need for motion for any other optical component in the polarimeter. 
This is the basic principle of operation of a two-channel polarimeter.%
\begin{equation}\label{R_eqn} R(\alpha) = \frac{I_{+x} - I_{-x}}{I_{+x} + I_{-x}} = q cos4\alpha + u sin4\alpha%
\end{equation} \iffalse \begin{equation*} q = p cos(2\zeta) \end{equation*} \begin{equation*} u = p sin(2\zeta) \end{equation*} \fi \begin{equation} R(0) = q \end{equation} \begin{equation} R(22.5) = u \end{equation} \fi \iffalse \fi
When a ray enters a birefringent medium at an oblique angle, the effective refractive indices experienced by the ray are a combination of both the ordinary and extraordinary indices, which depend on the azimuth as well as the incident angle, $\alpha$ and $\theta$, as shown in Fig~\ref{oblique_angle_ray_propagation}. The general optical path difference (OPD) $L_{k}$ and retardance $\delta_{k}$ for a retarder plate $k$ with thickness $d_{k}$ and birefringence $\Delta{n}$ are given by Equations~\ref{opd} and \ref{ret}.%
\begin{equation}\label{opd}
L_{k} = \Delta{n}\times{d_{k}}
\end{equation}
\begin{equation}\label{ret}
\delta_{k} = \frac{2\pi}{\lambda}L_{k}
\end{equation}
For an HWP made of two wave plates, the net retardance comes out to be:
\begin{equation}
\delta = \delta_{quartz} + \delta_{MgF_{2}} = \frac{2\pi}{\lambda}\{L_{quartz} + L_{MgF_{2}}\}
\end{equation}
For a wave plate aligned along the x-axis (in this case the MgF$_{2}$ plate), while the ordinary refractive index experienced by the light ray remains the same irrespective of $\alpha$ and $\theta$, the effective extraordinary refractive index $ne^{'}_{x}$ is given by Equation~\ref{ne_eff}. Consequently, $n_{x} = ne^{'}_{x}$ and $n_{y} = n_{o}$. Carrying out further calculations (presented in Gu et al.\ \cite{HWP_retardance}) yields the general retardance formula for a wave plate for any given $\alpha$ and $\theta$, as shown by Equation~\ref{general_ret}.
\begin{equation}\label{ne_eff}
ne^{'}_{x} = n_{e}\sqrt{1 + \left(\frac{1}{n_{e}^{2}} - \frac{1}{n_{o}^{2}}\right){\sin^{2}\theta}\,{\cos^{2}\alpha}}
\end{equation}
\begin{equation}\label{general_ret}
\delta_{k} = \frac{2\pi}{\lambda}d_{k}\left({\sqrt{{n_{xk}}^{2} - \sin^{2}\theta} - \sqrt{{n_{yk}}^{2} - \sin^{2}\theta}}\right)
\end{equation}
For the quartz wave plate aligned along the y-axis, similar calculations can be carried out by considering the effective rotation of $90^{\circ}$ and accounting for the change in azimuth angles. Following the above steps, the retardance and subsequently the Mueller matrix for HWP1 can be found. Similar calculations can be done for HWP2, by considering the overall rotation of both the quartz and MgF$_{2}$ plates as a shift in the azimuth angles by $22.5^{\circ}$. Python programs were written to carry out these calculations for the complete WALOP-South FoV, and the resultant plots are shown in Figs~\ref{walop_LHWP} and \ref{walop_RHWP}.
\section{Wavelength dependence of Instrumental Polarization}\label{specpol_appendix}
Fig~\ref{hwp_ret_wavelength} plots the retardance of the HWPs used in the instrument across the SDSS-r filter for normal incidence. As described above, the HWP retardance varies across the WALOP-South FoV due to the angle of incidence. Consequently, the instrument's polarimetric behavior (cross-talk) varies as a function of wavelength within the SDSS-r band, and the pattern of this variation is a function of the field position. Fig~\ref{specpol} plots the \textit{polarimetric efficiencies} across the SDSS-r band, i.e., the measured $q$ and $u$ for input polarizations of $q$ = 1 and $u$ = 1, respectively, for grid point 1 and a flat spectrum in the SDSS-r filter. To estimate the effect of different spectra on the measured Stokes parameters, we fed stellar spectra corresponding to different effective temperatures as input to the Zemax optical model, as shown in Fig~\ref{instrument_specpol_model}~(a).
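To make the retardance prescription of the previous section concrete, the following Python sketch implements Equations~\ref{ne_eff} and \ref{general_ret} for a two-plate quartz + MgF$_{2}$ HWP and returns the net retardance for a given incidence angle, azimuth and wavelength. The thicknesses and indices are indicative round numbers near 0.66~$\mu$m, not the as-built WALOP prescription, and the crossed quartz plate is handled, as described above, by swapping its index roles and shifting its azimuth by $90^{\circ}$.

```python
import numpy as np

def n_e_eff(n_o, n_e, theta, alpha):
    """Effective extraordinary index for incidence angle theta and azimuth
    alpha with respect to the plate's extraordinary axis (Eq. ne_eff)."""
    return n_e * np.sqrt(1 + (1 / n_e**2 - 1 / n_o**2)
                         * np.sin(theta)**2 * np.cos(alpha)**2)

def plate_retardance(d, n_x, n_y, theta, wavelength):
    """Single-plate retardance in radians (Eq. general_ret); d, wavelength in m."""
    return (2 * np.pi / wavelength) * d * (np.sqrt(n_x**2 - np.sin(theta)**2)
                                           - np.sqrt(n_y**2 - np.sin(theta)**2))

def hwp_retardance(theta, alpha, wavelength=0.66e-6):
    """Net retardance of a crossed quartz + MgF2 pair (illustrative values only)."""
    no_q, ne_q, d_q = 1.542, 1.551, 2.250e-3   # quartz, e-axis along y
    no_m, ne_m, d_m = 1.377, 1.388, 1.809e-3   # MgF2, e-axis along x
    # MgF2 plate: its extraordinary axis lies along x.
    delta_m = plate_retardance(d_m, n_e_eff(no_m, ne_m, theta, alpha),
                               no_m, theta, wavelength)
    # Quartz plate: crossed, so the indices swap roles and the azimuth shifts 90 deg.
    delta_q = plate_retardance(d_q, no_q,
                               n_e_eff(no_q, ne_q, theta, alpha + np.pi / 2),
                               theta, wavelength)
    return delta_m + delta_q

delta_normal = hwp_retardance(0.0, 0.0)   # close to +/- pi by design
```

Scanning $\theta$ and $\alpha$ over the pupil, or the wavelength over the SDSS-r band, yields maps analogous to Figs~\ref{walop_LHWP}, \ref{walop_RHWP} and \ref{hwp_ret_wavelength}.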
As noted earlier, the majority of the stars in the high Galactic latitude areas targeted by the \textsc{pasiphae} survey are expected to have $p < 0.3~\%$\cite{Skalidis}. We calculated the measured $q$ and $u$ values for $p$ = 1~\% and different EVPAs. Figs~\ref{instrument_specpol_model}~(b) and (c) show the measured $q$ and $u$ values for different spectra for $q$ = 1 and $u$ = 1, respectively, for grid point 1. The variation is negligible for $q_m$, while it is of the order of a few hundredths of a percent for $u_m$. This variation scales with the $p$ of the source and will be smaller still for $p < 0.3~\%$. These variations remain similar for any EVPA at a given $p$. The deviation is more prominent for low-temperature stars ($T < 4000~K$), as their spectra vary steeply across the SDSS-r filter. Figs~\ref{instrument_specpol_model}~(d) and (e) show the extent of the spread in $q_m$ and $u_m$ for the four spectral types across the 144 grid points of the FoV; $\Delta_{qm}$ and $\Delta_{um}$ are the differences between the maximum and minimum values among the spectra for the corresponding quantities. The effect is negligible for $q_m$ across the FoV. For $u_m$, there is a non-negligible dependence of the instrument-measured $u$ on the spectral type of the star. Yet, the difference is much less than 0.05~\%, the target sensitivity of the instrument, for almost all field points. \par The true estimate of the instrument's on-sky wavelength-dependent polarimetric behavior will be obtained during commissioning. For this, we have made provision for placing multiple narrowband filters in the filter wheel to facilitate modeling the instrument's polarization behavior across the SDSS-r filter. To correct for any observed dependence on the spectral class of the star, the calibration model can be developed with the spectral class/temperature of the star as a parameter, i.e., $q_i/u_i = f(q_m, u_m, s)$, where $s$ is the spectral type of the star.
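As a toy illustration of how such a parameterized calibration model could be structured (the instrument response and all coefficients below are hypothetical stand-ins, not WALOP values), one can fit a linear inverse mapping per spectral bin from simulated calibration measurements:

```python
import numpy as np

rng = np.random.default_rng(7)

def forward(q_i, u_i, c):
    """Toy instrument response: q_m passes through unchanged, while u_m mixes
    q and u with a spectrum-dependent cross-talk coefficient c (c = -1 would
    be an ideal HWP, giving u_m = u_i)."""
    return q_i, 0.5 * (1 + c) * q_i + 0.5 * (1 - c) * u_i

# Fit u_i = a0 + a1*q_m + a2*u_m separately for each "spectral bin",
# each bin having its own assumed cross-talk coefficient.
models = {}
for c in (-0.98, -0.95, -0.90):              # hypothetical per-bin cross-talk
    q_i = rng.uniform(-0.03, 0.03, 50)        # synthetic low-polarization grid
    u_i = rng.uniform(-0.03, 0.03, 50)
    q_m, u_m = forward(q_i, u_i, c)
    A = np.column_stack([np.ones_like(q_m), q_m, u_m])
    models[c], *_ = np.linalg.lstsq(A, u_i, rcond=None)

def calibrate_u(q_m, u_m, c):
    """Recover the on-sky u from measured (q_m, u_m) for spectral bin c."""
    a0, a1, a2 = models[c]
    return a0 + a1 * q_m + a2 * u_m
```

In practice the bins would be indexed by the \textsc{Gaia}-provided spectral class or effective temperature rather than by an assumed cross-talk value.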
\textsc{GAIA} Data Release 3\cite{GAIA_DR3_paper} will provide spectroscopic classifications and effective temperatures for all stars in the \textsc{pasiphae} survey footprint area. This information will be fed into the calibration model development.
\iffalse \section{WALOP Calibration Model- Lab Test Results }\label{lab_calibration_appendix} Figs~\ref{calibration_results_iqu2} to \ref{calibration_results_iqu5} are plots of the calibration model performance for cross-talk values (corresponding to different tilt angles of the HWP) from Position No 2 to 5 in Table~\ref{Calibration_lab_Results}. \fi
\acknowledgments
\par The \textsc{pasiphae} program is supported by grants from the European Research Council (ERC) under grant agreements No 771282 and No 772253; by the National Science Foundation (NSF) award AST-2109127; by the National Research Foundation of South Africa under the National Equipment Programme; by the Stavros Niarchos Foundation under grant \textsc{pasiphae}; and by the Infosys Foundation. This work utilized the open-source software packages Astropy\cite{astropy:2013, astropy:2018}, Numpy\cite{numpy}, Scipy\cite{scipy}, Matplotlib\cite{matplotlib} and Jupyter notebook\cite{jupyter_notebook}.
\par We are thankful to Vinod Vats at Karl Lambrecht Corp.\ for his inputs and suggestions on various aspects of half-wave plate design and performance.
\bibliography{article}
% \bibliographystyle{spiejour} %
\appendix
\iffalse \vspace{2ex}\noindent\textbf{Siddharth Maharana} is an astrophysics PhD student at the Inter-University Centre for Astronomy \& Astrophysics, Pune, India. He received his Bachelor in Mechanical Engineering from Central University, Bilaspur, India in 2015. He is currently working on the design and development of the WALOP instruments for \textsc{pasiphae} survey. His areas of interest are polarimetric instrumentation and data analysis. \vspace{2ex}\noindent\textbf{Ramya M.
Anche} is a Postdoctoral research associate at the Steward Observatory, University of Arizona, USA. She obtained her Ph.D. at the Indian Institute of Astrophysics, Bangalore, in 2020. Her areas of interest are polarimetric modeling of telescopes and instruments, high contrast imaging, and polarimetric data analysis. \vspace{2ex}\noindent\textbf{A. N. Ramaprakash} is a Senior Professor at the Inter-University Centre for Astronomy \& Astrophysics, Pune, India. He heads the instrumentation laboratory and has several years of experience building astronomical instruments both for ground and space based applications. He was responsible for building polarimeters like IMPOL, RoboPol etc. \vspace{2ex}\noindent\textbf{Bhushan Joshi} Not Available \vspace{2ex}\noindent\textbf{Dmitry Blinov} Not Available \vspace{2ex}\noindent\textbf{Kishan Deka} Not Available \vspace{2ex}\noindent\textbf{Hans Kristian Eriksen} Not Available \vspace{2ex}\noindent\textbf{Tuhin Ghosh} Not Available \vspace{2ex}\noindent\textbf{Eirik Gjerløw} Not Available \vspace{2ex}\noindent\textbf{John A. Kypriotakis} Not Available \vspace{2ex}\noindent\textbf{Nikolaos Mandarakas} Not Available \vspace{2ex}\noindent\textbf{Georgia V. Panopoulou} Not Available \vspace{2ex}\noindent\textbf{Vasiliki Pavlidou} Not Available \vspace{2ex}\noindent\textbf{Timothy J. Pearson} Not Available \vspace{2ex}\noindent\textbf{Vincent Pelgrims} Not Available \vspace{2ex}\noindent\textbf{Stephen B. Potter} Not Available \vspace{2ex}\noindent\textbf{Anthony C. S. Readhead} Not Available \vspace{2ex}\noindent\textbf{Raphael Skalidis} Not Available \vspace{2ex}\noindent\textbf{Konstantinos Tassis} Not Available \vspace{2ex}\noindent\textbf{Ingunn K. Wehus} Not Available \vspace{1ex} \fi \end{spacing}
\head{Sorting and compressing citations} Do not use the \texttt{cite} package with \thestyle; rather use one of the options \texttt{sort} or \texttt{sort\&compress}. These also work with author--year citations, making multiple citations appear in their order in the reference list. \head{Long author list on first citation} Use option \texttt{longnamesfirst} to have first citation automatically give the full list of authors. Suppress this for certain citations with |\shortcites{|\emph{key-list}|}|, given before the first citation. \head{Local configuration} Any local recoding or definitions can be put in \thestyle\texttt{.cfg} which is read in after the main package file. \head{Options that can be added to \texttt{\char`\\ usepackage}} \begin{description} \item[\ttfamily round] (default) for round parentheses; \item[\ttfamily square] for square brackets; \item[\ttfamily curly] for curly braces; \item[\ttfamily angle] for angle brackets; \item[\ttfamily colon] (default) to separate multiple citations with colons; \item[\ttfamily comma] to use commas as separators; \item[\ttfamily authoryear] (default) for author--year citations; \item[\ttfamily numbers] for numerical citations; \item[\ttfamily super] for superscripted numerical citations, as in \textsl{Nature}; \item[\ttfamily sort] orders multiple citations into the sequence in which they appear in the list of references; \item[\ttfamily sort\&compress] as \texttt{sort} but in addition multiple numerical citations are compressed if possible (as 3--6, 15); \item[\ttfamily longnamesfirst] makes the first citation of any reference the equivalent of the starred variant (full author list) and subsequent citations normal (abbreviated list); \item[\ttfamily sectionbib] redefines |\thebibliography| to issue |\section*| instead of |\chapter*|; valid only for classes with a |\chapter| command; to be used with the \texttt{chapterbib} package; \item[\ttfamily nonamebreak] keeps all the authors' names in a citation on one line;
causes overfull hboxes but helps with some \texttt{hyperref} problems. \end{description}
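The commands and options summarized above combine in a short preamble; the following minimal document is an illustrative sketch (the file \texttt{refs.bib} and the key \texttt{jon90} are placeholders):

```latex
% Minimal natbib usage sketch; `refs.bib' and the key `jon90' are placeholders.
\documentclass{article}
% Author--year citations in square brackets, sorted into reference-list order:
\usepackage[square,authoryear,sort]{natbib}
% Optional: retune punctuation (brackets, separator, style letter, year punctuation):
\bibpunct{[}{]}{;}{a}{,}{,}
\begin{document}
\citet{jon90} introduced the method; for details see \citep[chap.~2]{jon90}.
\bibliographystyle{plainnat}
\bibliography{refs}
\end{document}
```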
Title: Accelerating gravitational-wave parameterized tests of General Relativity using a multiband decomposition of likelihood
Abstract: The detection of gravitational waves from compact binary coalescence (CBC) has allowed us to probe the strong-field dynamics of General Relativity (GR). Among various tests performed by the LIGO-Virgo-KAGRA collaboration are parameterized tests, where parameterized modifications to GR waveforms are introduced and constrained. This analysis typically requires the generation of millions of computationally expensive waveforms. The computational cost is higher for a longer signal, and current analyses take weeks to years to complete for a binary neutron star (BNS) signal. In this work, we present a technique to accelerate the parameterized tests using a multiband decomposition of likelihood, which was originally proposed by one of the authors to accelerate parameter estimation analyses of CBC signals assuming GR. We show that our technique speeds up the parameterized tests of a 1.4 Msun-1.4 Msun BNS signal by a factor of O(10) for a low-frequency cutoff of 20 Hz. We also verify the accuracy of our method using simulated signals and real data.
https://export.arxiv.org/pdf/2208.03731
\pagenumbering{arabic} \title{Accelerating gravitational-wave parameterized tests of General Relativity \\ using a multiband decomposition of likelihood} \author{Naresh Adhikari} \email{naresh@uwm.edu} \affiliation{Department of Physics, University of Wisconsin-Milwaukee, Milwaukee, WI 53201, USA} \author{Soichiro Morisaki} \email{morisaki@uwm.edu} \affiliation{Department of Physics, University of Wisconsin-Milwaukee, Milwaukee, WI 53201, USA} \date{\today} \section{Introduction} \label{sec:introduction} The gravitational-wave era started with the discovery of the gravitational-wave signal from the binary black hole merger, GW150914 \cite{LIGOScientific:2016aoc}, by the advanced LIGO detectors \cite{LIGOScientific:2014pky, aLIGO:2020wna, Tse:2019wcy}. The binary neutron star (BNS) signal, GW170817 \cite{LIGOScientific:2017vwq}, was observed two years later by the advanced LIGO and advanced Virgo \cite{VIRGO:2014yos, Virgo:2019juy} detectors. It became the first example of multimessenger observations involving gravitational waves \cite{LIGOScientific:2017ync,LIGOScientific:2017zic}. Recently, the first-ever observations of mergers of two distinct compact objects, i.e., a neutron star and a black hole, were also achieved \cite{LIGOScientific:2021qlt}, completing the detection of gravitational-wave signals from all three distinct classes of compact binary mergers. The detected compact binary coalescence (CBC) signals have enabled us to test General Relativity (GR) in the strong-field regime. Various tests have been proposed and applied to the detected signals by the LIGO--Virgo--KAGRA collaboration (LVK) \cite{LIGOScientific:2016lio, LIGOScientific:2018dkp, LIGOScientific:2019fpa, LIGOScientific:2020tif, LIGOScientific:2021sio} and others \cite{Isi:2019aib, Takeda:2020tjj, Shoom:2021mdj}. Among various tests performed by LVK are the parameterized tests \cite{Arun:2006hn, Li:2011cg, Mishra:2010tp, Agathos:2013upa}. 
In the parameterized tests, parameterized non-GR modifications are introduced to GR waveforms, and the parameters governing the modifications are constrained. The non-GR parameters consist of inspiral parameters and post-inspiral parameters. The inspiral parameters parameterize relative or absolute shifts of the inspiral post-Newtonian coefficients, while the post-inspiral parameters parameterize relative shifts of the post-inspiral phenomenological parameters. For informative constraints to be obtained efficiently, typically only one of those non-GR parameters is allowed to deviate and be constrained in a single analysis \cite{Sampson:2013lpa, LIGOScientific:2016lio}. Such a single-parameter test is known to be robust to ignorance of higher-order corrections \cite{Perkins:2022fhr}. Recent works \cite{Shoom:2021mdj, Saleem:2021nsb} also showed that multiple parameters can be investigated simultaneously using principal component analysis. Those parameterized modifications can incorporate modifications predicted by various alternative theories of gravity, and those constraints can be mapped onto such non-GR theories as a post-processing step \cite{Yunes:2016jcc}. The parameterized tests typically employ stochastic sampling and require millions of likelihood evaluations. Each likelihood evaluation requires the evaluations of waveform values at all the frequency points considered, which is the dominant cost. Since the frequency points are sampled with an interval of $1/T$, where $T$ is the duration of data, more waveform evaluations are required for a longer signal. For a $1.4\Msun$--$1.4\Msun$ BNS signal, current analyses take weeks to years without any approximate methods. This will become a serious problem as the sensitivities of detectors are improved and BNS signals are detected more frequently. 
The same problem arises for parameter estimation analyses of CBCs assuming GR, and various techniques have been proposed to reduce the computational cost of waveform generation \cite{Canizares:2014fya, Smith:2016qas, Morisaki:2020oqk, Vinciguerra:2017ngf, Morisaki:2021ngj, Zackay:2018qdy, Cornish:2021lje}. Among the various rapid parameter estimation techniques, a recent work considers a multiband decomposition of the gravitational-wave likelihood \cite{Morisaki:2021ngj}, which exploits the chirping nature of the CBC signals and speeds up the parameter estimation of a BNS signal by more than an order of magnitude. Since the signal frequency increases with time, the time to merger, $\tau(f)$, decreases with frequency $f$. This implies that the likelihood can be approximated into a form that can be computed with waveform values at frequency points sampled with a variable interval proportional to $1 / \tau(f)$. This approximation drastically reduces the number of waveform evaluations at high frequency. A similar idea has been utilized for speeding up the matched-filter analysis for detection of CBC signals \cite{marion:in2p3-00014163, Buskulic:2010zz, Cannon:2011vi}. In this paper, we apply the multiband decomposition technique to parameterized tests of GR. In \secref{sec:basics}, we briefly explain parameterized tests of GR and the multiband decomposition method for a GR signal. To extend the multiband decomposition method to parameterized tests, we need to consider modifications of $\tau(f)$ caused by non-GR modifications in waveforms. In \secref{sec:methods}, we derive the modified $\tau(f)$ and investigate the speed-up gains of our technique for a BNS signal. In \secref{sec:validation} we study the accuracy of our technique using simulated BNS signals and real data. Finally, we conclude our work in \secref{sec:conclusion}. 
\section{Basics} \label{sec:basics} In this section, we explain the parameterized tests of GR and the multiband decomposition technique for rapid parameter estimation. \subsection{Parameterized tests of GR} The dominant quadrupole moment of gravitational waves is of the following form in the frequency domain, \begin{equation} \label{strain} \tilde{h}(f) = A(f)e^{i{\Phi(f)}}, \end{equation} where $A(f)$ and $\Phi(f)$ denote the signal amplitude and phase, respectively. The phase evolution of the early-inspiral part is calculated via the post-Newtonian (PN) expansion \cite{Will:2011nz, Blanchet:2013haa}, an expansion in the small orbital velocity $v/c$. A term of order $\mathcal{O}((v/c)^n)$ relative to the leading order is referred to as $(n/2)$PN. In GR, the phase up to the 3.5PN order is given by \cite{Buonanno:2009zt, Sathyaprakash:1991mt, Blanchet:1994ez}, \begin{equation} \label{phase} \begin{aligned} &\Phi^{\mathrm{GR}}(f) = \\ &2\pi f t_{c} - \phi_{c} - \frac{\pi}{4} + \sum_{j = 0}^7 \left[ \varphi^{\mathrm{GR}}_{j} + \varphi_{j}^{\mathrm{GR}(l)} \ln{f} \right] f^{(j-5)/3}, \end{aligned} \end{equation} where $t_c$ and $\phi_{c}$ denote the coalescence time and phase, respectively, and $\varphi^{\mathrm{GR}}_{j}$ and $\varphi_{j}^{\mathrm{GR}(l)}$ are $(j/2)$PN coefficients depending on the component masses $m_{1}$, $m_{2}$ and spins $\vec{S_{1}}$, $\vec{S_{2}}$. $\varphi^{\mathrm{GR}}_{j}$ vanishes for $j=1$, and $\varphi^{\mathrm{GR}(l)}_j$ vanishes except for $j=5, 6$. 
In the parameterized tests, parameterized deformations of non-zero PN coefficients are introduced by \cite{Arun:2006hn, Li:2011cg, Mishra:2010tp, Agathos:2013upa}, \begin{equation*} \varphi^{\mathrm{GR}}_{j} \to \left[ 1 + \delta \hat \varphi_{j}\right]\varphi^{\mathrm{GR}}_{j}, ~~~\varphi^{\mathrm{GR}(l)}_{j} \to \left[ 1 + \delta \hat \varphi^{(l)}_{j}\right] \varphi^{\mathrm{GR}(l)}_{j}, \end{equation*} where $\delta \hat \varphi_{j}$ and $\delta \hat \varphi^{(l)}_{j}$ are non-GR parameters quantifying relative shifts of GR inspiral phasing. In addition to the relative shifts, absolute shifts are introduced at the $-1$PN order, \begin{equation} \varphi_{-2} f^{-7/3} = \frac{3 \delta \hat \varphi_{-2}}{128} \eta^{2/5} \left(\frac{\pi G \mathcal{M} f}{c^3} \right)^{-7/3}, \end{equation} and at the $0.5$PN order, \begin{equation} \varphi_{1} f^{-4/3} = \frac{3 \delta \hat \varphi_{1}}{128 \eta^{1/5}} \left(\frac{\pi G \mathcal{M} f}{c^3} \right)^{-4/3}, \end{equation} where $\mathcal{M}$ and $\eta$ are the chirp mass and the symmetric mass ratio, respectively, \begin{equation} \mathcal{M} = \frac{(m_1 m_2)^{3/5}}{(m_1 + m_2)^{1/5}},~~~~~\eta = \frac{m_1 m_2}{(m_1 + m_2)^2}. \end{equation} The $-1$PN term models gravitational dipole radiation predicted by alternative theories of gravity \cite{eardley1975observable}. The full list of inspiral non-GR parameters is, \begin{equation*} \{ \delta \hat \varphi_{-2}, \delta \hat \varphi_{0}, \delta \hat \varphi_{1}, \delta \hat \varphi_{2}, \delta \hat \varphi_{3}, \delta \hat \varphi_{4}, \delta \hat \varphi_{5}^{(l)}, \delta \hat \varphi_{6},\delta \hat \varphi_{6}^{(l)}, \delta \hat \varphi_{7}\}. \end{equation*} The 2.5PN parameter $\delta \hat \varphi_{5}$ is not included since it is completely degenerate with $\phi_{c}$. In addition to the deformations of the inspiral phase, parameterized deformations of the post-inspiral phase are also considered. 
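As a numerical illustration of the definitions above, the chirp mass, symmetric mass ratio, and the two absolute-shift terms can be sketched as follows (the function names are ours, and the solar-mass constant $GM_\odot/c^3 \approx 4.925\times10^{-6}\,\mathrm{s}$ is an assumed input, not part of the text):

```python
import math

GMSUN_C3 = 4.925491e-6  # G * Msun / c^3 in seconds (assumed constant)

def chirp_mass(m1, m2):
    # M_c = (m1 m2)^(3/5) / (m1 + m2)^(1/5), in solar masses
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

def symmetric_mass_ratio(m1, m2):
    # eta = m1 m2 / (m1 + m2)^2, dimensionless; maximal (1/4) for m1 = m2
    return m1 * m2 / (m1 + m2) ** 2

def phase_minus1pn(f, m1, m2, dphi_m2):
    # Absolute -1PN shift: (3 dphi_{-2} / 128) eta^(2/5) (pi G Mc f / c^3)^(-7/3)
    mc_sec = chirp_mass(m1, m2) * GMSUN_C3
    eta = symmetric_mass_ratio(m1, m2)
    return 3.0 * dphi_m2 / 128.0 * eta ** 0.4 * (math.pi * mc_sec * f) ** (-7.0 / 3.0)

def phase_halfpn(f, m1, m2, dphi_1):
    # Absolute 0.5PN shift: (3 dphi_1 / (128 eta^(1/5))) (pi G Mc f / c^3)^(-4/3)
    mc_sec = chirp_mass(m1, m2) * GMSUN_C3
    eta = symmetric_mass_ratio(m1, m2)
    return 3.0 * dphi_1 / (128.0 * eta ** 0.2) * (math.pi * mc_sec * f) ** (-4.0 / 3.0)
```

For a $1.4\Msun$--$1.4\Msun$ binary this gives $\mathcal{M} \simeq 1.22\Msun$ and $\eta = 1/4$, and both shift terms vanish when the non-GR parameters are zero.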
The IMRPhenom waveform model \cite{Ajith:2007kx, Husa:2015iqa, Khan:2015jqa} employs a phase ansatz parameterized by $\beta_i ~(i=0,1,2,3)$ for the intermediate stage and by $\alpha_i~(i=0,1,2,3,4)$ for the merger--ringdown stage. Relative shifts of those parameters are introduced in a similar way, parameterized by $\delta \hat \beta_i$ and $\delta \hat \alpha_i$. The full list of post-inspiral non-GR parameters as considered in \cite{LIGOScientific:2016lio, LIGOScientific:2019fpa, LIGOScientific:2020tif, LIGOScientific:2021sio} is, \begin{equation*} \{\delta \hat \alpha_2, \delta \hat \alpha_3, \delta \hat \alpha_4, \delta \hat \beta_2, \delta \hat \beta_3\}. \end{equation*} For meaningful constraints to be obtained efficiently, typically only one of those 15 non-GR parameters is allowed to deviate and be constrained in a single analysis. Parameterized deformations of amplitude can also be considered, but they are difficult to measure with the current generation of detectors \cite{VanDenBroeck:2006qu, VanDenBroeck:2006ar, OShaughnessy:2013zfw}. The non-GR parameters are estimated or constrained via Bayesian inference. In the Bayesian inference, posterior distribution $p\left(\bm{\theta}|\{\bm{d}_i\}\right)$ is calculated via Bayes' theorem: \begin{equation} p\left(\bm{\theta}|\{\bm{d}_i\}\right) \propto \mathcal{L}\left(\{\bm{d}_i\} |\bm{\theta}\right)\pi(\bm{\theta}), \end{equation} where $\bm{d}_i$ denotes the data taken from the $i$-th detector, $\bm{\theta}$ the set of model parameters consisting of one of the non-GR parameters and GR parameters, $\pi(\bm{\theta})$ the prior distribution function determined from our belief or prior knowledge of $\bm{\theta}$, and $\mathcal{L}(\bm{d}|\bm{\theta})$ the likelihood function. 
For the likelihood, the Gaussian-noise likelihood function is typically used \cite{Thrane:2018qnx, Christensen:2022bxb}, \begin{equation} \mathcal{L}(\{\bm{d}_i\}|\bm{\theta}) \propto \exp\left[ - \frac 1 2 \sum_{i} \|\bm{d}_{i} - \bm{h}_{i}( \bm{\theta}) \|^2_i \right], \label{eq:loglikelihodd} \end{equation} \noindent where $\bm{h}_i$ is a model signal observed at the $i$-th detector. $\| \cdot \|^2 = \left( \cdot, \cdot \right)$ is the norm induced by the inner product, \begin{equation} \left ( \bm{a}, \bm{b} \right )_i = \frac{4}{T} \Re \left[\sum_{k=\flow T}^{\fhigh T} \frac{\tilde{a}^\ast(f_{k}) \tilde{b}(f_{k})}{S_i(f_{k})}\right], \label{eq:inner_prod} \end{equation} where $\flow$ and $\fhigh$ are the low- and high-frequency cutoffs of the analysis, respectively, $T$ is the duration of data, $S_i(f)$ is the noise power spectral density of the $i$-th detector, and $f_k\equiv k/T$ is the $k$-th frequency bin. The logarithm of the likelihood can be written as \begin{equation} \begin{aligned} & \ln \mathcal{L}(\bm{d}|\bm{\theta}) = \sum_i \left[\left( \bm{d}_i, \bm{h}_i( \bm{\theta}) \right)_i - \frac 1 2 \left( \bm{h}_i( \bm{\theta}), \bm{h}_i(\bm{\theta}) \right)_i \right] \\ & ~~~~~~~~~~~~~~~ + \text{const.}, \label{eq:nonconstantll} \end{aligned} \end{equation} where the constant part does not depend on $\bm{\theta}$ and is irrelevant for stochastic sampling. The inference is typically done via stochastic sampling methods, such as Markov chain Monte Carlo (MCMC) \cite{Metropolis:1953am, Hastings:1970aa} and nested sampling \cite{skilling2006}. The non-constant term is computed millions of times during stochastic sampling. As evident from Eqs. \eqref{eq:nonconstantll} and \eqref{eq:inner_prod}, each likelihood evaluation requires evaluations of waveform values at all the frequency points from $\flow$ to $\fhigh$. Those waveform evaluations are typically the dominant cost of analysis. 
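The inner product of Eq. \eqref{eq:inner_prod} and the non-constant part of the log-likelihood can be sketched for a single detector as follows (an illustrative sketch with hypothetical array inputs, not the implementation used in the analysis; data, template, and PSD live on the same frequency grid):

```python
import numpy as np

def inner_product(a, b, psd, T):
    # (a, b) = (4 / T) Re sum_k a*(f_k) b(f_k) / S(f_k)
    return 4.0 / T * np.real(np.sum(np.conj(a) * b / psd))

def log_likelihood_nonconst(data, template, psd, T):
    # Non-constant part of ln L for one detector: (d, h) - (h, h) / 2
    return (inner_product(data, template, psd, T)
            - 0.5 * inner_product(template, template, psd, T))
```

When the data equal the template, the non-constant part reduces to half the squared signal-to-noise ratio, $\|\bm{h}\|^2/2$.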
The cost is proportional to the number of frequency points, \begin{equation} K_{\mathrm{orig}} = (\fhigh - \flow) T + 1, \label{eq:korig} \end{equation} and higher for a longer signal. \subsection{Multiband decomposition} \label{sec:multiband} In the multiband decomposition method, the total frequency range is divided into $B$ overlapping frequency bands $f^{(b)}_{\mathrm{s}} \leq f \leq f^{(b)}_{\mathrm{e}}~~~(b=0,1,\dots,B-1)$. The start and end frequencies are determined based on a user-specified sequence of durations, $T=T^{(0)}>T^{(1)}>\dots>T^{(B-1)}$. First, the following equation is solved with respect to $f^{(b)}$ for each $b \in \{1,2,\dots,B-1\}$, \begin{equation} \tau(f^{(b)}) + L \sqrt{-\tau'(f^{(b)})} = T^{(b)} + t_{\mathrm{c, min}} - T, \label{eq:band_equation} \end{equation} where $\tau(f)$ is a reference time to merger from gravitational-wave frequency $f$. $L$ is a user-specified constant controlling the accuracy of the approximation. A larger value of $L$ gives more accurate likelihood values. $t_{\mathrm{c, min}}$ is the minimum coalescence time in the prior range. $L=5$ and $T-t_{\mathrm{c, min}}=2.12\,\mathrm{s}$ are used throughout this paper, following \cite{Morisaki:2021ngj}. The start and end frequencies are determined as \begin{align} &f^{(b)}_{\mathrm{s}} = \begin{cases} \displaystyle f_{\mathrm{low}}, & (b=0) \\ \displaystyle f^{(b)} - \frac{1}{\sqrt{-\tau' (f^{(b)})}}, & (b > 0) \end{cases} \\ &f^{(b)}_{\mathrm{e}} = \begin{cases} f^{(b + 1)}, & (b < B-1) \\ f_{\mathrm{high}} + \Delta f_{\mathrm{high}}. & (b = B - 1) \end{cases} \end{align} This way of constructing frequency bands guarantees that the inverse Fourier transform of $\tilde{h}(f)$ starting from $f^{(b)}_{\mathrm{s}}$ has a duration shorter than $T^{(b)}$. $\Delta f_{\mathrm{high}} > 0$ is required to avoid the loss of accuracy caused by the abrupt termination of a waveform. 
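The solution of the band equation above can be sketched by bisection, here using the 0PN GR time to merger (an illustrative sketch; the helper names are ours, and the solar-mass constant is an assumed input):

```python
import math

GMSUN_C3 = 4.925491e-6  # G * Msun / c^3 in seconds (assumed constant)

def tau_0pn(f, mc_sec):
    # 0PN (Newtonian) time to merger: (5 / (256 pi)) (pi Mc)^(-5/3) f^(-8/3)
    return 5.0 / (256.0 * math.pi) * (math.pi * mc_sec) ** (-5.0 / 3.0) * f ** (-8.0 / 3.0)

def dtau_0pn(f, mc_sec):
    # d tau / d f; negative, since tau decreases with frequency
    return -8.0 / 3.0 * tau_0pn(f, mc_sec) / f

def band_frequency(rhs, mc_sec, L=5.0, flow=20.0, fhigh=2048.0):
    # Bisection solve of tau(f) + L sqrt(-tau'(f)) = rhs; the left-hand side
    # decreases monotonically with f, so the root is bracketed by [flow, fhigh].
    lhs = lambda f: tau_0pn(f, mc_sec) + L * math.sqrt(-dtau_0pn(f, mc_sec))
    lo, hi = flow, fhigh
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if lhs(mid) > rhs:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Because the left-hand side of the band equation decreases with frequency, successive (shorter-duration) bands start at increasingly higher frequencies.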
With the frequency bands constructed, $(\bm{d}_i, \bm{h}_i)_i$ is approximated into the following form, \begin{equation} \begin{aligned} &(\bm{d}_i, \bm{h}_i(\bm{\theta}))_i \simeq \\ &\sum^{B-1}_{b=0} \frac{4}{T^{(b)}} \Re\left[\sum_{k=\ceil{f^{(b)}_{\mathrm{s}} T^{(b)}}}^{\floor{f^{(b)}_{\mathrm{e}} T^{(b)}}} w^{(b)} (f^{(b)}_k) \tilde{D}^{(b)\ast}_{i,k} \tilde{h}_i(f^{(b)}_k;\bm{\theta}) \right], \end{aligned} \end{equation} where $w^{(b)}(f)$ is a smooth window function extracting waveform values in the $b$-th frequency band, $\tilde{D}^{(b)}_{i,k}$ is a quantity calculated from the data and the power spectral density, and \begin{equation}\label{fT} f^{(b)}_k \equiv \frac{k}{T^{(b)}}. \end{equation} The sum over a high-frequency band requires waveform values only at downsampled frequencies whose interval is $1/T^{(b)}$, and hence fewer waveform evaluations. The number of waveform evaluations required for a single evaluation of $(\bm{d}_i, \bm{h}_i(\bm{\theta}))_i$ is reduced to \begin{equation} K_{\mathrm{MB}} = \sum_{b=0}^{B - 1} \left( \floor{f^{(b)}_{\mathrm{e}} T^{(b)}} - \ceil{f^{(b)}_{\mathrm{s}} T^{(b)}} + 1 \right). \end{equation} Two approximate methods have been proposed to compute $(\bm{h}_i(\bm{\theta}), \bm{h}_i(\bm{\theta}))_i$ with fewer waveform evaluations. One method is referred to as {\it Linear Interpolation}, which approximates $|\tilde{h}_i(f;\bm{\theta})|^2$ as a linear interpolation of the squares of downsampled waveform values. This works well if the waveform model contains only the dominant quadrupole moment, in which case $|\tilde{h}_i(f;\bm{\theta})|^2$ is a smooth function. The other method is referred to as {\it IFFT-FFT}, which works even if the waveform model contains higher-order multipole moments. In either case, $(\bm{h}_i(\bm{\theta}), \bm{h}_i(\bm{\theta}))_i$ is computed with waveform values at the $K_{\mathrm{MB}}$ frequency points, and no additional waveform evaluations are required. 
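The reduction in waveform evaluations can be tallied directly from the two counting formulas (an illustrative sketch; the band edges and durations would come from the band-construction algorithm described above):

```python
import math

def k_orig(flow, fhigh, T):
    # K_orig = (fhigh - flow) T + 1 uniformly spaced frequency points
    return int(round((fhigh - flow) * T)) + 1

def k_mb(bands, durations):
    # K_MB = sum_b ( floor(f_e^(b) T^(b)) - ceil(f_s^(b) T^(b)) + 1 )
    return sum(math.floor(fe * Tb) - math.ceil(fs * Tb) + 1
               for (fs, fe), Tb in zip(bands, durations))
```

For $T=256\,\mathrm{s}$ and a $20$--$2048\,\mathrm{Hz}$ range, the first formula gives $K_{\mathrm{orig}} = 519169 \approx 5.2\times10^{5}$.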
Thus, the cost of a single likelihood evaluation is reduced by $K_{\mathrm{orig}} / K_{\mathrm{MB}}$. \section{Extension to parameterized tests} \label{sec:methods} In the previous work \cite{Morisaki:2021ngj}, which applies the multiband decomposition method to the analysis of a GR signal, the 0PN formula of $\tau(f)$ in GR is used for solving Eq. \eqref{eq:band_equation}. For extending the previous work to parameterized tests of GR, we need to take into account corrections of $\tau(f)$ from the parameterized modifications of inspiral phasing. In this section, we derive the modified formula of $\tau(f)$ taking them into account. We also apply it to set up frequency bands and study the speed-up gains of our method for a typical BNS signal. \subsection{Modified time to merger} \label{sec:timetomerger} To obtain the modified time-to-merger formula, we use the following condition in accordance with the stationary phase approximation \cite{Sathyaprakash:1991mt, Poisson:1995ef, Creighton:2011zz, Maggiore:2007ulw}: \begin{equation} \Phi(f) = - \Psi(t (f)) + 2 \pi f t (f) + \frac{\pi}{4}, \end{equation} where $\Psi(t)$ is the phase of a time-domain waveform and $t(f)$ is the time at which $\Psi'(t) = 2 \pi f$. Hence, we can now relate $t(f)$ to the derivative of $\Phi(f)$, \begin{equation} t(f) = \frac{\Phi'(f)}{2 \pi}. 
\end{equation} With the inspiral-phase formula, we obtain the following modified time-to-merger formula, \begin{align} \tau(f) &= t_{c} - t(f) \nonumber \\ &=\frac{1}{2 \pi} \Bigg \{ \frac { 7 \varphi_{-2} } {3 f^{10/3}} + \frac{ 5 (1 + \delta \hat \varphi_{0})\varphi_{0}^{\mathrm{GR}} }{3f^{8/3}} + \frac { 4 \varphi_{1} } {3 f^{7/3}} \nonumber \\ &+ \frac{(1 + \delta \hat \varphi_{2}) \varphi_{2}^{\mathrm{GR}}}{f^{2}} + \frac{ 2(1 + \delta \hat \varphi_{3}) \varphi_{3}^{\mathrm{GR}}}{3f^{5/3}} + \frac{ (1 + \delta \hat \varphi_{4})\varphi_{4}^{\mathrm{GR}}}{3f^{4/3}} \nonumber \\ &- \frac{ (1 + \delta \hat \varphi_{5}^{(l)})\varphi_{5}^{(l){\mathrm{GR}}}}{f} \nonumber \\ &- \frac{(1 + \delta \hat \varphi_{6}) \varphi_{6}^{\mathrm{GR}} + (1 + \delta \hat \varphi_{6}^{(l)}) \varphi_{6}^{(l){\mathrm{GR}}} (\ln f + 3)}{3f^{2/3}} \nonumber \\ &- \frac{ 2 (1 + \delta \hat \varphi_{7}) \varphi_{7}^{\mathrm{GR}}}{3f^{1/3}} \Bigg \}. \label{tomerge} \end{align} Since the time to merger is predominantly determined by the terms up to 0PN, we ignore terms higher than that order, and employ the following formula, \begin{align} \tau(f) &= \frac { 7 \varphi_{-2} } {6 \pi f^{10/3}} + \frac{ 5 (1 + \delta \hat \varphi_{0})\varphi_{0}^{\mathrm{GR}} }{6 \pi f^{8/3}} \\ &= \frac{7 \delta \hat \varphi_{-2}}{256 \pi} \eta^{2/5} \left(\frac{\pi G \mathcal{M}}{c^3}\right)^{{-7}/{3}} f^{{-10}/{3}} \nonumber \\ &~~~~+ \frac{5(1 + \delta \hat \varphi_{0})}{256 \pi} \left(\frac{\pi G \mathcal{M}}{c^3}\right)^{{-5}/{3}} f^{{-8}/{3}}. \label{finaltau} \end{align} If higher-order multipole moments are present, the same formula with the frequency rescaling, $f \to 2 f / m$, is used, where $m$ is the maximum magnetic number of the moments. To validate our approximate time-to-merger formula, we numerically calculate time-domain waveforms incorporating higher-order PN terms and compare their durations with predictions from our formula. 
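The truncated time-to-merger formula can be written out directly as follows (an illustrative sketch; the function name is ours and the solar-mass constant is an assumed input):

```python
import math

GMSUN_C3 = 4.925491e-6  # G * Msun / c^3 in seconds (assumed constant)

def tau_modified(f, m1, m2, dphi0=0.0, dphi_m2=0.0):
    # Truncated time to merger: -1PN absolute-shift term plus (1 + dphi0) x 0PN term
    mc_sec = (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2 * GMSUN_C3  # chirp mass in seconds
    eta = m1 * m2 / (m1 + m2) ** 2
    x = math.pi * mc_sec
    t_m1pn = 7.0 * dphi_m2 / (256.0 * math.pi) * eta ** 0.4 * x ** (-7.0 / 3.0) * f ** (-10.0 / 3.0)
    t_0pn = 5.0 * (1.0 + dphi0) / (256.0 * math.pi) * x ** (-5.0 / 3.0) * f ** (-8.0 / 3.0)
    return t_m1pn + t_0pn
```

For a non-spinning $1.4\Msun$--$1.4\Msun$ binary in GR this gives $\tau(20\,\mathrm{Hz}) \approx 160\,\mathrm{s}$; positive $\delta\hat\varphi_{0}$ or $\delta\hat\varphi_{-2}$ lengthens the signal, which is why the band construction uses the maximal values in the explored range.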
Figure \ref{fig:tmerge} shows time-domain waveforms for a non-spinning $1.4\Msun$--$1.4\Msun$ BNS with various values of $\delta \hat{\varphi}_0$ or $\delta \hat{\varphi}_{-2}$. They are calculated as the inverse Fourier transforms of frequency-domain waveforms from $20\,\si{\hertz}$ to $1024\,\si{\hertz}$ that include terms up to the 3.5PN order in phase and the leading-order term in amplitude. The GR phase coefficients have been calculated with \texttt{SimInspiralTaylorF2AlignedPhasing} implemented in the LIGO Algorithm Library (LAL) \cite{lalsuite}. The vertical lines represent predictions from our approximate time-to-merger formula with $f=20\,\si{\hertz}$. As seen in the figure, the vertical lines accurately locate the times at which the waveforms start, demonstrating that our approximate time-to-merger formula is sufficiently accurate. Evaluating Eq. \eqref{finaltau} requires choosing the values of $\mathcal{M}$, $\eta$, $\delta \hat \varphi_{0}$, and $\delta \hat \varphi_{-2}$. To guarantee that the duration of each frequency band is long enough for any template waveform generated during stochastic sampling, their values are chosen to maximize $\tau(f)$. Hence, the minimum value of $\mathcal{M}$ and the maximum values of $\eta$, $\delta \hat \varphi_{0}$, and $\delta \hat \varphi_{-2}$ within the explored parameter space are chosen. The maximum value of $\eta$ is typically $1/4$, which corresponds to $m_1=m_2$. From Eq. \eqref{finaltau}, it is clear that $\tau(f)$ becomes negative for $\delta \hat \varphi_{0} < -1$ or $\delta \hat \varphi_{-2} < 0$ unless the other terms are significant enough to compensate for it. In this case, the template waveform is an inverse-chirp waveform, which starts from $t=t_{\mathrm{c}}$ and whose frequency simply decreases. The multiband approximation clearly breaks down for this type of waveform since it assumes that the signal frequency simply increases. 
Even without the multiband approximation, an inverse-chirp signal is not properly analyzed with the standard data conditioning \cite{veitch:2014wba}, where only data up to $\sim 2$ seconds after $t_{\mathrm{c}}$ are analyzed. Since the higher-order terms are ignored in Eq. \eqref{finaltau}, huge deviations from GR in one or more of the higher-order terms can make the approximate time-to-merger formula inaccurate. In a typical analysis, the explored range of $\mathcal{M}$ is much wider than the width of its marginal posterior distribution, and the time to merger computed with the minimum $\mathcal{M}$ in the explored range is large enough to construct conservative frequency bands. The same argument can be made for hidden modifications with non-PN frequency dependences considered in \cite{Li:2011cg}. Also, it is straightforward to take into account the higher-order terms of Eq. \eqref{tomerge} when such huge deviations are considered. \subsection{Speed-up gains}\label{sec:speedup} Table \ref{tab:speed_up} shows speed-up gains of our multiband technique for a $1.4\Msun$--$1.4\Msun$ BNS signal with several choices of $T$ and $\delta \hat{\varphi}_i$ used for calculating $\tau(f)$. For each case in the table, frequency bands were set up with the algorithm described in \secref{sec:multiband}, with Eq. \eqref{finaltau} evaluated with $m_1=m_2=1.4\Msun$ and the $\delta \hat{\varphi}_i$ of the row. The total frequency range is $20$--$2048\,\si{\hertz}$, and the durations of bands are powers of two, $\{T^{(b)}\}_{b=0}^{B-1} = \{T,~T/2,~T/4,~\cdots,~4\,\si{\second}\}$. The speed-up gain is estimated by the reduction of frequency points, $K_{\mathrm{orig}} / K_{\mathrm{MB}}$. For setting up frequency bands, we utilized the existing implementation of the multiband decomposition method, \texttt{MBGravitationalWaveTransient}, available in the \texttt{bilby} \cite{Ashton:2018jfp, Romero-Shaw:2020owr} software. 
For this study, we consider the 3 choices of $\delta \hat{\varphi}_i$: GR ($\delta \hat{\varphi}_0=\delta \hat{\varphi}_{-2}=0$) for reference, $0\mathrm{PN}$ ($\delta \hat{\varphi}_0=20,~\delta \hat{\varphi}_{-2}=0$), and $-1\mathrm{PN}$ ($\delta \hat{\varphi}_0=0,~\delta \hat{\varphi}_{-2}=1$). $\delta \hat{\varphi}_0=20$ or $\delta \hat{\varphi}_{-2}=1$ is the maximum of its range explored by LVK analyses, which we have found in configuration files available at \cite{tgrsamples}. In a standard LVK parameterized test, the duration of analyzed data is the same as that used for GR parameter estimation regardless of the explored range of a non-GR parameter. For a $1.4\Msun$--$1.4\Msun$ BNS signal, $T=256\,\si{\second}$. In either case with $T=256\,\si{\second}$ in the table, the speed-up gain is $\mathcal{O}(10)$. The speed-up gain for $0\mathrm{PN}$ or $-1\mathrm{PN}$ is smaller than that for GR because $\tau(f)$ gets larger due to the non-GR modification. To properly analyze any waveform within the explored range of a non-GR parameter, the data duration should be longer than the longest duration of the waveform with the allowed non-GR modifications. If data durations are determined in that conservative way, $T=4096\,\si{\second}$ and $T=32768\,\si{\second}$ for $0\mathrm{PN}$ and $-1\mathrm{PN}$ respectively. With that conservative choice of $T$, the speed-up gain gets larger and is $\mathcal{O}(10^2)$ for either case. \begin{table*}[t] \setlength{\tabcolsep}{12pt} \centering \captionsetup{labelfont=bf, justification=raggedright, singlelinecheck=false} \caption{The numbers of original frequency points $K_{\mathrm{orig}}$, the numbers of multibanded frequency points $K_{\mathrm{MB}}$, and speed-up gains $K_{\mathrm{orig}} / K_{\mathrm{MB}}$ for a $1.4\Msun$--$1.4\Msun$ BNS signal with several choices of data duration $T$ and a non-GR parameter value $\delta \hat{\varphi}_i$ used for calculating time to merger. 
The total frequency range is $20$--$2048\,\si{\hertz}$, and divided into frequency bands with $\{T^{(b)}\}_{b=0}^{B-1} = \{T,~T/2,~T/4,~\cdots,~4\,\si{\second}\}$.} \begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{} & \multirow{2}{*}{ $\delta \hat{\varphi}_{i}$} & \multirow{2}{*}{$T~(\si{\second})$} & \multirow{2}{*}{$K_{\mathrm{orig}}$} & \multirow{2}{*}{$K_{\mathrm{MB}}$}& \multirow{2}{*}{Speed up} \\ \\ \hline \hline GR & $0$ & $256$ & $5.2\times10^5$ & $ 1.2\times 10^4$ & $4.5\times 10^1$ \\ \hline \multirow{2}{*}{0PN ($i=0$)} & $20$ & $256$ & $5.2\times10^5$ & $2.8 \times 10^4$ & $1.8 \times 10^1$ \\ & $20$ & $4096$ & $8.3\times10^6$ & $6.8\times 10^4$ & $1.2 \times 10^2$ \\ \hline \multirow{2}{*}{-1PN ($i=-2$)} & $1$ & $256$ & $5.2\times10^5$ & $3.6 \times 10^4$ & $1.4 \times 10^1$ \\ & $1$ & $32768$ & $6.6\times10^7$ & $3.2\times10^5$ & $2.1\times10^2$ \\ \hline \hline \end{tabular} \label{tab:speed_up} \end{table*} \section{Validation} \label{sec:validation} In this section, we study the accuracy of our technique using simulated BNS signals and real data. \subsection{Simulation study} \label{sec:simulation} \begin{table} \captionsetup{labelfont=bf, justification=raggedright, singlelinecheck=false} \caption{The injection values, prior, and explored range of GR parameters: Chirp mass $\mathcal{M}$, mass ratio $q \leq 1$, luminosity distance $D_L$, right ascension RA and declination DEC, orbital inclination angle $\theta_{JN}$, polarization angle $\psi$, constant phase $\phi_{\mathrm{c}}$, and coalescence time $t_{\mathrm{c}}$. The prior uniform in cosine, sine, and comoving volume are denoted by ``Cosine", ``Sine" and ``Comoving" respectively. $t_{\mathrm{c,inj}}$ denotes the injection value of $t_{\mathrm{c}}$, and is set to the GPS time of $1187008882$ (17 Aug 2017, 12:41:04 UTC).} \begin{ruledtabular} \begin{tabular}{p{1.5cm} p{1cm} p{1.5 cm} p{1.5cm} p{1cm} p{1cm}} Parameter & Unit & Injection value & Prior & Min. 
& Max.\\ \hline $\mathcal{M}$ & M$_\odot$ & 1.2 & Uniform & 1.15 & 1.25\\ $q$ & - & 0.8 & Uniform & 0.125 & 1\\ $\theta_{JN}$ & rad. & 0.4 & Sine & 0 & $\pi$\\ $D_L$ & Mpc & 72 & Comoving & $10$& $100$\\ RA & rad. & 3.45 & Uniform & 0 & $2\pi$\\ DEC & rad. & $- 0.40$ & Cosine & $-\pi/2$ & $\pi/2$\\ $\psi$ & rad. & $0.65$ & Uniform & 0 & $\pi$\\ $\phi_{\mathrm{c}}$ & rad. & 1.3 & Uniform & 0 &$2\pi$\\ $t_{\mathrm{c}} - t_{\mathrm{c,inj}}$ & s & $0$ & Uniform & $- 0.1$ & $+ 0.1$ \end{tabular} \end{ruledtabular} \label{tab:bnspriors} \end{table} To verify that the multiband approximation does not bias the inference, we simulated BNS signals with non-zero $\delta \hat{\varphi}_i$, and performed parameterized tests on them with and without the multiband decomposition method. The injection values, prior, and explored range of GR parameters are common among simulations, and outlined in \tabref{tab:bnspriors}. The effects of spin angular momenta and tidal deformation of colliding objects were not taken into account for quick runs. We considered the network of the two advanced LIGO detectors and the Virgo detector, and injected signals into Gaussian noise colored by their design sensitivities. The analyzed frequency range is $20$--$2048\,\si{\hertz}$. The network signal-to-noise ratios (SNRs) of the simulated signals are $\sim 50$. The simulated signals were computed with the \texttt{TaylorF2} \cite{Buonanno:2009zt, Santamaria:2010yb} waveform model implemented in LAL, and the same waveform model was used for parameter recovery. In this study, we considered two simulated signals: the $0\mathrm{PN}$ simulation with $\delta \hat{\varphi}_0=1,~\delta \hat{\varphi}_i=0~(i \neq 0)$ and the $-1\mathrm{PN}$ simulation with $\delta \hat{\varphi}_{-2}=0.003,~\delta \hat{\varphi}_i=0~(i \neq -2)$. The duration of a signal from $20\,\si{\hertz}$ with vanishing non-GR parameters is $\sim 160\,\si{\second}$. 
It is doubled for the $0\mathrm{PN}$ simulation or increased by $\sim 50\%$ for the $-1\mathrm{PN}$ simulation due to the non-zero non-GR parameter. The explored parameter range is $-1 \leq \delta \hat{\varphi}_0 \leq 2$ for the $0$PN simulation and $-0.01 \leq \delta \hat{\varphi}_{-2} \leq 0.01$ for the $-1$PN simulation. For each simulation, the prior of the non-GR parameter is uniform over its explored range. The durations of analyzed data are $512\,\si{\second}$ and $256\,\si{\second}$ for the $0\mathrm{PN}$ and $-1\mathrm{PN}$ simulations respectively. The total frequency range is divided into $8$ frequency bands with $\{T^{(b)}\}_{b=0}^7 = \{512\,\si{\second},~256\,\si{\second},~\cdots,~4\,\si{\second}\}$ for the multiband run of the $0\mathrm{PN}$ simulation, and $7$ frequency bands with $\{T^{(b)}\}_{b=0}^6 = \{256\,\si{\second},~128\,\si{\second},~\cdots,~4\,\si{\second}\}$ for the $-1\mathrm{PN}$ simulation. The speed-up gains $K_{\mathrm{orig}} / K_{\mathrm{MB}}$ are $58$ and $37$ for the $0\mathrm{PN}$ and $-1\mathrm{PN}$ simulations respectively. The stochastic sampling was performed with the \texttt{bilby} software and the \texttt{dynesty} \cite{Speagle:2019ivv} sampler. The convergence of sampling is controlled by the number of live points $n_{\mathrm{live}}$ and the length of the MCMC chain in units of its auto-correlation length $n_{\mathrm{ACT}}$ \cite{Romero-Shaw:2020owr}. We used $n_{\mathrm{live}}=500, n_{\mathrm{ACT}}=10$ and $n_{\mathrm{live}}=1000, n_{\mathrm{ACT}}=10$ for the $0\mathrm{PN}$ and $-1\mathrm{PN}$ simulations respectively. We have confirmed that increasing their values does not change the results significantly, indicating that the results have converged. We marginalized the posterior over constant phase $\phi_{\mathrm{c}}$ analytically, and over luminosity distance $D_{\mathrm{L}}$ using the look-up table method \cite{Singer:2015ema, Thrane:2018qnx}. 
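The speed-up gains quoted above are ratios of frequency-point counts. They can be estimated roughly by assuming a uniform grid of spacing $1/T$ for the original likelihood, and, for the multibanded one, a grid of spacing $1/T^{(b)}$ in each band $b$ starting where the Newtonian time to merger equals $T^{(b)}$. The sketch below uses this simplified construction (the actual \texttt{MBGravitationalWaveTransient} algorithm differs in detail, and the function names are illustrative):

```python
import numpy as np

GMSUN_S = 4.925e-6  # G * M_sun / c^3 in seconds

def f_of_tau(T, mchirp):
    """Frequency (Hz) at which the Newtonian time to merger equals T (s)."""
    return (256.0 * T / 5.0) ** (-3.0 / 8.0) \
        * (GMSUN_S * mchirp) ** (-5.0 / 8.0) / np.pi

def point_counts(f_min, f_max, durations, mchirp):
    """K_orig for a uniform grid of spacing 1/durations[0], and K_MB for
    bands whose grid spacing is the inverse of each band duration."""
    starts = [f_min] + [f_of_tau(T, mchirp) for T in durations[1:]]
    edges = starts + [f_max]
    k_orig = (f_max - f_min) * durations[0]
    k_mb = sum((edges[b + 1] - edges[b]) * durations[b]
               for b in range(len(durations)))
    return k_orig, k_mb

durations = [256.0, 128.0, 64.0, 32.0, 16.0, 8.0, 4.0]
mchirp = (1.4 * 1.4) ** 0.6 / 2.8 ** 0.2  # ~1.22 Msun
k_orig, k_mb = point_counts(20.0, 2048.0, durations, mchirp)
```

For this band layout the estimate gives $K_{\mathrm{orig}} \approx 5.2\times10^5$ and a reduction of a few tens, of the same order as the gains listed in Table \ref{tab:speed_up}.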
Figures \ref{fig:dchi0_few_lkl} and \ref{fig:dchimin2_few_lkl} show marginal posterior distributions of chirp mass ($\mathcal{M}$), mass ratio ($q$), and a non-GR parameter ($\delta \hat{\varphi}_0$ or $\delta \hat{\varphi}_{-2}$) for the $0\mathrm{PN}$ and $-1\mathrm{PN}$ simulations respectively. The runs without and with the multiband approximation are labeled ``Standard" and ``Multiband" respectively. As shown in the figures, the standard and multiband runs produce almost equivalent results in either simulation. More quantitatively, the differences in the lower or upper bounds of the $90\%$ credible intervals are less than $4\%$ of their widths. Those observations indicate that the multiband approximation is accurate enough for a relatively high SNR of $\sim 50$. Since log-likelihood errors introduced by the multiband approximation are roughly proportional to the square of SNR, they are smaller for lower SNR values. Therefore, our results show that our multiband approximation can be safely used also for SNR values below $50$. The full posterior distributions of all the inferred parameters are presented in Figs.~\ref{fig:likelihood_dchi0} and \ref{fig:likelihood_dchimin2}. The standard runs took $\sim 9$ days and $\sim 14$ days to complete for the $0\mathrm{PN}$ and $-1\mathrm{PN}$ simulations respectively. These are reduced to $\sim 2$ hours and $\sim 7$ hours respectively with the multiband approximation. The reduction of run times is roughly consistent with the speed-up gains estimated from $K_{\mathrm{orig}} / K_{\mathrm{MB}}$. The runs were performed with an Intel Xeon Gold 6136 CPU with a clock rate of 3.0 GHz. The stochastic sampling is parallelized with $48$ processes for the $0\mathrm{PN}$ simulation, and $24$ processes for the $-1\mathrm{PN}$ simulation. 
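The $4\%$ criterion above can be evaluated directly on two sets of posterior samples; the helpers below are illustrative names, not part of any analysis package:

```python
import numpy as np

def ci_bounds(samples, level=0.90):
    """Equal-tailed credible-interval bounds from posterior samples."""
    tail = (1.0 - level) / 2.0
    return np.quantile(samples, [tail, 1.0 - tail])

def bound_shift(samples_a, samples_b, level=0.90):
    """Largest difference between corresponding CI bounds of two runs,
    as a fraction of the first run's interval width."""
    a = ci_bounds(samples_a, level)
    b = ci_bounds(samples_b, level)
    return np.max(np.abs(a - b)) / (a[1] - a[0])
```

Applied to the standard and multiband posterior samples for each parameter, `bound_shift` below $0.04$ corresponds to the agreement reported here.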
\subsection{Likelihood errors for GW190814} \label{sec:gw190814} To validate our approximation for more complicated signal morphology, we investigate the likelihood errors of our approximation for GW190814 \cite{LIGOScientific:2020zkf}. We computed $\ln \mathcal{L}$ with and without our approximation on posterior samples from the LIGO-Virgo parameter estimation analysis, and computed their differences $\Delta \ln \mathcal{L}$ as errors. This signal is an appropriate test case for validating our approximation with gravitational-wave higher-order multipole moments, since their effects are statistically significant for this signal \cite{LIGOScientific:2020zkf}. We also included the calibration uncertainties of the detectors, to validate our approximation against the signal modulation they cause. The data were obtained from the Gravitational Wave Open Science Center \cite{gwosc}, and posterior samples were from \cite{tgrsamples}. The \texttt{IMRPhenomPv3HM} \cite{Khan:2018fmp, Khan:2019kot} waveform model was employed for likelihood evaluations, which is the same model used for the LVK analysis. Figure \ref{fig:likelihood_errors} shows $|\Delta \ln \mathcal{L}|$ with the horizontal axis representing the non-constant part of $\ln \mathcal{L}$, \begin{equation} \ln \Lambda \equiv \sum_i \left[\left(\bm{d}_i, \bm{h}_i\right)_i - \frac{1}{2} \left(\bm{h}_i, \bm{h}_i\right)_i\right]. \end{equation} The left plot shows the errors for tests of inspiral parameters and the right for tests of post-merger parameters. The total frequency range of $20$--$1024\,\si{\hertz}$ was divided into $3$ frequency bands with $\{T^{(b)}\}_{b=0}^2 = \{16\,\si{\second},~8\,\si{\second},~4\,\si{\second}\}$. 
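In discretized form, the detector-wise inner products entering $\ln \Lambda$ are the usual noise-weighted sums over frequency bins; a minimal sketch (the flat unit PSD in the test is purely illustrative, and array shapes are assumed consistent):

```python
import numpy as np

def inner(a, b, psd, df):
    """Noise-weighted inner product: 4 * df * Re sum(conj(a) * b / psd)."""
    return 4.0 * df * np.real(np.sum(np.conj(a) * b / psd))

def log_lambda(data, templates, psds, df):
    """ln Lambda = sum over detectors of (d_i, h_i) - (h_i, h_i) / 2."""
    return sum(inner(d, h, p, df) - 0.5 * inner(h, h, p, df)
               for d, h, p in zip(data, templates, psds))
```

For a template equal to the data, $\ln \Lambda$ reduces to half the squared optimal SNR, which is why likelihood errors scale roughly with the square of the SNR.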
The frequency bands were determined based on the time-to-merger of the $m=4$ mode, and with the following reference values of $\mathcal{M}$, $\eta$, $\delta \hat \varphi_0$, and $\delta \hat \varphi_{-2}$, \begin{equation} \begin{aligned} &\mathcal{M}=5.5M_{\odot},~~~\eta=0.25, \\ &\delta \hat \varphi_0 = \begin{cases} 20, & (\text{test of }\delta \hat \varphi_0) \\ 0, & (\text{otherwise}) \end{cases} \\ &\delta \hat \varphi_{-2} = \begin{cases} 1, & (\text{test of }\delta \hat \varphi_{-2}) \\ 0. & (\text{otherwise}) \end{cases} \end{aligned} \end{equation} Those reference values were determined based on the parameter range explored by the LVK analysis. The speed-up gain is $2.42$ for the test of $\delta \hat \varphi_0$, $2.44$ for the test of $\delta \hat \varphi_{-2}$, and $3.28$ for the other cases. The {\it IFFT-FFT} algorithm was employed for computing $(\bm{h}, \bm{h})_i$ due to significant higher-order multipole moments. Each plot label shows the median value of $|\Delta \ln \mathcal{L}|$. The errors are $\lesssim 10^{-4}$ for the test of $\delta \hat \varphi_0$ or $\delta \hat \varphi_{-2}$, and $\lesssim 10^{-3}$ for the other tests. The smaller errors for the former case are because frequency bands are constructed from a longer time-to-merger due to $\delta \hat \varphi_0>0$ or $\delta \hat \varphi_{-2}>0$. In either case, the errors are much smaller than unity, which shows that our approximation is accurate enough for the analysis of GW190814. \section{Conclusion} \label{sec:conclusion} In this paper, we have presented a rapid inference technique for parameterized tests of GR, one of the main tests of GR using gravitational waves from CBC. Our technique is based on a multiband decomposition of the gravitational-wave likelihood, which was originally developed for speeding up parameter estimation of CBC signals under the assumption of GR. 
It exploits the chirping nature of a signal, and in principle is applicable to any chirp signal whose time to merger $\tau(f)$ is known. To extend this technique to parameterized tests of GR, we have derived $\tau(f)$ taking into account non-GR deformations of the waveform. Applying the multiband decomposition technique with our new formula for $\tau(f)$ to a $1.4\Msun$--$1.4\Msun$ BNS signal, we have found that our technique speeds up parameterized tests of a typical BNS signal by a factor of $\mathcal{O}(10)$ for the low-frequency cutoff of $20\,\si{\hertz}$. To validate our approximate technique, we have simulated BNS signals with SNRs of $\sim 50$. Performing parameterized tests on them with and without our technique, we have verified that our technique produces results equivalent to those from runs without any approximate methods. We have also computed log-likelihood errors of our technique for GW190814 and confirmed that they are well below unity. Therefore, our work provides an efficient and accurate way of performing parameterized tests of GR, which is useful for dealing with more frequent detections in future observations. We focus on single-parameter tests throughout this work. In principle, our technique can be applied to multiple-parameter tests using principal component analysis \cite{Shoom:2021mdj, Saleem:2021nsb}, with a modified time-to-merger formula parameterized by parameters corresponding to principal directions. We leave that extension for future work. \begin{acknowledgments} The authors thank an anonymous referee for helpful feedback. We thank Ignacio Maga\~{n}a Hernandez for reviewing the manuscript and providing valuable comments. We also thank Jolien Creighton, Patrick Brady, and Brandon Piotrzkowski for their helpful comments on improving this paper. The authors are supported by NSF PHY-2207728 and PHY-1912649. 
The authors are grateful for computational resources provided by the Leonard E Parker Center for Gravitation, Cosmology and Astrophysics at the University of Wisconsin-Milwaukee and supported by NSF Grants PHY-1626190 and PHY-1700765. The authors are grateful for computational resources provided by LIGO Laboratory and supported by NSF Grants PHY-0757058 and PHY-0823459. This material is based upon work supported by NSF's LIGO Laboratory which is a major facility fully funded by the National Science Foundation. This research has made use of data or software obtained from the Gravitational Wave Open Science Center (gw-openscience.org), a service of LIGO Laboratory, the LIGO Scientific Collaboration, the Virgo Collaboration, and KAGRA. LIGO Laboratory and Advanced LIGO are funded by the United States National Science Foundation (NSF) as well as the Science and Technology Facilities Council (STFC) of the United Kingdom, the Max-Planck-Society (MPS), and the State of Niedersachsen/Germany for support of the construction of Advanced LIGO and construction and operation of the GEO600 detector. Additional support for Advanced LIGO was provided by the Australian Research Council. Virgo is funded, through the European Gravitational Observatory (EGO), by the French Centre National de Recherche Scientifique (CNRS), the Italian Istituto Nazionale di Fisica Nucleare (INFN) and the Dutch Nikhef, with contributions by institutions from Belgium, Germany, Greece, Hungary, Ireland, Japan, Monaco, Poland, Portugal, Spain. The construction and operation of KAGRA are funded by Ministry of Education, Culture, Sports, Science and Technology (MEXT), and Japan Society for the Promotion of Science (JSPS), National Research Foundation (NRF) and Ministry of Science and ICT (MSIT) in Korea, Academia Sinica (AS) and the Ministry of Science and Technology (MoST) in Taiwan. \end{acknowledgments} \bibliographystyle{apsrev4-1} \bibliography{reference}
Title: The Lockman-SpReSO survey I. Description, target selection, observations and preliminary results
Abstract: Context. Extragalactic surveys are a key tool in better understanding the evolution of galaxies. Both deep and wide surveys are improving the emerging picture of physical processes that take place in and around galaxies and identifying which of these processes are the most important in shaping the properties of galaxies. Aims. The Lockman-SpReSO survey aims to provide one of the most complete optical spectroscopic follow-ups of far-infrared (FIR) sources detected by the $Herschel$ Space Observatory in the Lockman Hole field. Such a large optical spectroscopic sample of FIR-selected galaxies will supply valuable information about the relation between fundamental FIR and optical parameters (including extinction, star formation rate and gas metallicity). In this article, we introduce and provide an in-depth description of the Lockman-SpReSO survey and of its early results. Methods. We have selected FIR sources from the observations of the $Herschel$ telescope over the central 24 arcmin $\times$ 24 arcmin of the Lockman Hole field with an optical counterpart up to 24.5 $R_{\rm C}$(AB). The sample comprises 956 $Herschel$ FIR sources plus 188 interesting additional objects in the field. The faint component of the catalogue ($R_{\rm C}$(AB)$\geq$20) was observed using the OSIRIS instrument on the 10.4 m Gran Telescopio Canarias (GTC) in MOS mode. The bright component was observed using two multifibre spectrographs: the AF2-WYFFOS at the William Herschel Telescope (WHT) and the Hydra instrument at the WYIN telescope.
https://export.arxiv.org/pdf/2208.02828
\title{The Lockman-SpReSO project} \subtitle{Description, target selection, observations, and catalogue preparation} \author{Mauro González-Otero \inst{1,2}\orcidlink{0000-0002-4837-1615} \and Carmen P. Padilla-Torres \inst{1,2,3,4} \orcidlink{0000-0001-5475-165X} \and Jordi Cepa \inst{1,2,4} \orcidlink{0000-0002-6566-724X} \and J. Jesús González \inst{5} \and Ángel Bongiovanni\inst{4,6}\orcidlink{0000-0002-3557-3234} \and Ana María Pérez García\inst{4,7}\orcidlink{0000-0003-1634-3588} \and J. Ignacio González-Serrano \inst{4,8}\orcidlink{0000-0003-0795-3026} \and Emilio Alfaro \inst{9} \and Vladimir Avila-Reese \inst{5} \and Erika Benítez\inst{5}\orcidlink{0000-0003-1018-2613} \and Luc Binette \inst{5} \and Miguel Cerviño \inst{7}\orcidlink{0000-0001-8009-231X} \and Irene Cruz-González \inst{8}\orcidlink{0000-0002-2653-1120} \and José A. de Diego \inst{8} \and Jesús Gallego\inst{10}\orcidlink{0000-0003-1439-7697} \and Héctor Hernández-Toledo \inst{5} \and Yair Krongold \inst{5} \and Maritza A. Lara-López \inst{10}\orcidlink{0000-0001-7327-3489} \and Jakub Nadolny \inst{11}\orcidlink{0000-0003-1440-9061} \and Ricardo Pérez-Martínez \inst{4,12}\orcidlink{0000-0002-9127-5522} \and Mirjana Povi\'c \inst{13,9,14} \and Miguel Sánchez-Portal \inst{4,6} \and Bernabé Cedrés \inst{4} \and Deborah Dultzin \inst{5} \and Elena Jiménez-Bailón \inst{5} \and Rocío Navarro Martínez \inst{4} \and C. 
Alenka Negrete \inst{5} \and Irene Pintos-Castro \inst{15} \and Octavio Valenzuela \inst{5} } \institute{Instituto de Astrofísica de Canarias, E-38205 La Laguna, Tenerife, Spain % \and Departamento de Astrofísica, Universidad de La Laguna (ULL), E-38205 La Laguna, Tenerife, Spain % \and Fundación Galileo Galilei-INAF Rambla José Ana Fernández Pérez, 7, E-38712 Breña Baja, Tenerife, Spain % \and Asociación Astrofísica para la Promoción de la Investigación, Instrumentación y su Desarrollo, ASPID, E-38205 La Laguna, Tenerife, Spain % \and Instituto de Astronomía, Universidad Nacional Autónoma de México, Apdo. Postal 70-264, 04510 Ciudad de México, Mexico % \and Institut de Radioastronomie Millimétrique (IRAM), Av. Divina Pastora 7, Núcleo Central E-18012, Granada, Spain % \and Centro de Astrobiología (CSIC/INTA), E-28692 ESAC Campus, Villanueva de la Cañada, Madrid, Spain % \and Instituto de Física de Cantabria (CSIC-Universidad de Cantabria), E-39005, Santander, Spain % \and Instituto de Astrofísica de Andalucía (CSIC), E-18080, Granada, Spain % \and Departamento de Física de la Tierra y Astrofísica, Instituto de Física de Partículas y del Cosmos, IPARCOS, Universidad Complutense de Madrid (UCM), E-28040, Madrid, Spain. % \and Astronomical Observatory Institute, Faculty of Physics, Adam Mickiewicz University, ul.~S{\l}oneczna 36, 60-286 Pozna{\'n}, Poland % \and ISDEFE for European Space Astronomy Centre (ESAC)/ESA, P.O. 
Box 78, E-28690 Villanueva de la Cañada, Madrid, Spain % \and Space Science and Geospatial Institute (SSGI), Entoto Observatory and Research Center (EORC), Astronomy and Astrophysics Research Division, PO Box 33679, Addis Ababa, Ethiopia % \and Physics Department, Mbarara University of Science and Technology (MUST), Mbarara, Uganda % \and Centro de Estudios de Física del Cosmos de Aragón (CEFCA), Plaza San Juan 1, 44001 Teruel, Spain % \\ \email{mauro.gonzalez@iac.es, mauromarago@gmail.com} } \date{Received ---; accepted ---} \abstract {Extragalactic surveys are a key tool for better understanding the evolution of galaxies. Both deep and wide-field surveys serve to provide a clearer emerging picture of the physical processes that take place in and around galaxies, and to identify which of these processes are the most important in shaping the properties of galaxies.} {The Lockman Spectroscopic Redshift Survey using Osiris (Lockman-SpReSO) aims to provide one of the most complete optical spectroscopic follow-ups of the far-infrared (FIR) sources detected by the \textit{Herschel} Space Observatory in the Lockman Hole (LH) field. The optical spectroscopic study of the FIR-selected galaxies supplies valuable information about the relation between fundamental FIR and optical parameters, including extinction, star formation rate, and gas metallicity. In this article, we introduce and provide an in-depth description of the Lockman-SpReSO project and of its early results.} {We selected FIR sources from \textit{Herschel} observations of the central 24 arcmin $\times$ 24 arcmin of the LH field with an optical counterpart up to 24.5 $R_{\rm C}$(AB). The sample comprises 956 \textit{Herschel} FIR sources, plus 188 additional interesting objects in the field. These are point X-ray sources, cataclysmic variable star candidates, high-velocity halo star candidates, radio sources, very red quasi-stellar objects, and optical counterparts of sub-millimetre galaxies. 
The faint component of the catalogue ($R_{\rm C}(\mathrm{AB})\geq20$) was observed using the OSIRIS instrument on the 10.4 m Gran Telescopio Canarias in multi-object spectroscopy (MOS) mode. The bright component was observed using two multi-fibre spectrographs: the AF2-WYFFOS at the William Herschel Telescope and the HYDRA instrument at the WIYN telescope.} {From an input catalogue of 1144 sources, we measured a secure spectroscopic redshift in the range $0.03 \lesssim z \lesssim 4.96$ for 357 sources with at least two identified spectral lines. In addition, for 99 sources that show only one emission or absorption line, a spectroscopic redshift was postulated based on the line and object properties and on the photometric redshift. In both cases, properties of emission and absorption lines were measured. Furthermore, to characterize in more depth the sample with determined spectroscopic redshifts, spectral energy distribution (SED) fits were performed using the CIGALE software. Preliminary estimations of the IR luminosity and stellar mass for the sample are also presented.} {} \keywords{Astronomical databases: surveys - galaxies: statistics - galaxies: fundamental parameters - techniques: spectroscopic } \section{Introduction} \label{sec:1} A fundamental contribution to our understanding of galaxy formation and evolution is the availability of samples of objects as large and as deep as possible. These censuses, or surveys, may be broadly classified as either photometric or spectroscopic. Photometric surveys observe an area of the sky by integrating over a range of wavelengths, commonly referred to as photometric bands, pass-bands, or filters. It is common practice to perform observations of the same field in several filters, thus making the spectral coverage as complete as possible. 
We can differentiate several types of photometric surveys according to the wavelength range integrated during the observations or the resolving power ($R\equiv\lambda/\delta\lambda$, $\delta\lambda$ being the full width at half maximum (FWHM) of the filter's transmission curve) of the filters used. Thus, broad-band surveys have the lowest resolving power ($R\sim5$); the Sloan Digital Sky Survey (SDSS; \citealt{York2000}) and the Dark Energy Spectroscopic Instrument (DESI) Legacy Imaging Surveys \citep{Dey2019} are some examples. In the case of intermediate-band surveys, the resolving power usually ranges between 20 and 40; examples of such surveys are the ALHAMBRA \citep{Moles2008} and the COMBO-17 \citep{Wolf2003} surveys. Narrow-band surveys have the highest resolving power ($R\geq60$); the PAUS survey \citep{Eriksen2019}, the J-PAS survey \citep{Benitez2014}, and the OTELO survey \citep{Bongiovanni2019}, using 40, 54, and 36 narrow-band filters, respectively, are examples. This photometric information allows us to construct the spectral energy distribution (SED) and fit it using empirical or theoretical spectral templates. A good coverage of photometric information using different filters over a wide spectral range is crucial for obtaining the best possible fits and for accurately determining the physical properties of objects (e.g. stellar mass and IR luminosity), together with their photometric redshift ($z_\mathrm{phot}$). On the other hand, for each object, spectroscopic surveys yield the spectrum over a wavelength range determined by the instrumental configuration. These spectroscopic surveys generally have brighter flux limits, determined by the achievable resolution. Nevertheless, the multi-object spectroscopy (MOS) mode used in spectroscopic surveys has improved the efficiency of these studies, making it possible to obtain the spectra of tens, hundreds (e.g. 
the Near-Infrared Spectrograph on the \textit{James Webb Space Telescope}; \citealt{Ferruit2022}), or thousands (e.g. the DESI spectroscopic study; \citealt{Abareshi2022}) of objects simultaneously. Spectroscopic surveys allow us to obtain more reliable and more accurate spectroscopic redshifts ($z_\mathrm{spec}$) thanks to the possibility of very precise measurements of spectral lines in absorption and emission, both intense and weak, which in turn allow us to determine physical properties of the objects (e.g.\ stellar ages, star formation rate (SFR), extinction, ionization, and gas metallicity). Numerous spectroscopic surveys of vast numbers of objects have scanned huge areas of the sky; for example, SDSS/BOSS \citep{Dawson2013} has scanned $\sim$10\,000 deg$^2$ in the $i$-band down to 19.9 mag. Other surveys have analysed smaller areas, but in greater depth. For example, the zCOSMOS survey \citep{Lilly2007} conducted studies of the COSMOS field for a total of 30\,000 objects in the redshift range zero to three. The VANDELS ESO public spectroscopic survey \citep{McLure2018} performed a spectroscopic study of sources in the central part of the CANDELS Ultra Deep Survey and the \textit{Chandra} Deep Field South with a redshift range between one and seven. Surveys that analyse small areas of the sky in great depth generally tend to do so in sky regions of high Galactic latitude with plenty of broad-band multi-wavelength data. One such area of great interest is the Lockman Hole (hereafter LH) extragalactic field. The LH field is a region with one of the lowest neutral hydrogen column densities ($N_\mathrm{H}$) on the sky \citep{Lockman1986}. This quality makes it one of the best Galactic windows for detecting distant and nearby weak sources, and a perfect target for developing high-sensitivity surveys. 
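The broad-, intermediate-, and narrow-band regimes introduced above follow directly from $R\equiv\lambda/\delta\lambda$; a quick numerical check with made-up filter widths (the central wavelength and FWHM values below are illustrative, not those of any particular survey):

```python
def resolving_power(wavelength, fwhm):
    """R = lambda / delta_lambda for a filter of the given central
    wavelength and FWHM (both in the same units, e.g. nm)."""
    return wavelength / fwhm

broad = resolving_power(620.0, 130.0)        # broad band: R ~ 5
intermediate = resolving_power(620.0, 25.0)  # intermediate band: R ~ 20-40
narrow = resolving_power(620.0, 9.0)         # narrow band: R >~ 60
```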
The central part of the LH field has a hydrogen column density value of $N_\mathrm{H}=5.8\times 10^{19}\,\mathrm{cm}^{-2}$ (\citealt{Lockman1986,Dickey1990}). This value is moderately lower than that found at the Galactic poles, where $N_\mathrm{H}\sim10^{20}\,\mathrm{cm}^{-2}$ \citep{Dickey1990}. Lower latitudes are unsuitable for this kind of study, given their high extinction. Because of this exceptional quality, the LH field is of great interest to the scientific community and has been observed over virtually the entire range of the electromagnetic spectrum. In high-energy regimes, missions such as \textit{Chandra}, \textit{XMM-Newton} and \textit{ROSAT} have targeted the field in a quest for deep data per unit observed solid angle. Furthermore, the \textit{GALEX} telescope observed the LH in its two UV photometric bands. Sloan, the Large Binocular Telescope (LBT), Subaru, and UKIRT are examples of telescopes that have observed the LH field at optical wavelengths. In addition, the low IR background of the LH field (0.38 MJy sr$^{-1}$ at 100 $\mu$m, \citealt{Lonsdale2003}) has prompted deep IR observations such as those carried out with the \textit{Spitzer} and \textit{Herschel} space telescopes. However, despite the wealth of existing data, and even though the LH field has been observed from the X-ray to the radio region, there is a surprising lack of deep optical spectroscopic data, and this remains an unresolved challenge. The Lockman Spectroscopic Redshift Survey using OSIRIS (Lockman-SpReSO) aims to address the shortage of spectroscopic information on the LH field by performing a deep spectroscopic follow-up, up to magnitude $R_\mathrm{C}=24.5$, in the optical and near-IR (NIR) ranges, of a sample of galaxies selected from far-IR (FIR) source catalogues. This survey not only determines the spectroscopic redshifts but also estimates the principal properties of the selected galaxies. 
To this aim, we took advantage of the large collecting area of the 10 m class Gran Telescopio Canarias (GTC) and the excellent performance of the MOS mode of the OSIRIS instrument. This is the first paper in a series presenting the Lockman-SpReSO project, and it is structured as follows. In Sect. 2, we outline the main features and scientific motivations of the survey. In Sect. 3, we detail the target selection and the development of the source catalogue. In Sect. 4, we describe the planning and main properties of the observations carried out for the survey. In Sect. 5, we explain how the data reduction was carried out. In Sect. 6, we describe the first results of the spectral line measurements, the determination of the spectroscopic redshift, and the SED-fitting procedure. In Sect. 7, we summarize the main content of the paper and establish a timeline for the next data release of the survey. Throughout the paper, magnitudes in the AB system \citep{Oke1983} are used. The cosmological parameters adopted in this work are: $\Omega_\mathrm{M} = 0.3$, $\Omega_\mathrm{\Lambda} = 0.7$, and $H_\mathrm{0} = 70$ km s$^{-1}$ Mpc$^{-1}$. \section{Lockman-SpReSO} \label{sec:2} Lockman-SpReSO is a deep optical spectroscopic survey of a sample of mainly FIR-selected objects over the LH field. The region studied is the central $24\times24$ arcmin$^2$ of the LH field with equatorial coordinates (J2000) $10^\mathrm{h} 52^\mathrm{m}43^\mathrm{s}$ $+57\degr 28\arcmin 48\arcsec$ at the centre (north-eastern region). One of the first studies of optical counterparts of IR sources was that of \cite{Armus1989}, who carried out spectroscopic observations of 53 IR galaxies to determine the nature of the emission from these sources. 
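For the flat $\Lambda$CDM cosmology adopted in this work ($\Omega_\mathrm{M}=0.3$, $\Omega_\Lambda=0.7$, $H_0=70$ km s$^{-1}$ Mpc$^{-1}$), spectroscopic redshifts translate into luminosity distances by numerical integration of the Friedmann equation; a minimal sketch (`luminosity_distance` is an illustrative helper, not survey code):

```python
import numpy as np

H0 = 70.0          # km / s / Mpc
OM, OL = 0.3, 0.7  # flat LambdaCDM
C_KMS = 2.998e5    # speed of light, km / s

def luminosity_distance(z, n=10000):
    """Luminosity distance (Mpc): D_L = (1 + z) * D_C, with the comoving
    distance D_C from trapezoidal integration of c / (H0 * E(z))."""
    zz = np.linspace(0.0, z, n)
    inv_e = 1.0 / np.sqrt(OM * (1.0 + zz) ** 3 + OL)
    dz = z / (n - 1)
    dc = C_KMS / H0 * np.sum(0.5 * (inv_e[:-1] + inv_e[1:]) * dz)
    return (1.0 + z) * dc
```

For this cosmology the sketch gives $D_L(z=1)\approx 6.6$ Gpc, so the survey's redshift range spans from the very local universe to cosmological distances.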
Another relevant study is that of \cite{Veilleux1995}, who performed a spectroscopic survey of 200 luminous \textit{IRAS} galaxies in order to discern when the IR emission is due to nuclear activity in the galaxy and when it is due to intense starbursts. In their work, they found that the probability of the ionization source coming from nuclear activity increases for cases of higher IR luminosity. More recent studies have consisted of spectroscopic follow-ups of \textit{Spitzer} \citep{Berta2007} and \textit{Herschel} \citep{Casey2012} sources using the Keck observatory. The work by \cite{Berta2007} focused on the study of 35 luminous infrared galaxies (LIRGs) with $z > 1.4$ and determined that 62$\%$ of the objects with a measurable spectroscopic redshift have an active galactic nucleus (AGN) component. \cite{Casey2012} studied 767 \textit{Herschel} sources in detail, estimating luminosity functions for $z < 1.6$. Other studies, using the large statistical database of SDSS, have carried out analyses of IR sources detected with SDSS in the local, low-redshift ($z<0.15$) universe (\citealt{Rosario2016}, \citealt{Maragkoudakis2018}). Many spectroscopic studies have observed the whole LH field in different ranges of the electromagnetic spectrum. From LH observations in the X-ray range made with \textit{XMM-Newton}, a series of papers was published describing the data obtained \citep{Hasinger2001}, the spectral analysis performed \citep{Mainieri2002}, with a total of 61 spectroscopic redshift identifications, and a catalogue of the source fluxes \citep{Brunner2008}. The \textit{ROSAT} deep survey also published a series of papers studying LH sources: optical identifications, photometry, and spectroscopy with 43 redshifts measured by \cite{Schmidt1998} and 86 by \cite{Lehmann2001}, among others. 
\cite{Rovilos2011} made an optical and IR analysis of the properties of AGNs in the LH field detected in the X-ray data described above, and they found 401 optical counterparts to the 409 AGNs detected by \textit{XMM-Newton}. \cite{Patel2011}, using the WYFFOS instrument on the William Herschel Telescope, carried out observations in the optical range of the XMM-LSS and LH-ROSAT X-ray fields and measured a total of 278 and 15 spectroscopic redshifts, respectively. Many other studies have performed optical/NIR spectroscopic follow-ups of X-ray sources, given their good quality. \citet{Zappacosta2005}, using the DOLORES instrument in its MOS mode at the Telescopio Nazionale Galileo (TNG), observed 215 sources down to $R=22$ mag, obtained spectroscopic redshifts for 103 objects, and found evidence of a superstructure at $z=0.8$. \citet{Henry2014} postulated that one of the most distant X-ray clusters at $z=1.753$ in the LH field \citep{Henry2010} could actually be a large-scale structure at $z=1.71$. SDSS has also scanned the entire LH field and obtained spectroscopic redshifts for $\sim$115k objects, where only 140 objects down to $r=21.8$ mag lie within the central $24\times24$ arcmin$^2$ of the field \citep{Abdurro2022}. Of particular relevance to our work is that carried out by \citet[hereafter FT12]{Fotopoulou2012}, which we describe in more detail in Sect. \ref{sec:3.2}. The authors collected all the available photometric and spectroscopic information on the LH field at the time of publication from the UV (\textit{GALEX}) to NIR (\textit{Spitzer}/IRAC) in a single catalogue, including publicly available good-quality spectroscopic redshifts and the photometric redshifts calculated by themselves. 
More recently, \cite{Kondapally2021} has produced another multi-wavelength catalogue of the radio sources detected by the LOw-Frequency ARray (LOFAR; \citealt{vanHarlen2013}) Two Metre Sky Survey (LoTSS; \citealt{Shimwell2017}), which observed (among other fields) the LH field at 150 MHz down to an RMS of 22 $\mu$Jy beam$^{-1}$. The multi-wavelength catalogue contains photometric information from the UV (\textit{GALEX}) to the NIR (\textit{Spitzer}/IRAC) and identifies the multi-wavelength counterparts to the radio sources detected by LoTSS. Since it is a more recent work than \citetalias{Fotopoulou2012}, the photometric information is more up to date, including measurements in the optical range from the \textit{Spitzer} Adaptation of Red-sequence Cluster Survey\footnote{\url{http://www.faculty.ucr.edu/~gillianw/SpARCS/}} (SpARCS; \citealt{Wilson2009}) and the Red Cluster Sequence Lensing Survey (RCSLenS; \citealt{Hildebrandt2016}). The merging of \citetalias{Fotopoulou2012} and the multi-wavelength catalogue of \cite{Kondapally2021} ensures that we have the most complete multi-wavelength (from UV to NIR) photometric coverage to perform accurate SED fittings (see Sect. 6.3). Other studies have been carried out at longer wavelengths. \cite{Swinbank2004}, for example, used imaging and NIR spectroscopy to study 30 (four in the LH field) LIRGs pre-selected from sub-millimetre and radio surveys (\citealt{Chapman2003}, \citeyear{Chapman2005}). Another example is the work of \citet{Coppin2010}, who analysed AGN-dominated sub-millimetre galaxy (SMG) candidates using the \textit{Spitzer}/IRAC spectrograph. The northern region of the LH field was studied as part of the SCUBA-2 Cosmology Legacy Survey at 850 $\rm \mu$m and a depth reached at 1$\sigma$ of 1.1 mJy beam$^{-1}$ using the James Clerk Maxwell Telescope (JCMT, \citealt{Geach2017}). 
All previous studies have covered different areas and sizes of the LH field for which there were, until the release of SDSS spectroscopic data, $\sim$600 good-quality spectroscopic redshifts for the whole LH field ($\sim$15 deg$^2$). Although the number was significantly increased thanks to the contribution of SDSS, the number of spectroscopic redshifts in the central $24\times24$ arcmin$^2$ has barely increased with $\sim$150 new values but with a limiting magnitude $i \sim 22$. To address the lack of spectroscopic information and also study specific families of optical counterparts, the main objective of the Lockman-SpReSO project is to obtain deep optical spectroscopy of a selected sample of objects in the LH field to complement the deepest observations of the \textit{XMM-Newton}, \textit{Spitzer} and \textit{Herschel} space telescopes, and radio data \citep{Ciliegi2003}. The primary sample of objects ($> 80\%$ of the total sample) to be observed with the Lockman-SpReSO project consists of sources observed in the \textit{Herschel}/PACS Evolutionary Probe (PEP) programme by \cite{Lutz2011} with robust optical counterparts down to a magnitude of $\sim$ 24.5 in the Cousins $R_\mathrm{C}$ band. They observed the central $24\times24$ arcmin$^2$ region of the field with a depth of 6 mJy (at 5$\sigma$) at 100 and 160 $\rm{\mu}$m within the framework of the time-guaranteed \textit{Herschel}/PACS key project PEP. These observations enabled sampling near the maximum of the SED of active star-forming galaxies (SFG) at high redshifts ($z<3$). This enables the bolometric luminosities to be estimated more accurately. Moreover, the typical spatial resolution of PACS allows us to perform a reliable cross-correlation with optical sources depending on the spectral band. In order to optimize the use of MOS mode observations, the primary catalogue of FIR sources was supplemented with other types of sources in the field under study, as explained in Sect. \ref{sec:3.5}. 
Redshifts were obtained by optical spectroscopy using various instruments (OSIRIS, \citealt{JCepa2000}; WYFFOS, \citealt{wyffos2014}; and HYDRA\footnote{\url{https://www.wiyn.org/Instruments/wiynhydra.html}}), which are described in Sect. \ref{sec:4}. Spectroscopic observations of selected targets in the NIR domain are planned for the near future. Panchromatic studies (from the X-ray to the radio region) of galaxies have been shown to be a valuable strategy for studying the evolution and properties of these objects. The spectra obtained for selected Lockman-SpReSO objects, complemented with ancillary data, were used to derive stellar masses, SFRs, gas metallicities, and extinctions. Furthermore, using ratios of spectral lines, we were able to separate SFGs from AGNs using a BPT diagram \citep{Baldwin11981, Stasinska2006}. This segregation is also possible using IR or X-ray emission diagnostics \citep[and references therein]{Marina2019}. Among other parameters, SED-fitting techniques allow us to estimate stellar masses, while SFRs can be obtained via either FIR luminosities or optical lines \citep{Kennicutt1998}. Gas metallicities can be measured using different optical relations, such as the aforementioned R$_{23}$ and N2 methods, as well as FIR relations \citep[e.g.][]{Pereira2017, Herrera2018}. Finally, the ratio of FIR to UV luminosity is considered the best method for determining extinctions \citep{Viaene2016}. Several studies have shown that LIRGs lie below the mass--metallicity relation for SFGs \citep[and references therein]{Pereira2017}, although the offset could depend on the selection criteria used. However, these studies are limited to local samples. Ideally, they should be extended in order to ascertain whether this relation depends on FIR luminosity, on redshift, or on both. Moreover, the study should encompass the possible differences of the fundamental plane of SFGs \citep{Maritza2010} for LIRGs and ultra-luminous infrared galaxies (ULIRGs).
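As an example of an N2-type metallicity estimate mentioned above, one widely used linear calibration is that of Pettini \& Pagel (2004), $12+\log(\mathrm{O/H}) = 8.90 + 0.57\,N2$ with $N2 = \log([\ion{N}{ii}]\,\lambda6584/\mathrm{H}\alpha)$. Whether this exact calibration is the one adopted by the project is not specified here; the sketch is purely illustrative:

```python
import math

def oxygen_abundance_n2(f_nii_6584, f_halpha):
    """12 + log(O/H) from the N2 index, using the linear Pettini & Pagel
    (2004) calibration: 12 + log(O/H) = 8.90 + 0.57 * N2, where
    N2 = log10(F([NII] 6584) / F(Halpha)).  Fluxes may be in any common
    (arbitrary) unit, since only their ratio enters."""
    n2 = math.log10(f_nii_6584 / f_halpha)
    # The linear fit is only calibrated over -2.5 < N2 < -0.3.
    if not -2.5 < n2 < -0.3:
        raise ValueError("N2 = %.2f outside the calibrated range" % n2)
    return 8.90 + 0.57 * n2
```

For instance, a line ratio of $N2=-0.5$ (roughly solar-neighbourhood SFGs) yields $12+\log(\mathrm{O/H})\simeq8.62$.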
For example, studies at higher redshifts have shown that spectral lines are more attenuated than the continuum \citep{Buat2020}, whereas \cite{Eales2018}, using \textit{Herschel} data, make a claim for rapid galaxy evolution in the very recent past. Some of the scientific objectives of the Lockman-SpReSO project, which are developed in forthcoming papers, include studying the possible evolution of the relation of the masses, extinctions, different SFR indicators, and gas metallicities of \textit{Herschel} galaxies with respect to FIR colours, masses, and FIR and radio luminosities. Compared to other deep spectroscopic surveys, the Lockman-SpReSO project reaches a depth parameter \citep{Djorgovski2012} 1.2 times greater than that achieved by the VVDS/ultra-deep survey \citep{LeFevre2013}. It is also more advantageous in terms of continuum sensitivity and spectral coverage than z-COSMOS \citep{Lilly2007} and AEGIS-DEEP \citep{Davis2007} respectively. These advantages are largely due to the possibility of using the collecting surface of a 10-metre class telescope and a powerful instrument such as OSIRIS. The sensitivities achievable are higher than those attainable by surveys with smaller collecting surfaces (i.e.\ the SDSS survey), even considering the later releases. In addition, our study is, to date, the most complete, extensive and statistically significant of the optical counterparts of the \textit{Herschel} IR sources. \section{Target selection} \label{sec:3} As already mentioned, the main objective of the Lockman-SpReSO project is to provide a high-quality optical spectroscopic follow-up of the FIR sources from the \textit{Herschel}-PEP survey \citep{Lutz2011} with robust optical counterparts in images from OSIRIS in the SDSS $r$ band, up to $R_C=24.5$ mag. 
This limiting magnitude was originally chosen to reach an S$/$N $\sim 3$ in the continuum with the OSIRIS instrument in MOS mode at resolution $R \sim 500$ and an integration time of about 3 hours, according to the GTC/OSIRIS Exposure Time Calculator\footnote{\url{http://www.gtc.iac.es/instruments/osiris/Osiris_ETC.php}} (ETC, version 2.0). After selecting the FIR sources and studying which of them have optical counterparts, it was necessary to collect all the good-quality information available in the literature to accurately study the redshifts and physical properties of those sources. Data such as photometric redshifts or magnitudes in various bands are essential for this study. After an exhaustive search of the literature, three catalogues were used to create the bulk of the sample. The first of these is the catalogue of FIR objects from the PEP Survey \textit{Data Release}\footnote{\url{www.mpe.mpg.de/ir/Research/PEP/public_data_releases}} (DR1) but limited to the central $24\times24$ arcmin$^2$ of the LH field. The second catalogue is that of \citetalias{Fotopoulou2012}, in which the available information from the LH field was collected (see Sect. \ref{sec:3.2}). Finally, the third catalogue was obtained from the broad-band optical images made with OSIRIS (see Sect. \ref{sec:3.3}). The fusion of these three catalogues makes up the primary source catalogue of the Lockman-SpReSO project. In addition, X-ray emitting counterparts of the FIR sources were identified by using the \textit{XMM-Newton} and \textit{Chandra} mission catalogues. Other secondary catalogues were added to the project to optimize the use of the masks and observation times. The following sections describe the sample selection and creation of the final catalogues. \subsection{Far-infrared sources} \label{sec:3.1} As a starting point, we used the data from the DR1 of the PEP survey \cite{Lutz2011} to select the FIR objects. 
The PEP programme is a \textit{Herschel} guaranteed time extragalactic survey focused on deep PACS \citep{Poglitsch2010} 70, 100, and 160 $\mathrm{\mu m}$ observations of blank fields and lensing clusters. One of them is the LH field, of which complementary observations were made within the HerMES survey \citep{Oliver2010} using the \textit{Herschel}/SPIRE \citep{Griffin2010} photometer and its channels at 250, 350, and 500 $\mathrm{\mu m}$. For the purposes of our study, we adopted the catalogue based on the 24 $\mathrm{\mu m}$ priors from \textit{Spitzer}/MIPS, which includes the positions and fluxes as measured by \cite{Egami2008}, and the \textit{Herschel}/PACS photometer fluxes detected by the PEP project at these positions at 100 and 160 $\mathrm{\mu m}$. This information is also available in the published PEP DR1 data. After applying the constraint imposed by the coordinates and discarding MIPS sources with no fluxes at 100 and 160 $\mu$m, 1181 sources in total were obtained for the FIR catalogue (hereafter PEP-catalogue). \subsection{Multi-wavelength catalogue}\label{sec:3.2} One of the most complete studies carried out on the LH field is that of \citetalias{Fotopoulou2012}, whose authors collected photometric and spectroscopic information available in the literature. They published a catalogue with all the data, including the photometric redshifts for the sources in our field. Specifically, Tables 5 and 10 from \citetalias{Fotopoulou2012} provide the data for photometric information and photometric redshifts respectively. Table 5 in \citetalias{Fotopoulou2012} contains photometric information from the far-UV (FUV) to the mid-IR, in the best case reaching up to 21 bands. Furthermore, spectroscopic redshifts are also included for those objects with high-quality spectroscopic information analysed in 27 studies and compiled in Table 4 of \citetalias{Fotopoulou2012}. 
All these data from \citetalias{Fotopoulou2012} were compiled and limited to magnitude $R_{\rm C}\leq24.5$ and coordinates within the $24\times24$ arcmin$^2$ of the field of study. A multi-wavelength catalogue (hereafter the FT-catalogue) of 28\,956 sources was obtained with an astrometric precision better than 0.2 arcsec \citep{Rovilos2011}. The merging of the FT-catalogue and the PEP-catalogue made it possible to limit the FIR sources in $R_{\rm C}$ magnitude. \subsection{Pre-images of the Lockman Hole field from OSIRIS} \label{sec:3.3} A total of 64 images were taken to map the central region of the LH field (J2000 equatorial coordinates: $10^\mathrm{h} 52^\mathrm{m}43^\mathrm{s}$ $+57\degr 28\arcmin 48\arcsec$) using OSIRIS in broad-band mode with the SDSS $r$ filter. Those images were meticulously reduced and astrometrically calibrated with RMS $< 0.15$ arcsec. The outcome was a mosaic of the studied field with dimensions of approximately $24\times24$ arcmin$^2$. The purpose of the mosaic was to find the optical positions and calibrated magnitudes of the optical counterparts of the FIR sources in the PEP-catalogue in agreement with the imposed flux limit of $R_\mathrm{C}\leq24.5$. The observations were thus designed to achieve a limiting magnitude for the mosaic of $r_{\rm lim} = 25.6$ at $3\sigma$. This limit ensured that all the optical counterparts of the FIR sources were detected. The optical positions determined in this process were those used in the design of the masks for the MOS observation mode. The extraction of the list of objects and photometric information from the mosaic was implemented using \texttt{SExtractor} \citep{Bertin1996}. A total of 33\,942 sources were extracted with RMS $<0.38$ arcsec, which defines the OSIRIS-catalogue. The resulting mosaic of the OSIRIS pre-image constitutes the background of the map represented in Figure \ref{fig:mosaic}. 
The fusion of the OSIRIS-catalogue and the PEP-catalogue allowed the IR sample to be limited to those objects with optical counterparts in the OSIRIS mosaic. \begin{table*} \centering \caption{Summary of the classes in which the 1144 LH catalogue objects have been detected or for which they are candidates. It should be noted that there are redundancies between the different classes.} \label{tab:summary} \begin{tabular}{ccccccc} \hline \hline FIR & X-ray FIR & X-ray Point Sources & High-Velocity & Radio & Very red & Sub-millimetre \\ Sources & counterparts & and Cataclysmic Stars & Halo Stars & Sources & QSOs & Galaxies \\ \hline 956 & 66 & 58 & 94 & 24 & 70 & 16 \\ \hline \end{tabular} \end{table*} \subsection{X-ray counterparts of far-infrared sources} The X-ray observations of the LH field are among the deepest and most complete available. These data give us the opportunity to identify the FIR sources with X-ray emission, in addition to their optical counterparts at $R_\mathrm{C}\leq24.5$, which is useful for AGN host classification (e.g.\ \citealt{Povic2009}, \citeyear{Povic2009b}; \citealt{Mahoro2017} and references therein). In particular, we analysed data from the \textit{XMM-Newton} and \textit{Chandra} missions because of the high quality of their data in the LH field, to determine which sources in the PEP-catalogue have X-ray emission. The \textit{XMM-Newton} satellite provides the deepest observations in the LH field \citep{Brunner2008}, with a total of 409 sources detected. We matched this catalogue with the PEP-catalogue using a search radius of 2 arcsec, as recommended by \cite{Povic2009}, and found 64 objects in common. From the \textit{Chandra Source Catalogue} \citep{Evans2010}, we selected the sources in our field under the same conditions as imposed on the \textit{XMM-Newton} data. We identified a total of 106 \textit{Chandra} objects, of which only 19 were matched with the PEP-catalogue within a search radius of 2 arcsec.
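A positional cross-match like the 2 arcsec matching described above amounts to a nearest-neighbour search on angular separation. The brute-force sketch below is illustrative only (it is not the matching code actually used, and real pipelines typically use spatially indexed tools):

```python
import math

def ang_sep_arcsec(ra1, dec1, ra2, dec2):
    """Angular separation on the sky in arcsec (haversine formula;
    all inputs in decimal degrees)."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    sdr = math.sin((ra2 - ra1) / 2.0)
    sdd = math.sin((dec2 - dec1) / 2.0)
    a = sdd ** 2 + math.cos(dec1) * math.cos(dec2) * sdr ** 2
    return math.degrees(2.0 * math.asin(math.sqrt(a))) * 3600.0


def cross_match(cat_a, cat_b, radius=2.0):
    """For each (ra, dec) in cat_a, return the index of the nearest
    cat_b source within `radius` arcsec, or None if there is none.
    Brute-force O(N*M) version for illustration."""
    matches = []
    for ra_a, dec_a in cat_a:
        best, best_sep = None, radius
        for j, (ra_b, dec_b) in enumerate(cat_b):
            sep = ang_sep_arcsec(ra_a, dec_a, ra_b, dec_b)
            if sep <= best_sep:
                best, best_sep = j, sep
        matches.append(best)
    return matches
```

A source offset by about 1 arcsec in declination is matched, while one tens of arcsec away is not.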
Finally, we matched both catalogues. We found a total of 66 X-ray sources without redundancies, that is to say 66 FIR sources with emission in the X-ray domain detected by either the \textit{XMM-Newton} or \textit{Chandra} telescopes. All selected objects are also in the FT-catalogue with $R_\mathrm{C}\leq24.5$ mag and in the OSIRIS-catalogue, so the objects are part of the final catalogue as selected objects with IR emission and X-ray counterparts. \subsection{Additional targets} \label{sec:3.5} The possibility of working with the GTC's large collecting surface and the efficiency of OSIRIS allows us to complement the Lockman-SpReSO project scientifically with secondary studies. We added interesting complementary targets to take full advantage of the OSIRIS MOS mode and to optimize the design of masks. Since none of the secondary catalogues impose the criteria of IR emission in their objects, redundancies may arise among the secondary catalogues, and even with the main catalogue. A study of the existence and correction of redundancies was carried out as the last step in the compilation of the final catalogue of the Lockman-SpReSO project. Two studies on secondary objects are proposed. For those objects whose nature has been determined in previous studies, further information is expected to be added through optical spectroscopy. This is the case for galaxies studied in the sub-millimetre and radio domains. On the other hand, for those objects whose nature is indeterminate, spectroscopy helps us reveal the type of object being analysed and its properties, an example being the very red quasi-stellar object (QSO) candidates. The following subsections describe each of the additional catalogues in detail. 
\subsubsection{X-ray point sources and cataclysmic variable star candidates} \label{sec:3.5.1} A new cross-match was made between the OSIRIS-catalogue and the catalogue of \citetalias{Fotopoulou2012} in order to assign the photometric information, but with a search radius of 5 arcsec. Two different search methods were applied. The first, looking for point sources, imposed the \textit{CLASS\_STAR}$>0.95$ constraint and an X-ray detection (a flag in the catalogue of \citetalias{Fotopoulou2012}). The result was a list of 45 objects. The second method applied the colour criteria of \cite{Drake2014} ($-0.5<u-g<0.5$ and $-0.5<g-r<0.5$) to identify possible cataclysmic variable (CV) stars. This yielded a total of 21 objects, but eight of these were in common with the point sources. Therefore, this catalogue of secondary objects comprises a total of 58 objects. The spectroscopic study of these objects helps us to determine whether or not their nature is stellar and to quantify the contamination of the colour-based selection method. \subsubsection{High-velocity halo star candidates} Another secondary scientific objective focused on the study of high-velocity halo stars. Considering as a first criterion the stellarity given by the photometry, we selected 94 sources from the \textit{Initial GAIA Catalogue} \citep{Smart2014} with high proper motion ($>10$ mas yr$^{-1}$) and $R_\mathrm{C}>18$ as the sample of stars in the Lockman-SpReSO project. Our interest was centred on spectroscopically sampling the halo while focusing on stars with high proper motion, which could include Galactic runaway stars. The spectra of these objects may provide only rough radial velocity information, but it is good enough to classify them as halo or disc stars. Moreover, this sampling could provide a better classification between stars, galaxies, and quasars, while indicating the degree of contamination of a stellar sample defined only by the stellarity parameter.
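The two selection methods for point sources and CV candidates described above reduce to simple cuts on catalogue columns. A sketch (the function names and argument conventions are ours, not the pipeline's):

```python
def is_xray_point_source(class_star, has_xray):
    """First method above: SExtractor-style stellarity CLASS_STAR > 0.95
    plus an X-ray detection flag."""
    return class_star > 0.95 and has_xray


def is_cv_candidate(u, g, r):
    """Second method above: colour criteria of Drake et al. (2014),
    -0.5 < u-g < 0.5 and -0.5 < g-r < 0.5 (magnitudes)."""
    return -0.5 < u - g < 0.5 and -0.5 < g - r < 0.5
```

An object passing both cuts would be counted once in the combined list of 58 secondary targets.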
\subsubsection{Radio-source population} Deep radio surveys (for more information see \citealt{deZotti2010} and \citealt{Padovani2016}) at levels of a few $\mu$Jy show that there is an excess of sources with respect to the population of powerful radio galaxies. Radio sources above $\sim$1 mJy are typically classical radio sources powered by AGNs and hosted by elliptical galaxies. Below 1 mJy, radio-source counts start to be dominated by SFGs, similar to the nearby starburst population; in other words, radio emission in these galaxies is directly related to the SFR. Thus, deep radio surveys are relevant to the study of the history of star formation in galaxies. One of these surveys in the LH field is the 6 cm (5 GHz) Very Large Array survey by \cite{Ciliegi2003}, who studied 63 radio sources at a depth of $\sim$11 $\mu$Jy. We selected objects from this survey with no spectroscopic information in the literature and matched them with the FT-catalogue to add the photometric information. The result was a sample of 24 radio sources with optical counterparts down to $R_\mathrm{C}=24.5$ mag in \citetalias{Fotopoulou2012}. \subsubsection{Very red quasi-stellar object candidates} As in the case of the optical spectroscopy of radio galaxies, the scope and possibilities of Lockman-SpReSO give us the opportunity to study other obscured sources that are interesting in their own right. We selected a sample of very red QSO candidates by following two different selection processes. The first, proposed by \cite{Glikman2013}, consists in searching for optical counterparts (not necessarily point-like) of radio sources from the Faint Images of the Radio Sky at Twenty-Centimeters\footnote{\url{http://sundog.stsci.edu}} (FIRST; \citealt{Becker1995}) survey using the criteria $R-K_{\rm Vega}>4.5$ and $J-K_{\rm Vega}>1.5$. In this way, we found seven very red QSO candidates.
An alternative selection was based on the work of \cite{Ross2015}, where the colour selection criterion was $r'-W4>7.5$, with $W4$ the 22.19 $\mu$m channel of the \textit{Wide-field Infrared Survey Explorer} (\textit{WISE}; \citealt{Wright2010}). As a reference for $W4$, they used the relation MIPS $24\,\mu{\rm m}-W4=0.86$ \citep{Brown2014}. For the $r$ band, they used $r'-R_\mathrm{C}=-0.2$ \citep{Ovcharov2008}. In terms of this criterion, 63 sources were classified as potential very red QSOs, yielding a total of 70 candidates when both methods were merged. \subsubsection{Optical counterparts of sub-millimetre galaxies} Surveys with the \textit{Herschel Space Observatory} have identified an increasing number of SMGs (\citealt{Negrello2010}, \citealt{Mitchell2012}). The study of these sources is of paramount importance for understanding the formation and evolution of massive, dusty galaxies, which could explain the origin of present-day massive ellipticals (e.g. \citealt{Ivison2013}). For these reasons, analysis of their spectroscopic properties in the optical and NIR is both exciting and challenging, and the Lockman-SpReSO project provides an excellent opportunity to do this at minimum cost. The catalogue used as a starting point can be found in Table B3 of \cite{Michalowski2012}, which contains the LH field objects detected with JCMT$/$AzTEC at 1.1 mm as part of the SCUBA HAlf Degree Extragalactic Survey (SHADES; \citealt{Mortier2005}). This survey has a resolution of $\sim$18 arcsec and reaches a depth of $\sim$1 mJy. To determine source identifications, they used the catalogues of \cite{Austermann2010}, thus exploiting deep radio (1.4 and 0.61 GHz) and 24 $\mu$m data, complemented by flux-density-based methods at 8 $\mu$m and $i-K$ colour. To ensure reliable identification of SMG candidates, only objects with very good detections (\textit{bID} $=1$ in Table B1 of \citealt{Michalowski2012}) were selected.
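The two very red QSO selections above are again simple colour cuts; the \cite{Ross2015}-style cut can be rewritten in terms of the catalogued $R_\mathrm{C}$ and MIPS 24 $\mu$m magnitudes using the quoted conversions. A sketch (function names are ours):

```python
def glikman_red_qso(r_mag, j_mag, k_mag):
    """Glikman et al. (2013)-style cut on FIRST counterparts:
    R - K > 4.5 and J - K > 1.5 (Vega magnitudes)."""
    return r_mag - k_mag > 4.5 and j_mag - k_mag > 1.5


def ross_red_qso(rc_mag, mips24_mag):
    """Ross et al. (2015)-style cut, r' - W4 > 7.5, rewritten with the
    conversions quoted above: r' = R_C - 0.2 and W4 = MIPS24 - 0.86."""
    r_prime = rc_mag - 0.2
    w4 = mips24_mag - 0.86
    return r_prime - w4 > 7.5
```

With the conversions substituted, the second cut is equivalent to $R_\mathrm{C} - \mathrm{MIPS24} > 7.5 + 0.2 - 0.86 = 6.84$.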
We then cross-matched them with the PEP-catalogue using a search radius of 5 arcsec and selected those with $R_C\leq24.5$. We obtained a sample of 16 sources with good AzTEC identifications and sub-millimetre emission with optical and FIR counterparts. These SMG sources were also selected as FIR sources with optical counterparts in the OSIRIS mosaic, so they are included in the main object catalogue but flagged accordingly for future studies. \subsubsection{Fiducial stars} One of the requirements for quality pointing using the OSIRIS MOS mode is to have at least three reference points in the field with good accuracy. In this case, these reference points are fiducial stars in the LH field, which help us to point the telescope with accuracy and repeatability. Each observation had three or four fiducial stars to guarantee accurate telescope pointing. Thus, we chose 171 sources in the LH field from SDSS-DR12 \citep{Alam2015}. The sample has a coordinate accuracy of 0.3 arcsec and $R$-band magnitudes between 16 and 19. The selected fiducial stars are therefore bright enough to align the mask in a few minutes, but not so bright that they could saturate the acquisition frames when a MOS mask is observed. \subsection{The final input catalogue} To compile the final sample, the three main catalogues (the PEP-, FT-, and OSIRIS-catalogues) were merged to obtain a final priority target catalogue. Coordinates from \cite{Egami2008} were used as the reference in the cross-match process because of their high astrometric accuracy. We tested different values for the maximum allowed distance between the catalogues and found that 1.5 arcsec was the best compromise. This is slightly greater than the largest coordinate error in the PEP-catalogue ($\sim$1.22 arcsec). We started by joining the target list from the OSIRIS mosaic and the PEP-catalogue, matching a total of 991 targets within a distance of 1.5 arcsec.
After this, we merged the PEP-catalogue with the FT-catalogue to obtain a list of 1063 common objects. The final step was to bring these two previous matches together into a single catalogue. After the correction for multiplicities, the definitive catalogue of primary sources was made up of a total of 956 objects (the primary catalogue). The whole process is described more schematically in Fig. \ref{fig:esquema1} (up to the green box). The last step was to check for possible redundancies between the additional catalogues and the primary catalogue, namely to see if there were objects that appeared in both the primary catalogue and any of the secondary catalogues, while skipping the SMGs that had already been taken into account. The lower half of Fig. \ref{fig:esquema1} represents the merge of the primary and the secondary catalogues. The `Preliminary Object Type' and `Catalogue' columns in Table \ref{tab:z_spec_category} show where the redundancies were found and in what quantity, respectively. The final target selection from the FIR counterparts in the primary catalogue and complementary sources present in the OSIRIS mosaic, making up the LH-catalogue, includes 1144 sources. The final composition of objects in the LH-catalogue is summarized in Table \ref{tab:summary}. Each value in the table indicates the number of sources in that category that have become part of the LH-catalogue. However, it is important to note that there are redundancies between the different classes; for example, all the SMGs are part of the FIR objects. \section{Spectroscopic observations}\label{sec:4} \begin{table*} \centering \caption{Schedule and details of the observing runs over the faint subset observed with OSIRIS/GTC. All the essential information is collected for each run. The number in parentheses in the `Grisms' column represents the number of times that the observations with the grism were made. 
The first row corresponds to the observation of the images used to elaborate the mosaic.} \label{tab:faint_camps} \begin{tabular}{ccccccc} \hline \hline Run & Masks & OB/Mask & Grisms & Slit Length & Req.\ Night & Exp.\ Time\\ & (\#) & & & (arcsec) & & (s) \\ \hline 2014A & - & - & - & - & - & 10304 \\\hline 2014B & 10 & 3$^{(1)}$ & R500B(2) \& R1000R(1) & 10 & Gray & 105600 \\ 2015A & 7 & $3^{(2)}$ & & 10 & Gray & 87400 \\ 2015B & 6 & 3 & & 10 & Gray & 71400 \\\hline 2016A & 3 & 3 & R500B(1) \& R500R(2) & 10 & Dark & 24000 \\ 2016B & 6 & 3 & & 10 & Dark & 50400 \\\hline 2017B & 10 & 4 & R500B(2) \& R500R(2) & 3 & Dark & 108000 \\ 2018B & 6 & 4 & & 3 & Dark & 64800 \\ \hline \end{tabular}\\[0.2cm] \raggedright\small{$^{(1)}$ Two masks had 2 OBs (R500B(1) and R500R(1)).}\\ \raggedright\small{$^{(2)}$ One mask had 4 OBs (R500B(2) and R500R(2)).} \end{table*} Our survey used the guaranteed time of the OSIRIS instrument team and the Instituto de Astronomía of the Universidad Nacional Autónoma de México (IA-UNAM). The first observations were carried out over the first semester of 2014 (run 2014A in Table \ref{tab:faint_camps}) and were used to create the OSIRIS mosaic image of the study region of the LH field (Fig. \ref{fig:mosaic}). As mentioned above, our quality requirement was to reach an S$/$N $\geq 3$ in the continuum for all the objects in the survey. Considering the ETC predictions for the different $R_\mathrm{C}$ magnitude intervals of the survey, as well as the spatial distribution of the objects by $R_\mathrm{C}$ magnitude in the $24\times24$ arcmin$^2$ field, it was determined that, in order to achieve this S$/$N, it would take of the order of 1 to 1.4 h per mask for sources with $R_\mathrm{C}<20.6$ mag, and up to 3 h per mask for sources with $R_\mathrm{C}\geq20.6$ mag. Figure \ref{fig:mag_Rc} shows the $R_\mathrm{C}$ magnitude distribution for all the objects in the catalogue.
Consequently, to avoid problems in merging bright and faint objects, and with the aim of optimizing the number of masks that needed to be used, the sample was divided into two parts using the magnitude criterion $R_\mathrm{C}=20$ mag as the separation value. We designated objects with $R_C\geq20$ mag as the faint subset (993 sources) and objects with $R_C<20$ mag as the bright subset (151 sources), each of which was observed with a different telescope. However, there is a slight overlap between the sub-samples (up to $R_C=20.6$ mag, with 93 sources in the magnitude range $20 < R_\mathrm{C} < 20.6$), which allows the results obtained with the different telescopes to be compared and checked for consistency. Figure \ref{fig:mosaic} shows the spatial distribution of the objects in each of the subsets, as well as those that are common and those not observed. \begin{table*} \centering \caption{Schedule and details of the observations over the bright subset.} \label{tab:bright_camps} \begin{tabular}{cllccc} \hline \hline Telescope & Name & Date & Configuration & Grisms & Exp.\ Time \\ & & & & & (s) \\ \hline AF2/WHT & data 1 & 15/05/2016 & $blue_{1}$ & R600R & 2$\times$1000\\ & data 2 & 02/06/2016 & $blue_{1}$ & R300B & 3$\times$1000\\ & data 3 & 19/01/2017 & $blue_{1}$ & R600R & 3$\times$2000\\ & data 4 & 20/01/2017 & $red_{1}$ & R600R & 4$\times$1800+1$\times$1100\\ & data 5 & 05/02/2017 & $red_{1}$ & R300B & 5$\times$1800 \\ & data 6 & 21/02/2017 & $blue_{1}$ & R300B & 2$\times$1000 \\ & data 7 & 30/05/2017 & $blue_{2}$ & R600R, R300B & 2$\times$2000,3$\times$1000 \\ \hline HYDRA/WIYN & blue1 & 07/05/2018 & $blue_{1}$ & R316R & 3$\times$1800 \\ & blue2 & 07/05/2018 & $blue_{2}$ & R316R & 3$\times$1800 \\ & red1 & 08/05/2018 & $red_{1}$ & R316R & 3$\times$1800 \\ & mixed & 08/05/2018 & $mix_{1}$ & R316R & 3$\times$1800 \\ \hline \end{tabular} \end{table*} \subsection{The faint subset} The observations of the faint subset were carried out using the OSIRIS instrument
at the GTC telescope in MOS-mode.\footnote{\url{www.gtc.iac.es/instruments/osiris/osirisMOS.php}} The first run started in the second semester of 2014 (2014B in Table \ref{tab:faint_camps}), just after the OSIRIS mosaic observations. In addition, spectroscopic MOS mode observations for Lockman-SpReSO also began shortly after the technical commissioning of the OSIRIS MOS mode \citep{Cedillo2018}. The schedule and essential information on the runs can be seen in Table \ref{tab:faint_camps}. The masks were designed using the telescope's software, the OSIRIS Mask Designer (MD; \citealt{MaskDesigner2004}, \citealt{MaskDesigner2016}). Each mask was observed using two different grisms covering the blue and red part of the optical spectrum at an intermediate resolution. The blue region was observed with the R500B grism, which provides a wavelength coverage of 3600--7200 \angstrom\ and a nominal dispersion of 3.54 \angstrom\ pixel$^{-1}$. The red part was covered with two grisms (R500R and R1000R). The former has a wavelength coverage of 4800--10000 \angstrom\ and a dispersion of 4.88 \angstrom\ pixel$^{-1}$, while the latter has a range of 5100--10000 \angstrom\ and a dispersion of 2.62 \angstrom\ pixel$^{-1}$. The observing strategy changed because of differences in the sample brightness. In this way we could optimize the design of the masks to attain our quality objective of S$/$N $>3$. The first runs were dedicated to observing the brightest objects in the faint subset ($20\lesssim R_C\lesssim 22$, 403 objects), which have a lower density in the field. It was requested that these observations be done on grey nights and the masks were designed using slits with a length of 10 arcsec. This slit length allowed us to select a region in the 2D spectra where the contribution due to emission sky lines could be obtained in order to subtract them from the object spectra (Section \ref{sec:skycorr}). 
The slit width was set at 1.2 arcsec, as recommended in the instrument documentation. This width did not cause a noticeable loss of resolution in the spectra and was consistent with the 0.1 arcsec slit-positioning precision of the OSIRIS MOS mode and the 10$\%$ positioning accuracy requirement. Three observing blocks (OBs) were scheduled for each mask. Two were observed with the R500B grism and one with the R1000R grism. The former had two scientific images per OB, while the latter had three. Table \ref{tab:faint_camps} gives detailed information about the configuration of the runs. Sky emission subtraction was difficult for faint objects. Moreover, the increasing number of faint objects in the field required changing the run configuration after the 2016B run, when we started to observe the faintest objects in the faint subset ($22\lesssim R_\mathrm{C}\leq 24.5$, 535 objects). Each OB was observed in the \textit{ON-OFF-ON} mode (Section \ref{sec:skycorr}), where an \textit{ON-frame} is an actual science image and the \textit{OFF-frame} is a consecutive image taken with a slight telescope displacement that makes all the slits point to a blank region of the sky. Therefore, each OB had two frames with the spectra of the objects plus sky emission (ON-frames) and one frame with only sky emission spectra (OFF-frame). With this technique, the sky correction for faint objects was greatly improved. In addition, because the sky emission was taken from the \textit{OFF-frame}, the slit length could be reduced to 3 arcsec, allowing more slits to be included in a given mask design. The object selection for each mask was made in an effort to minimize the number of masks needed by selecting fields of view with greater object density where possible, while taking care not to combine objects with a very large difference in magnitude, and setting the exposure time accordingly. Each mask also had to have some slits for the fiducial stars.
These slits were circular with a radius of 2 arcsec. In the masks designed for the faint subset, we also introduced special slits pointed at empty regions of the sky in both the \textit{ON-} and \textit{OFF-frames}. These slits were used in the sky correction (see Sect. \ref{sec:skycorr}). In total, 48 masks, covering 92\% of the objects, were designed to perform the observations of the faint subset. All the observations were designed to achieve an S$/$N better than three in order to obtain good quality spectral lines, even for the faintest objects. In addition to the type of night set for each observation, both the airmass and the seeing (in arcsec) were always requested to be below 1.2. All observations were made in compliance with the required seeing, 60\% of them with seeing better than 1 arcsec. For the airmass, the constraint was relaxed according to the seeing of the same observation; in other words, if the airmass exceeded 1.2 (but never 1.35) while the seeing was good (seeing $<1$ arcsec), the observation was accepted. In total, 12 OBs were earmarked for repetition because the conditions in which they were performed failed to meet the requirements, either because of high airmass and seeing or because the type of night was not as requested. \subsection{The bright subset} The bright subset of targets selected for the LH field, $16.8\leq R_\mathrm{C} \leq 20.6$, was observed using two medium-resolution multi-fibre spectrographs: AF2-WYFFOS (AF2) on the WHT at the Roque de los Muchachos Observatory in La Palma, and HYDRA on the WIYN telescope at Kitt Peak National Observatory, near Tucson. The selected targets were divided into a `blue sample' ($16.8<r_\mathrm{SDSS}<20.0$) and a `red sample' ($20.0 <r_\mathrm{SDSS}<20.6$). Three fibre configurations were prepared for the blue sample and two for the red sample, to be observed with AF2/WHT.
In total, 96 targets were allocated for the blue sample and 50 for the red sample, $\sim60\%$ of the total bright sample plus the overlap with the faint subset, but not all the scheduled observations could be completed. To complete the observations of the bright subset, we used the HYDRA spectrograph on the WIYN telescope. The sample designed for WIYN was composed of objects not observed with the WHT, plus some objects re-observed to compare the results between telescopes and so make the most of the observations. Finally, 134 sources with $16.8< R_\mathrm{C}<20.6$ were observed with WIYN. The objects in the bright part of the catalogue were expected to lie at lower redshifts than those in the faint part. Still, as for the faint subset, all observations were set to reach an S$/$N better than five to obtain quality measurements of the spectral lines. Figure \ref{fig:mosaic} also shows the spatial distribution of the objects in the bright part. The yellow stars represent the objects in the bright subset, and the green circles represent objects common to both the bright and faint catalogues. \subsubsection{Multi-fibre optical spectroscopy with AF2-WYFFOS} We conducted multi-fibre medium-resolution spectroscopy with the AF2 wide-field multi-fibre spectrograph. The AF2 spectrograph contains 150 science fibres with a diameter of 1.6 arcsec, and ten fiducial fibres dedicated to acquiring and tracking guide stars. The AF2 spectrograph has a nominal field of view of 1 deg$^2$, but because of optical distortion, and the restriction to avoid the region beyond 25 arcmin in order to prevent vignetting by the telescope system and instrument, the configuration file was designed to consider only the area within the central 20 arcmin$^2$. Despite these restrictions, three optimized AF2 pointings were planned to cover the central $24\times24$ arcmin$^2$ of the LH field.
With AF2, it is necessary to use the special software \textit{af2-configure}\footnote{\url{https://www.ing.iac.es/Astronomy/instruments/af2/af2_documentation.html}} (from the Isaac Newton Group) to create the map of targets to be observed. This software allowed us to optimally place, beforehand, between 50 and 70 fibres per observation on the selected targets. After allocating all the objects in the best way, the remaining fibres were used to observe blank areas of the observation field to obtain sky spectra. Observations were performed in three configurations over seven nights in 2016 and 2017 in service mode. More details of the observations are listed in Table \ref{tab:bright_camps}. The R600R and R300B gratings were used, with spectral resolutions of 4.4 $\angstrom$ and 3.6 $\angstrom$, respectively. The spectra were centred at wavelength $\sim$5400 $\angstrom$ and covered the range from 3800 to 7000 $\angstrom$, using a 2$\times$2 binning of the CCD camera. \subsubsection{Multi-fibre optical spectroscopy with HYDRA} Owing to the decommissioning of AF2 before the planned observations of the bright subset were completed, we had to conduct the remaining observations with a similar instrument, and the HYDRA spectrograph on the WIYN telescope was chosen. The Lockman sample for HYDRA had 134 sources with 16.8 $< R_\mathrm{C} \leq$ 20.6. The HYDRA spectrograph has 90 active fibres. The targets were observed with a spectrograph configuration covering $\lambda_{\rm start}=4400~\angstrom$ to $\lambda_{\rm end}=9600~\angstrom$, using the $316@7$ grism with a resolution of $R \sim$ 900 and $\delta\lambda \sim 3~\angstrom$, the lowest resolution but the largest spectral range available. The images were taken with $3\times 1800$ s exposures for each configuration, plus two series of arcs, and were taken over a total of 8 hours including overheads. We needed four configurations in two half nights.
The observations were carried out during the first half of the nights of 8 and 9 May 2018, with a clear sky and a seeing of $\sim$1 arcsec. A summary of the observations is given in Table \ref{tab:bright_camps}. \section{Data reduction}\label{sec:5} Since the survey data were obtained using different instruments and telescopes, the nature of the data of each subset is different. Thus, although some procedures are common, the data reduction is described separately for each group of data. \subsection{Faint subset reduction} Basic data reduction tasks were carried out using the IRAF-based pipeline GTCMOS (see \citealt{Gomez-Gonzalez2016}) developed by Divakara Mayya of the Instituto Nacional de Astrofísica, Óptica y Electrónica (INAOE), Mexico. For each OB, the first step was to combine the images of the two OSIRIS CCDs into a single frame to make them more manageable. We created a master bias, subtracted it from the science images, and applied a flat-field correction. The wavelength calibration was performed using Hg, Ar, Xe, and Ne lamp spectra for each grism and mask, with a median RMS of $\sim$0.05. The correction for cosmic rays was carried out using our own Python code. To this end, we relied on the fact that each observation block had at least two science images. This allowed us to compare the same column in both 2D spectra while looking for pixels deviating by more than $3\sigma$ from the median; such pixels were classified as cosmic rays. \subsubsection{Sky subtraction and flux calibration} \label{sec:skycorr} The subtraction of the sky emission from an object's spectrum is both necessary and challenging. The strength of the OH emission lines varies significantly over time, making this painstaking work. As set out above, we performed two different kinds of sky emission subtraction, one for each slit length. For the 10-arcsec slits, the subtraction was more direct, as we could select regions in the 2D spectra where there was only signal from the sky.
To separate the contribution of the observed object in the slit from the sky signal, iterative sigma-clipping was applied column by column. Then, once we had obtained the contribution of the sky, we applied a linear fit and finally subtracted it from the original column. The linear fit was implemented to improve the correction because there is a slight curvature in the outer parts of the CCDs that introduces a distortion in the 2D spectra. Because of this distortion, simply averaging the sky signal over a column yields a value that diverges from the actual sky emission, and the subtraction fails. A potential problem with this method occurs when the observed object has a faint continuum, because it could be erroneously selected as sky contribution by the sigma-clipping algorithm, thus resulting in a loss of object information. Furthermore, since the density of faint objects in the field is higher than that of bright objects, the adopted solution was to use 3-arcsec slits observed with the already mentioned \textit{ON-OFF-ON} strategy. The short length of the 3-arcsec slits prevented us from selecting sky signal directly in the 2D spectra. Observing with the \textit{ON-OFF-ON} strategy, we had direct sky emission spectra available (OFF-frame) that could be subtracted directly from the images of the objects (ON-frames). However, the residuals obtained with direct subtraction were considerable; this difference in the sky signal between two consecutive frames is due to the significant variability of the sky emission over the time of the exposures. The solution to this problem was to introduce slits pointing to regions without objects (sky-slits), even in the ON-frames, in such a way that the sky-slits collected sky emission in both the ON- and OFF-frames.
Thus, comparing the sky spectrum obtained through a sky-slit in one of the ON-frames with the spectrum obtained through the same slit in the OFF-frame yields a matrix of sky variation coefficients that can be applied to the OFF-frame spectra to correct for the variation of the sky emission over time. Each mask should have at least one sky-slit per OSIRIS CCD to deal with the spatial variation of the sky emission, in addition to the temporal variation. This method of sky emission subtraction gives even better results than the subtraction used for the 10-arcsec slits, where the sky can be selected in the 2D spectra. With the \textit{ON-OFF-ON} strategy, the 2D spectra to be corrected for the sky contribution and the 2D sky spectra used for the correction have the same shape because they come from the same slit with precisely the same characteristics (i.e.\ slit irregularities, CCD curvature, and differential refraction of the light). In Fig. \ref{fig:sky_corr} we show an example of the sky subtraction for a 3-arcsec-long slit. The top panel is a slice of the 2D raw spectrum, where the sky emission completely hides the emission from the observed object. The central panel shows the result after applying the sky emission subtraction described above; the emission of the object is now fully visible thanks to the quality of the correction. The lower panel shows the 1D spectrum of that object, with strong spectral lines. The spectroscopic redshifts of the objects are determined from the lines observed in the spectra; in this case, the redshift obtained is $z_\mathrm{spec}=0.275$. Flux calibration is the last step when working with the 2D spectra, just before obtaining the 1D spectra. For each OB, at least one standard star was observed for this task. The calibration was applied using the standard IRAF procedure.
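The sky-slit coefficient correction described in this section can be sketched as follows. This is an illustrative NumPy sketch, not the pipeline code: the frame layout, array shapes, and the helper name \texttt{subtract\_sky\_on\_off} are hypothetical, and the real procedure works per CCD and per slit.

```python
import numpy as np

def subtract_sky_on_off(on_frame, off_frame, sky_slit_rows):
    """Sketch of the ON-OFF sky subtraction with sky-slit scaling.

    on_frame, off_frame : 2D arrays (rows = spatial, columns = wavelength).
    sky_slit_rows       : slice selecting the rows of a sky-slit that sees
                          blank sky in both frames (hypothetical layout).
    """
    # Mean sky spectrum seen through the sky-slit in each frame.
    sky_on = on_frame[sky_slit_rows].mean(axis=0)
    sky_off = off_frame[sky_slit_rows].mean(axis=0)

    # Column-by-column variation coefficients correcting the OFF-frame
    # sky for its temporal change between the two exposures.
    coeff = np.where(sky_off != 0.0, sky_on / np.where(sky_off != 0.0, sky_off, 1.0), 1.0)

    # Scale the OFF-frame sky and subtract it from the ON-frame.
    return on_frame - off_frame * coeff[np.newaxis, :]
```

Because both sides of the ratio come through the same slit, the coefficients absorb the temporal sky variation while preserving the slit-specific distortions that make direct ON$-$OFF subtraction leave residuals.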
\subsubsection{1D spectra obtained} Once the results of the previous data reduction steps were deemed satisfactory, we proceeded to obtain the final 1D spectra, which allowed us to determine the nature of the observed objects and their main physical characteristics in the optical domain. As mentioned above, each object was observed at least twice per OB. Furthermore, each mask was observed three or four times with different grisms, as scheduled in Table \ref{tab:faint_camps}. Each object should therefore finally have between three and eight observations per grism. The latter scenario is possible because an object could be observed more than once, simply to fill possible free spaces in the masks. The desired outcome was a single 1D spectrum per grism. An `average-sigma-clipping' algorithm was applied to achieve this goal. The algorithm runs through each wavelength, discarding the flux points more than 2$\sigma$ from the median and averaging the remaining points into a single point. The most common problems occurred when the object was observed in more than one mask and some of the observations were taken with a bright moon, or when the zero-order image from a nearby slit with a bright target contaminated the spectrum. All these problems were handled by the average-sigma-clipping algorithm. In Fig. \ref{fig:1D_sample} we show an example of the final 1D spectrum of an object at $z_\mathrm{spec}=0.421$, observed with the R500R and R500B grisms, represented by blue and red lines, respectively. The individual observations of this object with each grism, to which the average-sigma-clipping algorithm was applied, are plotted in grey. It can be seen how the algorithm has managed to correct for residuals from sky emission or cosmic rays. \subsection{Bright subset reduction} The reduction of the bright part was carried out using IRAF and the \texttt{hydra.dohydra}\footnote{\url{https://astro.uni-bonn.de/\~sysstw/lfa\_html/iraf/noao.imred.hydra.dohydra.htl\#h\_1}} package.
This package was specifically developed for the reduction of data obtained with the HYDRA instrument. However, it allowed us to change its configuration and adapt it to the observations made with AF2. In this way, just by changing the parameters related to each instrument, the same tool was used for the different instruments employed for the bright part of our sample. The \texttt{hydra.dohydra} task was used for scattered-light subtraction, extraction, fibre throughput correction, and wavelength calibration. It is a command language script that collects and combines the functions and parameters of many general-purpose tasks to provide a single complete data reduction path. The tool also allowed us to perform a sky correction but, in this case, we only used the result of combining the fibres associated with the sky to obtain the average sky spectrum, which we later used for a final correction with the \texttt{Skycorr}\footnote{\url{https://www.eso.org/sci/software/pipelines/skytools/skycorr}} tool \citep{skycorr14}; this gave us better results in terms of the S$/$N quality of the final spectrum than those given by \texttt{dohydra}. Before obtaining the final spectrum, the He and Ne lamp spectra were used for the wavelength calibration, employing a third-order Legendre polynomial, with $0.03\leq$ RMS $\leq 0.07$ for most of the objects. Flux calibration of fibre instruments is complicated to apply, as it depends directly on the quantum efficiency of each fibre at the time of measurement relative to that of the others; that is, it is a function that depends directly on time and varies internally from fibre to fibre. Given this, and the fact that the observing routine of the bright subset was also very complicated, we decided not to apply this correction. Thus, these data are used to determine properties that do not require flux calibration, such as spectroscopic redshifts, line widths, and flux ratios.
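The Legendre wavelength solution described above can be sketched with NumPy's polynomial classes. The arc-line values below are synthetic (generated from an assumed cubic solution for the purpose of the example), not a real He/Ne line list.

```python
import numpy as np

# Synthetic arc-lamp calibration: the "laboratory" wavelengths are
# generated here from an assumed cubic solution (illustrative values
# only, not an actual He/Ne line list).
true_solution = np.polynomial.Legendre([6000.0, 2500.0, 30.0, 5.0],
                                       domain=[0.0, 2000.0])
pixels = np.linspace(50.0, 1950.0, 12)   # measured line centres (pixel)
lam_lab = true_solution(pixels)          # known wavelengths (Angstrom)

# Third-order Legendre fit of the pixel-to-wavelength mapping,
# as employed for the bright-subset calibration.
solution = np.polynomial.Legendre.fit(pixels, lam_lab, deg=3)

# The RMS of the residuals quantifies the calibration quality.
rms = np.sqrt(np.mean((solution(pixels) - lam_lab) ** 2))
```

With real lamp exposures, `pixels` would come from centroiding the identified lamp lines, and the resulting `rms` is the quantity quoted above for the wavelength solutions.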
We likewise removed cosmic rays from the individual images using the IRAF \texttt{lacos\_spec} task \citep{lacospec01}. We obtained the average spectrum of each of the objects observed with the R600R and R300B gratings for AF2-WYFFOS and with the R316R grism for HYDRA using the same reduction method. Figure \ref{fig:1D_sample_bri} shows an example of a 1D spectrum for a source in the bright subset observed with WIYN/HYDRA and the R316R grism. \section{Lockman-SpReSO catalogue}\label{sec:6} In this section we present the first results obtained from the reduction of the Lockman-SpReSO data. Once the 1D spectra were acquired, the treatment was the same for objects in the bright part and those in the faint part, so the results shown in this section come from both subsets. The whole process of obtaining a final spectrum per grism was executed automatically by the software infrastructure developed for this task and described above. Although each component was tested and analysed, it was decided to conduct a visual inspection of all the results to look for possible errors in the process. Other important reasons for the visual inspection were to determine the spectroscopic redshifts of the objects by looking for the main spectral features, to check which spectra showed a stellar continuum and which did not, to assign an initial quality flag to the spectral lines, and to take note of objects with uncommon properties. In addition, further rounds of visual inspection were carried out after the fitting and measuring of the lines to make sure that everything had worked correctly. The `Object cont.' column in Table \ref{tab:z_spec_category} shows the number of objects in which the stellar continuum was detected, by object category within Lockman-SpReSO. The quality flag assigned to the spectral lines during the visual inspection helped us to filter the objects.
Possible values for the flag are: 1) no line appears in the spectrum; 2) the line has some error or is difficult to measure, or both (e.g.\ lines partially or totally under strong sky emission); 3) the line is weak but detectable; 4) the line is clearly visible with a moderate signal; and 5) an intense line with a high signal. This criterion defined the quality of the lines until we performed the line fits and obtained the equivalent widths (EWs) and S$/$N values. \begin{table*} \centering \caption{Summary of the observed objects and the spectroscopic redshifts measured in this study, sorted by the preliminary categories described in Sect. \ref{sec:3.5}. The `Catalogue' column represents all objects in Lockman-SpReSO. The `Object cont.' column indicates how many objects per category have the stellar continuum detected in their spectra. The `$z_\mathrm{sup}$' column indicates the number of objects for which the redshift has been determined from a single spectral line. It should be noted that an object could be observed in both the faint and the bright subsets, so the sum may be greater than the total. The same applies to the columns with the spectroscopic redshift.}
\label{tab:z_spec_category} \begin{tabular}{cc|r|rrr|r|rrr|r} \hline \hline CAT & \begin{tabular}{@{}c@{}}Preliminary Object\\Type\end{tabular} & Catalogue & Observed &\begin{tabular}{@{}c@{}}Observed\\faint \end{tabular} & \begin{tabular}{@{}c@{}}Observed\\bright\end{tabular} & \begin{tabular}{@{}c@{}}Object\\cont.\end{tabular} & $z_\mathrm{spec}$ & \begin{tabular}{@{}c@{}}$z_\mathrm{spec}$\\faint\end{tabular} & \begin{tabular}{@{}c@{}}$z_\mathrm{spec}$\\bright\end{tabular} & $z_\mathrm{sup}$\\ \hline 1 & X-rayPoint + CatVarStars & 45 & 43 & 41 & 6 & 39 & 30 & 28 & 4 & 3\\ 2 & High-Velocity Stars & 93 & 85 & 14 & 80 & 80 & 1 & - & 1 & -\\ 3 & Radio Galaxies & 17 & 13 & 13 & 1 & 7 & 5 & 5 & 1 & 1\\ 4 & FIR & 902 & 838 & 772 & 106 & 503 & 305 & 258 & 76 & 90\\ 5 & 1 + 4 & 12 & 11 & 11 & 1 & 11 & 7 & 7 & 1 & 3\\ 7 & 3 + 4 & 4 & 4 & 3 & 1 & 4 & 3 & 2 & 1 & 1\\ 12 & 1 + 2 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & - & -\\ 20 & RedQSOs(W4) & 23 & 21 & 21 & - & 2 & 2 & 2 & - & -\\ 21 & RedQSOs(FIRST) & 5 & 5 & 5 & - & 1 & - & - & - & -\\ 23 & 20 + 3 & 2 & 1 & 1 & - & 1 & - & - & - & -\\ 24 & 20 + 4 & 38 & 33 & 33 & - & 5 & 1 & 1 & - & 1\\ 25 & 21 + 2 & 1 & 1 & - & 1 & 1 & 1 & - & 1 & -\\ 26 & 21 + 3 & 1 & 1 & 1 & - & 1 & 1 & 1 & - & -\\ \hline & TOTAL & 1144 & 1057 & 916 & 197 & 656 & 357 & 305 & 85& 99\\ \hline \end{tabular} \end{table*} \subsection{Redshift determinations}\label{sec:6.1} To determine the spectroscopic redshifts, we imposed the condition that the spectrum should have at least two spectral features. The reason for doing so was to minimize, as much as possible, cases of false determinations due to ambiguous detection. It was also decided that, for objects with spectroscopic redshift in the literature, the presence of one line coinciding with the redshift would be the only condition. 
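The two-feature acceptance condition can be expressed as a small consistency check: a redshift is accepted only when at least two identified lines yield compatible values. The helper below is a hypothetical illustration, not the actual pipeline code; the rest wavelengths used in the usage example are standard vacuum-adjacent values for H$\alpha$ and [\ion{O}{III}] $\lambda$5007.

```python
def redshift_from_lines(obs_waves, rest_waves, tol=1e-3):
    """Hypothetical sketch of the two-feature criterion: return a
    redshift only when at least two identified lines agree within tol."""
    z_values = [obs / rest - 1.0 for obs, rest in zip(obs_waves, rest_waves)]
    if len(z_values) < 2:
        return None   # a single line alone is not accepted
    z_mean = sum(z_values) / len(z_values)
    if max(abs(z - z_mean) for z in z_values) > tol:
        return None   # inconsistent line identifications
    return z_mean

# Two lines identified at a common redshift of 0.275:
z = redshift_from_lines([6562.8 * 1.275, 5006.8 * 1.275], [6562.8, 5006.8])
```

For objects with a literature redshift, the weaker single-line condition described above would replace the two-line requirement.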
In total, 357 spectroscopic redshifts were obtained using both criteria; they are shown in Table \ref{tab:z_spec_category}, split by object category and by the subset to which they belong. We verified how many of these objects had a previous spectroscopic redshift determination. To do so, we used Table 5 of \citetalias{Fotopoulou2012}, which lists the spectroscopic studies of the LH field that existed up to the date of that publication. In addition, we updated this information with those objects for which the SDSS survey provided redshifts, and finally we searched the NASA/IPAC Extragalactic Database\footnote{\url{https://ned.ipac.caltech.edu}} (NED) for the available information on the objects whose redshift was determined in this work. A total of 89 objects already had redshift determinations, and these coincided with the redshifts determined in this study. Hence, for 268 objects ($\sim$75\%), the spectroscopic redshift was determined for the first time. Figure \ref{fig:zspec_vs_zphot} shows the comparison between the spectroscopic redshifts ($z_{\rm spec}$) measured in this study and the photometric redshifts ($z_{\rm phot}$) from \citetalias{Fotopoulou2012}. The error bars plotted for $z_{\rm phot}$ represent the 90$\%$ confidence range reported by the LePhare code (\citealt{Arnouts1999,Ilbert2006}). The errors in $z_{\rm spec}$ are also plotted, but are smaller than the data-point size, with a median value of $3\times10^{-4}$. The bottom panel shows the scatter of the difference between the photometric and spectroscopic redshifts. The dashed grey lines represent the limit defined by \cite{Hildebrandt2010} for flagging an outlier, \[ \frac{|\Delta z|}{1+z_\mathrm{spec}} \geq 0.15, \] where \( |\Delta z| = |z_\mathrm{phot} - z_\mathrm{spec}|\).
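The criterion translates directly into code. The one-liner below is a sketch of the flagging rule only (the function name is ours, not from the pipeline):

```python
def is_zphot_outlier(z_phot, z_spec, limit=0.15):
    """Flag a photometric redshift as an outlier following the
    |z_phot - z_spec| / (1 + z_spec) >= 0.15 rule of Hildebrandt et al. (2010)."""
    return abs(z_phot - z_spec) / (1.0 + z_spec) >= limit
```

For example, $z_{\rm phot}=1.8$ against $z_{\rm spec}=1.0$ gives $0.8/2 = 0.4$ and is flagged, whereas $z_{\rm phot}=0.52$ against $z_{\rm spec}=0.5$ gives $\approx 0.013$ and is not.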
Applying this criterion, we found 4$\%$ of outliers in the spectroscopic redshift range $z_\mathrm{spec}<0.5$, 2$\%$ at $0.5 < z_\mathrm{spec} < 1.0$, and 34$\%$ for $z_\mathrm{spec}>1$. Outliers are marked with empty red circles in Fig. \ref{fig:zspec_vs_zphot}. As expected, the determination of photometric redshifts is problematic for distant objects, with one in three of the more distant objects counting as an outlier. Nevertheless, this result can be useful when using the photometric redshift to derive other quantities; for example, when the determination of the spectroscopic redshift is not very clear because the spectral lines are not intense enough, or in cases where only one line is present in the spectrum, as we discuss below. There are 105 objects in the LH-catalogue for which we detected only one emission-line feature in their spectra and for which no complementary information exists in the literature. An attempt was made to assign a redshift value ($z_\mathrm{sup}$) based on the properties of the line found (intensity, observed wavelength, photometric redshift, object magnitudes, and credibility). This preliminary analysis made it possible to give a reliable redshift for 99 objects with only one spectral line. The bottom right panel of Fig. \ref{fig:zspec_vs_zphot} shows the distribution obtained for $z_{\rm sup}$, where $71\%$ of the objects have $z_\mathrm{sup}<1.0$ and the fraction of outliers in the photometric distribution is less than 3\%, so a redshift based on the single line found in the spectrum is very helpful. For all other objects, the photometric redshift was used with care, more weight being given to the other information available for each object. The last column in Table \ref{tab:z_spec_category} shows how the objects with $z_{\rm sup}$ are distributed among the different categories.
Figure \ref{fig:mag_Rc} shows the distribution of the $R_\mathrm{C}$ magnitude for the objects with $z_\mathrm{spec}$ (blue) and $z_\mathrm{sup}$ (red) values measured in this study. In some cases, the determination of the spectroscopic redshift allowed us to clarify the nature of the objects under study. For example, for some of the candidate CV stars, we determined a range of values for $z_\mathrm{spec}$ (0.5263 $\leq$ $z_\mathrm{spec}$ $\leq$ 1.9387), which made it clear that they cannot be stars, but must be distant compact sources. Although we already had $z_{\rm spec}$ measured, the final value for each object was calculated as a weighted mean of the redshifts obtained from each line after fitting. This is discussed further in the next section. \subsection{Line measurement}\label{secc:6.2} To measure the lines, we fitted them with a non-linear least-squares minimization routine implemented in Python (\textit{LMFIT}\footnote{\url{https://lmfit.github.io/lmfit-py/index.html}}, \citealt{LMFIT2014}). Each line was fitted with a Gaussian profile plus a linear model to take into account possible continuum variations, resulting in a total of five parameters to be determined: the centre, sigma, and amplitude of the Gaussian component, and the slope and intercept of the linear model. The parameters were left free, but initial values were needed. These initial values were adapted, and certain constraints were imposed, to minimize the computing time for each fitted spectral line. For example, since we knew $z_{\rm spec}$, the expected centre of the line could be computed and used as the initial value for the centre of the Gaussian component in the fitting process. One of the restrictions applied concerned the value of the H$\beta$ amplitude when the H$\alpha$ line was available in the object's spectrum.
The lines were usually fitted from the most intense to the least intense, so that the value of the H$\beta$ amplitude could not be greater than the H$\alpha$ amplitude already calculated, thus restricting the space of values that the fitting programme had to explore. The same was applied to the other lines of the Balmer series, and even to forbidden-line pairs such as [\ion{O}{III}] $\lambda$4959 \AA\ and [\ion{O}{III}] $\lambda$5007 \AA, and [\ion{N}{II}] $\lambda$6548 \AA\ and [\ion{N}{II}] $\lambda$6583 \AA, among others. Another constraint applied was related to the $\sigma$ of the Gaussians for spectral lines that come from the same regions of the galaxy, such as the abovementioned forbidden-line pairs: the $\sigma$ values of the Gaussians with which we fitted their line profiles should be almost identical because their nature is the same. Another important parameter, which remained fixed during the fitting and was external to it, was the wavelength window used for each line fit. The size of this region depended on the line type, the redshift, and the object type. With the spectral resolution used in our observations, some spectral lines very close in wavelength could not be resolved because they were blended into a single feature. In these cases, we had to fit more than one Gaussian model at the same time, one for each line, plus a linear model. Even in cases where the resolution was sufficient to separate the components, they were so close that a separate fit was difficult to perform. One example is shown in Fig. \ref{fig:line_fit}, where the emission feature corresponds to a group of three spectral lines: H$\alpha$ + [\ion{N}{II}] $\lambda$6548,6583 \AA. In the left panel, the grism used to observe the object is the R500R, which has approximately half the resolution of the grism used on the right (the R1000R grism).
In both cases, the blue dots are the observed flux, the red line is the best fit, and the bottom panels show the residuals of the fits. It is important to note that, when observed at the lowest resolution, the components were blended into the same feature, but that the fit recovered the three components well. In the right panel, the improved resolution allowed us to separate the [\ion{N}{II}] $\lambda$6583 \AA\ line from the H$\alpha$ line, but not enough to perform independent fits. Another case in which multiple Gaussians were fitted simultaneously concerned objects with broad lines. As usual in these situations, we fitted one Gaussian for the narrow component and another for the broad component. Figure \ref{fig:ew_observed} shows the rest-frame EWs obtained for some of the strongest lines in the optical range. By definition, emission lines have a negative EW and absorption lines a positive EW. For this reason, only the H$\alpha$ and H$\beta$ lines have positive values in Fig. \ref{fig:ew_observed}, because the forbidden lines [\ion{O}{II}] $\lambda$3726,3729 \AA\ and [\ion{O}{III}] $\lambda$5007 \AA\ cannot appear in absorption. Finally, Table \ref{tab:sample_line_catalogue} presents a sample of the information available in the database. Principal information for both the FT and PEP catalogues is included.
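The five-parameter line model described in this section (Gaussian plus linear continuum) can be sketched as follows. The actual fits were done with LMFIT; for a self-contained illustration we use SciPy's \texttt{curve\_fit} on a synthetic line instead, with the initial centre taken from the expected redshifted wavelength, as described above. All numerical values are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss_plus_linear(x, amp, cen, sig, slope, intercept):
    """Five-parameter per-line model: Gaussian plus linear continuum."""
    return amp * np.exp(-0.5 * ((x - cen) / sig) ** 2) + slope * x + intercept

# Synthetic emission line on a sloped continuum (illustrative values only).
rng = np.random.default_rng(42)
x = np.linspace(6500.0, 6700.0, 400)
true_pars = (120.0, 6583.0, 4.0, -0.01, 80.0)
y = gauss_plus_linear(x, *true_pars) + rng.normal(0.0, 1.0, x.size)

# Initial centre from the expected (1 + z) * lambda_rest position;
# the other starting values are rough guesses, as in the actual fits.
p0 = (100.0, 6580.0, 5.0, 0.0, 70.0)
popt, pcov = curve_fit(gauss_plus_linear, x, y, p0=p0)
```

Fitting a blend such as H$\alpha$ + [\ion{N}{II}] amounts to summing several Gaussian components over a single linear continuum, with the amplitude and $\sigma$ constraints described above applied between components.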
\begin{sidewaystable*} \caption{Sample of the information in Lockman-SpReSO} \label{tab:sample_line_catalogue} \begin{tabular}{ccccccccccccccccc} \hline\hline ID & RA & DEC & CAT & mag & mag err& FIR & FIR err & zphot & z & z err & z type & line &z line & z line err & \\ units & (deg) & (deg) & & & & (mJy) & (mJy) & & & & & & & & \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11) & (12) & (13) & (14) & (15) & \\ \hline 84957 & 163.07966 & 57.36440 & 4 & 19.085 & 0.002 & 260 & 30 & 0.082 & 0.07464 & 0.00006 & spec & [\ion{O}{III}] $\lambda$4959 & 0.0746 & 0.0002 & \\ & & & & & & & & & & & & [\ion{O}{III}] $\lambda$5007 & 0.0747 & 0.0001 & \\ & & & & & & & & & & & & [\ion{S}{II}] $\lambda$6716 & 0.0746 & 0.0002 & \\ & & & & & & & & & & & & [\ion{S}{II}] $\lambda$6731 & 0.0746 & 0.0002 & \\ 128229 & 163.16754 & 57.61459 & 4 & 20.854 & 0.004 & 730 & 20 & 0.474 & 0.482938 & 0.000009 & spec & \ion{Ca}{II} $\lambda$3934 & 0.48505 & - & \\ & & & & & & & & & & & & \ion{Ca}{II} $\lambda$3968 & 0.48368 & - & \\ & & & & & & & & & & & & \ion{H}{$\beta$} & 0.48305 & 0.00002 & \\ & & & & & & & & & & & & \ion{H}{$\alpha$} & 0.48291 & 0.00001 & \\ \hline \end{tabular} \bigskip \bigskip \renewcommand\thetable{5} \caption{\textit{continued}} \begin{tabular}{ccccccccccc} \hline\hline ID & line obs. centre & line obs. 
centre err & line flux & line flux err & FWHM line & line FWHM err & line EW & line S$/$N & line quality \\ units & (\AA) & (\AA) & (erg s$^{-1}$ cm$^{-2}$ $\times$ 10$^{-17}$) & (erg s$^{-1}$ cm$^{-2}$ $\times$ 10$^{-17}$) & (\AA) & (\AA) & (\AA) & & \\ (1) & (16) & (17) & (18) & (19) & (20) & (21) & (22) & (23) & (24) \\ \hline 84957 & 5329 & 1 & 17 & 3 & 21 & 3 & -2.54 & 12 & 5 & \\ & 5381.5 & 0.5 & 29 & 2 & 19 & 1 & -7.03 & 19 & 5 & \\ & 7217 & 1 & 15 & 2 & 17 & 2 & -5.21 & 13 & 5 & \\ & 7234 & 1 & 16 & 2 & 17 & 2 & -5.75 & 14 & 5 & \\ 128229 & 5842 & - & 2 & - & 2 & - & 1.32 & 1 & 1 & \\ & 5887 & - & 2 & - & 3 & - & 1.85 & 1 & 1 & \\ & 7209.5 & 0.1 & 38.2 & 0.9 & 10.0 & 0.2 & -55.67 & 19 & 5 & \\ & 9732.36 & 0.07 & 91 & 1 & 16.4 & 0.2 & -241.34 & 35 & 5 & \\ \hline \end{tabular} \tablefoot{Column (1) is the unique identification number for each object in the catalogue. Columns (2) and (3) give the optical coordinates (J2000) of the object. Coordinates of counterparts in other bands, available in \citetalias{Fotopoulou2012}, and FIR coordinates are also included. Column (4) is the flag indicating to which catalogue the object belongs: 1) X-ray point sources or CV star candidates; 2) high-velocity halo stars; 3) poorly studied radio galaxies; 4) FIR-objects; 5) objects in both $CAT = 1$ and $CAT = 4$; 7) objects in both $CAT=3$ and $CAT=4$; 12) objects in both $CAT=1$ and $CAT=2$; 20) very red QSOs selected using the \cite{Glikman2013} method; 21) very red QSOs selected using the \cite{Ross2015} criteria; 23) objects in both $CAT=20$ and $CAT=3$; 24) objects in both $CAT=20$ and $CAT=4$; 25) objects in both $CAT=21$ and $CAT=2$; and 26) objects in both $CAT=21$ and $CAT=3$. Columns (5) and (6) are the AB magnitudes and errors for all the bands available in \citetalias{Fotopoulou2012} from FUV (\textit{GALEX}) to 8 $\mu$m (\textit{Spitzer}; see Section 3 in \citetalias{Fotopoulou2012} for more details).
Columns (7) and (8) give the FIR fluxes and errors at 24, 100, and 160 $\mu$m from the PEP work expressed in mJy \citep{Lutz2011}. Column (9) is the photometric redshift calculated in \citetalias{Fotopoulou2012}. Columns (10) and (11) are the object redshift and the error measured in this work. Column (12) is the flag to mark how the redshift has been obtained. Two values are possible: `spec' and `sup'. Columns (13)--(24) are the main parameters of the fitted line using the model explained in the text. Each parameter of the Gaussian and line model is included in the table, as well as the EW and S/N obtained in the fitting process, and the quality criterion assigned in the first visual inspection.} \end{sidewaystable*} \subsection{Stellar-mass and IR luminosity distributions} In order to further characterize the scope of Lockman-SpReSO, basic parameters such as stellar mass and luminosity are essential. One of the most common ways to obtain them is by using SED fits to derive the physical properties of the objects from best-fit models. In the work of \cite{Shirley2019}, the authors performed a unification of the fields studied by \textit{Herschel} (1270 deg$^2$ and 170 million objects) and produced a general catalogue (HELP). In addition, they carried out SED-fitting studies on this catalogue \citep{Malek2018} to determine the properties of the objects using a previously determined photometric redshift \citep{Duncan2018}. A spectroscopic redshift gives us the advantage that, for a chosen cosmology, we know the distance of the object, which is very important for SED fitting and for the accurate translation of rest-frame models to the observed wavelength. Thus, a SED-fitting process was carried out to take advantage of the good photometric coverage collected in \citetalias{Fotopoulou2012}, plus the FIR information at 24, 100 and 160 $\rm \mu$m, together with the spectroscopic redshift determined in this study. As explained in Sect.
\ref{sec:2}, the \citet[hereafter K21]{Kondapally2021} multi-wavelength catalogue includes recent observations of the LH field in the optical range. The SpARCS and RCSLenS surveys observed the LH field using both the CFHT/MegaCam instrument and the broad-band filters $u$, $g$, $r$, $i$, and $z$, also included in the \citetalias{Fotopoulou2012} catalogue from observations made with SDSS. The SpARCS and RCSLenS bands were therefore added to the Lockman-SpReSO catalogue by cross-matching both catalogues. The process was restricted to a maximum distance of 1.5 arcsec, the same as that used when we merged the PEP-catalogue with the FT- and OSIRIS-catalogues. It was found that 97\% of the objects had a counterpart in \citetalias{Kondapally2021} at an angular distance of less than 1.5 arcsec. Another reason for the fusion of catalogues was the completeness of the \citetalias{Kondapally2021} sample with respect to the Lockman-SpReSO catalogue; in other words, for the $u$, $g$, $r$, $i$, and $z$ bands, we have information for 30\%, 38\%, 40\%, 40\%, and 39\% of the objects, respectively, within SDSS observations from \citetalias{Fotopoulou2012}, whereas in SpARCS and RCSLenS we have information for 97\%, 97\%, 97\%, 86\%, and 94\% of the objects. This effect is also present in the \textit{GALEX} FUV and NUV bands, with information for 13\% and 27\% of the objects, respectively, in the \citetalias{Fotopoulou2012} catalogue, while in \citetalias{Kondapally2021} we have information for 74\% of the objects in each band. To cover the FIR range as well as possible, the Lockman-SpReSO catalogue was cross-matched with the HELP catalogue in order to obtain the fluxes in the 250, 350, and 500 $\rm \mu$m \textit{Herschel}/SPIRE bands. These photometric points helped to model the IR emission on the red side of the peak for the vast majority of objects in our spectroscopic sample.
We found that 98.9\% of our objects have a HELP counterpart at an angular distance of less than 1.5 arcsec, and 96.6\% have a HELP counterpart at less than 0.7 arcsec. The maximum separation was chosen to be 1.5 arcsec. Also, for the SMGs in the LH-catalogue, the JCMT/AzTEC 1.1 mm band value of \cite{Michalowski2012} was taken into account. Table \ref{tab:filters_c} compiles all the filters used in the SED fittings. For the objects in the LH-catalogue with a calculated spectroscopic redshift, we then fitted their SEDs using the CIGALE software (Code Investigating GALaxy Emission, \citealt{Bugarella2005}, \citealt{Boquien2019}). This code allowed us to perform SED fits from the UV to the radio (the X-ray regime is included in the latest update, \citealt{Yan2020}, \citeyear{Yan2022}; we do not, however, use it in this study). The code is based on the assumption of energy balance, that is to say, the UV, optical, and NIR light attenuated by dust is re-emitted at longer IR wavelengths. For each of the components involved in the SED fitting, CIGALE allowed us to choose between different models incorporated in the software. CIGALE performs a minimization of the $\chi^2$ statistic to select the best-fitting model. In addition, CIGALE also performs a Bayesian-based study to obtain a probability distribution of the physical parameters derived from the SED fit using all the models and their errors. Thus, using the available photometry and the spectroscopic redshift, we were able to obtain essential information about the objects in the LH-catalogue. For the stellar component, we used the \texttt{sfhdelayed} model, a star formation history (SFH) model with a nearly linear growth up to a characteristic time ($\tau$), after which it drops smoothly, plus an exponential starburst. This model allowed us to fit both late-type galaxies (small $\tau$) and early-type galaxies (large $\tau$; \citealt{Boquien2019}).
As our study deals mostly with infrared-emitting galaxies, we adopted a recent starburst to model the young population of stars (i.e.\ SFGs). The intrinsic stellar component was computed using the library of \cite[][model \texttt{bc03}]{Bruzual2003} with sub- and supra-solar metallicities and the IMF of \cite{Chabrier2003}. Nebular emission was also taken into account by adding the corresponding model to the CIGALE calculation. Extinction was incorporated using the \texttt{dustatt\_modified\_starburst} model, which is based on the attenuation law of \cite{Calzetti2000}, to which the curve of \cite{Leitherer2002} between the Lyman break and 150 nm was added; in addition, both the slope and the UV bump could be modified. For the dust contribution, we used the \cite{Dale2014} templates (\texttt{dale2014} model), based on nearby SFGs, which also add an AGN component. More sophisticated AGN models were not used because the purpose of these SED fits was the characterization of the Lockman-SpReSO sample as a whole, for which the AGN-fraction free parameter provided by the \citet{Dale2014} models was considered adequate. Future studies of the Lockman-SpReSO project will carry out detailed investigations of object classification, and more individualized SED fits will study each type of object more precisely. A summary of the selected models and the set of values used for the parameters of the models is given in Table \ref{tab:CIGALE}. An example of the SED-fitting process is shown in Fig. \ref{fig:SED_fit}, where the best fit obtained for a source in the Lockman-SpReSO catalogue is plotted together with the contribution of each of the models used. The lower part of Fig. \ref{fig:SED_fit} shows the relative residuals of the flux obtained in the fit. Most of our objects were selected for their emission in the \textit{Herschel} bands (i.e.\ they are FIR emitters).
Therefore, one of the first parameters needed to describe the sample is the total IR luminosity ($L_\mathrm{TIR}$). There are different methods for obtaining the IR luminosity: some derive $L_\mathrm{TIR}$ from a monochromatic proxy \citep{Chary2001}, while in others, such as the work of \citet[see also the references therein]{Galametz2013}, the IR luminosity is obtained from an analytic expression based on \textit{Spitzer} and \textit{Herschel} data. These studies are usually designed to use the rest-frame bands, which means that when we study distant galaxies the bands are redshifted and a correction is needed. To surmount this difficulty, since we had the SED fits available, we could integrate the luminosity of the best fit in the IR range (usually between 8 and 1\,000 $\rm{\mu}$m) and directly obtain $L_\mathrm{TIR}$. CIGALE already returns this luminosity and its error, both from the best-fitting model and from the Bayesian approach. The left panel of Fig. \ref{fig:LTIR_Mass} shows the IR luminosity distribution obtained for the $z_\mathrm{spec}$ sample in blue and for the $z_\mathrm{sup}$ sample in red. We find that most of our objects (55$\%$) are in the LIRG regime (\(L_\mathrm{TIR} > 10^{11}L_\odot\)), 6$\%$ are ULIRGs (\(L_\mathrm{TIR} > 10^{12}L_\odot\)), and less than 1$\%$ are hyper-luminous infrared galaxies (HLIRGs, \(L_\mathrm{TIR} > 10^{13}L_\odot\)). Another parameter of importance for the general description of the sample is the stellar mass ($M_*$) of the objects. As for the IR luminosity, we used the value derived from the SED fit with CIGALE. Figure \ref{fig:LTIR_Mass} (right panel) shows the mass distribution obtained for the $z_\mathrm{spec}$ objects in blue and for the $z_\mathrm{sup}$ sample in red.
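The 8--1000 $\rm \mu$m integration of the best-fitting SED mentioned above can be sketched with a toy model. Here a single 40 K greybody stands in for the CIGALE best fit; the temperature, emissivity index $\beta$, and the (arbitrary) normalisation are illustrative assumptions, not values from the paper.

```python
import numpy as np

# SI constants and a toy rest-frame FIR SED: a 40 K greybody
h, c, k = 6.626e-34, 2.998e8, 1.381e-23
lam = np.logspace(np.log10(8e-6), np.log10(1e-3), 2000)   # 8-1000 micron [m]
T, beta = 40.0, 1.8

nu = c / lam                                   # decreasing frequency grid
b_nu = (2 * h * nu**3 / c**2) / np.expm1(h * nu / (k * T))
l_nu = nu**beta * b_nu                         # greybody luminosity density

# L_TIR ~ integral of L_nu over the 8-1000 micron band (trapezoidal rule
# on the reversed, i.e. increasing, frequency grid)
nu_i, l_i = nu[::-1], l_nu[::-1]
l_tir = np.sum(0.5 * np.diff(nu_i) * (l_i[1:] + l_i[:-1]))

lam_peak = lam[np.argmax(l_nu)]                # wavelength of the SED peak
print(l_tir, lam_peak)
```

In practice CIGALE reports $L_\mathrm{TIR}$ itself; the point here is only that the 8--1000 $\mu$m integral of the best-fitting model is a direct route to the total IR luminosity, with no band correction needed once the spectroscopic redshift fixes the luminosity distance.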
Considering both distributions, 75$\%$ of the objects have stellar masses greater than $\log(M_*/M_\odot)=9.93$, 25$\%$ of them have a stellar mass greater than $\log(M_*/M_\odot)=10.54$, and the median value is $\log(M_*/M_\odot)=10.28$. Figure \ref{fig:LTIR_Mass_fracAGN} plots $M_*$ versus $L_\mathrm{TIR}$ obtained from the SED fittings, colour-coded by the AGN fraction derived from the IR templates used \citep{Dale2014}. It can be seen that the AGN fraction increases for the most luminous galaxies, which in turn are the most massive. This is in agreement with the findings of \cite{Veilleux1995}, who showed that the probability of the source of ionization being nuclear activity increases with IR luminosity. The information studied in this section will be used in forthcoming papers of the Lockman-SpReSO series. Stellar mass, IR luminosity, line measurements, and SED fits will play a key role in studying the physical properties of both the FIR sources and the secondary-catalogue objects. \section{Summary and timeline} This paper presents the Lockman-SpReSO project and focuses on its scientific motivation, target selection, observational design, and the first results from the reduction of the spectra. Lockman-SpReSO was created to fill the notable lack of spectroscopic information in the Lockman Hole field, one of the best fields for deep large-scale studies owing to its low hydrogen column density. In this way, spectroscopic redshifts and the main physical parameters, such as gas metallicity, extinction, SFR, and even SED fits, can be studied for a specific selection of targets. The Lockman-SpReSO catalogue contains 1144 sources of various kinds. All Lockman-SpReSO observations were carried out from 2014 to 2018, and all data have been satisfactorily reduced. The spectroscopic data have been analysed and have produced spectroscopic redshifts for a total of 357 objects, from $z_\mathrm{spec} = 0.0290$ to $z_\mathrm{spec} = 4.9671$.
For 99 objects with only one characteristic feature in their spectra, an attempt was made to determine the redshift, resulting in a range from $z_\mathrm{sup}=0.0973$ to $z_\mathrm{sup}=1.4470$. Furthermore, of the objects whose redshift was determined, only $\sim 25\%$ have a spectroscopic redshift available in the literature. Finally, the spectral lines of the objects were measured in order to establish the initial database of Lockman-SpReSO. We performed an SED-fitting process using the spectroscopic redshift and the CIGALE software, taking advantage of the wide photometric spectral coverage from the FUV to the FIR. The fit results allowed us to derive the IR luminosities and the stellar masses of the sources. Based on the $L_\mathrm{TIR}$ values derived from the SED fitting, about 55\% of the objects are LIRGs, 6\% are ULIRGs, and less than 1\% are HLIRGs. The stellar mass distribution has minimum and maximum values of $\log(M_*/M_\odot)=7.65$ and $\log(M_*/M_\odot)=12.07$, respectively, with a median value of $\log(M_*/M_\odot)=10.28$. A data release is expected for late 2022 or early 2023, in which the results of the type classification, extinction, gas metallicity, and SFR determinations will be presented for the objects for which we have sufficient information. Studies will also be carried out in order to analyse the different classes of sources in the secondary catalogues. We shall try to perform NIR observations using other facilities for the objects with known redshifts that have emission lines in that wavelength range. \begin{acknowledgements} We thank the anonymous referee for their useful report.
This work was supported by the Evolution of Galaxies project, of references AYA2017-88007-C3-1-P, AYA2017-88007-C3-2-P, AYA2018-RTI-096188-BI00, PID2019-107408GB-C41, PID2019-106027GB-C41, PID2021-122544NB-C41, and MDM-2017-0737 (Unidad de Excelencia María de Maeztu, CAB), within the \textit{Programa estatal de fomento de la investigación científica y técnica de excelencia del Plan Estatal de Investigación Científica y Técnica y de Innovación (2013-2016)} of the Spanish Ministry of Science and Innovation/State Agency of Research MCIN/AEI/ 10.13039/501100011033 and by `ERDF A way of making Europe'. This article is based on observations made with the Gran Telescopio Canarias (GTC) at Roque de los Muchachos Observatory on the island of La Palma, with the William Herschel Telescope (WHT) at Roque de los Muchachos Observatory on the island of La Palma, and on observations at Kitt Peak National Observatory, NSF's National Optical-Infrared Astronomy Research Laboratory (NOIRLab Prop. ID: 2018A-0056; PI: Gonz\'alez-Serrano, J.I.), which is operated by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation. This research has made use of the NASA/IPAC Extragalactic Database (NED), which is funded by the National Aeronautics and Space Administration and operated by the California Institute of Technology. J.N. acknowledges the support of the National Science Centre, Poland through the SONATA BIS grant 2018/30/E/ST9/00208. EB and ICG acknowledge support from DGAPA-UNAM grant IN113320. MP acknowledges the support from the Space Science and Geospatial Institute under the Ethiopian Ministry of Innovation and Technology (MInT). EA and MP acknowledge the support from the State Agency for Research of the Spanish MCIU through the Center of Excellence Severo Ochoa award to the Instituto de Astrof\'isica de Andaluc\'ia (SEV-2017-0709).
JAD acknowledges the support of the Universidad de La Laguna through the Proyecto de Internacionalización y Excelencia, Programa Tomás de Iriarte 2022. The authors thank Terry Mahoney (at the IAC's Scientific Editorial Service) for his substantial improvements of the manuscript. \end{acknowledgements} \bibliographystyle{aa} \bibliography{biblio} \onecolumn \begin{appendix}
% \section{Photometric information}
\begin{table*}[h!] \centering \caption{Photometric filters used in the SED-fitting process and the number of objects with information in each of them.} \label{tab:filters_c} \begin{tabular}{cccccccc} \hline Telescope & Instrument & \multicolumn{6}{c}{Filter} \\ \hline \multirow{2}{*}{\textit{GALEX}} & \multirow{2}{*}{\textit{GALEX}} & \multicolumn{3}{c}{FUV} & \multicolumn{3}{c}{NUV} \\ & & \multicolumn{3}{c}{842} & \multicolumn{3}{c}{842} \\ \hline \multirow{4}{*}{LBT} & \multirow{2}{*}{LBCB} & \multicolumn{3}{c}{U} & \multicolumn{3}{c}{B} \\ & & \multicolumn{3}{c}{1072} & \multicolumn{3}{c}{1040} \\ \cline{2-8} & \multirow{2}{*}{LBCR} & \multicolumn{2}{c}{V} & \multicolumn{2}{c}{Y} & \multicolumn{2}{c}{z} \\ & & \multicolumn{2}{c}{1011} & \multicolumn{2}{c}{1011} & \multicolumn{2}{c}{1028} \\ \hline \multirow{2}{*}{Subaru} & \multirow{2}{*}{Suprime} & \multicolumn{2}{c}{Rc} & \multicolumn{2}{c}{Ic} & \multicolumn{2}{c}{z} \\ & & \multicolumn{2}{c}{1144} & \multicolumn{2}{c}{1081} & \multicolumn{2}{c}{1134} \\ \hline \multirow{2}{*}{SLOAN} & \multirow{2}{*}{SLOAN} & \multicolumn{2}{c}{u} & g & r & i & z \\ & & \multicolumn{2}{c}{350} & 442 & 470 & 470 & 454 \\ \hline \multirow{2}{*}{CFHT} & \multirow{2}{*}{MegaCAM} & \multicolumn{2}{c}{u} & g & r & i & z \\ & & \multicolumn{2}{c}{1102} & 1118 & 1105 & 980 & 1069 \\ \hline \multirow{2}{*}{UKIRT} & \multirow{2}{*}{WFCAM} & \multicolumn{3}{c}{J} & \multicolumn{3}{c}{K} \\ & & \multicolumn{3}{c}{1106} & \multicolumn{3}{c}{1107} \\ \hline \multirow{4}{*}{\textit{Spitzer}} & \multirow{2}{*}{IRAC} &
\multicolumn{2}{c}{3.6 $\mu$m} & 4.5 $\mu$m & 5.8 $\mu$m & \multicolumn{2}{c}{8 $\mu$m} \\ & & \multicolumn{2}{c}{972} & 931 & 952 & \multicolumn{2}{c}{909} \\ \cline{2-8} & \multirow{2}{*}{MIPS} & \multicolumn{6}{c}{24 $\mu$m} \\ & & \multicolumn{6}{c}{956} \\ \hline \multirow{4}{*}{\textit{Herschel}} & \multirow{2}{*}{PACS} & \multicolumn{3}{c}{100 $\mu$m} & \multicolumn{3}{c}{160 $\mu$m} \\ & & \multicolumn{3}{c}{760} & \multicolumn{3}{c}{606} \\ \cline{2-8} & \multirow{2}{*}{SPIRE} & \multicolumn{2}{c}{250 $\mu$m} & \multicolumn{2}{c}{350 $\mu$m} & \multicolumn{2}{c}{500 $\mu$m} \\ & & \multicolumn{2}{c}{889} & \multicolumn{2}{c}{889} & \multicolumn{2}{c}{889} \\ \hline \multirow{2}{*}{JCMT} & \multirow{2}{*}{AzTEC} & \multicolumn{6}{c}{1.1 mm} \\ & & \multicolumn{6}{c}{18} \\ \hline \end{tabular} \end{table*} \section{CIGALE input parameters} \begin{table*}[h!] \centering \caption{Schedule of the parameters and models used in the CIGALE SED fitting.} \label{tab:CIGALE} \begin{tabular}{c c c} \hline \hline Model used & Parameter information & Values\\ \hline SFH: & e-folding time main population & 250, 500, 1000, 2000, 4000, 6000, 8000 [Myrs] \\ delayed SFH with & age of the main population & 250, 500, 1500, 4000, 8000, 10000 [Myrs]\\ optional exponential burst & e-folding time of late burst & 25, 50 [Myrs] \\ & age of the late burst & 10, 20, 50 [Myrs] \\ & mass fraction late burst & 0.0, 0.01, 0.05 \\ \hline SSP: & IMF & \cite{Chabrier2003} \\ \cite{Bruzual2003} & metallicity & 0.0001, 0.0004, 0.004, 0.008, 0.02, 0.05 \\ \hline Dust attenuation: & E(B-V) lines & 0, 0.1, 0.2, 0.3, ..., 2.4, 2.5 \\ power law modified & E(B-V) factor (line to continuum) & 0.44 \\ \cite{Calzetti2000} & UV bump wavelength & 217.5 [nm] \\ & UV bump FWHM & 35 \\ & UV bump amplitude & 0, 1.5, 3 \\ & Power law modification slope & -0.2, 0 \\ & $R_{\rm V}$ & 3.1 \\ \hline Dust emission & AGN fraction & 0, 0.1, 0.3, 0.5, 0.8, 0.9\\ \cite{Dale2014} & Alpha slope & 0.125, 0.625, 1.0, 
1.25, 1.5, 1.75, 2.0, 2.5, 3.0, 3.5, 4.0 \\ \hline \end{tabular} \end{table*} \end{appendix}
Title: Dissimilar Donuts in the Sky? Effects of a Pressure Singularity on the Circular Photon Orbits and Shadow of a Cosmological Black Hole
Abstract: The black hole observations obtained so far indicate one thing: similar "donuts" exist in the sky. But what if some of the black hole shadows observed in the future are different from the others? In this work the aim is to show that a difference in the shadow of some black holes observed in the future might explain the $H_0$-tension problem. In this letter we investigate the possible effects of a pressure cosmological singularity on the circular photon orbits and the shadow of galactic supermassive black holes at cosmological redshifts. Since the pressure singularity is a global event in the Universe, its effects will be imposed on supermassive black holes at a specific redshift. As we show, the pressure singularity affects the circular photon orbits around cosmological black holes described by the McVittie metric; specifically, for some time before the time instance at which the singularity occurs, the photon orbits do not exist. We discuss the possible effects of the absence of circular photon orbits on the shadow of these black holes. Our idea indicates that, if a pressure singularity occurred in the near past, it could have a direct imprint on the shadow of supermassive galactic black holes at the redshift corresponding to the time instance at which the singularity occurred. Thus, if a sample of shadows is observed in the future for redshifts $z\leq 0.01$, and differences are found in the shadows at a specific redshift, this could be an indication that a pressure singularity occurred, and this global event might resolve the $H_0$-tension as discussed in previous work. However, the observation of several shadows at redshifts $z\leq 0.01$ is rather a far-future task.
https://export.arxiv.org/pdf/2208.07972
\tolerance=5000 \title{Dissimilar Donuts in the Sky? Effects of a Pressure Singularity on the Circular Photon Orbits and Shadow of a Cosmological Black Hole} \author{S.D. Odintsov,$^{1,2,3}$\,\thanks{odintsov@ieec.uab.es} V.K. Oikonomou,$^{4}$\,\thanks{v.k.oikonomou1979@gmail.com}} \affiliation{$^{1)}$ ICREA, Passeig Luis Companys, 23, 08010 Barcelona, Spain\\ $^{2)}$ Institute of Space Sciences (ICE,CSIC) C. Can Magrans s/n, 08193 Barcelona, Spain\\ $^{3)}$ Institute of Space Sciences of Catalonia (IEEC), Barcelona, Spain\\ $^{4)}$ Department of Physics, Aristotle University of Thessaloniki, Thessaloniki 54124, Greece} \section{Introduction} Among the mysteries of contemporary research in astronomy and astrophysics, the $H_0$-tension is one of the oldest, having persisted for at least three decades. The problem is that large-redshift probes of the present-day Hubble rate, like the Cosmic Microwave Background (CMB) radiation \cite{Planck:2018vyg}, indicate smaller values than small-redshift probes like the Cepheids \cite{Riess:2020fzl}. The tension can be explained theoretically by early dark energy \cite{Niedermann:2020dwg,Poulin:2018cxd,Karwal:2016vyq,Oikonomou:2020qah,Nojiri:2019fft}. Another groundbreaking explanation could be an abrupt change of physics $70-150\,$Myrs ago \cite{Perivolaropoulos:2021jda,Perivolaropoulos:2021bds,Perivolaropoulos:2022vql} (see also \cite{Odintsov:2022eqm}), which would have radically affected the Cepheid parameters, thus yielding a larger value for the present-day Hubble rate. Although the $H_0$-tension may be attributed to the Cepheid calibration \cite{Mortsell:2021nzg,Perivolaropoulos:2021jda}, the issue attracts the attention of cosmologists and astrophysicists; see for example Refs.
\cite{Dai:2020rfo,He:2020zns,Nakai:2020oit,DiValentino:2020naf,Agrawal:2019dlm,Yang:2018euj,Ye:2020btb,Vagnozzi:2021tjv, Desmond:2019ygn,OColgain:2018czj,Vagnozzi:2019ezj, Krishnan:2020obg,Colgain:2019joh,Vagnozzi:2021gjh,Lee:2022cyh,Nojiri:2021dze,Krishnan:2021dyb,Ye:2021iwa,Ye:2022afu,Verde:2019ivm,Marra:2021fvf} and references therein. In a recent work \cite{Odintsov:2022eqm} we adopted the perspective of Refs. \cite{Perivolaropoulos:2021jda,Perivolaropoulos:2021bds,Perivolaropoulos:2022vql} that an abrupt physics change $70-150\,$Myrs ago might explain the $H_0$-tension in a transparent way, and we theorized that such an abrupt, global change of physics in the Universe might have occurred if the Universe experienced a pressure singularity (also known as a Type II or sudden singularity). Such a singularity is not of the crushing type, and the only physical quantity that diverges globally on the spacelike three-dimensional hypersurface defined by the time instance at which the singularity occurs is the pressure. The rest of the physical observables are finite; hence this can be viewed as a singularity through which the Universe passes relatively smoothly. In this article we examine the qualitative effect of such a singularity on galactic supermassive black holes, and specifically on the circular photon orbits around the black hole and the corresponding shadow. Our proposal is based on the idea that, if a pressure singularity occurred in the past, it could have potentially observable effects on supermassive black holes at redshifts corresponding to the time instance at which the singularity occurred, so $70-150\,$Myrs ago, or redshifts $z\simeq 0.01$.
The absence of the circular photon orbits could affect the ring of the shadow; so if scientists in the far future have available the shadows of a large sample of cosmological black holes at $z\simeq 0.01$, and the shadows of some black holes at a specific redshift $z_t$ are different from the black hole shadows at other redshifts, this could indicate that a pressure singularity occurred at the specific redshift $z_t$. The determination of the redshift might also determine the time at which the pressure singularity occurred. In order to realize the above idea technically, we shall use the McVittie metric \cite{McVittie:1933zz,Faraoni:2007es,Kaloper:2010ec,Lake:2011ni,Nandra:2011ui,Nolan:2014maa,Maciel:2015dsh,Nolan:2017rtj,Perlick:2018iye,Perez:2021etn,Bisnovatyi-Kogan:2018vxl, Tsupko:2019mfo,Perez:2019cxw,Perlick:2021aok,Nojiri:2020blr}. The motivation is rather simple: the expansion of the Universe at cosmological scales should somehow affect cosmological black holes. A more concrete approach to the shadow of cosmological black holes should therefore take into account the expansion of the Universe, and thus both the gravity of the black hole and the cosmic expansion along each part of the light trajectory. The shadow of a black hole is basically a dark spot in the direction of the black hole in the sky, which is viewable against the background of other light sources nearby. The McVittie metric \cite{McVittie:1933zz,Faraoni:2007es,Kaloper:2010ec,Lake:2011ni,Nandra:2011ui,Nolan:2014maa,Maciel:2015dsh,Nolan:2017rtj,Perlick:2018iye,Perez:2021etn,Bisnovatyi-Kogan:2018vxl, Tsupko:2019mfo,Perez:2019cxw,Perlick:2021aok,Nojiri:2020blr} is the most refined description of a black hole embedded in an expanding Friedmann-Robertson-Walker background. For a stream of articles on the McVittie metric, see Refs.
\cite{McVittie:1933zz,Faraoni:2007es,Kaloper:2010ec,Lake:2011ni,Nandra:2011ui,Nolan:2014maa,Maciel:2015dsh,Nolan:2017rtj,Perlick:2018iye,Perez:2021etn,Bisnovatyi-Kogan:2018vxl, Tsupko:2019mfo,Perez:2019cxw,Perlick:2021aok,Nojiri:2020blr} and references therein. Although initially it was debatable whether the McVittie metric describes a black hole \cite{Faraoni:2007es}, it is now widely accepted that it indeed describes a black hole in an expanding background \cite{Kaloper:2010ec,Bisnovatyi-Kogan:2018vxl,Tsupko:2019mfo,Perlick:2021aok,Nojiri:2020blr}, although the accretion of matter and radiation is not allowed. The description of the McVittie solution as a black hole is further supported by the geodesic incompleteness of the McVittie metric, due to the existence of a null surface at a finite distance. Weakly gravitating systems whose size is small compared to the comoving Hubble radius are not affected by the expansion in a FRW Universe; however, this is not the case for large-scale structures. Indeed, for large-scale structures and cosmological black holes, the effects of the expansion must be taken into account. Each galactic supermassive black hole tracks the orbit of its galaxy in the FRW spacetime, and thus the expansion of the Universe must be taken into account. The participation of even a strongly bound local object in the Universe's expansion seems to be a general rule \cite{Faraoni:2007es}. Hence, using the McVittie metric, we determine whether the condition which allows circular photon orbits is satisfied or not. As we demonstrate, for a spacetime with a pressure singularity, circular photon orbits are not allowed for some time interval before the singularity, and we theorize how such a result could affect the shadow of black holes at a specific redshift.
We discuss this perspective and explain that it is technically hard to verify our proposal with present-day technology; however, in some decades from now, the resolution techniques will be refined, and perhaps our proposal can then be investigated directly. \section{Photon Orbits in McVittie's Black Holes and Pressure Cosmological Singularities} Before discussing the effects of a pressure singularity on the circular photon orbits in a McVittie black hole, let us recall the classification of finite-time spacetime cosmological singularities, following \cite{Nojiri:2005sx}. If the singularity occurs at the time instance $t=t_s$, we have the following classification \cite{Nojiri:2005sx}: \begin{itemize} \item Type I (``Big Rip''): A typical crushing-type singularity. As the finite-time singularity is approached at $t \to t_s$, all the physical quantities that can be defined on the spacelike hypersurface corresponding to the time instance $t=t_s$, such as the total effective pressure $p_\mathrm{eff}$ and energy density $\rho_\mathrm{eff}$, strongly diverge, as does the scale factor \cite{bigrip}. \item Type II (``sudden''): This is known as the pressure singularity, first studied by Barrow in Refs. \cite{barrowsudden}, see also \cite{barrowsudden1}. This is the kind of singularity in which we shall be interested in this work; as the singularity is approached, the scale factor and the energy density remain finite, but the pressure diverges. \item Type III: In this case, as the singularity is approached, the scale factor is finite, but both the pressure and the energy density diverge. \item Type IV: This is a mild singularity studied in detail in Refs. \cite{Nojiri:2005sx,Nojiri:2004pf,Barrow:2015ora,Nojiri:2015fra,Odintsov:2015zza,Oikonomou:2015qha,Oikonomou:2015qfh}.
In this case, as the singularity is approached, the scale factor, the energy density and the pressure are finite, and only the higher derivatives of the Hubble rate $\frac{\mathrm{d}^nH}{\mathrm{d}t^n}$, with $n\geq 2$, diverge. \end{itemize} To make this more transparent, let us assume that the scale factor has the following simple form, \begin{equation}\label{scalefactorini} a(t)\simeq c(t)+d(t)(t-t_s)^{\eta}\, , \end{equation} where the functions $c(t)$ and $d(t)$, including their higher-order derivatives with respect to the cosmic time, are finite at the cosmic time instance $t=t_s$. We also assume that $\eta=\frac{2m}{2n+1}$, with $m$ and $n$ positive integers, in order to avoid complex values of the scale factor. The values of $\eta$ determine the singularity type that may occur at $t=t_s$. The energy density depends on the Hubble rate itself, while the pressure depends also on the first derivative of the Hubble rate with respect to the cosmic time. Specifically, the singularity type depends on $\eta$ as follows: \begin{itemize} \item For $\eta <0$ a Type I singularity occurs, since the scale factor, the energy density and the pressure are divergent. \item For $0<\eta<1$ a Type III singularity occurs. \item For $1<\eta<2$ a Type II singularity, or pressure singularity, occurs, since only the pressure is divergent. \item For $2<\eta$ a Type IV singularity occurs. \end{itemize} Hence, for the pressure singularity one needs $1<\eta<2$, since in this case, at $t=t_s$, only the derivative of the Hubble rate with respect to the cosmic time diverges. When the Universe goes through a pressure singularity, it remains geodesically complete, since the following integral takes finite values for all cosmic times \cite{Fernandez-Jambrina:2004yjt}, \begin{equation}\label{highercurvsc} \int_0^{\tau}dt R^{i}_{0j0}(t)\, . 
\end{equation} However, the pressure diverges globally on the spacelike hypersurface defined by the time instance at which the singularity occurs. We shall now consider the effects of a pressure singularity on the circular photon orbits around black holes in an expanding spacetime. We shall discuss the impact of the absence of circular photon orbits on the shadow of a black hole, and how this could be verified observationally on the shadows of supermassive black holes in the far future. Our main assumption will be that a pressure singularity occurred $70-150\,$Myrs ago, which corresponds to an abrupt change of physics. In the spirit of Ref. \cite{Perivolaropoulos:2021jda}, this might explain the $H_0$-tension, since an abrupt physics change might directly affect the Cepheid parameters. Thus we assume that the pressure singularity indeed occurred $70-150\,$Myrs ago, i.e., at a redshift $z\leq 0.01$. It is now widely accepted that the McVittie metric describes a black hole in a dynamically expanding Friedmann-Robertson-Walker (FRW) Universe \cite{McVittie:1933zz,Faraoni:2007es,Kaloper:2010ec,Lake:2011ni,Nandra:2011ui,Nolan:2014maa,Maciel:2015dsh,Nolan:2017rtj,Perlick:2018iye,Perez:2021etn,Bisnovatyi-Kogan:2018vxl, Tsupko:2019mfo,Perez:2019cxw,Perlick:2021aok,Nojiri:2020blr}. The McVittie spacetime metric for a flat FRW background, in geometrized units ($G=c=1$), reads, \begin{equation}\label{mcvittiemetric} ds^2=-\left(\frac{1-\frac{m(t)}{2r}}{1+\frac{m(t)}{2r}}\right)^2\cdot dt^2+\left( 1+\frac{m(t)}{2r}\right)^4 a(t)^2\cdot \left(dr^2+r^2\cdot (d\theta^2+\sin^2\theta\, d\varphi^2)\right)\, , \end{equation} where the function $m(t)$ is defined as follows, \begin{equation}\label{mfunction} m(t)=\frac{m_0}{a(t)}\, , \end{equation} where $m_0$ is the mass of the central body embedded in the expanding spacetime, i.e., the mass of the black hole, and $a(t)$ is the scale factor of the FRW spacetime. 
When $a=1$ the McVittie metric reduces to the Schwarzschild metric in isotropic coordinates, while in the limit $m_0\to 0$ the FRW metric is recovered. Let us now consider photon orbits in such a spacetime. For geodesic paths on the plane $\theta=\frac{\pi}{2}$, due to the spherical symmetry of the spacetime, the conservation of angular momentum yields, \begin{equation}\label{conservationofangularmom} \dot{\phi}=\frac{L}{R^2},\,\,\,\dot{\theta}=0\, , \end{equation} where $R$ is the areal radius coordinate defined as follows, \begin{equation}\label{arearadiuscoordinate} R=a(t)r\left(1+\frac{m_0}{2 r a(t)} \right)^2\, . \end{equation} The corresponding geodesic equation for circular photon orbits reads \cite{Perez:2021etn}, \begin{equation}\label{circulargeodesciscsphoton} \frac{L^2}{R^2}=\left(f^2-g^2 \right)\dot{t}^2\, , \end{equation} where the functions $f$ and $g$ are defined as follows \cite{Perez:2021etn}, \begin{equation}\label{functionsfandg} f=\sqrt{1-\frac{2 m(t)}{R}},\,\,\,g=R\left(H+\frac{\dot{m}}{m}(f^{-1}-1)\right)\, , \end{equation} with $H$ being the Hubble rate $H=\frac{\dot{a}}{a}$. The quantity $\chi(R,t)=f^2-g^2=g^{\mu \nu}\nabla_{\mu}R\nabla_{\nu}R$ defines the trapped and untrapped regions of the spherically symmetric spacetime. The condition for having circular photon orbits of radius $R_c$ for all cosmic times is \cite{Perez:2021etn}, \begin{equation}\label{stablecircularorbitscondition} \chi(R_c,t)=f^2-g^2=1-\frac{2m(t)}{R_c}-R_c^2\left(H(t)+\frac{\dot{m}(t)}{m(t)}\left(\frac{1}{\sqrt{1-\frac{2m(t)}{R_c}}}-1 \right) \right)^2>0\, . \end{equation} Obviously, in the case that $\chi(R_c,t)<0$, circular photon orbits cannot exist and, as we shall now explain, this is the case for cosmic times near a pressure cosmological singularity. 
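To make the condition $\chi(R_c,t)>0$ concrete, the following minimal Python sketch (ours, not part of the original text) evaluates $\chi$ for a user-supplied scale factor, using $m(t)=m_0/a(t)$ and hence $\dot{m}/m=-H$; the function name and the test values are illustrative assumptions.

```python
import numpy as np

def chi(R_c, t, a, adot, m0=1.0):
    """Photon-orbit condition chi(R_c, t) = f^2 - g^2 for the McVittie
    metric in geometrized units. Circular photon orbits of areal radius
    R_c can exist only where chi > 0."""
    H = adot(t) / a(t)               # Hubble rate
    m = m0 / a(t)                    # m(t) = m0 / a(t)
    f2 = 1.0 - 2.0 * m / R_c
    f = np.sqrt(f2)
    # mdot/m = -adot/a = -H, which follows from m(t) = m0 / a(t)
    g = R_c * (H + (-H) * (1.0 / f - 1.0))
    return f2 - g**2

# Sanity check: for a static background (a = 1, H = 0) the condition
# reduces to the Schwarzschild value 1 - 2 m0 / R_c = 0.2 for R_c = 2.5 m0.
val = chi(2.5, 0.0, a=lambda t: 1.0, adot=lambda t: 0.0, m0=1.0)
```

For a time-dependent background one passes the desired $a(t)$ and $\dot a(t)$ as callables and scans $\chi$ over a grid of cosmic times.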
In order to show this explicitly, let us assume that the scale factor of the Universe is approximately described by, \begin{equation} a(t)=c+ c \vert t \vert^\eta\, , \label{scfactans} \end{equation} where $c$ is some arbitrary constant with units $[L]^{-1}$ in geometrized units, and we shall assume that $\eta=\frac{2m}{2n+1}$, with $n$ and $m$ positive integers. Evidently, for values of $\eta$ satisfying the condition $1<\eta<2$, a pressure singularity occurs at the time instance $t=0$. The time instance at which the singularity occurs is arbitrary, but we chose it to be $t=0$ for convenience. This time instance can be taken to be any time instance in the past of our Universe, for example $70-150\,$Myrs ago. As we will now show, depending on the values of $c$ and $\eta$, the photon orbits with radii $2m_0< R_c\leq 3m_0$ in geometrized units around a black hole in an expanding Universe might not exist. For the scale factor (\ref{scfactans}), in Fig. \ref{plot1} we have plotted the quantity $\chi(R_c,t)$ for $R_c=2.5\,m_0$, for $\eta=5/2$ (left plot) and $\eta=2$ (right plot), taking $c=1.5$ in units of length. The left plot describes the existence or not of circular photon orbits for the case that a pressure singularity occurs at $t=0$. As is evident, for a limited time interval before the singularity, the circular photon orbits do not exist, and we shall now discuss qualitatively what this result might mean for the corresponding shadow of the black hole. The qualitative behavior presented in Fig. \ref{plot1} does not change for any radius in the range $2m_0< R_c\leq 3m_0$. What impact would the absence of circular photon orbits have on the shadow of a supermassive black hole at cosmological distances? It would probably affect the inner part of the shadow, and possibly the ring of the shadow. 
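For completeness (this short check is ours, not part of the original text), the Type II behaviour of the ansatz (\ref{scfactans}) with $t_s=0$ can be verified directly. Differentiating $a(t)=c(1+|t|^{\eta})$ gives

```latex
H(t)=\frac{\dot{a}}{a}=\frac{\eta\, |t|^{\eta-1}\,\mathrm{sgn}(t)}{1+|t|^{\eta}}\, ,
\qquad
\dot{H}(t)=\frac{\eta(\eta-1)\, |t|^{\eta-2}}{1+|t|^{\eta}}-H^2\, ,
```

so for $1<\eta<2$ one has $H\to 0$ while $\dot{H}\propto |t|^{\eta-2}$ diverges as $t\to 0$. Hence the energy density, $\rho\propto H^2$, stays finite while the pressure, which involves $\dot{H}$, diverges, exactly as required for a pressure singularity.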
Thus the idea that we want to suggest with this work is simple: by observing a large sample of supermassive black hole shadows at cosmological distances with redshifts $z\leq 0.01$, is there any notable difference between the shadows and, if yes, at which redshift do the shadows differ? If such a scenario is verified, it would be an indication that the Universe experienced a pressure singularity in its recent past, at the redshift where the shadows differ. At the moment, the observation of a large sample of shadows at cosmological distances is rather technically limited. Indeed, the observation of the shadow of the M87 supermassive black hole \cite{EventHorizonTelescope:2019dse,EventHorizonTelescope:2019ggy} at $z=0.004283$ is the best outcome that current technology can achieve. Thus our proposal cannot be verified at the moment, since the technical requirements exceed by far the current technology. Indeed, in order to capture the shadows of cosmological galactic supermassive black holes at redshifts $z\leq 0.01$, higher resolutions, and hence more refined VLBI techniques, are required. Current VLBI techniques do not allow one to reach such resolutions at higher redshifts\footnote{V.K.O. is thankful to Prof. Luciano Rezzolla for this comment and perspective.}. Thus our proposal could be investigated experimentally in the far future, several decades from now. The same limitation applies to the related studies of M87 and SgrA*, which are too close to us, so their cosmological redshift is not of the order of $z\sim 0.01$. 
Moreover, Centaurus A, an active galactic nucleus in the nearby galaxy NGC 5128 with redshift $z\sim 0.00183$, is also relatively close to us chronologically, so the effects of a sudden past finite-time singularity would more likely be spotted on supermassive black holes at higher redshifts, of the order $z\sim 0.01$; these are future perspectives of our work, relying heavily on refined VLBI techniques. \section{Future Perspectives} In this letter we considered the qualitative effects of a pressure cosmological singularity on the shadow of a galactic supermassive black hole at cosmological distances. Specifically, we considered the McVittie metric and investigated what effects a pressure singularity would have on the circular photon orbits around the supermassive black hole. As we showed, the circular photon orbits do not exist for a time interval before the singularity occurs. This feature can directly affect the shadow of the supermassive black hole, and specifically we theorized that the pressure singularity will affect all the supermassive galactic black holes at the redshift for which the circular photon orbits do not exist. Thus, if in the far future the shadows of a large sample of black holes are observed in detail for redshifts $z\leq 0.01$, our conjecture can be verified experimentally if the supermassive black holes at a specific redshift show similar characteristics which are absent at other redshifts. This would be a direct indication that at a specific redshift in the past the Universe experienced a global change of physics caused by a pressure singularity. If such a scenario actually took place $70-150\,$Myrs ago, it could simultaneously solve the $H_0$-tension problem. Let us now discuss the future perspectives of our theory, since for the moment it seems impossible to observe a large sample of shadows with high precision. 
The effect of the expansion of the Universe on nearby galactic black holes is tiny; only at cosmological distances should the effect of expansion be taken into account. Thus it is compelling to investigate cosmological black holes at distances of the order $z\leq 0.01$ in order to reveal differences between them, that is, to pinpoint the absence of some characteristic relevant to the shadow of the black hole. This would reveal the possible effects of a pressure singularity in the past of our Universe. Precision is required, though, to also take into account the effects of cosmic expansion: the shadow is a dynamical structure, not a static one \cite{Bisnovatyi-Kogan:2018vxl,Tsupko:2019mfo,Perlick:2021aok,Nojiri:2020blr}; with this paper, however, we aimed to provide a qualitative description of the whole phenomenon. The whole procedure is a rather far-future task, because the current VLBI technique does not allow one to reach the required (higher) resolutions at higher redshifts, and also at higher redshifts one must take into account the influence of the cosmic expansion on the angular diameter of the black hole shadow \cite{Perlick:2018iye}. This might eventually cause differences between shadows at low and high redshifts, so one must also take this into account in order to pinpoint the effects of a pressure singularity. Moreover, a realistic approach should also seriously take into account the effects of the cosmic expansion on the surface brightness of light sources \cite{Perlick:2018iye}, and of course the rotation, but the main qualitative argument of this work does not change. If some difference is observed in a sample of shadows at redshifts $z\leq 0.01$, this could be due to a pressure singularity occurring in the near past of our Universe at this specific redshift. Although our proposal might be a far-future proposal, the current scientific achievements are encouraging. 
Indeed, observations of high-redshift supermassive black holes already exist in the literature \cite{Mortlock:2011va}. Also, supermassive black holes at large redshifts will in general have larger shadows \cite{Bisnovatyi-Kogan:2018vxl}. The James Webb Space Telescope could also reveal some properties of supermassive black holes at large distances. So perhaps in the coming decades, scientists will be able to pinpoint differences in the shadows of supermassive black holes at redshifts $z\leq 0.01$. The techniques for obtaining the shadows of black holes are continuously being refined \cite{Younsi:2016azx,Abdujabbarov:2015xqa}, and theoretical aspects are also being further studied \cite{Addazi:2021pty}. Finally, it would be interesting to study the effects of a non-flat FRW expanding background on the McVittie metric in the case of a pressure singularity. The analysis we performed in this paper should be repeated with non-zero spatial curvature. For astrophysical black holes, for which the mass of the black hole or the radius of the black hole is smaller than the radius of curvature, the effects of the curvature are negligible. However, for cosmological supermassive black holes, the effects of a non-flat expanding background should be significant. Thus it would be interesting to repeat the present study in the non-flat FRW case. In conclusion, with this work we showed that, although the two shadows observed so far, those of M87 \cite{EventHorizonTelescope:2019dse} and Sagittarius A$^*$ \cite{EventHorizonTelescope:2022xnr}, look like two identical ``donuts'', the occurrence of a pressure singularity $70-150\,$Myrs in the past might affect the shadows of black holes at redshifts $z\leq 0.01$. Thus, although one expects to see similar ``donuts'' in the sky, if some ``donuts'' are different, this might be due to a pressure singularity, an effect which would simultaneously solve the $H_0$-tension problem. 
It is furthermore interesting to note the perspective of pinpointing the modified gravity which may generate both the McVittie solutions and the past finite-time singularity; the most general such theory is Horndeski theory, and recently McVittie solutions were considered in the context of Horndeski gravity in \cite{Miranda:2022brj}. \section*{Acknowledgments} This work was supported by MINECO (Spain), project PID2019-104397GB-I00 (S.D.O). This work by S.D.O was also partially supported by the program Unidad de Excelencia Maria de Maeztu CEX2020-001058-M, Spain.
Title: Migratory Outbursting Quasi-Hilda Object 282P/(323137) 2003 BM80
Abstract: We report that object 282P/(323137) 2003 BM80 is undergoing a sustained activity outburst, lasting over 15 months thus far. These findings stem in part from our NASA Partner Citizen Science project Active Asteroids (this http URL), which we introduce here. We acquired new observations of 282P via our observing campaign (Vatican Advanced Technology Telescope, Lowell Discovery Telescope, and the Gemini South telescope), confirming 282P was active on UT 2022 June 7, some 15 months after 2021 March images showed activity in the 2021/2022 epoch. We classify 282P as a member of the Quasi-Hilda Objects, a group of dynamically unstable objects found in an orbital region similar to that of the Hilda asteroids (objects in 3:2 resonance with Jupiter), but distinct from them in their dynamical characteristics. Our dynamical simulations show 282P has undergone at least five close encounters with Jupiter and one with Saturn over the last 180 years. 282P was most likely a Centaur or Jupiter Family Comet (JFC) 250 years ago. In 350 years, following some 15 strong Jovian interactions, 282P will most likely migrate to become a JFC or, less likely, a main-belt asteroid. These migrations highlight a dynamical pathway connecting Centaurs and JFCs with Quasi-Hildas and, potentially, active asteroids. Synthesizing these results with our thermodynamical modeling and new activity observations, we find volatile sublimation is the primary activity mechanism. Observations of a quiescent 282P, which we anticipate will be possible in 2023, will help confirm our hypothesis by measuring a rotation period and ascertaining spectral type.
https://export.arxiv.org/pdf/2208.08592
\title{Migratory Outbursting Quasi-Hilda Object 282P/(323137) 2003 BM80}% \correspondingauthor{Colin Orion Chandler} \email{orion@nau.edu} \author[0000-0001-7335-1715]{Colin Orion Chandler} \affiliation{Department of Astronomy and Planetary Science, Northern Arizona University, PO Box 6010, Flagstaff, AZ 86011, USA} \author[0000-0001-5750-4953]{William J. Oldroyd} \affiliation{Department of Astronomy and Planetary Science, Northern Arizona University, PO Box 6010, Flagstaff, AZ 86011, USA} \author[0000-0001-9859-0894]{Chadwick A. Trujillo} \affiliation{Department of Astronomy and Planetary Science, Northern Arizona University, PO Box 6010, Flagstaff, AZ 86011, USA} \keywords{minor planets, Quasi-Hildas: individual (282P), comets: individual (282P)} \blfootnote{Based on observations obtained at the international Gemini Observatory, a program of NSF’s NOIRLab, which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation on behalf of the Gemini Observatory partnership: the National Science Foundation (United States), National Research Council (Canada), Agencia Nacional de Investigaci\'{o}n y Desarrollo (Chile), Ministerio de Ciencia, Tecnolog\'{i}a e Innovaci\'{o}n (Argentina), Minist\'{e}rio da Ci\^{e}ncia, Tecnologia, Inova\c{c}\~{o}es e Comunica\c{c}\~{o}es (Brazil), and Korea Astronomy and Space Science Institute (Republic of Korea).} \blfootnote{Magellan telescope time was granted by \acs{NSF}’s \acs{NOIRLab}, through the \ac{TSIP}. \ac{TSIP} was funded by \ac{NSF}.} \section{Introduction} \label{282P:introduction} Volatiles are vital to life as we know it and are critically important to future space exploration, yet basic knowledge about where volatiles (e.g., H$_2$O, CO, CH$_4$) are located within our own solar system is still incomplete. Moreover, the origin of solar system volatiles, including terrestrial water, remains inconclusive. 
Investigating sublimation-driven active solar system bodies can help answer these questions \citep{hsiehPopulationCometsMain2006,jewittAsteroidCometContinuum2022}. We define volatile reservoirs as a dynamical class of minor planet that harbors volatile species, such as water ice. Comets have long been known to contain volatiles, but other important reservoirs are coming to light, such as the active asteroids -- objects on orbits normally associated with asteroids, such as those found in the main-belt, that surprisingly display cometary features such as tails and/or comae \citep{jewittActiveAsteroids2015a}. Fewer than 30 active asteroids have been discovered \citep{chandlerSAFARISearchingAsteroids2018} since the first, (4015)~Wilson-Harrington, was discovered in 1949 \citep{cunninghamPeriodicCometWilsonHarrington1950}, and, as a result, they remain poorly understood. One scientifically important subset of active asteroids consists of members that display recurrent activity attributed to sublimation: the \acp{MBC} \citep{hsiehMainbeltCometsPanSTARRS12015}. An important diagnostic indicator of sublimating volatiles, like water ice, is recurrent activity near perihelion \citep{hsiehOpticalDynamicalCharacterization2012,snodgrassMainBeltComets2017}, a feature common to the \acp{MBC} \citep{hsiehMainbeltCometsPanSTARRS12015,agarwalBinaryMainbeltComet2017,hsieh2016ReactivationsMainbelt2018}. Fewer than 10 recurrently active \acp{MBC} have been discovered (though others exhibit activity attributed to sublimation), and as a result we know very little about this population. Another potential volatile reservoir, active Centaurs, came to light after comet 29P/Schwassmann-Wachmann 1 \citep{schwassmannNEWCOMET1927} was identified as a Centaur following the 1977 discovery of (2060)~Chiron \citep{kowalSlowMovingObjectKowal1977}. 
Centaurs, found between the orbits of Jupiter and Neptune, are cold objects thought to primarily originate in the Kuiper Belt prior to migrating to their current orbits (see review, \citealt{jewittActiveCentaurs2009}). The dynamical properties of these objects are discussed in Section \ref{282P:sec:dynamicalClassification}. % Fewer than 20 active Centaurs have been discovered to date, thus they, like the active asteroids, are both rare and poorly understood. In order to enable the study of active objects in populations not typically associated with activity (e.g., \acp{NEO}, main-belt asteroids), we created a Citizen Science project designed to identify roughly 100 active objects via volunteer identification of activity in images of known minor planets. The Citizen Science paradigm involves crowdsourcing tasks that are as yet too complex for computers to perform, while also carrying out an outreach program that engages the public in a scientific endeavor. Launched in Fall 2021, our \ac{NSF}-funded, \acs{NASA} partner program \textit{Active Asteroids}\footnote{\url{http://activeasteroids.net}} immediately began yielding results. \objnameFull{}, hereafter \objname{}, was originally discovered as 2003~BM$_{80}$ on UT 2003 Jan 31 by Brian Skiff of the \ac{LONEOS} survey, and independently as 2003~FV$_{112}$ by \ac{LINEAR} on UT 2003 Apr 18. \objname{} was identified as active during its 2012--2013 epoch (centered on its perihelion passage) in 2013 \citep{bolinComet2003BM2013}. % Here, we introduce an additional activity epoch, spanning 2021--2022. 
In this work we (1) present our \ac{NASA} Partner Citizen Science project \textit{Active Asteroids}, (2) describe how volunteers identified activity that led to our investigation into \objname{}, (3) present (a) archival images and (b) new observations of \objname{} that show it has undergone periods of activity during at least two epochs (2012--2013 and 2021--2022) spanning consecutive perihelion passages, (4) classify \objname{} as a \ac{QHO}, (5) explore the migratory nature of this object through dynamical modeling, including identification of a dynamical pathway between \acp{QHO} and active asteroids, and (6) determine volatile sublimation as the most probable activity mechanism. \section{Citizen Science} \label{282P:subsec:citsci} We prepared thumbnail images (e.g., Figure \ref{282P:fig:282P}a) for examination by volunteers of our NASA Partner Citizen Science project \textit{Active Asteroids}, hosted on the Zooniverse\footnote{\url{https://www.zooniverse.org}} online Citizen Science platform. First we extract thumbnail images from publicly available pre-calibrated \ac{DECam} archival images using a pipeline, \ac{HARVEST}, first described in \cite{chandlerSAFARISearchingAsteroids2018} and expanded upon in \cite{chandlerSixYearsSustained2019,chandlerCometaryActivityDiscovered2020a,chandlerRecurrentActivityActive2021}. Each 126\arcsec$\times$126\arcsec\ thumbnail image shows one known minor planet at the center of the frame. 
We optimize the Citizen Science process by automatically excluding thumbnail images based on specific criteria, for example when (a) the image depth is insufficient for detecting activity, (b) no source was detected in the thumbnail center, and (c) too many sources were in the thumbnail to allow for reliable target identification; see \cite{chandlerChasingTailsActive2022} for an in-depth description.% Our workflow is simple: we show volunteers an image of a known minor planet and ask whether or not they see evidence of activity (like a tail or coma) coming from the object at the center of the image, as marked by a reticle (Figure \ref{282P:fig:282P}a). Each thumbnail is examined by at least 15 volunteers to minimize volunteer bias. % To help train volunteers and validate that the project is working as intended, we created a training set of thumbnail images that we positively identified as showing activity, consisting of comets and other active objects, such as active asteroids. Training images are injected at random, though the interval of injection decays over time so that experienced volunteers only see a training image 5\% of the time.% We take the ratio of ``positive for activity'' classifications to the total number of classifications the object received as a score to estimate the likelihood of the object being active. Members of the science team visually examine all images with a likelihood score of $\ge$80\% and flag candidates that warrant archival image investigation and telescope follow-up (Section \ref{282P:sec:observations}). We also learn of activity candidates through Zooniverse forums where users interact with each other, moderators, and our science team. Volunteers can share images they find interesting, which has, in turn, led us directly to discoveries.% As of this writing, over 6,600 volunteers have participated in \textit{Active Asteroids}. 
They have conducted over 2.8$\times10^6$ classifications, completing assessment of over 171,000 thumbnail images. One image of \objname{} from UT 2021 March 14 (Figure \ref{282P:fig:282P}a) received a score of 93\% after 14 of 15 volunteers classified the thumbnail as showing activity. A second image from UT 2021 March 17 (Figure \ref{282P:fig:282P}h) was classified as active by 15 of 15 volunteers, providing additional strong evidence of activity in 2021 March. \section{Observations} \label{282P:sec:observations} \subsection{Archival Data} \label{282P:susbec:archivalData} For each candidate active object stemming from \textit{Active Asteroids} we conduct an archival data investigation, following the procedure described in \cite{chandlerRecurrentActivityActive2021}. For this task, we query public astronomical image archives and identify images which may show \objname{} in the \ac{FOV}. We download the data, extract thumbnail images centered on \objname{}, and visually examine all images to search for evidence of activity. After visually inspecting $>400$ thumbnail images we found 57 images (listed in Section \ref{282P:sec:observationsTable}) in which we could confidently identify \objname{} in the frame. The remaining images either did not probe deep enough, did not actually capture \objname{} (e.g., \objname{} was not on a detector), or suffered from image artifacts that made them unsuitable for activity detection. The 57 images span 22 observing dates; nine dates had at least one image that we ascertained showed probable activity, five from the 2012--2013 epoch and four from the 2021--2022 apparition. % Section \ref{282P:sec:observations} provides a complete listing of observations used in this work. Figure \ref{282P:fig:ActivityTimeline} shows three plots with shared $x$-axes (years). 
Apparent magnitude and observability (the number of hours an object is above the horizon while the Sun is below the horizon) together provide insight into potential observational biases. For example, conditions for detecting activity are ideal when \objname{} is brightest, near perihelion, and observable for many hours in an observing night. When contrasting hemispheres, this plot makes it clear that some periods (e.g., 2016 -- 2020) are more favorable for observations in the northern hemisphere, whereas other observation windows (e.g., 2013 -- 2015, 2022) are better suited to southern hemisphere facilities. \subsection{Follow-up Telescope Observations} \label{282P:subsec:telescopeobservations} \paragraph{Magellan} During twilight on UT 2022 March 7 we observed \objname{} with the \ac{IMACS} instrument \citep{dresslerIMACSInamoriMagellanAreal2011} on the Magellan 6.5~m Baade telescope located atop Las Campanas Observatory (Chile). We successfully identified \objname{} in the images; however, \objname{} was in front of a dense part of the Milky Way,% preventing us from unambiguously identifying activity. We used these observations to inform our Gemini \ac{SNR} calculations. \paragraph{VATT} On UT 2022 April 6 we observed \objname{} with the 1.8~m \ac{VATT} at the \ac{MGIO} in Arizona (Proposal ID S165, \ac{PI} Chandler). \objname{} was in an especially dense part of the galaxy, so we conducted test observations to assess the viability of activity detection under these conditions. We concluded object detection would be challenging and activity detection essentially impossible in such a dense field. \paragraph{LDT} On UT 2022 May 21 we observed \objname{} with the \ac{LDT} in Arizona (PI: Chandler). Finding charts indicated \objname{} was in a less dense field compared to our \ac{VATT} observations; however, we were barely able to resolve \objname{} or identify any activity because the field was still too crowded. 
\paragraph{Gemini South} On UT 2022 June 7 we observed \objname{} with the \ac{GMOS} South instrument \citep{hookGeminiNorthMultiObjectSpectrograph2004,gimenoOnskyCommissioningHamamatsu2016} on the 8.1~m Gemini South telescope located atop Cerro Pachón in Chile (Proposal ID GS-2022A-DD-103, \acs{PI} Chandler). We timed this observation to take place during a $\sim$10 day window when \objname{} was passing in front of a less dense region of the Milky Way. We acquired eighteen images, six each in $g'$, $r'$, and $i'$. Activity was clearly visible in the reduced data in all filters, with activity appearing strongest in $g'$ (Figure \ref{282P:fig:282P}d). Our observations confirmed \objname{} was still active, 15 months after the 2021 archival data, evidence supporting sublimation as the most likely cause for activity (Section \ref{282P:sec:mechanism}). \section{Dynamical Modeling} \label{282P:subsec:dynamicalmodeling} We analyzed \objname{} orbital characteristics in order to (1) determine its dynamical class (Section \ref{282P:sec:dynamicalClassification}), and (2) inform our activity mechanism assessment (Section \ref{282P:sec:mechanism}). % We simulated a cloud of 500 \objname{} orbital clones, randomly drawn from Gaussian distributions centered on the current fitted parameters of \objname{}, with widths corresponding to uncertainties of those fits (Appendix \ref{282P:sec:ObjectData} lists parameters and associated uncertainties), as reported by \acs{JPL} Horizons \citep{giorginiJPLOnLineSolar1996}. We modeled the gravitational influence of the Sun and the planets (except Mercury) on each orbital clone using the \texttt{\ac{IAS15}} N-body integrator \citep{reinIAS15FastAdaptive2015}, typically accurate to machine precision, with the \texttt{REBOUND} \texttt{Python} package\footnote{\url{https://github.com/hannorein/rebound}} \citep{reinREBOUNDOpensourceMultipurpose2012,reinHybridSymplecticIntegrators2019}. 
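The clone-generation step described above can be sketched as follows. This is a minimal illustration: the element values and uncertainties below are placeholders (the paper draws the actual values and 1-$\sigma$ fit uncertainties from JPL Horizons), and the subsequent integration would be handed to REBOUND's IAS15 integrator rather than performed here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Nominal orbital elements and 1-sigma fit uncertainties.
# NOTE: placeholder values for illustration only, not the fitted elements.
elements = {"a": 4.24, "e": 0.188, "i": 5.8}   # semi-major axis [au], ecc., incl. [deg]
sigmas   = {"a": 1e-6, "e": 1e-7, "i": 1e-5}

# Draw 500 orbital clones from Gaussians centered on the nominal fit.
n_clones = 500
clones = {key: rng.normal(elements[key], sigmas[key], n_clones)
          for key in elements}

# Each clone would then be added to a REBOUND simulation containing the
# Sun and the planets (except Mercury) and integrated with IAS15, e.g.:
#   sim = rebound.Simulation(); sim.integrator = "ias15"
```

The Gaussian draws guarantee that the clone cloud samples the orbit-fit uncertainty region, so the spread of outcomes in the integrations traces the onset of dynamical chaos.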
We ran simulations 1,000 years forward and backward through time. Longer integrations were unnecessary because dynamical chaos ensues prior to $\sim$200 years ago and after $\sim$350 years into the future, thus no meaningful orbital elements can be derived outside of this window.% Results from the dynamical evolution of the \objname{} orbital clones are shown in Figure \ref{282P:fig:orbitevolution1} and Figure \ref{282P:fig:orbitevolution2}. For all plots, time $t=0$ corresponds to \ac{JD} 2459600.5 (UT 2022 Jan 21) and time ranges from $t=-250$ to $t=+350$ (1772--2372 AD). Horizontal lines at distances of one, three, and five Hill radii (Equation \ref{282P:eq:rH}) from Jupiter and Saturn are shown in Figure \ref{282P:fig:orbitevolution2} panels a and b. The Hill Radius \citep{hillResearchesLunarTheory1878} $r_H$ is a metric of orbital stability and indicates the region where a secondary body (e.g., a planet) has dominant gravitational influence over a tertiary body (e.g., a moon), with both related to a primary body, such as the Sun. At pericenter, the Hill radius of the secondary body can be approximated as \begin{equation} r_\mathrm{H} \approx a(1-e)(m/3M)^{1/3}, \label{282P:eq:rH} \end{equation} \noindent where $a$, $e$, and $m$ are the semi-major axis, eccentricity and mass of the secondary (Jupiter or Saturn in our case), respectively, and $M$ is the mass of the primary (here, the Sun). Close passages of a small body within a few Hill radii of a planet are generally considered to be significant perturbations and may drastically alter the orbit of the small body (see \citealt{hamiltonOrbitalStabilityZones1992} Section 2.1.2 for discussion). % From $\sim$180 years ago until $\sim$300 years in the future, the orbit of \objname{} is well-constrained in our simulations. Figure \ref{282P:fig:orbitevolution2}a illustrates that \objname{} has roughly 10 close encounters (within $\sim$2 au) with Jupiter, and one with Saturn, over the range $-250<t<350$ yr. 
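The pericentric Hill radius defined above is straightforward to evaluate; as a quick sketch (the helper function and the rounded input values are ours), Jupiter's value comes out to roughly 0.34 au.

```python
def hill_radius(a_au, e, mass_ratio):
    """Pericentric Hill radius r_H ~ a (1 - e) (m / 3M)^(1/3).

    a_au: semi-major axis of the secondary [au]; e: its eccentricity;
    mass_ratio: m/M, secondary-to-primary mass ratio.
    """
    return a_au * (1.0 - e) * (mass_ratio / 3.0) ** (1.0 / 3.0)

# Jupiter about the Sun: a ~ 5.204 au, e ~ 0.049, m/M ~ 9.54e-4
r_H_jupiter = hill_radius(5.204, 0.049, 9.54e-4)   # ~0.34 au
```

A close encounter at a few tenths of an au from Jupiter is therefore within roughly one Hill radius, the regime in which the planet's gravity dominates the small body's motion.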
These encounters have a strong effect on the semi-major axis $a$ of \objname{} (Figure \ref{282P:fig:orbitevolution1}b), and, as illustrated by Figure \ref{282P:fig:orbitevolution2}d, a noticeable influence on its Tisserand parameter with respect to Jupiter, $T_\mathrm{J}$, \begin{equation} T_\mathrm{J} = \frac{a_\mathrm{J}}{a} + 2\cos(i)\sqrt{\frac{a}{a_\mathrm{J}}\left(1-e^2\right)}, \label{282P:eq:TJ} \end{equation} \noindent where $a_\mathrm{J}$ is the semi-major axis of Jupiter and $a$, $e$, and $i$ are the semi-major axis, eccentricity, and inclination of the body, respectively. $T_\mathrm{J}$ essentially describes an object's close-approach speed to Jupiter or, in effect, the degree of dynamical perturbation an object will experience as a consequence of Jovian influence. $T_\mathrm{J}$ is often described as invariant \citep{kresakJacobianIntegralClassificational1972} or conserved, meaning that changes in orbital parameters still result in the same $T_\mathrm{J}$, although, in practice, its value does change slightly as a result of close encounters (see Figure \ref{282P:fig:orbitevolution2}d). Due to the small Jupiter-centric distances of \objname{} during these encounters, compounded by its orbital uncertainties, the past orbit of \objname{} (prior to $t\approx-180$ yr) is chaotic. This dynamical chaos is plainly evident in all panels as orbital clones take a multitude of paths within the parameter space, resulting in a broad range of possible orbital outcomes due only to slight variations in initial \objname{} orbital parameters. A consequential encounter with Saturn occurred around 1838 ($t\approx-184$~yr; Figure \ref{282P:fig:orbitevolution2}b), followed by another interaction with Jupiter in 1846 ($t=-176$ yr; Figure \ref{282P:fig:orbitevolution2}a).
After these encounters, \objname{} was a \ac{JFC} (100\% of orbital clones) with a semi-major axis between Jupiter's and Saturn's semi-major axes (Figure \ref{282P:fig:orbitevolution1}b), and crossing the orbits of both planets (Figure \ref{282P:fig:orbitevolution2}c). These highly perturbative passages placed \objname{} on the path that would lead to its current Quasi-Hilda orbit. In 1940 ($t=-82$~yr), \objname{} had a very close encounter with Jupiter, at a distance of 0.3~au -- interior to one Hill radius. As seen in Figure \ref{282P:fig:orbitevolution1}a, this encounter dramatically altered \objname{}'s orbit, shifting \objname{} from an orbit primarily exterior to Jupiter to an orbit largely interior to Jupiter (Figure \ref{282P:fig:orbitevolution1}b). This same interaction also caused \objname{}'s orbit to migrate from Jupiter- and Saturn-crossing to only a Jupiter-crossing orbit (Figure \ref{282P:fig:orbitevolution2}c). This step in the orbital evolution of \objname{} also changed its $T_\mathrm{J}$ (Figure \ref{282P:fig:orbitevolution2}d) to be close to the traditional $T_\mathrm{J}=3$ comet--asteroid dynamical boundary. At this point in time, \objname{} remained a \ac{JFC} (100\% of orbital clones) despite its dramatic change in orbit. Around $t\approx200$ yr, \objname{} crosses the $T_\mathrm{J}=3$ boundary dividing the \acp{JFC} and the asteroids on the order of 10 times. Although no major changes in the orbit of \objname{} occur during this time, because this boundary is a strict cutoff, relatively minor perturbations result in oscillation between dynamical classes. After a major encounter with Jupiter around 2330 AD ($t\approx308$ yr), dynamical chaos again becomes dominant and remains so for the rest of the simulation. Following this encounter, the orbit of \objname{} does not converge around any single solution.
Slight diffusion following the previous several Jupiter passages is also visible in Figure \ref{282P:fig:orbitevolution1}b-d and Figure \ref{282P:fig:orbitevolution2}a-d, and this also adds uncertainty concerning encounters around 2301 to 2306 ($t\approx280$ to $285$ yr). Although we are unable to precisely determine past and future orbits of \objname{} outside of $-180\lesssim t\lesssim300$ because of dynamical chaos, we are able to examine the fraction of orbital clones that finish the simulation (forwards and backwards) on orbits associated with different orbital classes. \section{Dynamical Classifications: Past, Present and Future} \label{282P:sec:dynamicalClassification} Minor planets are often classified dynamically, based on orbital characteristics such as semi-major axis. \objname{} was labeled a \ac{JFC} by \cite{hsiehMainbeltCometsPanSTARRS12015}, in agreement with a widely adopted system that classifies objects dynamically based on their Tisserand parameter with respect to Jupiter, $T_\mathrm{J}$ (Equation \ref{282P:eq:TJ}). Via Equation \ref{282P:eq:TJ}, Jupiter's $T_\mathrm{J}$ is 2.998 given $a_\mathrm{J}=5.20$, $e_\mathrm{J}=0.049$, and $i_\mathrm{J}=0.013$. Notably, objects with $T_\mathrm{J}>3$ cannot cross the Jovian orbit, thus their orbits are entirely interior or exterior to Jupiter's orbit \citep{levisonCometTaxonomy1996}. Objects with $T_\mathrm{J}<3$ are considered cometary \citep{levisonCometTaxonomy1996}, while those with $T_\mathrm{J}>3$ are not \citep{vaghiOriginJupiterFamily1973,vaghiOrbitalEvolutionComets1973}, a classification approach first suggested by \cite{carusiHighOrderLibrationsHalleyType1987,carusiCometTaxonomy1996}. \acp{JFC} have $2<T_\mathrm{J}<3$ (see e.g., \citealt{jewittActiveCentaurs2009}), and Damocloids have $T_\mathrm{J}<2$ \citep{jewittFirstLookDamocloids2005}. We note, however, that the traditional $T_\mathrm{J}$ asteroid -- \ac{JFC} -- Damocloid continuum does not include (or exclude) \acp{QHO}.
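Equation \ref{282P:eq:TJ} and the class boundaries above can be combined into a short sketch. This is a simplified illustration only: it ignores the resonance and orbit-crossing caveats discussed in the text, and, as noted above, $T_\mathrm{J}$ alone cannot separate out Quasi-Hildas. The function names and the approximate $a_\mathrm{J}$ value are our own choices.

```python
from math import cos, radians, sqrt

A_JUP = 5.2038  # Jupiter's semi-major axis [au] (approximate)

def tisserand_jupiter(a, e, inc_deg):
    """Tisserand parameter with respect to Jupiter (Equation above)."""
    return A_JUP / a + 2.0 * cos(radians(inc_deg)) * sqrt((a / A_JUP) * (1.0 - e * e))

def coarse_class(t_j):
    """Coarse T_J-based dynamical class; Quasi-Hildas are not separable this way."""
    if t_j > 3.0:
        return "asteroid"
    if t_j > 2.0:
        return "JFC"
    return "Damocloid"

# With the elements quoted in the text (a = 4.240 au, e = 0.188, i = 5.8 deg),
# this reproduces the T_J ~ 2.991 value computed for 282P, just below T_J = 3.
t_j_282p = tisserand_jupiter(4.240, 0.188, 5.8)
```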
As discussed in Section \ref{282P:introduction}, we adopt the \cite{jewittActiveCentaurs2009} definition of Centaur, which stipulates that a Centaur has an orbit entirely exterior to Jupiter, with both $q$ and $a$ interior to Neptune, and the body is not in 1:1 resonance with a planet. \objname{} has a semi-major axis $a=4.240$~au, well interior to Jupiter's $a_\mathrm{J}=5.2$~au. This disqualifies \objname{} from presently being on a Centaurian orbit. Active objects other than comets orbiting interior to Jupiter are primarily the active asteroids, defined as (1) having $T_\mathrm{J}>3$, (2) displaying comet-like activity, and (3) orbiting outside of mean-motion resonance with any of the planets. This last stipulation rules out the Jupiter Trojans (1:1 resonance) and the Hildas (3:2 resonance with Jupiter), even though both classes have members above and below the $T_\mathrm{J}=3.0$ asteroid--comet transition line. We compute $T_\mathrm{J}=2.99136891\pm(3.73\times10^{-8})$ for \objname{} (see Appendix \ref{282P:sec:ObjectData} for a list of orbital parameters). This value does not exceed the traditional $T_\mathrm{J}=3$ cutoff; thus \objname{} cannot be considered an active asteroid in its current orbit. \acp{MBC} are an active asteroid subset defined as orbiting entirely within the main asteroid belt \citep{hsiehMainbeltCometsPanSTARRS12015}. Figure \ref{282P:fig:orbitevolution2}c shows that \objname{}'s heliocentric distance does not stay within the boundaries of the asteroid belt (i.e., between the orbits of Mars and Jupiter), and so \objname{} does not qualify as a \ac{MBC}. Blurring the lines between \ac{JFC} and Hilda is the Quasi-Hilda regime. A Quasi-Hilda, also referred to as a \ac{QHO}, \ac{QHA} \citep{jewittOutburstingQuasiHildaAsteroid2020}, or \ac{QHC}, is a minor planet on an orbit similar to a Hilda \citep{tothQuasiHildaSubgroupEcliptic2006,gil-huttonCometCandidatesQuasiHilda2016}.
Hildas are defined by their 3:2 interior mean-motion resonance with Jupiter; Quasi-Hildas are not in this resonance, though they do orbit near it. Quasi-Hildas likely migrated from the \ac{JFC} region (see discussion, \citealt{jewittOutburstingQuasiHildaAsteroid2020}). We favor the term \ac{QHO} or \ac{QHA} over \ac{QHC}, given that fewer than 15 of the $>270$ identified Quasi-Hildas \citep{gil-huttonCometCandidatesQuasiHilda2016} have been found to be active. A notable member of the Quasi-Hilda class is 39P/Oterma \citep{otermaNEWCOMETOTERMA1942}, which was a Quasi-Hilda until 1963, when a very close (0.095~au) encounter with Jupiter redirected it onto a Centaurian orbit. Another notable Quasi-Hilda was D/Shoemaker-Levy~9, which famously broke apart and impacted Jupiter in 1994 (e.g., \citealt{weaverHubbleSpaceTelescope1995}). Quasi-Hildas have orbital parameters similar to those of the Hildas, approximately $3.7 \lesssim a \lesssim 4.2$~au, $e\le0.3$, and $i\le20\degr$. In rough agreement, \objname{} has $a=4.24$~au, $e=0.188$, and $i=5.8\degr$ (Appendix \ref{282P:sec:ObjectData}). Hildas are also known for their trilobal orbits as viewed in the Jupiter corotating frame (caused by their residence in the 3:2 interior mean-motion resonance with Jupiter), especially the namesake asteroid (153)~Hilda (Figure \ref{282P:fig:corotatingFrame}d). Because (153)~Hilda is in a stable 3:2 resonant orbit with Jupiter, its orbit remains roughly constant, with a small amount of libration over time. By contrast, Quasi-Hildas like 246P/\acs{NEAT} (Figure \ref{282P:fig:corotatingFrame}e) are near the same resonance and show signs of this characteristic trilobal pattern; however, their orbits drift considerably on timescales of hundreds of years.
\objname{} (Figure \ref{282P:fig:corotatingFrame}f) also displays a typical Quasi-Hilda orbit as viewed in the Jupiter corotating reference frame. In the past, prior to 250~yr ago, 52\% (260) of the 500 orbital clones were \acp{JFC}, 48\% (239) were Centaurs, 5\% (26) were already \acp{QHO}, and one (0.2\%) was an \ac{OMBA}. The most probable scenario prior to 250 years ago was that \objname{} was either a \ac{JFC} or a Centaur, both classes that trace their origins to the Kuiper Belt (see reviews, \citealt{morbidelliKuiperBeltFormation2020} and \citealt{jewittActiveCentaurs2009}, respectively). In the future, after 350 years, 81\% (403) of clones become \acp{JFC}, 18\% (90) remain \acp{QHO}, 14\% (69) become \acp{OMBA}, and 5.6\% (28) return to Centaurian orbits. Clearly the most likely scenario is that \objname{} will become a \ac{JFC}; however, there are still significant possibilities that \objname{} remains a \ac{QHO} or becomes an active \ac{OMBA}. \section{Thermodynamical Modeling} \label{282P:sec:thermo} We modeled the approximate temperature ranges that \objname{} experiences over the course of its present orbit in order to (1) understand what role, if any, thermal fracture may play in the activity we observe, and (2) evaluate the likelihood of ices surviving on the surface, albeit with limited scope because of the narrow window ($\sim$500 years) of dynamically well-determined orbital parameters available (Section \ref{282P:subsec:dynamicalmodeling}). Following the procedure of \cite{chandlerRecurrentActivityActive2021} (originally adapted from \citealt{hsiehMainbeltCometsPanSTARRS12015}), we compute the surface equilibrium temperature $T_\mathrm{eq}$ for \objname{} as a gray airless body.
To accomplish this we begin with the water ice sublimation energy balance equation \begin{equation} {F_{\odot}\over r_h^2}(1-A) = \chi\left[{\varepsilon\sigma T_\mathrm{eq}^4 + L f_\mathrm{D}\dot m_\mathrm{w}(T_\mathrm{eq})}\right] \label{equation:sublim1} \end{equation} \noindent with the solar constant $F_{\odot}=1360$~W~m$^{-2}$, heliocentric distance of the airless body $r_h$ (au), and the body's assumed Bond albedo $A=0.05$; note that the true albedo could differ significantly from this value and thus it would be helpful to measure the albedo in the future when \objname{} is inactive. The heat distribution over the body is accounted for by $\chi$, which is bound by the coldest temperatures via the fast-rotating isothermal approximation ($\chi=4$) and the hottest temperatures via the ``slab'' sub-solar approximation ($\chi=1$), where one side of the object always faces the Sun. The assumed effective infrared emissivity is $\varepsilon=0.9$, $\sigma$ is the Stefan--Boltzmann constant, the latent heat of sublimation of water ice (which we approximate here as being independent of temperature) is $L=2.83$~MJ~kg$^{-1}$, the mantling-induced sublimation efficiency dampening is assumed to be $f_\mathrm{D}=1$ (absence of mantle), and the sublimation-driven water ice mass-loss rate in a vacuum $\dot m_\mathrm{w}$ is given by \begin{equation} \dot m_\mathrm{w} = P_\mathrm{v}(T) \sqrt{\mu\over2\pi k T} \label{equation:sublim2} \end{equation} \noindent where the mass of one water molecule is $\mu=2.991\times 10^{-26}$~kg, $k$ is the Boltzmann constant, and the vapor pressure (in Pa) as a function of temperature $P_\mathrm{v}(T)$ is derived from the Clausius--Clapeyron relation, \begin{equation} P_\mathrm{v}(T) = 611 \times \exp\left[{{\Delta H_\mathrm{subl}\over R_g}\left({{1\over 273.16} - {1\over T}}\right)}\right] \label{equation:sublim3} \end{equation} \noindent where the heat of sublimation for ice to vapor is $\Delta H_\mathrm{subl}=51.06$~MJ~kmol$^{-1}$, and the ideal
gas constant is $R_g=8.314\times10^{-3}~\mathrm{MJ~kmol^{-1}~K^{-1}}$. Solving Equations \ref{equation:sublim1} -- \ref{equation:sublim3} for the body's heliocentric distance $r_\mathrm{h}$ (in au) as a function of equilibrium temperature $T_\mathrm{eq}$ and $\chi$ yields \begin{equation} r_\mathrm{h}(T_\mathrm{eq},\chi) = \sqrt{\frac{F_\odot\left(1-A\right)\chi^{-1}}{\varepsilon\sigma T_\mathrm{eq}^4 + L f_\mathrm{D}\,\dot m_\mathrm{w}(T_\mathrm{eq})}} \label{282P:eq:teq} \end{equation} \noindent with $\dot m_\mathrm{w}(T_\mathrm{eq})$ given by Equations \ref{equation:sublim2} and \ref{equation:sublim3}. We invert Equation \ref{282P:eq:teq} to obtain the equilibrium temperature $T_\mathrm{eq}$ as a function of heliocentric distance by computing $r_\mathrm{h}$ for an array of temperatures (100~K to 300~K in this case), then fitting a univariate spline to these data with \texttt{SciPy} \citep{virtanenSciPyFundamentalAlgorithms2020} (a \texttt{Python} package). Using this model we compute temperatures for \objname{} at heliocentric distances from perihelion to aphelion. Figure \ref{282P:fig:ActivityTimeline} (bottom panel) shows the temperature evolution for the maximum and minimum solar heating distribution scenarios ($\chi=1$ and $\chi=4$, respectively) for \objname{} from 2012 through 2024. Temperatures range between roughly 175~K and 220~K for $\chi=1$, or 130~K and 160~K for $\chi=4$, with a $\sim45$~K maximum temperature variation in any one orbit. \objname{} spends some ($\chi=4$) or all ($\chi=1$) of its time with surface temperatures above 145~K. Water ice is not expected to survive above this temperature on Gyr timescales \citep{schorghoferLifetimeIceMain2008,snodgrassMainBeltComets2017}; however, we showed in Section \ref{282P:subsec:dynamicalmodeling} that, prior to $\sim80$ years ago, \objname{} had a semi-major axis of $a>6$~au, a region much colder than 145~K.
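The energy balance of Equations \ref{equation:sublim1}--\ref{equation:sublim3} can be sketched as below. This is a minimal illustration using the constants quoted in the text; it inverts $r_\mathrm{h}(T_\mathrm{eq})$ by simple bisection rather than the spline fit described above, and the temperatures it returns are indicative only (the sublimation cooling term makes them somewhat lower than pure gray-body values).

```python
import math

# Constants as quoted in the text
F_SUN  = 1360.0       # solar constant [W m^-2]
ALBEDO = 0.05         # assumed Bond albedo
EPS    = 0.9          # assumed infrared emissivity
SIGMA  = 5.670e-8     # Stefan-Boltzmann constant [W m^-2 K^-4]
L_SUBL = 2.83e6       # latent heat of water-ice sublimation [J kg^-1]
F_D    = 1.0          # sublimation efficiency (no mantle)
MU_H2O = 2.991e-26    # mass of one water molecule [kg]
K_B    = 1.381e-23    # Boltzmann constant [J K^-1]
DH_OVER_R = 51.06 / 8.314e-3  # Delta-H_subl / R_g [K]

def mdot_w(T):
    """Vacuum sublimation mass flux of water ice [kg m^-2 s^-1]."""
    p_v = 611.0 * math.exp(DH_OVER_R * (1.0 / 273.16 - 1.0 / T))
    return p_v * math.sqrt(MU_H2O / (2.0 * math.pi * K_B * T))

def r_h_au(T, chi):
    """Heliocentric distance [au] at which the equilibrium temperature is T."""
    emitted = chi * (EPS * SIGMA * T**4 + L_SUBL * F_D * mdot_w(T))
    return math.sqrt(F_SUN * (1.0 - ALBEDO) / emitted)

def t_eq(r_au, chi, lo=40.0, hi=400.0):
    """Invert r_h_au for T by bisection; r_h_au is monotone decreasing in T."""
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if r_h_au(mid, chi) > r_au else (lo, mid)
    return 0.5 * (lo + hi)
```

Evaluating `t_eq` at the perihelion distance quoted in the text (3.441 au) for $\chi=1$ and $\chi=4$ brackets the sub-solar and isothermal heating limits.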
Even if \objname{} had spent most of its life with temperatures at the high end of our computed temperatures ($>220$~K), water ice can survive on Gyr timescales at shallow (a few cm) depths \citep{schorghoferLifetimeIceMain2008,prialnikCanIceSurvive2009}. Some bodies, such as (24)~Themis, have been found to have surface ices \citep{campinsWaterIceOrganics2010,rivkinDetectionIceOrganics2010}, suggesting that an unknown mechanism may replenish surface ice with subsurface volatiles. In this case the ice lifetimes could be greatly extended. \section{Activity Mechanism} \label{282P:sec:mechanism} Infrequent stochastic events, such as impacts (e.g., (596)~Scheila, \citealt{bodewitsCollisionalExcavationAsteroid2011,ishiguroObservationalEvidenceImpact2011,moreno596ScheilaOutburst2011}), are highly unlikely to be the activity mechanism given the multi-epoch nature of the activity we identified in this work. Moreover, it is unlikely that activity ceased during the 15 month interval between the UT 2021 March 14 archival activity observation and our UT 2022 June 7 Gemini South activity observations (Section \ref{282P:sec:observations}), when \objname{} was at heliocentric distances of $r_\mathrm{h}=3.548$~au and $r_\mathrm{h}=3.556$~au, respectively, and \objname{} was only closer to the Sun in the interim. Similarly, our archival data show activity lasted $\sim15$ months during the 2012 -- 2013 apparition. Recurrent activity is most commonly caused by volatile sublimation (e.g., 133P, \citealt{boehnhardtComet1996N21996,hsiehStrangeCase133P2004}) or rotational instability (e.g., (6478)~Gault, \citealt{kleynaSporadicActivity64782019,chandlerSixYearsSustained2019}). Rotational instability is impossible to rule out entirely for \objname{} because its rotation period is unknown.
However, (1) no activity attributed to rotational instability for any object has been observed to be continuous for as long as the 15 month episodes we report, and (2) rotational instability is not correlated with perihelion passage. It is worth noting that there are not yet many known objects with activity attributed to rotational disruption, so it is still difficult to draw firm conclusions about the behavior of those objects. In any case, it would be useful to measure a rotation period for \objname{} to help assess the potential influence of rotational instability on the observed activity of \objname{}. The taxonomic class of \objname{} is unknown, but should \objname{} be classified as a member of a desiccated spectral class (e.g., S-type), then sublimation would not likely be the underlying activity mechanism. Color measurements or spectroscopy when \objname{} is quiescent would help determine its spectral class. A caveat, however, is that many of our archival images were taken when \objname{} was significantly fainter than in the images showing activity (Figure \ref{282P:fig:ActivityTimeline}), thereby making activity detection more difficult than if \objname{} were brighter. Consequently, archival images showing \objname{} were predominantly taken near its perihelion passage. The most distant apparently quiescent image of \objname{} was taken when it was at $\sim$4~au (Figure \ref{282P:fig:ActivityTimeline}). Thus we cannot state with total certainty that \objname{} was inactive elsewhere in its orbit. Thermal fracture can cause repeated activity outbursts.
For example, (3200)~Phaethon undergoes 600~K temperature swings, peaking at 800~K -- 1100~K, exceeding the serpentine-phyllosilicate decomposition threshold of 574~K \citep{ohtsukaSolarRadiationHeatingEffects2009}, and potentially causing thermal fracture \citep{licandroNatureCometasteroidTransition2007,kasugaObservations1999YC2008}, including mass loss \citep{liRecurrentPerihelionActivity2013,huiResurrection3200Phaethon2017}. Temperatures on \objname{} reach at most $\sim220$~K (Figure \ref{282P:fig:ActivityTimeline}), with $\sim45$~K the maximum variation. Considering the relatively low temperatures and mild temperature changes, we (1) consider it unlikely that \objname{}'s activity is due to thermal fracture, and (2) note that thermal fracture is generally considered a nonviable mechanism for any objects other than \acp{NEO}. Overall, we find volatile sublimation on \objname{} the most likely activity mechanism, because (1) it is unlikely that an object originating from the Kuiper Belt such as \objname{} would be desiccated, (2) archival and new activity observations are from when \objname{} was near perihelion (Figure \ref{282P:fig:ActivityTimeline}), a characteristic diagnostic of sublimation-driven activity \citep[e.g.,][]{hsiehOpticalDynamicalCharacterization2012}, and (3) 15 months of continuous activity has not been reported for any other activity mechanism (e.g., rotational instability, impact events) to date, let alone two such epochs. \section{Summary and Future Work} \label{282P:sec:summary} This study was prompted by Citizen Scientists from the NASA Partner program \textit{Active Asteroids} classifying two images of \objname{} from 2021 March as showing activity. Two additional images from astronomers Roland Fichtl and Michael Jäger brought the total number of images (from UT 2021 March 31 and UT 2021 April 4) to four.
We conducted follow-up observations with the Gemini South 8.1~m telescope on UT 2022 June 7 and found \objname{} still active, indicating it has been active for $>15$ months during the current 2021 -- 2022 activity epoch. Our archival investigation revealed the only other known apparition, from 2012--2013, also spanned $\sim15$ months. Together, our new and archival data demonstrate \objname{} has been active during two consecutive perihelion passages, consistent with sublimation-driven activity. We conducted extensive dynamical modeling and found \objname{} has experienced a series of $\sim5$ strong interactions with Jupiter and Saturn in the past, and that \objname{} will again have close encounters with Jupiter in the near future. These interactions are so strong that dynamical chaos dominates our simulations prior to 180 years ago and beyond 350 years in the future, but we are still able to statistically quantify a probable orbital class for \objname{} prior to $-180$ yr (52\% \acp{JFC}, 48\% Centaur) and after $+350$ yr (81\% \acp{JFC}, 18\% \ac{QHO}, 14\% \ac{OMBA}). We classify present-day \objname{} as a \acf{QHO}. We carried out thermodynamical modeling that showed \objname{} experiences temperatures ranging between roughly 130~K and 220~K, too mild for thermal fracture but warm enough that surface water ice would not normally survive on timescales of the solar system lifetime. However, \objname{} arrived at its present orbit recently; prior to 1941 \objname{} was primarily exterior to Jupiter's orbit and, consequently, sufficiently cold for water ice to survive on its surface. Given that both activity apparitions (Epoch I: 2012 -- 2013 and Epoch II: 2021 -- 2022) each lasted over 15 months, and both outbursts spanned perihelion passages, we determine the activity mechanism to most likely be volatile sublimation.
Coma likely accounts for the majority of the reflected light we observe emanating from \objname{}, so it is currently infeasible to determine the color of the nucleus and, consequently, \objname{}'s spectral class (e.g., C-type, S-type). Measuring its rotation period would also help assess what (if any) role rotational instability plays in the observed activity. Specifically, a rotation period shorter than the $\sim$2 hour spin-barrier limit would indicate susceptibility to rotational breakup. Most images of \objname{} were taken when it was near perihelion passage (3.441~au), though there were observations from Epoch I that showed \objname{} clearly, without activity, when it was beyond $\sim$4~au. \objname{} is currently outbound and will again be beyond 4~au in mid-2023 and, thus, likely inactive; determining if/when \objname{} returns to a quiescent state would help bolster the case for sublimation-driven activity because activity occurring preferentially near perihelion, and a lack of activity elsewhere, is characteristic of sublimation-driven activity. \objname{} is currently observable, especially from the southern hemisphere; however, the object is passing in front of dense regions of the Milky Way until the end of 2022 November (see Lowell \texttt{AstFinder}\footnote{\url{https://asteroid.lowell.edu/astfinder/}} finding charts). Beginning UT 2022 September 26, \objname{} will spend $\sim$12 days in a less dense region of the Milky Way and will be observable in a similar fashion to our Gemini South observations (Section \ref{282P:sec:observations}), which were carefully timed for sky regions with fewer stars. As Earth progresses along its orbit, \objname{} becomes observable for less time each night through 2022 November, until UT 2022 December 26, when it becomes observable only during twilight. Observations during this window would help constrain the timeframe for periods of quiescence.
\section{Acknowledgements} \label{282P:sec:acknowledgements} The authors express their gratitude to the anonymous referee whose feedback greatly improved the quality of this work. We thank Dr.\ Mark Jesus Mendoza Magbanua of \ac{UCSF} for his frequent and timely feedback on the project. Many thanks for the helpful input from Henry Hsieh of the \ac{PSI} and David Jewitt of \ac{UCLA}. We thank the \ac{NASA} Citizen Scientists involved in this work, with special thanks to moderator Elisabeth Baeten (Belgium) and our top classifier, Michele T. Mazzucato (Florence, Italy). Thanks also to super volunteers Milton K D Bosch MD (Napa, USA), C. J. A. Dukes (Oxford, UK), Virgilio Gonano (Udine, Italy), Marvin W. Huddleston (Mesquite, USA), and Tiffany Shaw-Diaz (Dayton, USA), all of whom also classified images of \objname{}. Many thanks to additional classifiers of the three images of \objname{}: R. Banfield (Bad Tölz, Germany), @Boeuz (Penzberg, Germany), Dr. Elisabeth Chaghafi (Tübingen, Germany), Juli Fowler (Albuquerque, USA), M. M. Habram-Blanke (Heidelberg, Germany), @EEZuidema (Driezum, Netherlands), Brenna Hamilton (DePere, USA), Patricia MacMillan (Fredericksburg, USA), A. J. Raab (Seattle, USA), Angelina A. Reese (Sequim, USA), Arttu Sainio (Järvenpää, Finland), Timothy Scott (Baddeck, Canada), Ivan A. Terentev (Petrozavodsk, Russia), and Scott Virtes (Escondido, USA). Thanks also to \ac{NASA} Citizen Scientists Thorsten Eschweiler (Übach-Palenberg, Germany) and Carl Groat (Okeechobee, USA). The authors express their gratitude to Prof. Mike Gowanlock (\acs{NAU}), Jay Kueny of \ac{UA} and Lowell Observatory, and the Trilling Research Group (\acs{NAU}), all of whom provided invaluable insights which substantially enhanced this work. We thank William A. Burris (San Diego State University) for his insights into Citizen Science classifications.
The unparalleled support provided by Monsoon cluster administrator Christopher Coffey (\acs{NAU}) and the High Performance Computing Support team facilitated the scientific process. We thank Gemini Observatory Director Jennifer Lotz for granting our \ac{DDT} request for observations, German Gimeno for providing science support, and Pablo Prado for observing. Proposal ID GS-2022A-DD-103, \acs{PI} Chandler. The VATT referenced herein refers to the Vatican Observatory’s Alice P. Lennon Telescope and Thomas J. Bannan Astrophysics Facility. We are grateful to the Vatican Observatory for the generous time allocations (Proposal ID S165, \acs{PI} Chandler). We especially thank Vatican Observatory Director Br. Guy Consolmagno, S.J. for his guidance, Vice Director for Tucson Vatican Observatory Research Group Rev.~Pavel Gabor, S.J. for his frequent assistance, Astronomer and Telescope Scientist Rev. Richard P. Boyle, S.J. for patiently training us to use the \ac{VATT} and for including us in minor planet discovery observations, Chris Johnson (\ac{VATT} Facilities Management and Maintenance) for many consultations that enabled us to resume observations, Michael Franz (\acs{VATT} Instrumentation) and Summer Franks (\ac{VATT} Software Engineer) for on-site troubleshooting assistance, and Gary Gray (\ac{VATT} Facilities Management and Maintenance) for everything from telescope balance to building water support, without whom we would have been lost. This material is based upon work supported by the \acs{NSF} \ac{GRFP} under grant No.\ 2018258765. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the \acl{NSF}. The authors acknowledge support from the \acs{NASA} Solar System Observations program (grant 80NSSC19K0869, PI Hsieh) and grant 80NSSC18K1006 (PI: Trujillo). 
Computational analyses were run on Northern Arizona University's Monsoon computing cluster, funded by Arizona's \ac{TRIF}. This work was made possible in part through the State of Arizona Technology and Research Initiative Program. \acf{WCS} corrections facilitated by the \textit{Astrometry.net} software suite \citep{langAstrometryNetBlind2010}. This research has made use of data and/or services provided by the \ac{IAU}'s \ac{MPC}. This research has made use of \acs{NASA}'s Astrophysics Data System. This research has made use of The \acf{IMCCE} SkyBoT Virtual Observatory tool \citep{berthierSkyBoTNewVO2006}. This work made use of the \texttt{FTOOLS} software package hosted by the \acs{NASA} Goddard Flight Center High Energy Astrophysics Science Archive Research Center. \ac{SAO} \ac{DS9}: This research has made use of \texttt{\acs{SAO}Image\acs{DS9}}, developed by \acl{SAO} \citep{joyeNewFeaturesSAOImage2006}. \acf{WCS} validation was facilitated with Vizier catalog queries \citep{ochsenbeinVizieRDatabaseAstronomical2000} of the Gaia \ac{DR} 2 \citep{gaiacollaborationGaiaDataRelease2018} and the \acf{SDSS DR-9} \citep{ahnNinthDataRelease2012} catalogs. This work made use of AstOrb, the Lowell Observatory Asteroid Orbit Database \textit{astorbDB} \citep{bowellPublicDomainAsteroid1994,moskovitzAstorbDatabaseLowell2021}. This work made use of the \texttt{astropy} software package \citep{robitailleAstropyCommunityPython2013}. Based on observations at \ac{CTIO}, \acs{NSF}’s \acs{NOIRLab} (\acs{NOIRLab} Prop. ID 2019A-0305; \acs{PI}: A. Drlica-Wagner, \acs{NOIRLab} Prop. ID 2013A-0327; \acs{PI}: A. Rest), which is managed by the \acf{AURA} under a cooperative agreement with the \acl{NSF}. This project used data obtained with the \acf{DECam}, which was constructed by the \acf{DES} collaboration. 
Funding for the \acs{DES} Projects has been provided by the US Department of Energy, the US \acl{NSF}, the Ministry of Science and Education of Spain, the Science and Technology Facilities Council of the United Kingdom, the Higher Education Funding Council for England, the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, the Kavli Institute for Cosmological Physics at the University of Chicago, Center for Cosmology and Astro-Particle Physics at the Ohio State University, the Mitchell Institute for Fundamental Physics and Astronomy at Texas A\&M University, Financiadora de Estudos e Projetos, Fundação Carlos Chagas Filho de Amparo à Pesquisa do Estado do Rio de Janeiro, Conselho Nacional de Desenvolvimento Científico e Tecnológico and the Ministério da Ciência, Tecnologia e Inovação, the Deutsche Forschungsgemeinschaft and the Collaborating Institutions in the Dark Energy Survey. The Collaborating Institutions are Argonne National Laboratory, the University of California at Santa Cruz, the University of Cambridge, Centro de Investigaciones Enérgeticas, Medioambientales y Tecnológicas–Madrid, the University of Chicago, University College London, the \acs{DES}-Brazil Consortium, the University of Edinburgh, the Eidgenössische Technische Hochschule (ETH) Zürich, Fermi National Accelerator Laboratory, the University of Illinois at Urbana-Champaign, the Institut de Ciències de l’Espai (IEEC/CSIC), the Institut de Física d’Altes Energies, Lawrence Berkeley National Laboratory, the Ludwig-Maximilians Universität München and the associated Excellence Cluster Universe, the University of Michigan, \acs{NSF}’s \acs{NOIRLab}, the University of Nottingham, the Ohio State University, the OzDES Membership Consortium, the University of Pennsylvania, the University of Portsmouth, \ac{SLAC} National Accelerator Laboratory, Stanford University, the University of Sussex, and Texas A\&M University. 
These results made use of the \acf{LDT} at Lowell Observatory. Lowell is a private, non-profit institution dedicated to astrophysical research and public appreciation of astronomy and operates the \acs{LDT} in partnership with Boston University, the University of Maryland, the University of Toledo, \acf{NAU} and Yale University. The \acf{LMI} was built by Lowell Observatory using funds provided by the \acf{NSF} (AST-1005313). \ac{VST} OMEGACam \citep{arnaboldiVSTVLTSurvey1998,kuijkenOmegaCAM16k16k2002,kuijkenOmegaCAMESONewest2011} data were originally acquired as part of the \ac{KIDS} \citep{dejongFirstSecondData2015}. The \acs{Pan-STARRS}1 Surveys (PS1) and the PS1 public science archive have been made possible through contributions by the Institute for Astronomy, the University of Hawaii, the \acs{Pan-STARRS} Project Office, the Max-Planck Society and its participating institutes, the Max Planck Institute for Astronomy, Heidelberg and the Max Planck Institute for Extraterrestrial Physics, Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, the Queen's University Belfast, the Harvard-Smithsonian Center for Astrophysics, the \ac{LCOGT} Network Incorporated, the National Central University of Taiwan, the \acl{STScI}, the \acl{NASA} under Grant No. NNX08AR22G issued through the Planetary Science Division of the \acs{NASA} Science Mission Directorate, the \acf{NSF} Grant No. AST-1238877, the University of Maryland, \ac{ELTE}, the Los Alamos National Laboratory, and the Gordon and Betty Moore Foundation. Based on observations obtained with MegaPrime/MegaCam, a joint project of \ac{CFHT} and \ac{CEA}/\ac{DAPNIA}, at the \ac{CFHT} which is operated by the \acf{NRC} of Canada, the Institut National des Science de l'Univers of the \acf{CNRS} of France, and the University of Hawaii. The observations at the \acf{CFHT} were performed with care and respect from the summit of Maunakea which is a significant cultural and historic site. 
Magellan observations made use of the \ac{IMACS} instrument \citep{dresslerIMACSInamoriMagellanAreal2011}. This research has made use of the \acs{NASA}/\ac{IPAC} \ac{IRSA}, which is funded by the \acl{NASA} and operated by the California Institute of Technology. \vspace{5mm} \facilities{ Astro Data Archive, Blanco (DECam), CFHT (MegaCam), Gaia, Gemini-South (GMOS-S), IRSA, LDT (LMI), Magellan: Baade (TSIP), PO:1.2m (PTF, ZTF), PS1, Sloan, VATT (VATT4K), VST (OmegaCAM) } \software{{\tt astropy} \citep{robitailleAstropyCommunityPython2013}, {\tt astrometry.net} \citep{langAstrometryNetBlind2010}, {\tt FTOOLS}\footnote{\url{https://heasarc.gsfc.nasa.gov/ftools/}}, {\tt IAS15} integrator \citep{reinIAS15FastAdaptive2015}, {\tt JPL Horizons} \citep{giorginiJPLOnLineSolar1996}, {\tt Matplotlib} \citep{hunterMatplotlib2DGraphics2007}, {\tt NumPy} \citep{harrisArrayProgrammingNumPy2020}, {\tt pandas} \citep{mckinneyDataStructuresStatistical2010,rebackPandasdevPandasPandas2022}, {\tt REBOUND} \citep{reinREBOUNDOpensourceMultipurpose2012,reinHybridSymplecticIntegrators2019}, {\tt SAOImageDS9} \citep{joyeNewFeaturesSAOImage2006}, {\tt SciPy} \citep{virtanenSciPyFundamentalAlgorithms2020}, {\tt Siril}\footnote{\url{https://siril.org}}, {\tt SkyBot} \citep{berthierSkyBoTNewVO2006}, {\tt termcolor}\footnote{\url{https://pypi.org/project/termcolor}}, {\tt tqdm} \citep{costa-luisTqdmFastExtensible2022}, {\tt Vizier} \citep{ochsenbeinVizieRDatabaseAstronomical2000} } \clearpage \appendix \section{Table of Observations} \label{282P:sec:observationsTable} \begin{center} \footnotesize \begin{tabular}{cccccrccccrrc} Figure$^\mathrm{a}$ & Act.$^\mathrm{b}$ & Obs. Date$^\mathrm{c}$ & Source & $N^\mathrm{d}$ & Exp. 
[s]$^\mathrm{e}$ & Filter(s) & V$^\mathrm{f}$ & $r$ [au]$^\mathrm{g}$ & STO [$\degr$]$^\mathrm{h}$ & $\nu$ [$\degr$]$^\mathrm{i}$ & \%$_{Q\rightarrow q}^\mathrm{j}$ & Note$^\mathrm{k}$\\ \hline\hline & & 04 Feb 2011 & PS1 & 2 & 40 & $r$ & 19.7 & 4.26 & 3.3 & 258.1 & 84\% & \ref{20120224} \\ & & 16 Feb 2012 & PS1 & 1 & 45 & $i$ & 19.3 & 3.69 & 8.9 & 306.4 & 95\% & \ref{20120224} \\ & & 24 Feb 2012 & PS1 & 1,1 &43, 40 & $g$, $r$ & 19.2 & 3.68 & 6.9 & 307.6 & 95\% & \obsnote{20120224}\ref{20120224} \\ & & 26 Feb 2012 & PS1 & 2 & 40 & $r$ & 19.2 & 3.67 & 6.3 & 307.9 & 96\% & \ref{20120224}\\ \ref{282P:fig:282P}e & Y & 28 Mar 2012 & MegaPrime & 2 & 120 & $r$ & 18.9 & 3.64 & 3.2 & 312.6 & 96\% & \obsnote{20120328}\ref{20120328}\\ & Y & 05 Jul 2012 & OmegaCAM & 4 & 240 & $i$ & 20.1 & 3.54 & 16.1 & 328.0 & 98\% & \obsnote{20120705}\ref{20120705} \\ & & 14 Apr 2013 & PS1 & 2 & 45 & $i$ & 19.3 & 3.47 & 12.8 & 14.9 & 99\% & \ref{20120224}\\ % & & 22 Apr 2013 & PS1 & 2 & 30 & $z$ & 19.2 & 3.47 & 11.1 & 16.3 & 99\% & \ref{20120224}\\ \ref{282P:fig:282P}f & Y & 05 May 2013 & \acs{DECam} & 2 & 150 & $r$ & 19.5 & 3.48 & 12.9 & 318.4 & 97\% & \obsnote{20130505}\ref{20130505} \\ & Y & 15 May 2013 & PS1 & 1 & 43 & $g$ & 18.8 & 3.48 & 5.2 & 20.1 & 99\% & \ref{20120224}\\ \ref{282P:fig:282P}g & Y & 13 Jun 2013 & MegaPrime & 10 & 120 & $r$ & 18.8 & 3.50 & 4.9 & 24.8 & 99\% & \obsnote{20130613}\ref{20130613} \\ & & 03 Aug 2013 & PS1 & 2 & 80,60 & $y$, $z$ & 19.6 & 3.54 & 15.3 & 33.0 & 98\% & \ref{20120224}\\ & & 11 Jun 2014 & PS1 & 2 & 45 & $i$ & 20.0 & 3.95 & 12.5 & 78.1 & 90\% & \ref{20120224}\\ & & 14 Aug 2014 & PS1 & 3 & 45 & $i$ & 19.4 & 4.05 & 3.3 & 86.1 & 88\% & \ref{20120224}\\ & & 15 Aug 2014 & PS1 & 4 & 45 & $i$ & 19.4 & 4.04 & 3.5 & 86.2 & 88\% & \ref{20120224}\\ & & 04 Jan 2021 & \acs{ZTF} & 1 & 30 & $r$ & 20.0 & 3.63 & 15.7 & 312.5 & 96\% & \obsnote{20210104}\ref{20210104}\\ & & 07 Jan 2021 & \acs{ZTF} & 1 & 30 & $g$ & 20.0 & 3.63 & 15.7 & 312.9 & 96\% & \ref{20210104}\\ & & 
09 Jan 2021 & \acs{ZTF} & 1 & 30 & $r$ & 20.0 & 3.62 & 15.7 & 313.2 & 96\% & \ref{20210104}\\ \ref{282P:fig:282P}a & Y & 14 Mar 2021 & \acs{DECam} & 1 & 90 & $i$ & 18.9 & 3.55 & 6.1 & 323.1 & 98\% & \obsnote{20210314}\ref{20210314} \\ \ref{282P:fig:282P}h & Y & 17 Mar 2021 & \acs{DECam} & 1 & 90 & $i$ & 18.9 & 3.55 & 5.2 & 323.6 & 98\% & \ref{20210314} \\ \ref{282P:fig:282P}b & Y & 31 Mar 2021 & QHY600 & 1 & 2160 & UV/IR & 18.5 & 3.54 & 0.7 & 325.9 & 98\% & \obsnote{20210331}\ref{20210331}\\ \ref{282P:fig:282P}c & Y & 04 Apr 2021 & CDS-5D & 1 & 1500 & (none) & 18.5 & 3.54 & 0.5 & 326.4 & 98\% & \obsnote{20210404}\ref{20210404} \\ & & 07 Mar 2022 & \acs{IMACS} & 5 & 10 & WB4800-7800&20.0 & 3.48 & 16.3 & 22.3 & 99\% & \obsnote{20220307}\ref{20220307}\\ & & 21 May 2022 & \acs{LDT} & 3 & 90 & VR, $i$ & 19.1 & 3.54 & 8.3 & 34.4 & 98\% & \obsnote{20220521}\ref{20220521}\\ \ref{282P:fig:282P}d & Y & 07 Jun 2022 & \ac{GMOS}-S & 6,6,6 & 120 & $g$, $r$, $i$&18.8&3.56 & 3.8 & 37.2 & 98\% & \obsnote{20220607}\ref{20220607}\\ \end{tabular} \end{center} \noindent $^\mathrm{a}$Figure showing the image. \\ $^\mathrm{b}$Activity identified in image(s). \\ $^\mathrm{c}$UT date of observation. \\ $^\mathrm{d}$Number of images. \\ $^\mathrm{e}$Exposure time. \\ $^\mathrm{f}$Apparent $V$-band magnitude (Horizons). \\ $^\mathrm{g}$Heliocentric distance. \\ $^\mathrm{h}$Sun--target--observer angle. \\ $^\mathrm{i}$True anomaly. \\%Panel of Figure \ref{282P:fig:282P}.\\ $^\mathrm{j}$Percentage to perihelion $q$ from aphelion $Q$, defined by $\%_{T\rightarrow q} = \left(\frac{Q - r}{Q-q}\right)\cdot 100\mathrm{\%}$. \\ $^\mathrm{k}$Note number. \\ \ref{20120224}: PS1 is the \acf{Pan-STARRS} One. \ref{20120328}: Prop. ID 12AH16, \acs{PI} Wainscoat. \ref{20120705}: Prop. ID 177.A-3016(D), \acs{PI} Kuijken. \ref{20130505}: \acf{DECam}; Prop. ID 2013A-0327, \acs{PI} Rest. \ref{20130613}: Prop. ID 13AH09, \acs{PI} Wainscoat. \ref{20210104}: \acf{ZTF}; Prop. 
ID 1467501130115, \acs{PI} Kulkarni; data acquired through \acs{ZTF} Alert Stream service \citep{pattersonZwickyTransientFacility2019}. \ref{20210314}: Prop. ID 2019A-0305, \acs{PI} Drlica-Wagner. \ref{20210331}: Michael Jäger (Weißenkirchen, Austria), QHY600 \ac{CCD} on a 14'' Newtonian. % \ref{20210404}: Roland Fichtl (Engelhardsberg, Germany), Central DS brand modified cooled Canon 5D Mark III on a 0.4~m f/2.5 Newtonian; \url{http://www.dieholzhaeusler.de/Astro/comets/0282P.htm}. % \ref{20220307}: \acf{IMACS}; PI Trujillo. \ref{20220521}: \acf{IMACS}; PI Trujillo. \ref{20220607}: \acf{GMOS}; Prop. ID GS-2022A-DD-103, \acs{PI} Chandler. \\ \clearpage \section{Equipment and Archives} \label{282P:sec:equipQuickRef} \begin{center} \footnotesize \begin{tabular}{llclccccc} Instrument & Telescope & Pixel Scale & Location & \texttt{AstroArchive} & \acs{ESO} & \acs{SSOIS} & \acs{STScI} & \acs{IRSA}\\ & & [\arcsec/pix] & & & & &\\ \hline \hline \acs{DECam} & 4.0~m Blanco & 0.263 & Cerro Tololo, Chile & S,R & & S & \\ \acs{GMOS}-S & 8.1~m Gemini South & 0.080 & Cerro Pachón, Chile & &\\ \acs{IMACS} & 6.5~m Baade & 0.110 & Las Campanas, Chile & & & \\ OmegaCAM & 2.6~m \acs{VLT} Survey & 0.214 & Cerro Paranal, Chile & & R & S & \\ GigaPixel1 & 1.8 m \acs{Pan-STARRS}1 & 0.258 & Haleakalā, Hawaii & & & S & R \\ \acs{LMI} & 4.3~m \acs{LDT} & 0.120 & Happy Jack, Arizona & & & \\ MegaPrime & 3.6~m \acs{CFHT} & 0.185 & Mauna Kea, Hawaii & & & S,R & \\ \acs{PTF}/\acs{CFHT}12K & 48" Samuel Oschin & 1.010 & Mt. Palomar, California & & & & & S,R \\ \acs{ZTF} Camera & 48" Samuel Oschin & 1.012 & Mt. Palomar, California & & & & & S,R \\ \acs{VATT}4K \acs{CCD} & 1.8~m \acs{VATT} & 0.188 & Mt. Graham, Arizona & & & &\\ \end{tabular} \raggedright \footnotesize{\\ R indicates repository for data retrieval. 
S indicates search capability.\\ \texttt{AstroArchive}: \ac{NSF} \ac{NOIRLab} \texttt{AstroArchive} (\url{https://astroarchive.noirlab.edu}).\\ \ac{ESO}: \acl{ESO} (\url{https://archive.eso.org}).\\ \ac{IRSA}: \acs{NASA}/CalTech \ac{IRSA} (\url{https://irsa.ipac.caltech.edu}).\\ \acs{PTF}: The \ac{PTF}. \acs{SSOIS}: The \ac{SSOIS} (\citealt{gwynSSOSMovingObjectImage2012}, \url{https://www.cadc-ccda.hia-iha.nrc-cnrc.gc.ca/en/ssois/}).\\ \ac{STScI}: \url{https://www.stsci.edu/}. } \end{center} This Table % lists the instruments and telescopes used in this work, along with their respective pixel scales, locations, and data archives. \clearpage \section{282P/(323137) 2003 BM80 Data} % \label{282P:sec:ObjectData} We provide information current as of 2022 March 18 regarding \objnameFull{}{} below. \begin{center} \small \begin{tabular}{lll} Parameter & Value & Source\\ \hline\hline Designations & (323137), 2003~BM$_{80}$, 2003~FV$_{112}$, 282P & \acs{JPL} \acs{SBDB}, \acs{MPC}\\ Discovery Date & 2003 January 31 & \acs{JPL} \acs{SBDB}, \acs{MPC}\\ Discovery Observer(s) & \ac{LONEOS} & \acs{JPL} \acs{SBDB}, \acs{MPC}\\ Discovery Observatory & Lowell Observatory & \acs{JPL} \acs{SBDB}, \acs{MPC}\\ Discovery Site & Anderson Mesa Station, Arizona & \acs{JPL} \acs{SBDB}, \acs{MPC}\\ Discovery Site Code & 688 & \acs{MPC} \\ Activity Discovery Date & 2013 June 12 & \acs{CBET} 3559 \citep{bolinComet2003BM2013}\\ Activity Discoverer(s) & Bryce Bolin, Larry Denneau, Peter Veres & \acs{CBET} 3559 \citep{bolinComet2003BM2013}\\ Orbit Type & \acf{QHO} & this work\\ % Diameter & $D=$3.4$\pm$0.4~km & {\citet{harrisAsteroidsThermalInfrared2002}}\\ % Absolute $V$-band Magnitude & $H=13.63$ & \acs{MPC} (MPO648742)\\ Geometric Albedo & Unknown & \\ Assumed Geometric Albedo & 4\% & \cite{snodgrassSizeDistributionJupiter2011}\\ Rotation Period & Unknown & \\ Orbital Period & $P=8.732\pm(2.174\times10^{-7})$~yr & \acs{JPL} \acs{SBDB} \\ % Semi-major Axis & $a=4.240\pm(7.039\times10^{-8})$~au & 
\acs{JPL} \acs{SBDB}\\ % Perihelion Distance & $q=3.441\pm(3.468\times10^{-7})$~au & \acs{JPL} \acs{SBDB}\\ % Aphelion Distance & $Q=5.039\pm(8.366\times10^{-8})$~au & \acs{JPL} \acs{SBDB}\\ % Eccentricity & $e=0.188\pm(7.790\times10^{-8})$ & \acs{JPL} \acs{SBDB}\\ % Inclination & $i=5.812\degr\pm(1.166\degr\times10^{-5})$ & \acs{JPL} \acs{SBDB}\\ % Argument of Perihelion & $\omega=217.626\degr\pm(7.816\degr\times10^{-5})$ & \acs{JPL} \acs{SBDB}\\ % Longitude of Ascending Node & $\Omega=9.297\degr\pm(5.974\degr\times10^{-5})$ & \acs{JPL} \acs{SBDB}\\ % Mean Anomaly & $M=9.979\degr\pm(3.815\degr\times10^{-5})$ & \acs{JPL} \acs{SBDB}\\ % Tisserand Parameter w.r.t. Jupiter & $T_\mathrm{J}=2.99136891\pm\left(3.73\times10^{-8}\right)$ & this work\\ Orbital Solution Date & 2021 October 8 & \acs{JPL} \acs{SBDB}\\ \end{tabular} \raggedright\footnotesize Notes: \ac{CBET} \footnote{\url{http://www.cbat.eps.harvard.edu}}. \ac{JPL} \ac{SBDB} is the \acs{NASA} \acs{JPL} \acl{SBDB}\footnote{\url{https://ssd.jpl.nasa.gov/tools/sbdb_lookup.html}}. \acs{MPC} is the \acl{MPC}\footnote{\url{https://minorplanetcenter.net}}. 
\end{center} \clearpage \section*{Acronyms} \label{sec:acronyms} \begin{acronym} \acro{API}{Application Programming Interface} \acro{APT}{Aperture Photometry Tool} \acro{ARO}{Atmospheric Research Observatory} \acro{AstOrb}{Asteroid Orbital Elements Database} \acro{ASU}{Arizona State University} \acro{AURA}{Association of Universities for Research in Astronomy} \acro{BLT}{Barry Lutz Telescope} \acro{CADC}{Canadian Astronomy Data Centre} \acro{CASU}{Cambridge Astronomy Survey Unit} \acro{CATCH}{Comet Asteroid Telescopic Catalog Hub} \acro{CBAT}{Central Bureau for Astronomical Telegrams} \acro{CCD}{charge-coupled device} \acro{CEA}{Commissariat à l'Énergie Atomique} \acro{CBET}{Central Bureau for Electronic Telegrams} \acro{CFHT}{Canada-France-Hawaii Telescope} \acro{CNRS}{Centre National de la Recherche Scientifique} \acro{CSBN}{Committee for Small Bodies Nomenclature} \acro{CTIO}{Cerro Tololo Inter-American Observatory} \acro{DART}{Double Asteroid Redirection Test} \acro{DAPNIA}{Département d'Astrophysique, de physique des Particules, de physique Nucléaire et de l'Instrumentation Associée} \acro{DECam}{Dark Energy Camera} \acro{DES}{Dark Energy Survey} \acro{DCT}{Discovery Channel Telescope} \acro{DDT}{Director's Discretionary Time} \acro{DR}{Data Release} \acro{DS9}{Deep Space Nine} \acro{ELTE}{Eotvos Lorand University} \acro{ESA}{European Space Agency} \acro{ESO}{European Southern Observatory} \acro{ETC}{exposure time calculator} \acro{FAQ}{Frequently Asked Questions} \acro{FITS}{Flexible Image Transport System} \acro{FOV}{field of view} \acro{GEODSS}{Ground-Based Electro-Optical Deep Space Surveillance} \acro{GIF}{Graphic Interchange Format} \acro{GMOS}{Gemini Multi-Object Spectrograph} \acro{GRFP}{Graduate Research Fellowship Program} \acro{HARVEST}{Hunting for Activity in Repositories with Vetting-Enhanced Search Techniques} \acro{HSC}{Hyper Suprime-Cam} \acro{IAS15}{Integrator with Adaptive Step-size control, 15th order} \acro{IAU}{International Astronomical 
Union} \acro{IMACS}{Inamori-Magellan Areal Camera and Spectrograph} \acro{IMB}{inner Main-belt} \acro{IMCCE}{Institut de Mécanique Céleste et de Calcul des Éphémérides} \acro{INT}{Isaac Newton Telescopes} \acro{IP}{Internet Protocol} \acro{IPAC}{Infrared Processing and Analysis Center} \acro{IRSA}{Infrared Science Archive} \acro{ITC}{integration time calculator} \acro{JAXA}{Japan Aerospace Exploration Agency} \acro{JD}{Julian Date} \acro{JFC}{Jupiter Family Comet} \acro{JPL}{Jet Propulsion Laboratory} \acro{KBO}{Kuiper Belt object} \acro{KIDS}{Kilo-Degree Survey} \acro{KOA}{Keck Observatory Archive} \acro{KPNO}{Kitt Peak National Observatory} \acro{LBC}{Large Binocular Camera} \acro{LBT}{Large Binocular Telescope} \acro{LCOGT}{Las Cumbres Observatory Global Telescope} \acro{LDT}{Lowell Discovery Telescope} \acro{LINEAR}{Lincoln Near-Earth Asteroid Research} \acro{LMI}{Large Monolithic Imager} \acro{LONEOS}{Lowell Observatory Near-Earth-Object Search} \acro{LSST}{Legacy Survey of Space and Time} \acro{MBC}{Main-belt Comet} \acro{MGIO}{Mount Graham International Observatory} \acro{ML}{machine learning} \acro{MMB}{middle Main-belt} \acro{MOST}{Moving Object Search Tool} \acro{MPC}{Minor Planet Center} \acro{NASA}{National Aeronautics and Space Administration} \acro{NAU}{Northern Arizona University} \acro{NEA}{near-Earth asteroid} \acro{NEAT}{Near-Earth Asteroid Tracking} \acro{NEO}{near-Earth object} \acro{NIHTS}{Near-Infrared High-Throughput Spectrograph} \acro{NOAO}{National Optical Astronomy Observatory} \acro{NOIRLab}{National Optical and Infrared Laboratory} \acro{NONCOM}{Not Orbitally a Nominal Comet but Overtly a Minor planet}% \acro{NRC}{National Research Council} \acro{OMB}{outer Main-belt} \acro{OMBA}{outer main-belt asteroid} \acro{OSIRIS-REx}{Origins, Spectral Interpretation, Resource Identification, Security, Regolith Explorer} \acro{NSF}{National Science Foundation} \acro{PANSTARRS}{Panoramic Survey Telescope and Rapid Response System} 
\acro{Pan-STARRS}{Panoramic Survey Telescope and Rapid Response System} \acro{PI}{Principal Investigator} \acro{PNG}{Portable Network Graphics} \acro{PSI}{Planetary Science Institute} \acro{PSF}{point spread function} \acro{PTF}{Palomar Transient Factory} \acro{QHA}{Quasi-Hilda Asteroid} \acro{QHC}{Quasi-Hilda Comet} \acro{QHO}{Quasi-Hilda Object} \acro{SAFARI}{Searching Asteroids For Activity Revealing Indicators} \acro{SDSS}{Sloan Digital Sky Survey} \acro{SMOKA}{Subaru Mitaka Okayama Kiso Archive} \acro{SAO}{Smithsonian Astrophysical Observatory} \acro{SBDB}{Small Body Database} \acro{SDSS DR-9}{Sloan Digital Sky Survey Data Release Nine} \acro{SLAC}{Stanford Linear Accelerator Center} \acro{SOAR}{Southern Astrophysical Research Telescope} \acro{SNR}{signal-to-noise ratio} \acro{SSOIS}{Solar System Object Information Search} \acro{SQL}{Structured Query Language} \acro{STScI}{Space Telescope Science Institute} \acro{SUP}{Suprime Cam} \acro{SWRI}{Southwest Research Institute} \acro{TNO}{Trans-Neptunian object} \acro{TRIF}{Technology and Research Initiative Fund} % \acro{TSIP}{Telescope System Instrumentation Program} \acro{UA}{University of Arizona} \acro{UCSC}{University of California Santa Cruz} \acro{UCLA}{University of California Los Angeles} \acro{UCSF}{University of California San Francisco} \acro{UT}{Universal Time} \acro{VATT}{Vatican Advanced Technology Telescope} \acro{VIRCam}{VISTA InfraRed Camera} \acro{VISTA}{Visible and Infrared Survey Telescope for Astronomy} \acro{VLT}{Very Large Telescope} \acro{VST}{Very Large Telescope (VLT) Survey Telescope} \acro{WFC}{Wide Field Camera} \acro{WGSBN}{Working Group for Small Bodies Nomenclature} \acro{WIRCam}{Wide-field Infrared Camera} \acro{WISE}{Wide-field Infrared Survey Explorer} \acro{WCS}{World Coordinate System} \acro{YORP}{Yarkovsky--O'Keefe--Radzievskii--Paddack} \acro{ZTF}{Zwicky Transient Facility} \end{acronym}
Title: The Velocity-Dependent $J$-factor of the Milky Way Halo: Does What Happens in the Galactic Bulge Stay in the Galactic Bulge?
Abstract: We consider the angular distribution of the photon signal which could arise from velocity-dependent dark matter annihilation within the Galactic bulge. We find that, for the case of Sommerfeld-enhanced annihilation, dark matter annihilation within the bulge is dominated by slow speed particles which never leave the bulge, allowing one to find a simple analytic relationship between the dark matter profile within the Galactic bulge and the angular distribution. On the other hand, for the case $p$- or $d$-wave annihilation, we find that the small fraction of high-speed particles which can leave the bulge provide a significant, often dominant, contribution to dark matter annihilation within the bulge. For these scenarios, fully understanding dark matter annihilation deep within the Galactic bulge, and the angular distribution of the resulting photon signal, requires an understanding of the dark matter profile well outside the bulge. We consider the Galactic Center excess in light of these results, and find that an explanation of this excess in terms of $p$-wave annihilation would require the dark matter profile within the bulge to have a much steeper profile than usually considered, but with uncertainties related to the behavior of the profile outside the bulge.
https://export.arxiv.org/pdf/2208.14002
\section{Introduction} The Galactic Center (GC) of the Milky Way (MW) is an interesting target for the indirect detection of dark matter, because it is expected to have a large density of dark matter (DM). In fact, an excess of gamma-rays in the GeV range has been observed within the inner several degrees of the GC~\cite{Goodenough:2009gk,Hooper:2010mq,Fermi-LAT:2015sau}. The origin of these photons is under study, and may lie in ordinary astrophysical sources, such as millisecond pulsars (MSPs) (see, for example,~\cite{Abazajian:2010zy}). Dark matter is another possible origin which has been the subject of much study, particularly because the angular distribution seen in the photon excess is consistent with what one would expect from dark matter annihilation, assuming a density distribution consistent with results from numerical simulations. In this work, we will study the dependence of the gamma-ray angular distribution on the velocity-dependence of dark matter annihilation. Although our analysis will be general, we will consider an application of these results to the observed GC excess, assuming its origin is dark matter annihilation. The velocity-dependence of the dark matter annihilation cross section impacts the consistency of a dark matter explanation for the GC excess with constraints from searches of dwarf spheroidal galaxies (dSphs)~\cite{Fermi-LAT:2010cni,Fermi-LAT:2011vow,Fermi-LAT:2013sme,Fermi-LAT:2015att,Fermi-LAT:2016uux}. In the most commonly studied scenario ($s$-wave annihilation) $\sigma v$ is independent of the relative velocity $v$. In this case, models which can explain the GC excess have cross sections which are roughly at the limit of dSphs searches. Although there are systematic uncertainties which make a clear statement difficult, it is fair to say that an explanation of the GC excess through $s$-wave dark matter annihilation faces non-trivial constraints from dSphs searches~\cite{Fermi-LAT:2016uux,Chang:2018bpt,Hooper:2019xss}. 
But if dark matter annihilates from a $p$- or $d$-wave initial state, then annihilation is more heavily suppressed in regions where the dark matter relative velocity is smaller, suppressing the annihilation rate in dSphs relative to the GC. Since these scenarios effectively weaken any constraints from dSphs searches on explanations of the GC excess, it is important to know how these scenarios affect the effective $J$-factor, which encodes the angular distribution of the signal. There has already been previous work discussing the effective $J$-factor of halos in the case of velocity-dependent dark matter annihilation (see, for example,~\cite{Robertson_2009,Ferrer:2013cla,Boddy:2017vpe,Zhao:2017dln,Petac:2018gue,Lacroix:2018qqh,Boddy:2019wfg}), and of the GC in particular~\cite{Boddy:2018ike,Johnson:2019hsm,Board:2021bwj,McKeown:2021sob}. The case of the GC is complicated by the fact that there is a large baryonic contribution to the gravitational potential, yielding additional parameters of the baryonic distribution which affect the $J$-factor. In this work, we will consider dark matter annihilation within the Galactic bulge. Under the approximation that baryonic matter dominates the gravitational potential, which is taken to have a power-law expansion, one can then solve for the dark matter velocity-distribution and the $J$-factor analytically in the region of the Galactic bulge. We will find that for Sommerfeld-enhanced dark matter annihilation, the angular distribution in the bulge region is dominated by the annihilation of low-speed particles which never leave the bulge, resulting in a simple analytic prediction for the angular distribution which depends only on the slope of the dark matter profile inside the bulge. This analytic prediction for the angular distribution can be applied to a wide range of scenarios, including different choices for the dark matter density profile outside the Galactic bulge. 
On the other hand, we find that the $p$-wave and $d$-wave annihilation rates receive a sizeable contribution from the energetic particles which explore the gravitational potential far outside the bulge. Although such particles only provide a negligible contribution to the dark matter density inside the bulge, they are the fastest particles, and thus may dominate the annihilation rate within the bulge. In this case, one cannot disentangle the angular distribution of the signal near the GC from the gravitational potential far from the GC. As a result, one will see deviations from the analytic prediction which depend on the steepness of the dark matter profile, and find that a complete description of the angular distribution at small angle requires knowledge of the dark matter distribution even well outside the bulge. It is generally difficult to determine the behavior of dark matter within the bulge region. Stellar data can be used to constrain the density distribution within the bulge, but there are large uncertainties associated with the complexity of the stellar populations. Results can also be obtained from numerical simulations, but these simulations often lack the resolution to adequately probe the bulge (for recent progress, see~\cite{McKeown:2021sob}). Standard density profiles, such as NFW (or generalized versions of NFW), typically assume a power-law behavior all the way out to the scale radius ($r_s \sim 21~\kpc$), but there is no particular reason why there cannot be a different power-law behavior within the bulge itself, where the baryonic potential is more important. It is thus important to understand the circumstances in which the photon angular distribution depends only on the dark matter profile near the bulge, and how important corrections due to the behavior outside the bulge can be. 
It has been found that, for $s$-wave annihilation, the GC excess morphology can be matched by a dark matter density profile which scales as $\rho (r) \propto r^{-\gamma}$ with $\gamma$ in the range $1.2 - 1.4$ (see, for example,~\cite{Hooper:2010mq,Calore:2014xka}). For the case of $p$-wave annihilation, we find that the same morphology may require a steeper profile. Even so, we find that an accurate model of the photon angular distribution within the bulge requires knowledge of particles which exit the bulge. This implies that ambiguities in the dark matter profile and the effects of triaxiality, for example, can have a significant effect on the angular distribution for $p$-wave annihilation in the GC region. The plan of this paper is as follows. In Section~\ref{sec:formalism}, we describe the general formalism for computing the velocity-dependent $J$-factor in the Galactic bulge region. In Section~\ref{sec:analytic}, we describe an analytic approximation to the angular distribution, and the considerations which affect its validity. In Section~\ref{sec:GCE}, we apply these results to the GC excess. We conclude with a discussion of our results in Section~\ref{sec:conclusion}. \section{Dark matter and baryons near the galactic center} \label{sec:formalism} We consider the case in which the dark matter annihilation cross section has a velocity dependence which can be expressed as \bea \sigma v &=& (\sigma v)_0 \times (v/c)^n , \eea where $v$ is the relative speed and $(\sigma v)_0$ is a constant which is independent of $v$. The most common example is dark matter annihilation from an $s$-wave initial state ($n=0$), in which case $\sigma v$ is independent of $v$ in the non-relativistic limit. But there are a variety of well-motivated scenarios which are worth considering. 
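As a rough numerical illustration of how strongly these velocity scalings differentiate environments, one can compare the per-pair annihilation rate in a dwarf spheroidal to that near the GC. This is only a sketch; the characteristic relative speeds below are assumed, order-of-magnitude values, not taken from the text.

```python
# Sketch: suppression (or enhancement) of sigma*v ∝ (v/c)^n in a dSph
# relative to the GC, for the velocity dependences considered in the text.
c = 3.0e5        # speed of light [km/s]
v_dsph = 10.0    # assumed typical relative speed in a dwarf spheroidal [km/s]
v_gc = 200.0     # assumed typical relative speed near the GC [km/s]

for n, label in [(-1, "Sommerfeld"), (0, "s-wave"), (2, "p-wave"), (4, "d-wave")]:
    ratio = (v_dsph / c) ** n / (v_gc / c) ** n  # dSph rate relative to GC, per pair
    print(f"{label:10s} (n={n:+d}): dSph/GC ratio = {ratio:.2e}")
```

For $n=2$ the per-pair rate in the dwarf is suppressed by a factor of order $10^{-3}$ relative to the GC, which is the sense in which $p$- and $d$-wave scenarios weaken dSph constraints.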
If dark matter annihilates from a $p$-wave initial state, then one would find $n=2$; this scenario can arise, for example, if dark matter is a Majorana fermion which couples to a Standard Model (SM) fermion/anti-fermion pair through interactions which respect minimal flavor violation (MFV) (see, for example,~\cite{Kumar:2013iva}). If the dominant annihilation channel is from a $d$-wave initial state, then one would find $n=4$; this scenario can arise if dark matter is instead a self-conjugate spin-0 particle~\cite{Kumar:2013iva,Giacchino:2013bta,Toma:2013bka}. If dark matter annihilation is Sommerfeld-enhanced due to an attractive force mediated by a nearly massless particle, one would find $n=-1$~\cite{Arkani-Hamed:2008hhe,Feng:2010zp}. If we assume that the dark matter is a self-conjugate particle, we can express the photon flux arising from dark matter annihilation in the Milky Way halo as \bea \frac{d^2 \Phi}{dE d\Omega} &=& \frac{(\sigma v)_0}{8\pi m_X^2} \frac{dN}{dE} J_S (\cos \theta) , \eea where $m_X$ is the dark matter mass, $dN / dE$ is the photon energy spectrum arising from dark matter annihilation, and \bea J_S (\cos \theta) &=& \int d\ell \int d^3 v_1~f(r(\ell, \theta), v_1 ) \int d^3 v_2~f(r(\ell, \theta), v_2 ) \times (|\vec{v}_1 - \vec{v}_2|/c)^n . \eea Here, $f(r, v)$ is the dark matter velocity-distribution within the halo, which we assume is spherically symmetric and isotropic. This essentially implies that $f$ is a function only of $r = |\vec{r}|$ and $v = |\vec{v}|$. $D$ is the distance to the GC, $\theta$ is the angle between the GC and the line-of-sight, and $\ell = D\cos \theta \pm \sqrt{|\vec{r}|^2 - D^2 \sin^2 \theta}$ is the distance along the line-of-sight. 
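In the $s$-wave case ($n=0$) the velocity integrals collapse to $\rho(r)^2$, so $J_S$ reduces to a line-of-sight integral over the squared density. A minimal numerical sketch of that limit follows; the distance $D$ is an assumed value, while $\rho_s$, $r_s$, and $\gamma$ follow values quoted elsewhere in the text.

```python
import numpy as np
from scipy.integrate import quad

D = 8.25       # assumed distance to the GC [kpc]
r_s = 21.0     # generalized NFW scale radius [kpc], as in the text
rho_s = 8.0e6  # scale density [Msun / kpc^3], as in the text
gamma = 1.2    # inner slope, low end of the GC-excess range quoted in the text

def rho(r):
    """Generalized NFW profile."""
    x = r / r_s
    return rho_s / (x**gamma * (1.0 + x) ** (3.0 - gamma))

def J_swave(theta, ell_max=50.0):
    """J_S(theta) for n = 0: integral of rho^2 along the line of sight."""
    def integrand(ell):
        # distance from the GC, from the law of cosines
        r = np.sqrt(D**2 + ell**2 - 2.0 * D * ell * np.cos(theta))
        return rho(r) ** 2
    # help quad near the point of closest approach to the GC
    val, _ = quad(integrand, 0.0, ell_max, points=[D * np.cos(theta)], limit=200)
    return val

print(J_swave(np.radians(0.5)) / J_swave(np.radians(2.0)))
```

The printed ratio illustrates how steeply the predicted emission is peaked toward the GC for a cuspy profile.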
It will be convenient to express $J_S (\cos \theta)$ as \bea J_S (\cos \theta) &=& \int_0^\infty d\ell ~P_n^2 (r) , \nonumber\\ &\sim& 2 \int_{D \sin \theta}^\infty dr \left(1 - \frac{D^2}{r^2} \sin^2 \theta \right)^{-1/2} P_n^2 (r) , \label{eq:JS_exact} \eea where \bea P_n^2 (r) &=& \int d^3 v_1 \int d^3 v_2~f(r, v_1 )~f(r, v_2 ) \times (|\vec{v}_1 - \vec{v}_2|/c)^n , \nonumber\\ &=& 8\pi^2 \int_0^\infty dv_1 \int_0^\infty dv_2~v_1^2 v_2^2~f(r, v_1 )~f(r, v_2 ) \frac{(v_1 + v_2)^{n+2} - |v_1-v_2|^{n+2}}{(n+2)v_1 v_2 c^n} . \eea Note that the integration in the second line of eq.~\ref{eq:JS_exact} encompasses negative values of $\ell$, i.e., it includes integration along the line-of-sight in both directions. But when observing near the GC, the associated error is negligible. To determine the $J$-factor, we need an expression for $f(r,v)$. It follows from Liouville's Theorem that the time-averaged velocity-distribution can only be a function of the energy, as this is the only relevant integral of motion for a classical orbit. Defining $f(r,v) = f(E(r,v))$, where $E = (1/2) v^2 + \Phi(r)$ is the energy per mass of a dark matter particle, and $\Phi (r)$ is the gravitational potential, we have \bea \rho (r) &=& 4\pi \int_0^{v_{esc}(r)} dv~v^2~f(r,v) , \nonumber\\ &=& 4\sqrt{2} \pi \int_{\Phi (r)}^{\Phi (\infty)} dE~\sqrt{E - \Phi(r)}~f(E) , \eea where $v_{esc} (r)$ is the galactic escape velocity at $r$. Inverting this equation with the Abel integral equation yields the Eddington inversion formula \bea f(E) &=& \frac{1}{\sqrt{8} \pi^2} \int_E^{\Phi(\infty)} \frac{d^2 \rho}{d\Phi^2} \frac{d\Phi}{\sqrt{\Phi - E}} , \eea where we have implicitly expressed $\rho$ as a function of $\Phi$. We may write the gravitational potential $\Phi = \Phi_{DM} + \Phi_{bary}$ as the sum of the potential due to dark matter and the potential due to baryonic matter in the Milky Way, with \bea \Phi_{DM} (r) &=& \Phi_{DM} (0) + 4\pi G_N \int_0^r \frac{dx}{x^2} \int_0^x dy~y^2~\rho(y) . 
\eea We utilize a spherical approximation to the gravitational potential due to baryonic matter in the bulge and the disk~\cite{Strigari:2009zb,Pato:2012fw}, yielding \bea \Phi_{bary} (r) &=& -G_N \left[\frac{M_b}{c_0 +r} + \frac{M_d}{r} \left(1-e^{-r/b_d} \right) \right] + G_N \left[\frac{M_b}{c_0} + \frac{M_d}{b_d} \right], \eea where we take $M_b = 1.5 \times 10^{10} M_\odot$ as the mass of the Galactic bulge and $M_d = 7 \times 10^{10} M_\odot$ as the mass of the Galactic disk. We take the bulge scale radius to be $c_0 = 0.6~\kpc$, and the disk scale radius to be $b_d = 4~\kpc$. We have added a convenient constant to the potential to set $\Phi(0)=0$. Note, we are ignoring the contribution to the gravitational potential due to the black hole at the center of the Milky Way. This contribution should be subleading for $r>\pc$ (see~\cite{Sandick:2016zeg}, for example). Given an ansatz for $\rho(r)$, one can then numerically integrate the above equations to obtain $J_S (\cos \theta)$~\cite{Boddy:2018ike}. We will consider generalized NFW profiles, given by \bea \rho (r) &=& \frac{\rho_s}{(r/r_s)^\gamma (1+(r/r_s))^{3-\gamma}} , \eea where $\gamma$ is a parameter describing the inner slope, $\rho_s$ is the scale density, and $r_s$ is the scale radius, which we take to be $r_s = 21~\kpc$. \section{Analytic Approximation} \label{sec:analytic} We will now consider an analytic approximation to $J_S$ at small $\theta$. For this purpose, we assume \begin{itemize} \item{The $J$-factor at small $\theta$ is dominated by dark matter annihilation within the bulge, so we can ignore dark matter annihilation for $r>c_0$.} \item{The dark matter density within the bulge can be written as $\rho(r) \sim \rho_s (r/r_s)^{-\gamma}$. 
This is true for the generalized NFW profiles for which we obtained numerical results, but can encompass many more profiles.} \item{The gravitational potential within the bulge is dominated by $\Phi_{bary}$, so we can ignore $\Phi_{DM}$.} \item{Within the bulge, $\Phi_{bary}(r)$ is sufficiently well-approximated by a Taylor expansion to linear order in $r$.} \end{itemize} Note, we will not always find these assumptions to be valid. As we will see in Section~\ref{subsec:validity}, some choices of the parameters will yield significant deviations from these assumptions, leading to deviations from the analytic prediction. We will address the import of deviations from the analytic treatment in Sections~\ref{subsec:validity} and~\ref{sec:GCE}. But given these assumptions, we find \bea \Phi (r) \sim \Phi_0 r , \eea where $\Phi_0 = G_N [(M_b / c_0^2) + M_d / 2b_d^2]$, given our spherical approximation to the baryonic potential. But we will see that our result will apply for any value of $\Phi_0$. The only necessary condition is that the linear approximation to the potential be sufficiently good within the bulge. We then find \bea \rho (r) &=& 4\sqrt{2} \pi \left(\Phi_0 r \right)^{3/2} \int_1^{\Phi(\infty) /\Phi_0 r} dx~ \sqrt{x-1}~f(x \Phi_0 r ) . \label{eq:rho} \eea For $r \ll c_0$, we may take $\Phi(\infty) / \Phi_0 r \rightarrow \infty$, in which case the only dependence of the integral on $r$ is in the argument of $f$. Since $\rho (r) \propto r^{-\gamma}$, we can solve eq.~\ref{eq:rho} by assuming a power-law ansatz for $f$, namely \bea f(E) &=& f_0 E^{-\gamma -3/2} , \nonumber\\ f_0 &=& \rho_s (r_s\Phi_0)^{\gamma} \left[4\sqrt{2} \pi \int_1^\infty dx~x^{-\gamma -3/2}\sqrt{x-1} \right]^{-1} . \label{eq:f} \eea Note that the integral in eq.~\ref{eq:f} converges for $\gamma >0$. In this case, the high-velocity tail of particles which can leave the bulge contributes negligibly to the density at small $r$. 
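The consistency of this ansatz can be checked numerically: inserting $f(E) = f_0 E^{-\gamma - 3/2}$ into the density integral above should return $\rho(r) \propto r^{-\gamma}$. A minimal sketch, with $\Phi_0$ and $f_0$ set to unity in arbitrary units:

```python
import numpy as np
from scipy.integrate import quad

gamma = 1.2
Phi0, f0 = 1.0, 1.0  # arbitrary units; the power law in r is independent of these

def rho(r):
    """rho(r) = 4*sqrt(2)*pi * int_{Phi0*r}^inf dE sqrt(E - Phi0*r) f(E),
    with the power-law ansatz f(E) = f0 * E^(-gamma - 3/2)."""
    integrand = lambda E: np.sqrt(E - Phi0 * r) * f0 * E ** (-gamma - 1.5)
    val, _ = quad(integrand, Phi0 * r, np.inf)
    return 4.0 * np.sqrt(2.0) * np.pi * val

r1, r2 = 0.1, 0.4
ratio = rho(r2) / rho(r1)
print(ratio, (r2 / r1) ** (-gamma))  # the two should agree
```

The agreement of the two printed numbers confirms that the density built from the power-law $f(E)$ scales as $r^{-\gamma}$, as claimed.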
We now have \bea P_n^2 &=& 8\pi^2 f_0^2 (\Phi_0 r)^{-2\gamma +(n/2)} \int_0^\infty dy_1 \int_0^\infty dy_2~ y_1^2 y_2^2~ \left((1/2)y_1^2 + 1 \right)^{-\gamma-3/2} \left((1/2)y_2^2 + 1 \right)^{-\gamma-3/2} \nonumber\\ &\,& \times \frac{(y_1 + y_2)^{n+2} - |y_1-y_2|^{n+2}}{(n+2)y_1 y_2 c^n} , \nonumber\\ &=& \left( f_0 \Phi_0^{-\gamma} \right)^2 \left( \Phi_0/c^2 \right)^{ (n/2) } I_{\gamma,n} r^{-2\gamma + (n/2)} , \eea where \bea I_{\gamma,n} &\equiv& 8\pi^2\int_0^\infty dy_1 \int_0^\infty dy_2~ y_1^2 y_2^2~ \left((1/2)y_1^2 + 1 \right)^{-\gamma-3/2} \left((1/2)y_2^2 + 1 \right)^{-\gamma-3/2} \nonumber\\ &\,& \times \frac{(y_1 + y_2)^{n+2} - |y_1-y_2|^{n+2}}{(n+2)y_1 y_2 } . \eea Note, however, that the integral defining $I_{\gamma, n}$ only converges for $\gamma > n/2$. For profiles which are not steep enough, the divergence at large velocity indicates that dark matter annihilation, even well within the bulge, is dominated by high-velocity particles which leave the bulge. Although such particles make only a negligible contribution to $\rho(r)$, the enhancement to the annihilation cross section for high-velocity particles when $n \geq 2\gamma$ means that they can dominate the annihilation rate. If $\gamma > n/2$, however, we can find an analytic approximation to the $J$-factor of the GC within the inner few degrees. We will use our expression for $J_S (\cos \theta)$ (eq.~\ref{eq:JS_exact}), but only integrate to a distance $c_0$ from the GC, assuming that this region dominates the annihilation rate. This expression is thus restricted to the angular region $\theta \leq c_0 / D \sim 4^\circ$, for which we can approximate $\sin \theta \sim \theta$. We then find \bea J_S (\cos \theta) &=& 2 \left( f_0^2 \Phi_0^{-2\gamma + (n/2) }c^{-n} \right) I_{\gamma,n} \int_{D\theta}^{c_0} dr \left(1 - \frac{D^2}{r^2} \theta^2 \right)^{-1/2} r^{-2\gamma + (n/2)} .
\eea For $\theta \ll c_0 /D$, we can take the upper limit of integration to infinity, in which case the integral has a power-law dependence on $\theta$, yielding \bea J_S (\cos \theta) &\sim & 2D \left[ (f_0/(\Phi_0 D)^\gamma)^2 (\Phi_0 D /c^2)^{n/2} I_{\gamma,n} \right] \left[ \int_1^{\infty} dx \left(1 - x^{-2} \right)^{-1/2} x^{-2\gamma + (n/2)} \right] \nonumber\\ &\,& \times \theta^{1-2\gamma + (n/2) } . \eea We thus see that, at small angles, there is a complete degeneracy between $\gamma$ and $n$, provided the baryonic potential is dominant and $\gamma > n/2$. For a sufficiently steep profile satisfying this condition, the angular distribution of photon emission at small angles is independent of the dark matter distribution outside the bulge. This condition is satisfied for all reasonable models in the case of Sommerfeld-enhanced annihilation, and for even moderately cuspy profiles in the case of $s$-wave annihilation. But it is only satisfied for profiles cuspier than NFW in the case of $p$-wave annihilation, and for very steep profiles ($\gamma > 2$) in the case of $d$-wave annihilation. \subsection{Validity of the Analytic Approximation} \label{subsec:validity} There are several considerations which affect whether or not this analytic approximation is valid. Provided the baryonic potential dominates over the dark matter gravitational potential, the velocity-distribution at the core (that is, at small $E$) is well-approximated by the analytic power-law form, as we illustrate in Figure \ref{fig:fe}. Here we plot $f(E)$ as a function of $E$, taking $\rho_s = 8 \times 10^6 M_\odot / \kpc^3$, for various choices of $\gamma$ (solid lines). We also plot the power-law analytic approximation (dashed lines). We see that, for profiles which are not too steep, the analytic approximation fits well at low energies.
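The convergence criterion $\gamma > n/2$ for $I_{\gamma,n}$ can also be checked numerically, by truncating the formally infinite upper limits at a finite cutoff and watching whether the estimate stabilizes as the cutoff grows. The following is a minimal sketch (the grid spacing and cutoff values are our own choices, not from the analysis above), shown here for $p$-wave annihilation, $n=2$:

```python
import math

def I_truncated(gamma, n, y_max, h=0.05):
    """Midpoint-rule estimate of the I_{gamma,n} double integral, with the
    formally infinite upper limits truncated at y_max."""
    m = int(round(y_max / h))
    ys = [(i + 0.5) * h for i in range(m)]
    # weight common to both integration variables: y^2 (y^2/2 + 1)^(-gamma-3/2)
    w = [y * y * (0.5 * y * y + 1.0) ** (-gamma - 1.5) for y in ys]
    total = 0.0
    for i, y1 in enumerate(ys):
        wi = w[i]
        for j, y2 in enumerate(ys):
            kernel = ((y1 + y2) ** (n + 2) - abs(y1 - y2) ** (n + 2)) / ((n + 2) * y1 * y2)
            total += wi * w[j] * kernel
    return 8.0 * math.pi ** 2 * total * h * h
```

For $\gamma = 1.7 > n/2$ the estimate changes by only a few percent when the cutoff is doubled from 20 to 40, while for $\gamma = 0.6 < n/2$ it keeps growing with the cutoff, reflecting the divergent high-velocity tail.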
The analytic approximation begins to diverge from the numerical result for $\gamma \gtrsim 1.3$, when the dark matter contribution to the potential begins to dominate at small $r$. Note, however, that for smaller values of $\rho_s$, the power-law result becomes a better approximation to the full numerical result, even for larger $\gamma$. But even if the baryonic potential dominates, the linear approximation to this potential is only very good well within the bulge; at the edge of the bulge, the gravitational potential will necessarily deviate from the linear approximation, causing $P_n^2$ to deviate from power-law form at the edge of the bulge. This can cause the angular distribution to deviate from the analytic power-law form even at small angle, because the analytic approximation mismodels the velocity-distribution at the edge of the bulge, even along a line of sight aimed directly at the GC. The accuracy of the analytic approximation to the angular distribution thus depends on the steepness of the profile; for a steeper profile, with a larger fraction of the total annihilation rate concentrated deep in the interior of the bulge, the analytic result will be better. The larger the velocity power-law exponent $n$, the steeper the profile must be in order for the analytic approximation to be valid, since a larger value of $n$ implies that dark matter annihilation is increasingly dominated by high-speed particles which explore the edges of the bulge. We thus see that changes to the steepness of the density slope within the bulge produce two competing effects. A steeper profile tends to concentrate dark matter annihilation deep within the bulge, making the analytic result a better approximation. More generally, for a steeper profile, dark matter annihilation in the bulge is dominated by particles which never leave the bulge, implying that the angular distribution is determined only by the potential within the bulge, and is decoupled from what goes on outside.
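The breakdown of the linear approximation at the bulge edge is easy to see directly from the baryonic potential and the mass and scale parameters quoted earlier. A minimal sketch (the value of $G_N$ in $\kpc\,({\rm km/s})^2/M_\odot$ is our own unit choice, not from the text):

```python
import math

G_N = 4.301e-6              # Newton's constant in kpc (km/s)^2 / M_sun (our unit choice)
M_B, M_D = 1.5e10, 7.0e10   # bulge and disk masses [M_sun]
C0, B_D = 0.6, 4.0          # bulge and disk scale radii [kpc]

def phi_bary(r):
    """Spherical baryonic potential, shifted so that Phi(0) = 0 (r in kpc, r > 0)."""
    return (-G_N * (M_B / (C0 + r) + (M_D / r) * (1.0 - math.exp(-r / B_D)))
            + G_N * (M_B / C0 + M_D / B_D))

# Linear coefficient of the small-r Taylor expansion, Phi(r) ~ Phi_0 * r:
PHI_0 = G_N * (M_B / C0**2 + M_D / (2.0 * B_D**2))
```

Well within the bulge ($r \sim 10^{-4}~\kpc$) the ratio `phi_bary(r) / (PHI_0 * r)` sits within a few parts in $10^4$ of unity, while at $r = c_0$ it has already fallen to roughly $0.5$, illustrating why particles that reach the bulge edge spoil the power-law form of $P_n^2$.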
For a more shallow cusp, one must understand the details of how the gravitational potential changes as one approaches the edge of the bulge. In this case, one would similarly expect that the effects of triaxiality, as well as deviations from the spherical approximation to the baryonic potential (due, for example, to the Galactic disk contribution), will become more important. For larger $n$, one requires a steeper profile in order to decouple these effects, since the annihilation rate is increasingly dominated by high-speed particles which, though a small fraction of the dark matter within the bulge, nevertheless have an enhanced annihilation rate. But the steeper the dark matter density slope, the more dark matter will tend to dominate the gravitational potential deep within the bulge, which would also invalidate the analytic approximation. The strength of the dark matter gravitational potential depends on both $\gamma$ and $\rho_s$. If one assumes a generalized NFW profile throughout the entire MW halo, then one can estimate $\rho_s$, with uncertainties, from observational data throughout the halo. But if the dark matter distribution within the bulge forms a separate power-law distribution, then the normalization of the density distribution within the bulge is much less constrained, since it need only smoothly match onto a shallower generalized NFW outside the bulge. As an illustrative example, we consider the case $\gamma = 1.2$, with $\rho_s = 4 \times 10^6 M_\odot / \kpc^3$. For this choice, the dark matter density at the solar radius would be given by $\rho_\odot = 0.25~\gev / \cm^3$. In Figure~\ref{fig:analytic_approx}, top panel, we plot the potential $\Phi(r)$, including the baryonic and dark matter contributions, as well as the total potential. The solid lines are the full potential, while the dashed lines represent the analytic power-law approximation which we consider.
We see that, for this choice of $\gamma$ and $\rho_s$, the baryonic contribution dominates the total potential, which is required for the analytic approximation to be valid. But the power-law approximation to the baryonic potential breaks down once we reach the edge of the bulge, for $r \gtrsim {\cal O}(0.1)~\kpc$. As such, we expect deviations from the analytic approximation to be driven by particles which explore the edge of the bulge. In the middle panel of Figure~\ref{fig:analytic_approx}, we plot $P_n^2$ for $n=-1, 0, 2$ and $4$. Again, the complete numerical calculation is shown in solid lines, while the power-law analytic approximation is shown in dashed lines. For $n=0$, the ordinary case of $s$-wave annihilation, the analytic approximation is nearly exact within the bulge: dark matter annihilation is velocity-independent, so deviations from the analytic approximation to the potential are irrelevant. For the case of Sommerfeld-enhanced annihilation ($n=-1$), the analytic approximation to $P_{n=-1}^2$ matches the numerical calculation well inside the bulge, because for Sommerfeld-enhanced annihilation, the annihilation rate is dominated by low-speed particles which never explore the edges of the bulge, where the potential deviates from power-law. For $p$-wave annihilation ($n=2$), we see that the power-law approximation is only a good fit deep within the bulge. This is not surprising, since the profile we have chosen is only slightly steeper than the limit $\gamma =1$, at which the analytic approximation breaks down entirely for $p$-wave annihilation, which is then dominated by particles which leave the bulge. In the bottom panel of Figure~\ref{fig:analytic_approx}, we plot the angular distribution, which is proportional to $J_S (\cos \theta)$, for $n=-1, 0, 2$ and $4$. The angular distribution is normalized to unity when integrated over $4^\circ$, and is plotted with solid lines.
We used dashed lines to plot the power-law analytic approximation to $J_S (\cos \theta)$, which is normalized (for ease in comparing to the numerical result) to match the numerical computation as $\theta \rightarrow 0$. As expected from the discussion of $P_n^2$, the slope of the analytic approximation matches that of the numerical computation fairly well within the inner degree, for Sommerfeld-enhanced annihilation. On the other hand, for the case of $p$-wave annihilation, the angular distribution matches the analytic prediction only for a very small angular range ($\Delta \theta \sim {\cal O}(10^{-3})$ degrees), which would not be useful for a data analysis. It is interesting to note that, for this case ($\gamma = 1.2$), the photon angular distribution arising from Sommerfeld-enhanced annihilation is well matched by the analytic approximation near the GC because dark matter annihilation is dominated by low-speed particles which only explore the bulge. This result thus largely depends only on the slope of the dark matter distribution within the bulge, and is independent of the dark matter distribution outside the bulge. This analytic approximation can thus be generalized beyond the assumption of an NFW profile, and does not assume that the slope of the profile inside the bulge is the same as that outside. Similarly, it is relatively robust against the effects of triaxiality, which is likely to affect the dark matter profile at relatively large distances from the GC. On the other hand, the analytic approximation for $p$-wave annihilation fails (except at very small angles) because dark matter annihilation is, in this case, dominated by high-speed particles which can leave the bulge. Interestingly, this is the case even though, for $\gamma = 1.2$, the analytic approximation to the velocity distribution matched the numerical calculation fairly well (see Figure~\ref{fig:fe}).
Beyond the failure of the analytic approximation to the angular distribution, the more general lesson is that, for $p$-wave annihilation in the case of a profile which is not very steep, the angular distribution near the GC cannot be determined accurately without a full knowledge of the dark matter profile and baryonic potential, even far away from the GC. Similarly, the effects of triaxiality, which are expected to be relatively small within the Galactic bulge, should be expected to nevertheless have a significant effect on the angular distribution at small angles. Even though a negligible fraction of the DM within the Galactic bulge may explore the gravitational potential very far away, that small fraction of particles will dominate the annihilation rate (and thus the angular distribution) unless the profile is very steep. These considerations are even more relevant for the case of $d$-wave annihilation. \section{The Galactic Center Excess} \label{sec:GCE} As an application, we consider the GC excess. The excess arises in photons with an energy of a few $\gev$, at which the Fermi-LAT would have an angular resolution of order a few tenths of a degree. But for the purpose of understanding how well the analytic approximation works, and when and why it fails, we will consider angles as small as $10^{-3}$ degrees from the GC. For the case of $s$-wave dark matter annihilation ($n=0$), the angular distribution of the GC excess would require an inner slope within the bulge of $\gamma = 1.2-1.4$~\cite{Hooper:2010mq,Calore:2014xka}. For $p$-wave annihilation ($n=2$), the same morphology would require $\gamma = 1.7 - 1.9$, if the analytic approximation is valid. This range of $\gamma$ is steep enough that dark matter annihilation deep within the bulge would be dominated by particles which never leave the bulge. Note that, although this is a much steeper profile than is usually considered, the slope may be much shallower outside the bulge.
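The mapping between $n$ and the required inner slope follows directly from the small-angle power law $J_S \propto \theta^{1-2\gamma+n/2}$ derived in the previous section. A minimal sketch of this bookkeeping:

```python
def small_angle_slope(gamma, n):
    """Power-law index of J_S(theta) ~ theta^(1 - 2*gamma + n/2), valid in the
    baryon-dominated analytic approximation when gamma > n/2."""
    return 1.0 - 2.0 * gamma + n / 2.0

def gamma_for_slope(slope, n):
    """Inner density slope needed to reproduce a given angular power law,
    for a velocity dependence (sigma v) ~ v^n."""
    return (1.0 + n / 2.0 - slope) / 2.0
```

For a GC-excess-like angular slope of $-1.4$, this gives $\gamma = 1.2$ ($s$-wave, $n=0$), $1.7$ ($p$-wave, $n=2$), and $2.2$ ($d$-wave, $n=4$), the lower ends of the ranges quoted here, with the caveat that the formula only applies when $\gamma > n/2$.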
A model of $d$-wave annihilation would also match the angular distribution of the GC excess if $\gamma = 2.2-2.4$. We will focus on the case of $p$-wave annihilation, with $\gamma = 1.7$. But for such a steep profile the baryonic contribution to the gravitational potential may no longer be dominant. This may be the case, but need not be. One can always reduce the amplitude of the dark matter density within the bulge, in order to ensure that the baryonic potential dominates, with the amplitude of the GC excess obtained by a corresponding rescaling of $(\sigma v)_0$. However, if the dark matter density at the edge of the bulge is too small, it would be difficult to obtain a density at the solar radius which is consistent with observational data. If the dark matter density at the solar radius is $0.3~\gev / \cm^3$, and if the profile outside the bulge is NFW with $r_s = 21~\kpc$, then the dark matter density at $r=c_0$ would be $\rho_{c_0} = 2 \times 10^8 M_\odot / \kpc^3$. Setting $\rho_{c_0} = \rho_s (c_0 / r_s)^{-1.7}$, we would find $\rho_s = 5 \times 10^5 M_\odot / \kpc^3$. If we had instead assumed that the dark matter density between the edge of the bulge and the scale radius had a slope of only $-0.6$, this would have reduced $\rho_{c_0}$, and in turn $\rho_s$, by a factor of 2.5. We will consider the cases $\rho_s = 4 \times 10^a M_\odot / \kpc^3$, with $a=3, 4, 5, 6$. In determining the angular distribution with a full numerical calculation, it is necessary to make some assumption for the profile outside the bulge. For simplicity, we will assume a generalized NFW profile with inner slope of $\gamma = 1.7$ throughout the MW halo. But as we have noted, such a steep profile may not be valid outside the bulge. To characterize the extent to which DM annihilation outside the bulge affects the angular distribution, we will perform the numerical calculation in two ways. First, we compute the complete angular distribution assuming a generalized NFW profile.
Second, we will compute the velocity-distribution assuming a generalized NFW profile throughout the MW halo, but will compute the $J$-factor by assuming that there is no dark matter annihilation outside the bulge. This amounts to taking the upper limit of integration in eq.~\ref{eq:JS_exact} to be $c_0$. We plot our results in Figure~\ref{fig:gamma17}. The solid lines indicate the numerically-computed angular distribution for the case of $p$-wave annihilation, normalized so that the integral to $4^\circ$ is unity. The dashed lines are similar, but neglecting dark matter annihilation outside of the core. Finally, the dotted line is the analytic approximation, which is normalized to match the numerical calculation for $\rho_s = 4 \times 10^3 M_\odot / \kpc^3$ as $\theta \rightarrow 0$. Note that the solid and dashed lines are nearly identical. This implies that, at small angles, the dark matter annihilation rate is indeed dominated by particles which annihilate within the bulge itself. We see that, as $\rho_s$ decreases, the analytic approximation becomes a better match to the numerical calculation, for a larger range of angles. This is to be expected, because this limit amounts to a best-case scenario, where the profile is steep enough that even the high-speed particles which dominate $p$-wave annihilation are less likely to explore the edge of the bulge. But the dark matter density within the bulge is also taken to be small enough that the dark matter does not deform the potential significantly. Even in this case, however, we see that the analytic approximation begins to diverge from the numerical calculation for angles of ${\cal O}(0.1^\circ)$. This essentially happens because, no matter how steep the profile, the effect can only be to concentrate particles near the GC.
To the extent that the baryonic potential dominates, there will always be a very small high-speed tail of particles which can reach the edge of the bulge, but this small tail will nevertheless give a large contribution to the annihilation rate for the case of $p$-wave annihilation. We may thus draw a broader lesson from the comparison of the analytic approximation to the numerical calculation of the angular distribution in the limit of small $\rho_s$. The difference between these curves roughly characterizes the dependence of the angular distribution, even well within the bulge, on particles which leave the bulge, and thus the level of uncertainty in the angular distribution introduced by variations in the dark matter profile form, triaxiality, or any other features of the dark matter or baryonic distribution outside the bulge. It is also interesting to note that, as $\rho_s$ increases, the steepness of the angular distribution at small angles also increases. This result can also be understood intuitively. As $\rho_s$ increases, the dark matter contribution to the gravitational potential becomes more important. In the limit in which the dark matter contribution to the potential dominates, we may ignore the baryons entirely. This problem was considered in~\cite{Boucher:2021mii}, where it was found that the angular distribution (at small angle) for $p$-wave annihilation is a power law with slope $3-3\gamma$. We would thus expect the slope of the angular distribution to steepen by a factor of $3/2$ as $\rho_s$ increases. This result is confirmed in Figure~\ref{fig:gamma17}, where the choice $\rho_s = 4 \times 10^6 M_\odot / \kpc^3$ leads to an angular distribution with a slope of $\sim -2.1$ at small angle.
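Both the $\rho_s$ estimate quoted earlier in this section and the two limiting slopes can be reproduced in a few lines. A sketch, in which the solar galactocentric radius of $8.5~\kpc$ and the unit conversion $1~\gev/\cm^3 \simeq 2.63\times10^{7}\,M_\odot/\kpc^3$ are our own inputs rather than values quoted in the text:

```python
GEV_CM3_TO_MSUN_KPC3 = 2.63e7    # 1 GeV/cm^3 expressed in M_sun/kpc^3 (our conversion)
R_SUN, R_S, C0 = 8.5, 21.0, 0.6  # solar radius, NFW scale radius, bulge scale radius [kpc]

def nfw_shape(r, gamma=1.0):
    """Unnormalized generalized NFW density shape."""
    x = r / R_S
    return x ** (-gamma) * (1.0 + x) ** (gamma - 3.0)

# Normalize the outer (gamma = 1) NFW profile to rho_sun = 0.3 GeV/cm^3 ...
rho_sun = 0.3 * GEV_CM3_TO_MSUN_KPC3
rho_c0 = rho_sun * nfw_shape(C0) / nfw_shape(R_SUN)   # bulge-edge density, ~2e8 M_sun/kpc^3
# ... and match an inner gamma = 1.7 power law, rho = rho_s (r/r_s)^(-1.7), at r = c0:
rho_s_inner = rho_c0 * (C0 / R_S) ** 1.7              # ~5e5 M_sun/kpc^3

# Limiting small-angle slopes for the p-wave benchmark (gamma = 1.7, n = 2):
slope_baryonic = 1.0 - 2.0 * 1.7 + 2.0 / 2.0   # baryon-dominated potential: -1.4
slope_dm = 3.0 - 3.0 * 1.7                     # DM-dominated potential: -2.1
```

The ratio of the two slopes is the factor of $3/2$ steepening noted above.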
Interestingly, if the angular distribution is to have slope $-1.4$ at small angle (as one would expect for $s$-wave annihilation with $\gamma = 1.2$), then for $p$-wave annihilation in a dark matter-dominated halo, one would require $\gamma \sim 1.47$. Out to distances of a few $\kpc$, for which the small angle approximation is still valid, the gravitational potential varies between being either baryon-dominated or dark matter-dominated. If $p$-wave annihilation were to produce an angular distribution consistent with what is observed for the GC excess, one would expect the slope of the dark matter density profile to lie in the $\gamma \sim 1.5 - 1.7$ range. Note that, in the case of $p$- or $d$-wave annihilation, a steeper slope within the inner region could also lead to a larger rate for dark matter annihilation near the black hole at the center of the MW~\cite{Shelton:2015aqa,Sandick:2016zeg}. We have not included the effects of this on our analysis, but this would be an interesting topic for future work. \section{Conclusion} \label{sec:conclusion} We have considered velocity-dependent dark matter annihilation within the Galactic bulge. Because the rate of velocity-dependent annihilation at any given location depends not only on the dark matter density at that point, but on the gravitational potential at all locations sampled by particles passing through that point, determining the annihilation rate can be a very non-local problem. Our goal has been to understand the extent to which the angular distribution of photons arriving from the direction of the bulge can be understood entirely using features of the dark matter and baryonic density distributions within the bulge. Because the behavior of the gravitational potential within the bulge is very different from its behavior far away, the behavior of the dark matter density profile may also be quite different from what is expected from simulations which are extrapolated to small distances.
There are large uncertainties in our ability to probe the dark matter profile in baryon-rich environments such as the bulge, either using simulations or stellar tracers, making it important to understand how strongly these environments control the angular distribution. The GeV excess of photons from the GC may have its origin in dark matter annihilation. But such solutions are constrained by searches for photons from other dark matter-rich environments, such as dSphs. Scenarios of velocity-dependent dark matter annihilation can avoid such constraints, which makes it particularly interesting to determine not only if these scenarios can reproduce the observed angular distribution, but also which features of the distribution contribute to this determination. We have found that for the case of Sommerfeld-enhanced annihilation, dark matter annihilation within the bulge is typically dominated by slow-moving particles which never leave the bulge. In this case, the photon angular distribution at small angle is largely controlled by a single parameter: the dark matter density slope within the bulge. The behavior of the dark matter distribution outside the bulge, including the slope in the region between the bulge and the scale radius, has only a small effect. In this case, the photon angular distribution largely probes the localized astrophysics of dark matter within the bulge, which can be robustly reconstructed if the angular resolution (for the energy range of the photons produced by DM annihilation) is $\lesssim 0.1^\circ$. On the other hand, for $p$- or $d$-wave annihilation, dark matter annihilation within the bulge receives a significant contribution from high-speed particles which leave the bulge. Although this is a small fraction of the dark matter particles within the bulge, this energetic tail nevertheless dominates the annihilation rate for these particular scenarios of velocity-dependent annihilation.
In this case, the photon angular distribution, even at small angle, necessarily depends on the dark matter profile and the gravitational potential well outside the bulge. As a result, uncertainties in the dark matter profile, including the effects of triaxiality and the way the dark matter density interpolates between its behavior within and outside the bulge, will necessarily have a non-trivial impact on the angular distribution. The GC excess, as currently understood, would be consistent with $s$-wave dark matter annihilation with a density slope of $\gamma = 1.2-1.4$. To alleviate constraints from dSph searches, one would like to consider a scenario of $p$-wave annihilation. Our results have shown that, in this case, it is not possible to cleanly relate this angular distribution to the slope of the density profile. Instead, we find a general prediction that this angular distribution would require a steeper profile in the case of $p$-wave annihilation. For example, the angular distribution (at small angle) yielded by $s$-wave annihilation with $\gamma=1.2$ inside the bulge would also be produced by $p$-wave annihilation with $\gamma \sim 1.5 - 1.7$, with the details determined by parameters such as the ratio of baryons to dark matter in the MW, the slope of the dark matter distribution outside the bulge, triaxiality, etc. Note, we have not attempted an actual fit to data from the GC. Instead, we have simply taken at face value the detailed analyses performed in previous works, which have considered Fermi data from the GC as well as a variety of backgrounds in order to assess the morphology of the excess. But millisecond pulsars (MSPs) may generate part or all of the GC excess. Even assuming a large contribution to the excess arises from DM annihilation, correct inclusion of an MSP contribution could change the morphology of the contribution arising from DM annihilation.
Similarly, improvements in our understanding of other backgrounds could also modify our understanding of the excess morphology. Although this would modify the details of our analysis, the overall framework would remain unchanged. A more detailed study of $p$-wave annihilation as an explanation for the GC excess seen in Fermi data would be an interesting topic of future work. {\bf Acknowledgements} We are grateful to Pearl Sandick and Louis E.~Strigari for useful discussions. JK is supported in part by DOE grant DE-SC0010504. JR is supported by NSF grant AST-1934744. \bibliography{thebibliography}
Title: Dependence of the Radio Emission on the Eddington Ratio of Radio-Quiet Quasars
Abstract: Roughly 10% of quasars are "radio-loud", producing copious radio emission in large jets. The origin of the low-level radio emission seen from the remaining 90% of quasars is unclear. Observing a sample of eight radio-quiet quasars with the Very Long Baseline Array, we discovered that their radio properties depend strongly on their Eddington ratio (r_Edd=L_AGN/L_Edd). At lower Eddington ratios (r_Edd < 0.3), the total radio emission of the AGN predominately originates from an extremely compact region, possibly as small as the accretion disk. At higher Eddington ratios (r_Edd > 0.3), the relative contribution of this compact region decreases significantly, and though the total radio power remains about the same, the emission now originates from regions >100 pc large. The change in the physical origin of the radio-emitting plasma region with r_Edd is unexpected, as the properties of radio-loud quasars show no dependence with Eddington ratio. Our results suggest that at lower Eddington ratios the magnetised plasma is likely confined by the accretion disk corona, and only at higher Eddington ratios escapes to larger scales. Stellar-mass black holes show a similar dependence of their radio properties on the accretion rate, supporting the paradigm which unifies the accretion onto black holes across the mass range.
https://export.arxiv.org/pdf/2208.01488
\title{Dependence of the Radio Emission on the Eddington Ratio of Radio-Quiet Quasars} \author[0000-0003-2688-7191]{Abdulla Alhosani} \affiliation{NYU Abu Dhabi, PO Box 129188, Abu Dhabi, UAE} \email{ama1029@nyu.edu} \author[0000-0003-4679-1058]{Joseph D. Gelfand} \affiliation{NYU Abu Dhabi, PO Box 129188, Abu Dhabi, UAE} \email{jg168@nyu.edu} \author[0000-0002-5208-1426]{Ingyin Zaw} \affiliation{NYU Abu Dhabi, PO Box 129188, Abu Dhabi, UAE} \email{iz6@nyu.edu} \author[0000-0002-1615-179X]{Ari Laor} \affiliation{Technion Israel Institute of Technology, Department of Physics, Haifa 3200003, Israel} \email{laor@physics.technion.ac.il} \author[0000-0002-9356-1645]{Ehud Behar} \affiliation{Technion Israel Institute of Technology, Department of Physics, Haifa 3200003, Israel} \email{behar@physics.technion.ac.il} \author[0000-0003-1586-3653]{Sina Chen} \affiliation{Technion Israel Institute of Technology, Department of Physics, Haifa 3200003, Israel} \email{sina.chen@campus.technion.ac.il} \author[0000-0002-2200-0592]{Ramon Wrzosek} \affiliation{Rice University, Department of Physics and Astronomy, P.O. Box 1892, Houston, Texas 77251-1892} \email{ramon.wrzosek@rice.edu} \keywords{} \section{Introduction}\label{sec1} Quasars are the most luminous persistent sources in the Universe. Powered by material accreting onto the super-massive black hole (SMBH) residing at the center of a galaxy (e.g., \citealt{salpeter64, lynden-bell69}), the observed manifestations of these active galactic nuclei (AGN) vary significantly across the electromagnetic spectrum. At radio wavelengths, there is a well established dichotomy in the observed properties of quasars, with $\sim10\%$ being ``radio-loud", whose ratio of 6~cm (4.8~GHz) to 4400\AA~flux density $>$ 10 \citep{1989AJ.....98.1195K}, and the remaining $\sim90\%$ being ``radio-quiet" (e.g., \citealt{sandage65} and references therein).
For radio-loud AGN, the radio emission is generated by a relativistic jet that originates close to the event horizon of the black hole (see \citealt{blandford19} for a recent review), regardless of its appearance in other wavebands (e.g., \citealt{urry95} and references therein). However, the situation is very different for radio-quiet AGN where, not only is there a multitude of possible sources for their radio emission (see \citealt{panessa19} for a recent review), but the dominant mechanism can, and almost certainly does, vary between different radio-quiet quasars (RQQs). Understanding how $\sim90\%$ of accreting SMBHs produce their radio emission is important for understanding the physics of accretion onto black holes. Recent work suggests that the radio spectrum of RQQs depends on the AGN's Eddington ratio $r_{\rm Edd} \equiv L_{\rm AGN}/L_{\rm Edd}$, where $L_{\rm AGN}$ is its bolometric luminosity and $L_{\rm Edd}$ is the ``maximum" (Eddington) luminosity of material accreting onto a SMBH with its particular mass. A Very Large Array (VLA) study of the 4.8 and 8.5~GHz emission from 25 radio-quiet Palomar-Green (PG) quasars found that RQQs with low Eddington ratios ($r_{\rm Edd} \lesssim 0.3$) had flat radio spectra (spectral index $\alpha \gtrsim -0.5$; flux density $S_\nu \propto \nu^\alpha$), while RQQs with higher Eddington ratios ($r_{\rm Edd} \gtrsim 0.3$) had steep ($\alpha \lesssim -0.5$) radio spectra \citep{laor19}. Understanding the physical implications of this correlation requires determining the origin of the radio emission in these galaxies. In general, the radio emission from an AGN is primarily produced by the accretion disk itself or an accretion-powered outflow (e.g., wind or weak jet). Radio emission from optically thick plasma -- possibly the disk corona and/or base of a jet located very close (less than a few parsecs) to the SMBH -- will result in a compact, flat spectrum ($\alpha\sim0$) radio source.
Conversely, outflows are observed to produce steep spectrum ($\alpha \lesssim -0.5$) radio emission on scales ranging from parsecs (pc) to kilo-parsecs (kpc) in size. Disentangling the emission from these two components requires measuring the radio spectrum and morphology of an AGN on physical scales far smaller than the $\sim0.1~{\rm kpc}$ spatial resolution of the VLA radio data used by \citet{laor19}. To rectify this situation, we studied the pc-scale emission of the PG RQQs from \citet{laor19} with the four highest radio spectral indices and the four lowest radio spectral indices, listed in Table \ref{Extended Table 1}. In Section \ref{sec11} we describe our reduction and analysis of 1.4 GHz and 4.8 GHz Very Long Baseline Array (VLBA) observations of these PG RQQs, while in Section \ref{sec:measurements} we present the properties of the nuclear radio emission derived from these observations. In Section \ref{sec:survey}, we describe the dependence of their observed radio properties on the Eddington ratio, while in Section \ref{sec:conclusions} we discuss the implications of these results.
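For reference, the two-point spectral index underlying the flat/steep classification above is straightforward to compute. A minimal sketch (the flux-density values in the usage note are hypothetical, not measurements from this paper):

```python
import math

def spectral_index(s1, nu1, s2, nu2):
    """Two-point spectral index alpha, defined by S_nu proportional to nu**alpha.
    Flux densities and frequencies must be in matching units (e.g., mJy and GHz)."""
    return math.log(s1 / s2) / math.log(nu1 / nu2)

def classify(alpha, cut=-0.5):
    """Flat/steep classification with the alpha = -0.5 dividing line."""
    return "steep" if alpha < cut else "flat"
```

For example, hypothetical flux densities of 2.0 mJy at 4.8 GHz and 1.0 mJy at 8.5 GHz give $\alpha \approx -1.21$, which would be classified as steep.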
\begin{table}[tb] \caption{Properties of PG Radio-Quiet Quasars observed with the VLBA} \begin{center} \begin{tabular}{@{}lccccccc@{}} \hline \hline Name & $\alpha_{\rm J2000}$ & $\delta_{\rm J2000}$ & z & $r_{\rm Edd}$ & $\alpha_{4.8-8.5}^{\rm VLA}$ & $S_{4.8}^{\rm VLA}$ [mJy] & Ref.\\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) \\ \hline PG 0050+124 & 00:53:34.940 & +12:41:36.20 & 0.060 & 1.07 & $-$1.45 & $2.41 \pm 0.12$ & \tablenotemark{a},\tablenotemark{b},\tablenotemark{c} \\ PG 0052+251 & 00:54:52.120 & +25:25:39.00 & 0.155 & 0.93 & $+$0.21 & $0.68 \pm 0.04$ & \tablenotemark{a},\tablenotemark{d} \\ PG 1149-110 & 11:52:03.540 & -11:22:24.30 & 0.050 & 0.20 & $+$0.48 & $2.27 \pm 0.05$ & \tablenotemark{a},\tablenotemark{d} \\ PG 1440+356 & 14:42:07.463 & +35:26:22.92 & 0.077 & 2.70 & $-$1.88 & $1.24 \pm 0.07$ & \tablenotemark{a},\tablenotemark{b} \\ PG 1612+261 & 16:14:13.203 & +26:04:16.20 & 0.131 & 0.39 & $-$1.57 & $5.58 \pm 0.08$ & \tablenotemark{a},\tablenotemark{b},\tablenotemark{d} \\ PG 1613+658 & 16:13:57.179 & +65:43:09.58 & 0.139 & 0.08 & $+$1.06 & $3.03 \pm 0.07$ & \tablenotemark{a}\\ PG 2130+099 & 21:32:27.813 & +10:08:19.46 & 0.062 & 0.85 & $-$1.40 & $2.18 \pm 0.07$ & \tablenotemark{a},\tablenotemark{b},\tablenotemark{d} \\ PG 2304+042 & 23:07:02.912 & +04:35:57.22 & 0.042 & 0.03 & $+$0.67 & $0.77 \pm 0.07$ & \tablenotemark{a}\\ \hline \hline \end{tabular} \end{center} \tablenotetext{a}{\cite{1989AJ.....98.1195K}} \tablenotetext{b}{\cite{1996AJ....111.1431B}} \tablenotetext{c}{\cite{1998MNRAS.297..366K}} \tablenotetext{d}{\cite{2006A&A...455..161L}} \tablecomments{(1): Name of galaxy in Palomar-Green Catalog \cite{1986ApJS...61..305G}. (2)(3): Right Ascension $\alpha_{\rm J2000}$ and Declination $\delta_{\rm J2000}$, respectively, of the PG RQQs. (4): z, redshift. Columns 2, 3, and 4 were obtained from NED.
The NASA/IPAC Extragalactic Database (NED) is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. (5): The Eddington ratio, as calculated by \cite{laor19}. (6): The spectral index between the 4.8 and 8.5 GHz peak flux densities measured in VLA observations, as calculated by \cite{laor19}. (7): The average 4.8 GHz total flux density measured in past VLA observations of the targets. (8): References for the VLA 4.8 GHz flux densities.} \label{Extended Table 1} \end{table}

\section{VLBA Data Reduction} \label{sec11}

In this section, we describe the reduction of the 1.4 and 4.8 GHz Very Long Baseline Array (VLBA) observations of the eight Palomar-Green Radio-Quiet Quasars (PG RQQs) listed in Table \ref{Extended Table 1}. The properties of these VLBA observations are provided in Table \ref{Extended Table 2}. Each observation used all ten main VLBA stations, with the data from each station recorded using the Mark6 VLBI system \citep{2013PASP..125..196W} with 2-bit sampling at a rate of 4 Gbps, and then correlated at the remote Array Operations Center. At both 1.4 and 4.8 GHz, we used four 128 MHz, dual-polarization, intermediate frequency (IF) bands. For the 1.4 GHz datasets, these IFs were centered at 1376, 1504, 1632, and 1760 MHz. For the 4.8 GHz datasets, the IFs were centered at 4612, 4740, 4868, and 4996 MHz. Each observation was $\sim$3 hours long, and all began and ended with two-minute scans of a strong calibrator source (``Fringe Finder''; Table \ref{Extended Table 2}) at each observing frequency. Between the Fringe Finder scans, we alternated between 30-second scans of the phase calibrator and two-minute scans of the RQQ target at 1.4 GHz, and switched to 4.8 GHz halfway through the observation time.
As shown in Table \ref{Extended Table 2}, for each target the phase calibrator was located $\sim0.\!\!^{\circ}4 - 2.\!\!^{\circ}0$ away, well within the $\sim5^\circ$ separation required for phase referencing to work. This observing strategy yielded $\sim1$ hour of total integration time on the target at each frequency.
\begin{table}[tb]
\caption{Properties of VLBA Observations}
\begin{center}
\begin{tabular}{@{}lccccc@{}}
\toprule
Name & Observation & Fringe & Phase & Angular & Integration \\
 & Date & Finder & Calibrator & Separation & Time [mins] \\
(1) & (2) & (3) & (4) & (5) & (6) \\
\hline
PG 0050+124 & 2020 May 05 & 3C454.3 & J0055+1408 & $1.\!\!^{\circ}5$ & 58.7, 58.7 \\
PG 0052+251 & 2020 May 16 & 3C454.3 & J0054+2550 & $0.\!\!^{\circ}4$ & 58.8, 58.6 \\
PG 1149-110 & 2020 May 31 & 4C39.25 & J1153-1105 & $0.\!\!^{\circ}4$ & 58.6, 58.6 \\
PG 1440+356 & 2020 Jun. 12 & 3C345 & J1438+3710 & $1.\!\!^{\circ}9$ & 58.6, 58.6 \\
PG 1612+261 & 2020 May 12 & 3C345 & J1610+2414 & $2.\!\!^{\circ}0$ & 58.8, 58.7 \\
PG 1613+658 & 2020 May 11 & J1642+3948 & J1623+6624 & $1.\!\!^{\circ}2$ & 58.7, 58.7 \\
PG 2130+099 & 2020 May 05 & J2005+7752 & J2130+0843 & $1.\!\!^{\circ}5$ & 58.6, 58.7 \\
PG 2304+042 & 2020 May 07 & 3C454.3 & J2300+0337 & $1.\!\!^{\circ}9$ & 58.9, 58.7 \\
\botrule
\end{tabular}
\end{center}
\tablecomments{(1): PG name obtained from the Palomar-Green Catalog \citep{1986ApJS...61..305G}. (2): Date of observation. (3): Source used to find the fringes and calibrate the antenna bandpasses and gains. (4): Source used to calibrate the measured phases and amplitudes. Both calibrator sources were selected from the NRAO VLBA calibrator database. (5): Angular separation between the PG galaxy and the phase calibrator. (6): On-source integration time at 1.4 (left) and 4.8 GHz (right).} \label{Extended Table 2} \end{table}
The data were edited and calibrated using the Astronomical Image Processing System (AIPS; \citealt{wells85, greisen90}).
We first corrected the recorded visibilities for ionospheric delays (VLBATECR) and errors in the Earth Orientation Parameters (VLBAEOPS), and then corrected for amplitude errors resulting from digital sampling (VLBACCOR). We then used observations of the fringe finder to solve for the antenna phase delays (VLBAMPCL) and calculate the bandpass of each antenna (VLBABPSS). Furthermore, we inspected the data on all baselines (pairs of antennas), IFs, and sources for abnormalities which, once identified, were removed. At 1.4 GHz, this necessitated a careful inspection of the visibilities for radio frequency interference (RFI), which was removed from the data using the AIPS task SPFLG. The FRING task was then used to find the group delay and phase rate that maximized the fringes observed from, first, the fringe finder, and then from both the fringe finder and the phase calibrator. At 4.8 GHz, these initial solutions were improved upon by ``self-calibrating'' the data obtained on the phase calibrator. At 1.4 GHz, self-calibration did not lead to a significant improvement in image quality, due to the increased atmospheric coherence timescale at this longer wavelength, and therefore was not performed. Upon completion of the various calibration steps mentioned above, we used the AIPS task CLCAL to calculate the final calibration tables for the target, which were then applied to the data using the AIPS task SPLIT. Once calibrated, we imaged the recorded visibility data at each frequency using the AIPS task IMAGR. This program uses the CLEAN algorithm \citep{hogbom74, clark80} to deconvolve the ``dirty image'' (produced by taking the 2D Fourier transform of the visibility data) with the point spread function (PSF; ``dirty beam''). This requires choosing the relative weight of data collected from different baselines, and we chose natural weighting (i.e., ``robust''=5; \citealt{briggs95}), which maximizes sensitivity at the expense of angular resolution.
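The deconvolution step performed by IMAGR can be illustrated with a minimal one-dimensional sketch of the H{\"o}gbom CLEAN loop. This is our own toy code, not the AIPS implementation: real CLEAN operates on two-dimensional images, with the gain, threshold, and dirty beam taken from the data.

```python
def hogbom_clean(dirty, psf, gain=0.1, threshold=1e-3, max_iter=1000):
    """Toy 1D Hogbom CLEAN: repeatedly locate the residual peak, subtract
    a gain-scaled, shifted copy of the dirty beam (`psf`, peak-normalized
    and centred at index len(psf)//2), and record the subtracted flux as
    a clean component at that position."""
    residual = list(dirty)
    components = [0.0] * len(dirty)
    centre = len(psf) // 2
    for _ in range(max_iter):
        peak_i = max(range(len(residual)), key=lambda i: abs(residual[i]))
        peak = residual[peak_i]
        if abs(peak) < threshold:  # stop once the residual is noise-like
            break
        components[peak_i] += gain * peak
        for j, p in enumerate(psf):
            k = peak_i + j - centre
            if 0 <= k < len(residual):
                residual[k] -= gain * peak * p
    return components, residual

# A unit point source at pixel 5 seen through a beam with 30% sidelobes:
# hogbom_clean([0, 0, 0, 0, 0.3, 1.0, 0.3, 0, 0, 0], [0.3, 1.0, 0.3])
# recovers ~1.0 of flux in components[5], with residuals below threshold.
```

In the full algorithm, the clean components are then convolved with an idealized (Gaussian) restoring beam and added back to the residuals to form the final image.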
The resultant images, which have a spatial resolution of $\sim5-20~{\rm pc}$, are shown in Figure \ref{figure:1} while their properties are given in Table \ref{Extended Table 3}. \section{Properties of Nuclear Radio Emission} \label{sec:measurements} As described in \S\ref{sec1}, distinguishing between radio emission produced by an accretion disk and an accretion-powered outflow requires measuring the spectrum and morphology of the pc-scale emission probed by the VLBA observations discussed in \S\ref{sec11}. Below, we describe the criteria needed for determining if a PG RQQ was detected in these images (\S\ref{sec:detection}) and, if so, measuring the flux density and angular distribution of this emission (\S\ref{sec:modeling}). \begin{table}[tb] \caption{Properties of the VLBA Images of the Observed PG RQQs} \begin{center} \begin{tabular}{@{}lccccccc@{}} \toprule \multirow{3}{*}{Name} & \multirow{3}{*}{$r_{\rm Edd}$}& & 1.4 GHz $\left[\frac{\rm \mu Jy}{\rm beam} \right]$ & & & 4.8 GHz $\left[\frac{\rm \mu Jy}{\rm beam} \right]$ & \\ & & $\sigma$ & $I_{\rm max}$ & $I_{\rm min}$ & $\sigma$ & $I_{\rm max}$ & $I_{\rm min}$ \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8)\\ \hline \textbf{PG 2304+042} & \textbf{0.03} & \textbf{68} & \textbf{613} & \textbf{-260} & \textbf{18} & \textbf{273} & \textbf{-91} \\ PG 1613+658 & 0.08 & 27 & 133 & -134 & 17 & 74 & -73 \\ \textbf{PG 1149-110} & \textbf{0.20} & \textbf{47} & \textbf{629} & \textbf{-122} & \textbf{21} & \textbf{425} & \textbf{-83} \\ \textbf{PG 0052+251} & \textbf{0.21} & \textbf{30} & \textbf{297} & \textbf{-102} & \textbf{16} & \textbf{244} & \textbf{-70} \\ \hline \hline \textbf{PG 1612+261} & \textbf{0.39} & \textbf{29} & \textbf{366} & \textbf{-139} & 17 & 75 & -78 \\ PG 2130+099 & 0.85 & 31 & 151 & -135 & 15 & 80 & -70 \\ \textbf{PG 0050+124} & \textbf{1.07} & \textbf{37} & \textbf{813} & \textbf{-118} & 17 & 91 & -75 \\ PG 1440+356 & 2.70 & 25 & 108 & -111 & 17 & 72 & -70 \\ \botrule \end{tabular} 
\end{center} \tablecomments{(1): PG name of target galaxy \citep{1986ApJS...61..305G}, listed in order of increasing Eddington ratio $r_{\rm Edd}$ (2). (3)(6): The root-mean-squared ($\sigma$) of pixel values in the 1.4 and 4.8 GHz images, respectively. (4)(7): The maximum pixel value $I_{\rm max}$ for the 1.4 and 4.8 GHz images. (5)(8): The minimum pixel value $I_{\rm min}$ for the 1.4 and 4.8 GHz images. The rows in bold indicate that a source was detected in the image using the criteria defined in \S\ref{sec:detection}.} \label{Extended Table 3} \end{table} \subsection{VLBA Detections} \label{sec:detection} If an image contains no emission from an astronomical source, the distribution of its pixel intensities should be well described by a Gaussian with an average much smaller than the root-mean-squared ($\sigma$) intensity. The symmetric nature of this distribution results in comparable absolute values for the maximum $I_{\rm max}$ and minimum $I_{\rm min}$ pixel intensities (``brightness''), though these values will likely be significantly larger than the noise level due to the large number of pixels\footnote{The 1.4 GHz images have $2048\times2048$ pixels, while the 4.8 GHz images have $1024\times1024$ pixels.} in each image ($I_{\rm max} \approx |I_{\rm min}| \gg \sigma$). However, an image where emission from a source is detected will have $I_{\rm max} > |I_{\rm min}| > \sigma$, and therefore we require a detection to have a peak intensity $I_{\rm max}\geq5\sigma$ and $I_{\rm max} \geq 2|I_{\rm min}|$. As shown in Table \ref{Extended Table 3}, these criteria indicate that emission is detected in the 1.4 GHz VLBA images of five RQQs, three of which are also detected in their 4.8 GHz VLBA images. Since PG 1612$+$261 and PG 0050$+$124 were only detected at 1.4 GHz according to these criteria, we examined their 4.8 GHz images for emission spatially coincident with that observed at the lower frequency.
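These two thresholds can be applied directly to the peak, minimum, and rms values listed in Table \ref{Extended Table 3}. A sketch (the helper name is ours):

```python
def vlba_detection(i_max, i_min, sigma):
    """Detection criteria of this section: a source is considered
    detected when I_max >= 5*sigma and I_max >= 2*|I_min|.
    All intensities in the same units (here, uJy/beam)."""
    return i_max >= 5.0 * sigma and i_max >= 2.0 * abs(i_min)

# Values from the image-properties table (uJy/beam):
vlba_detection(613, -260, 68)   # PG 2304+042, 1.4 GHz -> True (bold row)
vlba_detection(133, -134, 27)   # PG 1613+658, 1.4 GHz -> False
vlba_detection(75, -78, 17)     # PG 1612+261, 4.8 GHz -> False
```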
No such emission was detected in the 4.8 GHz image of PG 1612+261 (Fig.\ \ref{figure:1}), but a $\sim4-5\sigma$ excess was detected in the 4.8 GHz image of PG 0050+124 (Fig.\ \ref{multi_comp}). While this does not satisfy the detection criteria described above, the statistical significance of this spatial coincidence suggests that it is 4.8 GHz emission from this RQQ, and it is treated as such below. \subsection{Properties of Detected Nuclear Radio Emission} \label{sec:modeling} For the RQQs detected in our VLBA images, we used the AIPS task JMFIT to fit the pixel intensities in the region around the image peak with a 2D Gaussian model, whose free parameters are the centroid location, peak intensity, major and minor axes, and orientation (position angle). JMFIT then calculates the integrated flux density using these values and the size of the synthesized beam as determined by the task IMAGR. A single 2D Gaussian was sufficient to model the observed emission for all RQQ detections but PG 0050+124, which required two separate Gaussian components at both 1.4 and 4.8 GHz (Fig.\ \ref{multi_comp}). \subsubsection{Position of Nuclear Radio Emission} \label{sec:position} \begin{table}[tb] \caption{Positional Offsets and Uncertainties of Nuclear Radio Emission} \begin{center} \begin{tabular}{cccccc} \hline \hline \multirow{3}{*}{Name} & \multirow{3}{*}{$r_{\rm Edd}$} & \multicolumn{2}{c}{JMFIT Offsets} & \multicolumn{2}{c}{Optical position} \\ & & $\Delta_{\alpha \cos \delta}$ & $\Delta_\delta$ & $\sigma_{\rm pos}$ & \multirow{2}{*}{Citation} \\ & & [mas] & [mas] & [mas] & \\ (1) & (2) & (3) & (4) & (5) & (6) \\ \hline PG 2304+042 & 0.03 & -2.6 & +3.7 & $\sim500$ & \citet{klemola87} \\ PG 1149$-$110 & 0.20 & +3.2 & -3.4 & $\sim500$ & B. Skiff (private comm. to NED) \\ PG 0052+251 & 0.21 & -3.6 & +2.4 & $\sim500$ & B. Skiff (private comm.
to NED) \\ \hline \hline PG 1612+261 & 0.39 & +83 & +25 & $\sim500$ & \citet{sdssdr6}\\ PG 0050+124-C1 & \multirow{2}{*}{1.07} & +61 & +31 & \multirow{2}{*}{$\sim250$} & \multirow{2}{*}{\citet{clements81}} \\ PG 0050+124-C2 & & +54 & +29 \\ \hline \hline \end{tabular} \end{center} \tablecomments{(1): PG name of target galaxy \citep{1986ApJS...61..305G}. (2): Eddington ratio of target galaxy \citep{laor19}. (3)(4): Offset between the centroid of the VLBA emission, as determined using JMFIT, and the optical center of the PG galaxy as provided in NED (Table \ref{Extended Table 1}). For all but PG 1612+261, this was done using their 4.8 GHz images. The statistical errors in these fits are all $\sim0.1~{\rm mas}$. (5): $1\sigma$ uncertainty in the optical position of the galaxy. (6): Reference for the uncertainty in the optical position.} \label{tab:vlba_pos} \end{table} As listed in Table \ref{tab:vlba_pos}, the radio emission detected in our VLBA observations appears displaced from the optical center of these galaxies, with the positions derived from the image fitting described above suggesting offsets ranging from $\lesssim5~{\rm mas}$ to $\sim70-100~{\rm mas}$ (Table \ref{tab:vlba_pos}). Such an offset could result from errors in the absolute astrometry of our VLBA images. However, using the results of a study conducted by \citet{pradel06} to determine the positional uncertainty arising from the ``phase referencing'' technique described in \S\ref{sec11}, we obtain errors $\lesssim0.25~{\rm mas}$ -- significantly smaller than what is observed. Alternatively, the observed offsets could also result from the uncertainty in the optical position used for the phase center of our VLBA observations, which was taken from the NASA/IPAC Extragalactic Database (NED).
As listed in Table \ref{tab:vlba_pos}, this uncertainty is considerably larger than the observed offsets, and therefore we cannot conclude whether the nuclear radio emission is displaced from the center of these galaxies, as observed in other quasars with more precise optical positions (e.g., \citealt{kovalev17, yao21}). However, we note that the observed offsets ($\sim60-90~{\rm mas}$) for the higher Eddington ratio ($r_{\rm Edd} \gtrsim 0.3$) quasars are significantly larger than the offsets ($\lesssim5~{\rm mas}$) for the low Eddington ratio ($r_{\rm Edd} \lesssim 0.3$) quasars. \subsubsection{Integrated Flux Density and Spectrum of Nuclear Radio Emission} \label{sec:spectrum} The resultant integrated flux densities of these components are listed in Table \ref{Extended Table 5}. However, the measured flux densities only include emission from the angular scales probed by the individual pairs of antennas, or baselines, in this array. As described below, this has considerable implications when calculating the spectral index $\alpha$ of a source, which requires measuring this flux density at two frequencies.
\begin{table}[tb]
\caption{Flux density and Spectral Index of the Detected VLBA Radio Emission}
\begin{center}
\begin{tabular}{cccccc}
\toprule
Name & $\frac{b_{\rm min}}{1000 \lambda}$ & $\theta_{\rm las} [{\rm mas}]$ & $S_{\rm 1.4}$ [${\rm mJy}$] & $S_{\rm 4.8}$ [${\rm mJy}$] & $\alpha_{\rm compact}$\\
(1) & (2) & (3) & (4) & (5) & (6) \\
\hline
\multirow{2}{*}{PG 2304+042} & 700 & $\sim 300$ & $0.53 \pm 0.08$ & $\cdots$ & $\cdots$ \\
 & 2500 & $\sim 85$ & $0.55 \pm 0.07$ & $0.49 \pm 0.04$ & $-0.09 \pm 0.12$ \\
\hline
\multirow{2}{*}{PG 1149-110} & 800 & $\sim 260$ & $1.06 \pm 0.11$ & $\cdots$ & $\cdots$ \\
 & 2800 & $\sim 75$ & $0.86 \pm 0.11$ & $0.59 \pm 0.04$ & $-0.30 \pm 0.11$ \\
\hline
\multirow{2}{*}{PG 0052+251} & 860 & $\sim 240$ & $0.42 \pm 0.07$ & $\cdots$ & $\cdots$ \\
 & 3000 & $\sim 70$ & $0.35 \pm 0.06$ & $0.30 \pm 0.03$ & $-0.13 \pm 0.16$ \\
\hline \hline
\multirow{2}{*}{PG 1612+261} & 1000 & $\sim 205$ & $0.95 \pm 0.11$ & $\cdots$ & $\cdots$ \\
 & 3500 & $\sim 60$ & $0.58 \pm 0.08$ & $<0.08$ & $<-1.53$ \\
\hline
\multirow{2}{*}{PG 0050+124-C1} & 860 & $\sim 240$ & $2.66 \pm 0.15$ & $\cdots$ & $\cdots$ \\
 & 3000 & $\sim 70$ & $1.42 \pm 0.10$ & $0.31 \pm 0.07$ & $-1.21 \pm 0.19$ \\
\hline
\multirow{2}{*}{PG 0050+124-C2} & 860 & $\sim 240$ & $0.72 \pm 0.12$ & $\cdots$ & $\cdots$ \\
 & 3000 & $\sim 70$ & $0.24 \pm 0.07$ & $0.24 \pm 0.05$ & $0 \pm 0.29$ \\
\botrule
\end{tabular}
\end{center}
\tablecomments{(1): PG name of target, listed in order of increasing Eddington ratio $r_{\rm Edd}$. (2): Minimum baseline length $b_{\rm min}$ of the data included in the image, in units of $1000\times$ the observing wavelength $\lambda$ (equivalent to the ``uvrange'' parameter in IMAGR used to make these images). Differences between targets reflect their different positions on the sky during their observations. (3): The largest angular scale $\theta_{\rm las}$ of the resultant image, as calculated using Equation \ref{eqn:theta_las}.
(4)(5): Integrated 1.4 and 4.8 GHz flux densities of the source in the corresponding image. For non-detections, a $5\sigma$ upper limit is provided. (6): Spectral index $\alpha_{\rm compact}$, calculated using the integrated 1.4 and 4.8 GHz flux densities measured in images with the same largest angular scale.} \label{Extended Table 5} \end{table} For a baseline of (projected) length\footnote{The projected length is defined to be the distance between two antennas as seen from the direction of the source.} $b$, the measured visibility is the intensity emitted on an angular scale $\theta \sim \frac{\lambda}{b} = \frac{c}{b\nu}$, where $\lambda$ is the observed wavelength, $\nu$ is the observed frequency, and $c$ is the speed of light. Therefore, an observation on the same baseline $b$ at two different frequencies measures the intensity emitted not only at different frequencies but also on different angular scales -- with observations at 4.8 GHz ($\lambda \approx 6~{\rm cm}$) measuring the intensity on scales $\sim3.5\times$ smaller (i.e., more compact) than those measured at 1.4 GHz ($\lambda \approx 20~{\rm cm}$). Since our 1.4 and 4.8 GHz VLBA observations (Table \ref{Extended Table 2}) were performed using the same array, and therefore the same distribution of baselines, the emission detected at each frequency originates from different, but overlapping, ranges of angular scales. As a result, differences in the measured 1.4 and 4.8 GHz flux densities of a source reflect not only its intrinsic spectrum (i.e., intensity as a function of frequency $\nu$) but also its morphology (i.e., intensity as a function of angular scale $\theta$). Measuring the intrinsic spectral index $\alpha$ of these sources requires measuring their flux density $S_\nu$ at different frequencies $\nu$ but on the same range of angular scales $\theta$. This requires producing 1.4 and 4.8 GHz images sensitive to the same range of angular scales.
The flux density measured from an image produced at frequency $\nu$ using only data from baselines longer than $b_{\rm min,\nu}$ includes only emission originating from angular scales $\theta \lesssim \theta_{\rm las}$, where the largest angular scale is: \begin{eqnarray} \label{eqn:theta_las} \theta_{\rm las} & \sim & \frac{\lambda}{b_{\rm min,\nu}} = \frac{c}{b_{\rm min,\nu} \nu}. \end{eqnarray} Therefore, producing 1.4 and 4.8 GHz images with the same $\theta_{\rm las}$ requires that the shortest baseline used at 1.4 GHz $(b_{\rm min,1.4})$ be $\frac{4.8}{1.4}\approx3.5\times$ longer than the shortest baseline used at 4.8~GHz $(b_{\rm min,4.8})$. To maximize the data used in this analysis, we set $b_{\rm min,4.8} = b_{\rm min}$, the projected length of the shortest baseline during a particular VLBA observation. We then made a 1.4 GHz image using only data from baselines with length $b \gtrsim 3.5b_{\rm min}$ by setting the ``uvrange'' parameter in the AIPS task IMAGR (\S \ref{sec11}) to the appropriate value (given in Table \ref{Extended Table 5}). The flux density of the source in this 1.4 GHz image was measured using the same image fitting routine described above, with the resultant values listed in Table \ref{Extended Table 5}. We then combined this 1.4 GHz flux density with the 4.8 GHz flux density measured in an image produced using all baselines to calculate the spectral index $\alpha_{\rm compact}$ (flux density $S_\nu \propto \nu^\alpha$), also listed in Table \ref{Extended Table 5}.
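Equation \ref{eqn:theta_las} and the matched-scale spectral index reduce to two short calculations. The sketch below (function names ours) reproduces the PG 1149-110 entries of Table \ref{Extended Table 5}:

```python
import math

def theta_las_mas(b_min_klambda):
    """Largest angular scale ~ lambda / b_min (Equation 1). With the
    baseline length expressed in kilo-wavelengths, as in column (2) of
    the flux-density table, the observing frequency cancels out."""
    theta_rad = 1.0 / (b_min_klambda * 1e3)
    return math.degrees(theta_rad) * 3.6e6  # radians -> milliarcseconds

def spectral_index(s_14, s_48, nu_14=1.4, nu_48=4.8):
    """Two-point spectral index alpha, with S_nu ~ nu**alpha, from flux
    densities measured on images with the same largest angular scale."""
    return math.log(s_48 / s_14) / math.log(nu_48 / nu_14)

# PG 1149-110: b_min = 2800 klambda -> theta_las ~ 74 mas (table: ~75),
# and alpha = log(0.59/0.86)/log(4.8/1.4) ~ -0.31 (table: -0.30 +/- 0.11)
```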
\begin{table}[tb] \caption{Deconvolved Image Sizes} \begin{center} \begin{tabular}{cccccccc} \toprule \multirow{2}{*}{Name} & \multirow{2}{*}{$r_{\rm Edd}$} & $\nu$ & $\theta_{\rm M} \times \theta_{\rm m}$ & \multirow{2}{*}{$\frac{\rm pc}{\rm mas}$} & $d_{\rm M} \times d_{\rm m}$ & $A_{\rm proj}$ & 1.4 GHz \\ & & [GHz] & ${\rm mas}\times{\rm mas}$ & & ${\rm pc}\times{\rm pc}$ & pc$^{2}$ & Compactness \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) \\ \hline PG2304+042 & 0.03 & 4.8 & $4.0\times2.3$ & 0.83 & $3.4\times1.9$ & 5 & $1.0\pm0.2$\\ \hline \multirow{2}{*}{PG1149-110} & \multirow{2}{*}{0.20} & 1.4 & $13.8\times6.7$ & \multirow{2}{*}{0.98} & $13.6\times6.5$ & 70 & \multirow{2}{*}{$0.8\pm0.1$}\\ & & 4.8 & $4.3\times1.9$ & & $4.3\times1.9$ & 6 & \\ \hline \multirow{2}{*}{PG0052+251} & \multirow{2}{*}{0.21} & 1.4 & $8.5\times5.3$ & \multirow{2}{*}{2.71} & $23\times14$ & 250 & \multirow{2}{*}{$0.8\pm0.2$}\\ & & 4.8 & $2.4\times0.3$ & & $6.5\times0.9$ & 4.5 & \\ \hline \hline PG1612+261 & 0.39 & 1.4 & 11.2$\times$7.3 & 2.35 & 26.2$\times$17.2 & 355 & $0.6\pm0.1$ \\ \hline \multirow{2}{*}{PG0050+124-C1} & \multirow{4}{*}{1.07} & 1.4 & $16.2\times11.7$ & \multirow{4}{*}{1.17} & $18.9\times13.6$ & 200 & \multirow{2}{*}{$0.50\pm0.05$}\\ & & 4.8 & $10.1\times2.7$ & & $11.8\times3.2$ & 30 & \\ \multirow{2}{*}{PG0050+124-C2} & & 1.4 & $24.9\times5.6$ & & $29.0\times6.5$ & 150 & \multirow{2}{*}{$0.3\pm0.1$} \\ & & 4.8 & $5.7\times2.8$ & & $6.6\times3.3$ & 20 & \\ \botrule \end{tabular} \end{center} \tablecomments{(1): PG name of target quasar. (2): Eddington ratio. (3): Frequency of image. (4): Deconvolved major $\theta_{\rm M}$ and minor $\theta_{\rm m}$ axis calculated by JMFIT for a particular image where possible. PG 2304+042 is unresolved at 1.4 GHz, and PG 1612+261 was undetected at 4.8 GHz. 
(5): The number of parsecs corresponding to an angular size of 1~mas at the angular distance of the galaxy, as calculated using \citet{wright06} for $H_0 = 69.6~{\rm km~s^{-1}~Mpc^{-1}}$, $\Omega_m=0.286$, $\Omega_\Lambda=0.714$ \citep{bennett14}. (6): Major $d_{\rm M}$ and minor $d_{\rm m}$ axes in parsec. (7): Projected physical area $A_{\rm proj}$ of the source assuming it is an ellipse. (8): 1.4 GHz Compactness, calculated via Equation \ref{eqn:compactness} from the quantities given in Table \ref{Extended Table 5}. } \label{tab:area} \end{table} \subsubsection{Extent of Nuclear Radio Emission} \label{sec:extent} If the observed emission originates from a region considerably larger than the synthesized beam, then JMFIT (\S\ref{sec:modeling}) returns its deconvolved angular size. Since this quantity accounts for the size of the synthesized beam in the image, it should be the same at 1.4 and 4.8 GHz if the same physical region is responsible for the emission observed at both frequencies. However, as listed in Table \ref{tab:area}, this is not the case, with the deconvolved size at 1.4~GHz $\sim5-50\times$ larger than at 4.8 GHz for sources resolved at both frequencies. This discrepancy results from the different angular scales probed by VLBA observations at different frequencies, as discussed in \S\ref{sec:spectrum}, and the larger size observed at 1.4 GHz indicates emission on scales larger than those detectable at 4.8 GHz. Therefore, the deconvolved size is not the physical extent of the emission region, and we require a different way of estimating this quantity. The dependence of the measured flux density on baseline length at a particular frequency allows us to quantitatively measure the distribution of emission on different angular scales, i.e., its morphology.
We do so by comparing the 1.4 GHz integrated flux density measured in an image made using all VLBA baselines, $S_{1.4}(>b_{\rm min})$, to that measured in an image produced using only the baselines probing angular scales also measured at 4.8~GHz, $S_{1.4}(>3.5b_{\rm min})$. If the emitting region is infinitesimally small, its intensity will be the same on all baselines, and therefore $S_{1.4}(>3.5b_{\rm min}) \approx S_{1.4}(>b_{\rm min})$. However, a large emission region will produce a higher intensity on shorter baselines (i.e., larger angular scales) than on longer baselines (i.e., smaller angular scales), and therefore $S_{1.4}(>3.5b_{\rm min}) < S_{1.4}(>b_{\rm min})$. As a result, the ratio of these two flux densities is inversely correlated with the extent of the emitting region. We therefore define the ``1.4 GHz Compactness'' of a source to be: \begin{eqnarray} \label{eqn:compactness} {\rm 1.4~GHz~Compactness} & \equiv & \frac{S_{1.4}(>3.5b_{\rm min})}{S_{1.4}(>b_{\rm min})}, \end{eqnarray} such that $0\lesssim {\rm 1.4~GHz~Compactness} \lesssim 1$, and a more compact source (with a higher fraction of its total emission originating from a smaller volume) will have a larger 1.4~GHz Compactness than a more diffuse source. We calculated this quantity using the flux densities given in Table \ref{Extended Table 5}, with the results listed in Table \ref{tab:area}. \section{Eddington Ratio Dependence of PG RQQ Radio Properties} \label{sec:survey} Using the morphological and spectral properties of the radio emission of these RQQs described in \S\ref{sec:measurements}, we can now disentangle emission from the innermost accretion disk (compact, with a flat $\alpha \sim 0$ spectrum) and an accretion-powered outflow (diffuse, with a steep $\alpha \lesssim -0.5$ spectrum). In this Section, we discuss the different origins of the radio emission observed from these eight PG RQQs, and how their properties change with the Eddington ratio $r_{\rm Edd}$ of these AGN.
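As a concrete check, Equation \ref{eqn:compactness} applied to the flux densities of Table \ref{Extended Table 5} reproduces the compactness values of Table \ref{tab:area} (the helper name is ours):

```python
def compactness_14(s_14_long, s_14_all):
    """Equation (2): 1.4 GHz flux density recovered on the long
    (> 3.5*b_min) baselines divided by that recovered on all baselines;
    ~1 for a point-like source, approaching 0 for diffuse emission."""
    return s_14_long / s_14_all

# From the flux-density table (mJy):
compactness_14(0.86, 1.06)   # PG 1149-110    -> ~0.81 (table: 0.8 +/- 0.1)
compactness_14(1.42, 2.66)   # PG 0050+124-C1 -> ~0.53 (table: 0.50 +/- 0.05)
```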
While our 4.8 GHz VLBA observations of these PG RQQs can only detect emission originating in regions $\lesssim0.1~{\rm kpc}$ in size, previous VLA observations measured the total emission from regions $\lesssim5~{\rm kpc}$ in size. Since these past 4.8 GHz VLA observations measured flux densities ($S_{4.8}^{\rm VLA}$; Table \ref{Extended Table 1}) greatly exceeding the noise level $(>30\sigma)$ of our 4.8 GHz VLBA images (Table \ref{Extended Table 3}), a VLBA non-detection of a particular RQQ indicates that its 4.8 GHz radio emission originates from a region $\gtrsim0.1~{\rm kpc}$ in size, and/or has significantly decreased during the $\sim20-30$ years since the VLA observation, with the short variability timescale requiring a compact emission region. We are unable to distinguish between these two possibilities for the only low Eddington ratio PG RQQ not detected in our VLBA observations (PG 1613+658), since there exists only one previous 4.8 GHz VLA observation of this source. However, all of the high Eddington ratio PG RQQs undetected in our 4.8 GHz VLBA observations (PG 1440+356, PG 1612+261, and PG 2130+099) were detected with nearly constant flux densities in multiple 4.8 GHz VLA observations (Table \ref{Extended Table 1}), suggesting that their radio emission primarily originates from a large region and is therefore produced by an outflow. Similarly, the detection of 4.8 GHz VLBA emission from a PG RQQ suggests a significant contribution from compact ($\lesssim0.1~{\rm kpc}$) regions. The higher detection rate of low Eddington ratio RQQs in our 4.8 GHz VLBA observations suggests that a higher fraction of their radio emission originates from smaller regions than in their high Eddington ratio counterparts. To further investigate this dependence, we calculate the ratio of a RQQ's 4.8 GHz VLBA to VLA flux densities, which constrains the fraction of the AGN's total radio emission originating from its nucleus (defined as the central $\lesssim0.1~{\rm kpc}$ of the galaxy in this work).
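This comparison is a simple flux ratio, sketched below with the tabulated 4.8 GHz values (the helper name is ours; because the VLA and VLBA epochs differ by decades, variability means the ratio only constrains, rather than measures, the nuclear fraction):

```python
def nuclear_fraction(s_48_vlba, s_48_vla):
    """Fraction of the total 4.8 GHz emission (VLA, <~5 kpc scales)
    recovered in the nuclear region (VLBA, <~0.1 kpc scales)."""
    return s_48_vlba / s_48_vla

# Low Eddington ratio example, from the VLBA and VLA flux tables (mJy):
nuclear_fraction(0.49, 0.77)   # PG 2304+042 -> ~0.64
# High Eddington ratio example, using the 5-sigma VLBA upper limit:
nuclear_fraction(0.08, 5.58)   # PG 1612+261 -> < ~0.015
```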
As shown in Figure \ref{fig:nuc_ratio}, $\lesssim20\%$ of the total 4.8 GHz radio emission from high Eddington ratio RQQs originates from inside their nucleus -- indicating that their radio emission is predominantly produced by a larger scale outflow. However, the nucleus of low Eddington ratio RQQs can contribute as much as $\sim70\%$ of their total 4.8 GHz radio emission. The increased 4.8 GHz VLBA detection rate and VLBA-to-VLA flux ratio of low Eddington ratio RQQs suggest that nuclear radio emission is both more common and more prominent in such AGN. To determine if the nuclear properties (Section \ref{sec:modeling}) of the VLBA-detected AGN also depend on Eddington ratio, we plotted the 1.4 GHz Compactness (Equation \ref{eqn:compactness}) and compact spectral index $\alpha_{\rm compact}$ of their nuclear radio emission (Table \ref{Extended Table 5}) as a function of $r_{\rm Edd}$, as shown in Figure \ref{VS. EDD RATIO}. We find that the 1.4 GHz Compactness of nuclear radio sources decreases as the Eddington ratio increases (Figure \ref{VS. EDD RATIO}; {\it left}), with compact regions responsible for $\sim80\%-100\%$ of the total 1.4 GHz nuclear emission in the three low Eddington ratio RQQs detected in our VLBA observations, but only $\sim30\% - 60\%$ of the emission from the nuclear radio sources detected in the high Eddington ratio RQQs. We also find differences between the 1.4--4.8 GHz spectral index $\alpha_{\rm compact}$ of compact regions within the nucleus of low and high Eddington ratio RQQs. The low Eddington ratio RQQs all have flat spectra, as does the second component observed in the high Eddington ratio RQQ PG 0050+124 (PG 0050+124-C2; Fig.\ \ref{multi_comp}). However, the first component observed in PG 0050+124 (PG 0050+124-C1; Fig.\ \ref{multi_comp}) and the other high Eddington ratio RQQ detection both have extremely steep radio spectra.
The observed difference in the 1.4 GHz Compactness and radio spectral index of the nuclear emission of low and high Eddington ratio RQQs suggests a difference in physical origin. Using the measured compactness and spectral indices of the nuclear radio emission, we can determine if it originates in the innermost accretion disk, an outflow, or a mixture of both. As shown in the right panel of Fig.\ \ref{VS. EDD RATIO}, the observed compact nature and flat spectrum ($\alpha \sim 0$) of the nuclear emission from the detected low Eddington ratio RQQs -- PG 2304+042, PG 1149-110, and PG 0052+251 -- suggest that this emission primarily originates from the innermost regions of the accretion disk. This is further supported by the relatively small offsets between the radio emission in these sources and the optical center of the galaxy. The diffuse 1.4 GHz emission (low compactness) from the second component in the high Eddington ratio RQQ PG 0050+124 (PG 0050+124-C2; Fig.\ \ref{multi_comp}) is characteristic of an outflow, but its compact (4.8 GHz) core has a flatter radio spectrum than expected for such emission. This suggests that C2 may be the location of the supermassive black hole (SMBH), with the flat-spectrum emission from the disk corona embedded within the more diffuse, steep-spectrum radio emission generated by the outflow powered by this AGN (e.g., \citealt{an10, pushkarev12}). An alternate possibility is that C2 is a flat spectrum component of this larger scale outflow (e.g., \citealt{hovatta14}). Future VLBI observations phase centered on a more precise optical position (e.g., from the {\it GAIA} Data Release 3; \citealt{gdr3}) could distinguish between these possibilities, though the relatively large ($\sim60~{\rm mas}$) offset from the current position favors the latter.
Finally, the diffuse nature (low 1.4 GHz Compactness) and steep radio spectra ($\alpha \lesssim -1$) of the nuclear emission of the high Eddington ratio RQQ PG 1612+261 and the first component in the high Eddington ratio RQQ PG 0050+124 (PG 0050+124-C1; Fig.\ \ref{multi_comp}) suggest that both are produced by outflows. \section{Summary and Conclusions} \label{sec:conclusions} In this paper, we present the results of 1.4 and 4.8 GHz VLBA observations of eight PG RQQs spanning a wide range of Eddington ratios ($r_{\rm Edd} \sim 0.03 - 3$; Table \ref{Extended Table 1}). Our analysis indicates that the radio properties of the observed RQQs strongly depend on their Eddington ratio: as $r_{\rm Edd}$ increases, a smaller fraction of the total radio emission is generated in the nucleus, and the nuclear radio emission becomes increasingly diffuse, with a steeper radio spectrum. Furthermore, these differences are indicative of changes in the physical origin of the radio emission from these RQQs. At low Eddington ratios, the innermost accretion disk (i.e., the disk corona and/or jet base) is primarily responsible for the total radio emission. As the Eddington ratio increases, the size and relative contribution of outflows increase until they dominate the emission at the highest Eddington ratios. Such a dependence is not observed in radio-loud quasars, but similar results are obtained in studies of other manifestations of radio-quiet AGN, e.g., Narrow-Line Seyfert 1s (e.g., \citealt{doi15, yao21}). Therefore, our results likely hold for the broader population of radio-quiet AGN. Furthermore, a similar behavior is observed in stellar-mass black hole binaries (BHBs), whose radio properties depend strongly on the ``spectral state'' (defined using a combination of Eddington ratio and X-ray spectrum) of these systems (see the recent review by \citealt{gallo10}). The radio emission of BHBs in the ``low/hard'' spectral state, systems with an Eddington ratio $\lesssim0.03$ (e.g.
\citealt{dunn10}), is typically dominated by an extremely compact, flat spectrum radio source -- similar to what we observe for PG 2304+042 (which has a comparable Eddington ratio). As the Eddington ratio of a BHB increases, the contribution of the flat spectrum, compact core decreases, and their radio emission is increasingly dominated by optically thin synchrotron radiation generated by an outflow or weak jet (e.g., \citealt{gallo10}). This is consistent with the increasing size, and steepening spectrum, of the nuclear radio emission with Eddington ratio we observe from the RQQs in our sample. Lastly, the transition of BHBs into the ``high/soft" spectral state, when they have the highest accretion rates, is often accompanied by the ejection of radio-emitting optically thin plasmons from the accretion disk -- similar to the steep spectrum components observed in the nuclear regions of PG 1612+261 and PG 0050+124, the two high Eddington ratio RQQs in our sample detected by the VLBA. This correspondence between the nuclear radio emission of RQQs and stellar-mass black hole X-ray binaries across a wide range of Eddington ratios (and spectral states) is strong evidence for the universality of accretion onto black holes. \begin{acknowledgments} We would like to thank the anonymous referee for useful comments. E.B. acknowledges support by a Center of Excellence of the Israel Science Foundation (grant no. 2752/19). Basic research at NYU Abu Dhabi is supported by the Executive Affairs Authority of Abu Dhabi. AA acknowledges support by the Kawader Research Assistantship Program. JDG acknowledges support by NYUAD Research Grant AD022. IZ acknowledges support by NYUAD Research Grant AD013. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. 
AIPS is produced and maintained by the National Radio Astronomy Observatory, a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. This work made use of the Swinburne University of Technology software correlator \citep{deller11}, developed as part of the Australian Major National Research Facilities Programme and operated under licence. This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. \end{acknowledgments} \bibliography{sn-bibliography}% \bibliographystyle{aasjournal}
Title: Early quark deconfinement in compact star astrophysics and heavy-ion collisions
Abstract: Based on a recently developed relativistic density functional approach to color-superconducting quark matter and a novel quark-hadron transition construction which phenomenologically accounts for the effects of inhomogeneous pasta phases and quark-hadron continuity, we construct a class of hybrid equations of state applicable at the regimes typical for compact star astrophysics and heavy ion collisions. We outline that early quark deconfinement is a notable consequence of strong diquark pairing providing a good agreement with the observational data and driving the trajectories of the matter evolution during the supernovae explosions toward the regimes typical for the compact star mergers and heavy-ion collisions.
https://export.arxiv.org/pdf/2208.09085
\title{Early quark deconfinement in compact star astrophysics and heavy-ion collisions%
\thanks{Presented at the $29^{\rm th}$ Conference ``Quark Matter 2022'' on ultrarelativistic nucleus-nucleus collisions, April 4-10, 2022, Krak\'ow, Poland}%
}
\author{O. Ivanytskyi$^1$, D. Blaschke$^1$, T. Fischer$^1$, A. Bauswein$^2$
\address{$^1$Institute of Theoretical Physics, University of Wroclaw, Poland\\
$^2$GSI Helmholtzzentrum f\"ur Schwerionenforschung, Darmstadt, Germany}
}
\section{Introduction} The puzzling question of the origin of compact stars (CS) with masses exceeding $2~{\rm M}_\odot$ can be successfully addressed at present only within the supernova (SN) explosion mechanism based on quark deconfinement in the stellar matter \cite{Fischer:2017lag}. This serves as an indirect argument in favor of the existence of quark matter in the cores of heavy CS. Binary CS mergers could produce a distinct postmerger gravitational wave signal \cite{Bauswein:2018bma}. These applications are summarised in \cite{Bauswein:2022vtq}. They are based on a hybrid equation of state (EoS) constructed from hadronic and quark matter EoS developed within relativistic density functional (RDF) approaches \cite{Kaltenborn:2017hus,Ivanytskyi:2022oxv}. In this contribution, we summarize recent developments of the RDF approach to quark matter which, beyond confinement, also address chiral symmetry breaking and color superconductivity. In particular, the occurrence of a large diquark pairing gap modifies the phase structure and EoS of QCD at low temperatures and is thus of central interest for the discussion of the existence and location of one or more critical endpoints (CEPs). The resulting construction scheme generates thermodynamically consistent EoS with multiple or no CEPs and provides a solid basis for discussing their effects in simulations of astrophysical phenomena and heavy-ion collisions (HIC). 
\section{Relativistic density functional for quark matter} The RDF approach from Ref. \cite{Ivanytskyi:2022oxv} is represented by the Lagrangian \begin{eqnarray} \label{I} \mathcal{L}=\overline{q}(i\slashed\partial- m)q- G_V(\overline{q}\gamma_\mu q)^2+ G_D(\overline{q}i\gamma_5\tau_2\lambda_A q^c)(\overline{q}^ci\gamma_5\tau_2\lambda_A q)-\mathcal{U} \end{eqnarray} with two-flavor quark field $q^T=(u~d)$, current quark mass $m$, and $G_V$, $G_D$ being the coupling constants in the vector repulsion and diquark pairing channels, respectively. A chirally symmetric generalization of the potential energy density functional inspired by the string-flip model (SFM) \cite{Kaltenborn:2017hus} reads \begin{eqnarray} \mathcal{U}&=&D_0\left[(1+\alpha)\langle \overline{q}q\rangle_0^2 -(\overline{q}q)^2-(\overline{q}i\gamma_5\vec\tau q)^2\right]^{\frac{1}{3}}\nonumber\\ \label{II} &\simeq&\mathcal{U}_{MF}+ (\overline{q}q-\langle\overline{q}q\rangle)\Sigma_{MF}- G_{S}(\overline{q}q-\langle\overline{q}q\rangle)^2- G_{PS}(\overline{q}i\gamma_5\vec\tau q)^2. \end{eqnarray} Here $\alpha$ and $D_0$ are constants and $\langle \overline{q}q\rangle_0$ is the chiral condensate in the vacuum. The last line in Eq. (\ref{II}) corresponds to the second-order expansion of $\mathcal{U}$ around the mean-field solutions $\langle \overline{q}q\rangle$ and $\langle \overline{q}i\gamma_5\vec\tau q\rangle=0$, labeled with the subscript ``$MF$''. This expansion brings the present model to the form of the NJL model, with the mean-field scalar self-energy of quarks $\Sigma_{MF}=\partial\mathcal{U}_{MF}/\partial\langle\overline{q}q\rangle$ and effective couplings in the scalar, $G_S=-\partial^2\mathcal{U}_{MF}/\partial\langle\overline{q}q\rangle^2/2$, and pseudoscalar, $G_{PS}=-\partial^2\mathcal{U}_{MF}/\partial\langle\overline{q}i\gamma_5\vec\tau q\rangle^2/6$, channels. In Ref. 
\cite{Ivanytskyi:2022oxv} model parameters $m=4.2$ MeV, $\Lambda=573$ MeV, $\alpha=1.43$ and $D_0\Lambda^{-2}=1.39$ were fixed in order to reproduce the pion mass $M_\pi=140$ MeV and decay constant $F_\pi=92$ MeV, with the scalar meson mass $M_\sigma=980$ MeV and the vacuum value of the chiral condensate per flavor $\langle\overline{l}l\rangle_0=-(267~{\rm MeV})^3$. We note that $\Lambda$ is a three-momentum scale which enters the smooth momentum cut-off by a Gaussian form factor which regularizes divergent zero-point terms. The behavior of $G_S$ and $G_{PS}$ as well as the effective quark mass $m^*=m+\Sigma_{MF}$ is shown in Fig. \ref{fig1}. The dynamical breaking of chiral symmetry leads to $G_S\neq G_{PS}$ in the vacuum, while its dynamical restoration at high temperatures and/or densities is manifested by the asymptotic coincidence of the scalar and pseudoscalar couplings. This is reflected in the melting of $m^*$. Its vacuum value $m_0^*$ is controlled by the parameter $\alpha$ so that $m_0^*\rightarrow\infty$ at $\alpha\rightarrow0$. For this parameter set, $m_0^*=718$ MeV, and the pseudocritical temperature at $\mu_B=0$, defined by the peak of the chiral susceptibility, is 163 MeV. The quark matter EoS is obtained by treating the present model within the mean-field approximation. It is remarkable that the BCS relation between the mass gap in the vacuum and the critical temperature for its restoration, which holds for the (P)NJL model in the chiral limit, is violated for this class of quark matter models. \section{Phase diagram of strongly interacting matter} High values of the effective quark mass at low $T$ and $\mu_B$ represent phenomenological confinement in the RDF approach. This makes the description of strongly interacting matter in terms of quark degrees of freedom inadequate in the confinement region and requires matching the quark matter EoS to the hadronic one, yielding a hybrid quark-hadron EoS. 
Within the Maxwell construction of the quark-hadron transition the matching point is defined by the baryon chemical potential $\mu_B^{\rm Max}$ at which the pressures of the two phases coincide, while the baryon density discontinuously jumps from $n_B^h|_{\rm Max}$ on the hadron side to $n_B^q|_{\rm Max}$ on the quark one. This picture ignores inhomogeneous structures in the quark-hadron interface known as pasta phases \cite{Maslov:2018ghi} and corresponds to a sharp interface between the two phases. Accounting for these pasta phases would wash out the sharp quark-hadron interface, allowing for the existence of a mixed phase, which is restricted by the baryon chemical potentials $\mu_B^h$ and $\mu_B^q$ (corresponding to $n_B^h$ and $n_B^q$) from the hadron and quark sides, respectively. In Ref. \cite{Ayriyan:2021prr}, the EoS of the mixed phase was parameterized by two pieces of parabolic functions. In Ref. \cite{Ivanytskyi:2022wln} such a two-zone interpolation scheme (TZIS) was further developed to the case of arbitrary fractions of electric charge and applied at finite temperatures. The parameters of these two parabolic functions were defined so that both the pressure $p$ and the baryon density $n_B$ remain continuous at the mixed phase boundaries. Continuity of $p$ is also required at the matching point of the two parabolas $\mu_B^c=(\mu_B^h+\mu_B^q)/2$, while $n_B$ experiences a discontinuous jump of $\Delta n_B$. The TZIS is given a closed form with the parameterization \begin{eqnarray} \label{III} &&\mu_B^h=\mu_B^{\rm Max}|_{T=0}(1-x)\sqrt{1-T^2/T_0^2},\quad \mu_B^{q}=\mu_B^{\rm Max}(1+x),\\ \label{IV} &&\Delta n_B=n^*(T_{cep1}-T)^\beta(T-T_{cep2})^\beta\theta(T_{cep1}-T)\theta(T-T_{cep2}), \end{eqnarray} where $x=0.01$, $n^*=0$ or 0.15 fm$^{-3}$, $T_{cep1}=90$ MeV and $T_{cep2}=15$ MeV correspond to the high and low temperature CEPs, and $\beta=0.3265$ is the critical exponent of the 3D Ising universality class \cite{Campostrini:2002cf}. 
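The TZIS boundaries and density jump of Eqs. (\ref{III})--(\ref{IV}) are simple enough to sketch directly. In the sketch below, $\mu_B^{\rm Max}$ is treated as an externally supplied input (in practice it comes from the Maxwell construction), and the numerical parameters are the ones quoted in the text:

```python
import math

# Parameters quoted in the text (temperatures in MeV, n* in fm^-3)
X, N_STAR, BETA = 0.01, 0.15, 0.3265
T_CEP1, T_CEP2 = 90.0, 15.0

def mu_B_h(mu_max_at_T0, T, T0):
    """Hadronic-side boundary of the mixed phase, Eq. (3)."""
    return mu_max_at_T0 * (1.0 - X) * math.sqrt(1.0 - (T / T0) ** 2)

def mu_B_q(mu_max_at_T):
    """Quark-side boundary of the mixed phase, Eq. (3)."""
    return mu_max_at_T * (1.0 + X)

def delta_n_B(T):
    """Density jump at the parabola matching point, Eq. (4).

    The step functions confine the jump to T_cep2 < T < T_cep1, so the
    first-order region terminates in a CEP at each of the two temperatures."""
    if not T_CEP2 < T < T_CEP1:
        return 0.0
    return N_STAR * (T_CEP1 - T) ** BETA * (T - T_CEP2) ** BETA
```

The jump closes continuously as $T$ approaches either CEP, with the 3D critical exponent $\beta=0.3265$ controlling how fast it vanishes.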
The TZIS allows us to construct a hybrid quark-hadron EoS at arbitrary entropy per baryon $s/n_B$. Fig. \ref{fig2} compares such an EoS to the one obtained within the Maxwell construction. Furthermore, having the edges of the mixed quark-hadron phase defined, we can construct the phase diagram of strongly interacting matter that is shown in Fig. \ref{fig3}. It is remarkable that the transition from quark to hadron matter leads to a growth of $T$ along adiabats $s/n_B=const$, a direct consequence of the reduction of the number of accessible microstates due to the transition to the color-superconducting phase of quark matter \cite{Ivanytskyi:2022oxv}. \section{Compact stars at vanishing and finite entropy} The entropy of the quark-hadron matter in the interiors of proto-NS remains approximately constant during SN explosions \cite{Fischer:2017lag}. Therefore, the isentropic EoS of quark-hadron matter is phenomenologically interesting. We applied such EoSs, shown in Fig. \ref{fig2}, to solving the problem of relativistic hydrostatic equilibrium \cite{Ivanytskyi:2022wln}. The corresponding mass-radius relations of cold NS ($s/n_B=0$) and warm proto-NS ($s/n_B\neq0$) are shown in Fig. \ref{fig4}. In the case of cold NS our approach provides agreement with the constraints from Refs. \cite{Riley:2021pdl,Miller:2021qha,Riley:2019yda,LIGOScientific:2018cki,Bauswein:2017vtn,Annala:2017llu} and gives the tidal polarizability of $1.4~{\rm M}_\odot$ stars $\Lambda_{1.4}=540-550$, in agreement with Ref. \cite{LIGOScientific:2018cki}. Finite $s/n_B$ increases the radius of NS but leaves their maximum mass almost unchanged. \section{Conclusions} We developed a confining RDF for color-superconducting quark matter and produced a family of hybrid quark-hadron EoS with or without (multiple) CEP(s). 
Due to the large values of the diquark pairing gap, our approach favors early quark deconfinement, provides good agreement with present astrophysical constraints, and drives the trajectories of stellar matter evolution during SN explosions toward the temperature range of HIC. \subsection*{Acknowledgements} This work was supported by NCN under grants 2019/33/B/ST9/03059 (O.I., D.B.) and 2020/37/B/ST9/00691 (T.F.). A.B. acknowledges support by the European Research Council under the European Union's Horizon 2020 research and innovation program, grant No. 759253, by DFG Project-ID 279384907 - SFB 1245, by DFG Project-ID 138713538 - SFB 881 and by the State of Hesse within the Cluster Project ELEMENTS. The work was performed within a project that has received funding from the Horizon 2020 program under grant agreement STRONG-2020 - No. 824093. \bibliography{refs}
Title: Call and Response: A Time-Resolved Study of Chromospheric Evaporation in a Large Solar Flare
Abstract: We studied an X1.6 solar flare produced by NOAA AR 12192 on 2014 October 22. The entirety of this event was covered by RHESSI, IRIS, and Hinode/EIS, allowing analysis of the chromospheric response to a nonthermal electron driver. We derived the energy contained in nonthermal electrons via RHESSI spectral fitting, and linked the time-dependent parameters of this call to the response in Doppler velocity, density, and nonthermal width across a broad temperature range. The total energy injected was $4.8\times10^{30}$ erg, and lasted $352$ seconds. This energy drove explosive chromospheric evaporation, with a delineation in both Doppler and nonthermal velocities at the flow reversal temperature, between 1.35--1.82 MK. The time of peak electron injection (14:06 UT) corresponded to the time of highest velocities. At this time, we found 200 km s$^{-1}$ blueshifts in the core of Fe XXIV, which is typically assumed to be at rest. Shortly before this time, the nonthermal electron population had the shallowest spectral index ($\approx$ 6), corresponding to the peak nonthermal velocity in Si IV and Fe XXI. Nonthermal velocities in Fe XIV, formed near the flow reversal temperature were low, and not correlated with density or Doppler velocity. Nonthermal velocities in ions with similar temperatures were observed to increase and correlate with Doppler velocities, implying unresolved flows surrounding the flow reversal point. This study provides a comprehensive, time-resolved set of chromospheric diagnostics for a large X-class flare, along with a time-resolved energy injection profile, ideal for further modeling studies.
https://export.arxiv.org/pdf/2208.14347
\received{February 21, 2022} \revised{July 22, 2022} \accepted{August 5, 2022} \title{Call and Response: A Time-Resolved Study of Chromospheric Evaporation in a Large Solar Flare} \author[0000-0001-5342-0701]{Sean G. Sellers} \affiliation{Department of Astronomy, New Mexico State University MSC 4500 NM 88003-8001, USA} \author[0000-0001-5031-1892]{Ryan O. Milligan} \affiliation{Astrophysics Research Centre, School of Mathematics \& Physics, Queen's University Belfast University Road, Belfast, BT7 1NN, UK} \author[0000-0003-1493-101X]{R.T. James McAteer} \affiliation{Department of Astronomy, New Mexico State University MSC 4500 NM 88003-8001, USA} \email{sellers@nmsu.edu, r.milligan@qub.ac.uk, mcateer@nmsu.edu} \section{Introduction} \label{sec:Intro} Solar flares are considered a consequence of magnetic reconnection in the corona, resulting in the release of $\leq10^{32}$~erg over the course of the event. Approximately 20\% of the released energy is partitioned into the acceleration of particles in the corona \citep{Emslie2012}. A population of electrons is accelerated near the reconnection site to relativistic speeds, and streams down coronal loops away from the acceleration region \citep{Emslie2004,Emslie2012,Aschwanden2014}. The steep density increase from the transition region down to the chromosphere is generally responsible for the sudden deceleration of these accelerated particles. Energy is dissipated in the chromosphere primarily via Coulomb collisions, with a smaller amount of energy being dissipated via the bremsstrahlung process, which produces hard X-ray (HXR) emission via interaction with the ``thick-target'' chromosphere \citep{Brown1971,Lin1976}. This energy injection in the chromosphere is likely the driving mechanism behind chromospheric evaporation, the process by which flares produce high-temperature, high-density plasma in the corona. \par The duration and evolution of HXR radiation varies dramatically from flare to flare. 
\cite{Warmuth2016} studied flares of several \textit{GOES} classifications, and found that the time ranges of significant nonthermal flux ranged from 0.4~minutes to over 50~minutes for longer events, but the analysis lacked information regarding the temporal evolution of the driving electron beam. Several studies attempted to resolve the time-dependence of the electron beam \citep{Kulinova2011,Kennedy2015,Fletcher2013}, and found the observed durations of the electron injection events to be on the order of several minutes. \cite{Holman2003} analyzed data from the Reuven Ramaty High Energy Solar Spectroscopic Imager (\textit{RHESSI}; \citealt{Lin2002}) of the X3.6 flare on 2002 July 23 and found nonthermal HXR emission lasting $\approx 10$ minutes. \par When the energy input to the chromosphere exceeds that which can be shed as radiation or conductive losses, the chromospheric plasma must heat and expand upward, into the lower-density corona. This process fills overlying magnetic structures of lower density with high-temperature plasma, which strongly emits extreme ultra-violet (EUV) and soft X-ray (SXR) emission. This chromospheric evaporation \citep{WernerM.Neupert1968,Bornmann1999,Fletcher2011} can occur explosively, with high-temperature lines exhibiting blueshifts, while cooler emission lines exhibit redshifts \citep{Doschek1983,Brosius2004,Milligan2009}; or gently, with blueshifted emission lines across a wide temperature range \citep{Fisher1985,Brosius2004,Allred2005,Milligan2006,Brosius2015}. The mode of evaporation depends first on the mechanism of flare energy transport. In the case of energy transport by a nonthermal electron driver, the mode of evaporation further depends on the energy flux, low energy cutoff, and population distribution of accelerated electrons reaching the chromospheric footpoints. 
\par \cite{Canfield1987} and \cite{Fisher1985} first deduced this effect and placed a lower limit of $E_{e^-} \geq 3\times10^{10}$~erg~cm$^{-2}$~s$^{-1}$ on the energy flux density required to drive explosive evaporation. If the incoming electron flux is above this threshold, determined by balancing the heating rate and the hydrodynamic expansion timescale, the over-pressure of the hot rising material causes the denser layers below to recoil, resulting in the cool, redshifted emission characteristic of explosive evaporation. \par Thermal conduction-driven chromospheric evaporation, in contrast, does not appear to be subject to the above restrictions on flux deposition. \cite{Longcope2014} found that even the smallest energy fluxes studied produced explosive chromospheric evaporation. This result was also noted in earlier models from \cite{Fisher1989}. \par In addition to the Doppler velocity signatures of chromospheric evaporation, excess nonthermal width in optically thin spectral lines has been observed in flare conditions. One possible explanation is the superposition of unresolved flows. In this case, the nonthermal width is a measure of the velocity distribution of the plasma \citep{Doschek2008}. \cite{Newton1995} attempted to generalize both excess line widths and blue wing enhancements by the computation of a Velocity Differential Emission Measure (VDEM), which treats the observed line profile as a continuum of Gaussian components driven by variations in the line-of-sight velocity. This treatment is supported by reported correlations between Doppler velocities and nonthermal velocities within solar active regions \citep{Hara2008,Doschek2008,Bryans2010,Peter2010}. Another possible explanation for excess line widths is the influence of pressure or opacity broadening in regions of enhanced electron density. 
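Under the beam-driven picture, the explosive/gentle threshold quoted above reduces the classification to a one-line comparison. A minimal sketch: the $3\times10^{10}$~erg~cm$^{-2}$~s$^{-1}$ threshold and the flare totals ($4.8\times10^{30}$ erg over 352 s) are from the text, while the footpoint area is an assumed, illustrative value, not a measurement from this event:

```python
F_THRESHOLD = 3e10  # erg cm^-2 s^-1 (Fisher 1985; Canfield 1987)

def evaporation_mode(flux_density):
    """Expected mode of electron-beam-driven chromospheric evaporation."""
    return "explosive" if flux_density >= F_THRESHOLD else "gentle"

# Order-of-magnitude check using the flare totals quoted in the abstract
# (4.8e30 erg over 352 s) and an assumed footpoint area of 1e17 cm^2:
flux = 4.8e30 / 352.0 / 1e17   # ~1.4e11 erg cm^-2 s^-1
mode = evaporation_mode(flux)  # "explosive"
```

With this assumed area the mean flux density sits several times above the threshold, consistent with the explosive evaporation reported for this flare; a footpoint an order of magnitude larger would push the classification toward gentle, which is why the area estimate matters.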
\cite{Milligan2011} showed a correlation between electron density and nonthermal velocity broadening, although neither pressure broadening nor opacity effects were able to account for any significant portion of the excess width. \par The flare-driven mass flow rate into the solar corona remains one of the more difficult solar flare metrics to disentangle from observations, requiring both accurate velocity information, and a measure of plasma mass. As a proxy, the electron density of the active region can be used \citep{Milligan2005,Doschek2008}. Density enhancements have been observed to be cospatial with the locations of flare footpoints \citep{Graham2011}. Densities, when combined with the emission measure \citep{DelZanna2011}, may also provide information about the dynamics of the evaporating region. The previously mentioned VDEM \citep{Newton1995} is derived in part from the electron density, and provides direct insight into plasma transport during a solar flare. \par The flare chosen as the subject of this study, an X-class flare on 2014 October 22, is a well-studied event. \cite{Bamba2017} studied the precursor conditions to this event in order to determine triggering conditions in the chromosphere and photospheric magnetic field. \cite{Veronig2015} attempted to quantify the magnetic reconnection flux and rate. \cite{Li2015} utilized data from the \textit{Interface Region Imaging Spectrometer} (\textit{IRIS}; \citealt{IRISPaper}) and \textit{RHESSI} instruments to study Doppler velocities in \ion{Fe}{21} and \ion{C}{1}, and HXR intensities. \cite{Thalmann2015} focused on the rate of magnetic reconnection. \cite{Lee2017} measured electron flux at each HXR peak using \textit{RHESSI}, and linked the electron energy budget with the observed low chromospheric and photospheric energetic response. These studies showed that the energy injected via high-energy electrons was sufficient to produce white-light emission. 
\par In this study, detailed, time-resolved \textit{RHESSI} HXR spectral fit parameters are presented in order to quantify the nonthermal electron energy injection profile. The profile of electron energy injection is then connected to multispectral observations of the chromospheric evaporation response. Emission line intensities, electron densities, and Doppler and nonthermal velocities from several instrumental sources were combined in order to study the response of the flaring solar atmosphere across time, space, and temperature. Due to the abundance of data available for this flare, this data set is ideally suited to constrain detailed hydrodynamic modeling of energy transport during this event. \par An overview of this event, the data, and analysis techniques is presented in Section~\ref{sec:Analysis}. The results of this treatment and comparison to similar studies are discussed in Section~\ref{sec:Results}, and are summarized in Section~\ref{sec:Conclusions}. \section{Data Analysis} \label{sec:Analysis} The X1.6 flare selected for study occurred on 2014 October 22, beginning at 14:02:00~UT, and was one of the largest flares produced by flare-productive NOAA AR 12192. In Figure~\ref{fig:context} the active region is presented in the 1600\AA\ and 171\AA\ passbands of the \textit{Solar Dynamics Observatory/Atmospheric Imaging Assembly} (\textit{SDO/AIA}; \citealt{Lemen2012}), with the fields of view of the \textit{EUV Imaging Spectrometer} (\textit{EIS}; \citealt{EISPaper}) and \textit{IRIS} instruments overlaid, and with HXR contours from \textit{RHESSI} imaging overlaid to highlight the primary footpoints of the flare. Two HXR sources are well-defined and are cospatial with intensity enhancements in \textit{AIA} images. A third, compact HXR kernel appears to the southwest of the primary flare loop, corresponding to a possible tertiary footpoint, or merely an extension of the large western footpoint. 
Figure~\ref{fig:lc_context} shows the \textit{RHESSI} HXR lightcurves in three energy bands (25--50, 50--100, and 100--300~keV) as well as SXR emission from the \textit{GOES} 1-8\AA\ band. It also marks the time intervals during which \textit{EIS}, \textit{IRIS}, and \textit{RHESSI} spectral fits were performed. \par The \textit{GOES} flux plateaus through much of the event, with a SXR peak found well after the peak of HXR emission (14:28~UT, versus 14:06~UT). Hereafter, when the peak of the flare is referred to, it is in reference to the peak of HXR emission. \subsection{RHESSI Analysis} \label{sec:RHESSI_anal} The full duration of this flare was well-covered by the \textit{RHESSI} instrument. \textit{RHESSI} entered its daylight phase just prior to the onset of the flare, and exited during the gradual phase, after the \textit{RHESSI} HXR peak and the \textit{GOES} SXR peak. As of August 2014, the \textit{RHESSI} spacecraft had undergone its fourth successful anneal, allowing five of the original nine detectors to regain high spectral resolution. \par \textit{RHESSI} spectra from 14:04:40 -- 14:16:56~UT were obtained with 16~second time bins for detectors 1, 3, 6, 8, and 9, which had consistently high count rates during the flare, signifying that they retained sufficient sensitivity to be usable. From the peak counts, detector 6 was determined to be the most sensitive, while detector 1 was the least, leading to different fit results and higher values of $\chi^2$ for detector 1. Background characterization and spectral fitting were performed for each individual detector using the \verb|OSPEX| package in \textit{SolarSoftWare} (SSW). For each detector, the background profile was determined by using the smoothed emission profile of the 100--300~keV energy band in the same detector. 
Save for one brief ($< 32$s) spike during the impulsive phase of the flare, emission in this energy range showed only a slow variation throughout the \textit{RHESSI} orbital cycle. This time-varying profile was used as a template for the background in lower energy bands. The count rate during \textit{RHESSI}'s night was used to determine the relative scaling between energy bands, and served as anchor points for application of the template. \par Spectra were fit using a methodology similar to that adopted by \cite{Milligan2014}. The thermal portion of the \textit{RHESSI} spectrum was best fit by a multithermal model, similar to studies by \cite{aschwanden2007}, \cite{Battaglia2015}, and \cite{Choithani2018}. The multithermal model selected was characterized by a power-law differential emission measure (DEM) between a fixed minimum plasma temperature (0.5~keV) and a variable maximum plasma temperature. The nonthermal portion of the \textit{RHESSI} spectrum was best fit by a thick-target electron beam model, with an electron distribution characterized by a single power-law. Additional instrumental effects were accounted for by modifying the detector response matrix (\verb|drm_mod|), accounting for instrumental pileup (\verb|pileup_mod|), albedo, and incorporating an additional Gaussian component to account for the 10~keV instrumental line \citep{Phillips2006}. \par Sample spectra are shown in Figure~\ref{fig:spex_fits} along with the combined fit functions used to characterize the HXR profile for a time interval with a significant nonthermal component (top panel) and a time interval without (bottom panel). Note that while spectra in Figure~\ref{fig:spex_fits} are shown in units of photons~s$^{-1}$~cm$^{-2}$~keV$^{-1}$, spectral fitting was carried out in count space. The use of the calculated photon spectrum exaggerates several notable features, such as the 10~keV instrumental line first characterized by \cite{Phillips2006}. 
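For a single power-law, thick-target fit of the kind described above, the injected beam power follows analytically from the fit parameters: for an electron rate spectrum $\mathcal{F}(E) \propto E^{-\delta}$ above a low-energy cutoff $E_c$, integrating $E\,\mathcal{F}(E)$ gives $P = \dot{N} E_c (\delta-1)/(\delta-2)$. A sketch using hypothetical fit values of the order seen in large flares (not the actual OSPEX results for this event):

```python
KEV_TO_ERG = 1.602e-9  # conversion factor, keV -> erg

def beam_power(n_dot, e_cutoff_kev, delta):
    """Power (erg/s) carried by electrons distributed as E**-delta above E_c.

    n_dot is the total electron injection rate (electrons/s). The closed form
    P = n_dot * E_c * (delta - 1)/(delta - 2) requires delta > 2; otherwise
    the energy integral diverges at high energies."""
    if delta <= 2:
        raise ValueError("delta must exceed 2")
    return n_dot * e_cutoff_kev * KEV_TO_ERG * (delta - 1.0) / (delta - 2.0)

# Hypothetical parameters: 5e35 electrons/s above a 20 keV cutoff, delta = 6
p = beam_power(5e35, 20.0, 6.0)  # ~2e28 erg/s
```

Multiplied by the $\sim$352 s injection duration quoted in the abstract, these illustrative numbers give a few $\times10^{30}$ erg, the same order as the $4.8\times10^{30}$ erg total reported for this flare.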
Summaries of major parameters obtained via \textit{RHESSI} spectral fitting are discussed in Section~\ref{sec:hsi_results}. \par We also make use of the unique imaging capabilities of \textit{RHESSI} in order to identify the flare footpoints. The \verb|CLEAN| algorithm was applied to detectors 1, 3, 6, 8, and 9 during the impulsive and peak phases of the flare to identify sources of HXR emission throughout the flare duration. Contours of these images are overlaid on the center and right-hand columns of Figure~\ref{fig:context} to provide context for other observations and constrain the locations of HXR emission during the peak of the flare. \subsection{EIS Analysis}\label{sec:EIS_anal} \begin{table} \centering \caption{EIS and IRIS Line Summary} \begin{tabular}{p{0.15\linewidth} p{0.25\linewidth} p{0.4\linewidth}} \toprule \textbf{Ion} & \textbf{Formation Temperature [MK]$^{a}$} & \textbf{Central Wavelength [\AA]} \\ \midrule \ion{Fe}{24} & 18.20 & $192.026 \pm 0.003$ \\ \ion{Fe}{24} & 18.20 & $255.13 \pm 0.047$ \\ \ion{Fe}{23} & 14.13 & $263.78 \pm 0.053$ \\ \ion{Ca}{17} & 6.31 & $192.845 \pm 0.008$ \\ \ion{Fe}{16} & 2.51 & $263.004 \pm 0.003$ \\ \ion{Fe}{15} & 2.0 & $284.182 \pm 0.003$ \\ \ion{Fe}{14} & 1.82 & $274.225 \pm 0.003$ \\ \ion{Fe}{14} & 1.82 & $264.808 \pm 0.003$ \\ \ion{Fe}{12} & 1.35 & $192.391 \pm 0.003$ \\ \ion{Fe}{12} & 1.35 & $195.122 \pm 0.003$ \\ \ion{Fe}{10} & 1.0 & $184.536 \pm 0.012$ \\ \ion{He}{2} & 0.05 & $256.349 \pm 0.005$ \\ \midrule \ion{Fe}{21} & 11.48 & $1354.067 \pm 0.04$ \\ \ion{O}{1}$^{b}$ & N/A & $1355.599 \pm 0.04$ \\ \ion{Si}{4} & 0.08 & $1402.812 \pm 0.057$ \\ \ion{C}{2} & 0.01 & $1334.543 \pm 0.026$ \\ \ion{C}{2} & 0.01 & $1335.705 \pm 0.024$ \\ \bottomrule \end{tabular} \footnotesize{ $^a$Assuming ionization equilibrium.} \newline \footnotesize{ $^b$Used only for \ion{Fe}{21} reference wavelength} \label{tab:eis_ions} \end{table} Using the $2$\arcsec\ slit, the \textit{EIS} instrument performed rasters of NOAA 
AR~12192, capturing the pre-flare, impulsive, peak, and gradual phases of the solar flare centered around the eastern flare footpoint, as identified by \textit{RHESSI} HXR imaging. The rasters had a cadence of 214~seconds, covering a field of view (FOV) approximately 60\arcsec~$\times$~152\arcsec, as shown in Figure~\ref{fig:context}. The spatial resolution of the \textit{EIS} instrument is 3\arcsec\ in the horizontal, 1\arcsec\ in the vertical, with a spectral resolution of 22.3~m\AA. During this event, the footpoint was captured in several emission lines in the raster FOV; the observed emission lines are detailed in Table~\ref{tab:eis_ions}, which also includes information on emission lines from the \textit{IRIS} instrument. Gaussian fits were performed for a set of twelve emission lines from nine different ions. While most of the ions studied required only single-component fits, multiple component fits were performed in order to examine the effects of blended lines. The \ion{He}{2} 256.35\AA, \ion{Fe}{14} 274.20\AA, \ion{Fe}{15} 284.18\AA, and \ion{Ca}{17} 192.83\AA\ lines required two or more Gaussian profiles to account for known line blends \citep{Young2007}. Even in these spectral windows, the presence of strong blended lines was not consistent over each raster, or at each time. Additionally, the \ion{Fe}{23} 263.78\AA , \ion{Fe}{24} 255.13\AA , and \ion{Fe}{24} 192.02\AA\ lines required multiple components to account for both blends and a blue-wing enhancement \citep{Milligan2009}. \par The spectral fits were used to determine Doppler velocities, nonthermal velocities, electron densities, and intensities as functions of both temperature and time. Example fits from a selection of emission lines are shown in Figure~\ref{fig:eis_fit_ex}. The profiles chosen showcase a wide temperature range, from a location within the eastern footpoint early in the flare, and during the HXR peak. 
\par The rest wavelength for every emission line, save \ion{Fe}{23} and \ion{Fe}{24}, was determined from the mean central wavelength across the less-active raster regions. In the case of \ion{Fe}{23} and \ion{Fe}{24}, no plasma can be assumed to be at rest, and an alternate method was required. For \ion{Fe}{24} 192.02\AA, the \ion{Fe}{12} 192.39\AA\ line was used to constrain the rest wavelength from the theoretical separation of the two lines in the CHIANTI database \citep{Chianti1,Chianti2}. For the \ion{Fe}{23} and \ion{Fe}{24} 255.13\AA\ lines, the mean central wavelength from the 14:31:12~UT raster was used, as this raster consists entirely of emission produced after the nonthermal electron injection event. \par For ions with strong blue-wing enhancements (\ion{Fe}{23} and \ion{Fe}{24}), the Doppler velocities presented for the blue wing were calculated with the same reference wavelength used for the line core. \par Nonthermal velocities were calculated using the method described by \cite{Mariska92}, and utilized in several other studies \citep{Doschek2007,Harra2009,Milligan2011}, where the most probable nonthermal velocity ($v_{nth}$) is calculated from: \begin{equation}\label{eq:1} W^2 = 4 \ln{2} \Big(\frac{\lambda}{c}\Big)^2(v_{th}^2 + v_{nth}^2) + W_{inst}^2, \end{equation} where $W$ is the measured full width at half maximum of the Gaussian profile, $\lambda$ is the line wavelength, $c$ is the speed of light, and $W_{inst}$ is the instrumental width ($0.056$~\AA; \citealt{Doschek2007} and \citealt{Harra2009}). The thermal velocity, $v_{th}$, is given by \begin{equation} v_{th} = \sqrt{\frac{2 k_B T}{M}}, \end{equation} where $k_B$ is the Boltzmann constant, $M$ is the mass of the ion, and $T$ is the peak formation temperature of the line from \cite{Young2007} and the CHIANTI database \citep{Chianti1,Chianti2}, assuming ionization equilibrium. \par The \textit{EIS} dataset used in this work contains the density-sensitive line pair of \ion{Fe}{14} 264.81/274.23\AA.
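The width decomposition of Equation~\ref{eq:1} can be inverted directly for $v_{nth}$; a minimal numerical sketch (the measured FWHM below is a hypothetical value, not a fit result from this work):

```python
import numpy as np

C = 2.998e5          # speed of light [km/s]
KB = 1.380649e-23    # Boltzmann constant [J/K]
AMU = 1.660539e-27   # atomic mass unit [kg]

def thermal_velocity(T_peak, ion_mass_amu):
    """v_th = sqrt(2 k_B T / M), returned in km/s."""
    return np.sqrt(2 * KB * T_peak / (ion_mass_amu * AMU)) / 1e3

def nonthermal_velocity(fwhm, wvl, T_peak, ion_mass_amu, w_inst):
    """Most probable nonthermal velocity from the measured line width.

    fwhm, wvl, and w_inst in Angstrom; T_peak in K; result in km/s.
    Inverts W^2 = 4 ln2 (lambda/c)^2 (v_th^2 + v_nth^2) + W_inst^2.
    """
    v_th = thermal_velocity(T_peak, ion_mass_amu)
    total = (fwhm**2 - w_inst**2) * C**2 / (4 * np.log(2) * wvl**2)
    return np.sqrt(total - v_th**2)

# Fe XII 195.12 A at its 1.35 MK peak formation temperature, with an
# illustrative measured FWHM of 0.080 A and the 0.056 A instrumental width:
v_nth = nonthermal_velocity(fwhm=0.080, wvl=195.12,
                            T_peak=1.35e6, ion_mass_amu=55.845,
                            w_inst=0.056)  # ~ 49 km/s
```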
The theoretical relationship between the intensity ratio and electron density for this line pair is shown in Figure~\ref{fig:dens_relation}, from the CHIANTI v10.0 database \citep{Chianti1,Chianti2}. This line pair is sensitive to densities between $10^8 < n_e < 10^{12}$~cm$^{-3}$. It is important to note that the relationship between the \ion{Fe}{14} intensity ratio and electron density is derived under the assumption of ionization equilibrium, which may not be valid during large dynamic events such as solar flares. The Doppler and nonthermal velocity results from \textit{EIS} fitting are discussed in Sections~\ref{sec:cool_lines}~and~\ref{sec:hot_lines}, while the correlation between velocity parameters and electron density is discussed in Section~\ref{sec:eis_dens}. \subsection{IRIS Analysis}\label{sec:IRIS_anal} In this study, both the spectral and slit-jaw imaging data from the \textit{IRIS} instrument were used. For the entire duration of this event, the \textit{IRIS} instrument performed a repeated fast raster scan (131.1~s cadence per complete raster) of AR 12192, with a $45^{\circ}$ roll angle. Each spectral raster contained eight slit positions, with a spacing of 2\arcsec\ and 16.32~seconds between positions. The spatial resolution for each raster was 0.33\arcsec\ along the slit, with a slit width of 0.33\arcsec. No onboard spatial summing was carried out for these observations. The spectral resolution was 25.96~m\AA\ in the far-ultraviolet (FUV) spectral window. \par The \textit{IRIS} slit-jaw camera was used to determine the area of each flare footpoint. While this is not a direct measurement of the HXR source size, \textit{RHESSI} \verb|CLEAN| imaging tends to significantly overestimate the source size \citep{Dennis2009,Milligan2009}, and \textit{AIA} chromospheric images for this event were severely saturated during the period of interest.
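A threshold-based footpoint-area estimate of this kind can be sketched as follows; this is a minimal illustration, assuming hypothetical frame data, a square pixel scale, and an approximate disk-center conversion of 725~km per arcsec (the function names are not from the published pipeline):

```python
import numpy as np

def ribbon_area_cm2(frame, threshold_frac, pix_arcsec=0.33):
    """Area of pixels above a fraction of the frame maximum, in cm^2.

    Assumes square pixels of `pix_arcsec` on a side and an approximate
    disk-center scale of 725 km per arcsec (both are assumptions here).
    """
    mask = frame >= threshold_frac * frame.max()
    cm_per_arcsec = 725.0e5
    cm2_per_pix = (pix_arcsec * cm_per_arcsec) ** 2
    return mask.sum() * cm2_per_pix

def energy_flux(power_erg_s, area_cm2):
    """Nonthermal energy flux in erg s^-1 cm^-2."""
    return power_erg_s / area_cm2

# A coarse area time series can be interpolated onto finer
# spectroscopy time bins with np.interp:
t_area = np.arange(0.0, 320.0, 32.0)      # 32 s cadence [s]
areas = np.full_like(t_area, 1.0e18)      # placeholder areas [cm^2]
t_fine = np.arange(0.0, 320.0, 16.0)      # 16 s bins [s]
areas_fine = np.interp(t_fine, t_area, areas)
```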
\par Ribbon areas were determined from \textit{IRIS} slit-jaw images using the 10\% and 50\% levels of each frame maximum. This time-dependent area measurement was interpolated from a 32-second cadence to a 16-second cadence, and rebinned to match the \textit{RHESSI} spectral time bins. The \textit{IRIS} slit-jaw camera experienced minimal saturation in two exposures during the peak of the flare; these were omitted from the final calculation of the footpoint area. The time-dependent areas were used to determine the injected electron energy flux in erg~s$^{-1}$~cm$^{-2}$. \par \textit{IRIS} spectra were available for several ion species during this flare, from which the \ion{C}{2} line doublets at 1334.54~and~1335.71\AA, the \ion{Si}{4} 1402.81\AA\ line, and the hot \ion{Fe}{21} line at 1354.07\AA\ were selected for study. Using the standard method described by \cite{IRIScal}, radiometric and instrumental calibrations were performed. The calibrated spectra were fit with multiple-component Gaussian profiles, accounting for blends where applicable \citep{graham2015,young2015}, and allowing for additional blue- and red-wing components to account for asymmetry in the complex \ion{C}{2} and \ion{Si}{4} emission lines. \par Despite the increased emissivity of the faint \ion{Fe}{21} line during the flare, it becomes more difficult to fit accurately during the peak of nonthermal electron injection. This is primarily due to the \textit{IRIS} instrument automatic exposure compensation, which scales exposures in order to avoid saturation in the more emissive ion species. During the peak of the flare, this has the unfortunate side effect of obscuring weak lines, such as \ion{Fe}{21}, within the noise of the continuum. In an attempt to maximize the signal from the \ion{Fe}{21} 1354.07\AA\ line, data from this spectral window were binned by a factor of four along the slit. \par Of the three species studied, only the \ion{Fe}{21} emission line is known to be optically thin.
However, simulations have shown that Doppler shifts of the optically-thick \ion{C}{2} lines are well-correlated with the plasma velocity \citep{Rathore2015a,Rathore2015b,Rathore2015c}. The \ion{Si}{4} line is sometimes optically thin \citep{Kerr2019,cai2019,Peter2014}, with complex wavelength- and structure-dependent behaviour \citep{Zhou2022}. Unfortunately, the diagnostic line at 1393\AA\ was not observed, and the opacity of the line could not be determined. Nevertheless, Doppler shifts were present within the line core, as were widths in excess of the thermal profile that could not be accounted for by known blends or observed asymmetry. While the calculation of nonthermal velocity given by Equation~\ref{eq:1} is valid only for optically thin profiles, the same quantity calculated for an optically thick profile is a useful measure of line width. In the case of an optically thick line, variations in the width of the line are linked to changes in the optical depth of the line. As with the \textit{EIS} measurements, the quiescent regions in \ion{Si}{4} and \ion{C}{2} rasters were used to calculate reference rest wavelengths. For the broad \ion{Fe}{21} line, quiescent-region emission of the nearby \ion{O}{1} line was used to infer the rest wavelength. \section{Results}\label{sec:Results} \subsection{RHESSI Results} \label{sec:hsi_results} \textit{RHESSI} spectral fits were used to derive a set of thermal and nonthermal parameters for the X1.6-class flare on 2014 October 22. The thermal X-ray parameters, derived from the multithermal model, are presented in Figure~\ref{fig:hsi_therm}. The top panel shows that the reference DEM (calculated at 2~keV, $\approx$ 23.2~MK) rose sharply soon after the onset of electron injection, and remained at approximately the same level ($\approx 10^{49}$~cm$^{-3}$~keV$^{-1}$) well after the cessation of the injection event.
The upper limit on temperature, shown in the second panel of the same figure, reached a peak of 70~MK early in the flare, then declined for the rest of the studied interval. It is important, however, to note that this is the maximum temperature of the plasma, as characterized by a power-law DEM, and is not characteristic of the mean plasma temperature. The power-law index of the DEM increased slowly throughout the flare, as the bulk of the plasma cooled. \par The nonthermal electron parameters are presented in Figure~\ref{fig:hsi_nth}. The nonthermal electron population is best characterized by a single power-law distribution of electrons, which lasted for 352 seconds and deposited $>4.8\times 10^{30}$~erg of energy. The nonthermal electron flux was first observed during the 14:04:40--14:04:56~UT interval, peaked during the interval 14:06:40--14:06:56~UT, 68~seconds after the first interval in which nonthermal electrons were detected, and had ceased by 14:10:32~UT. \par During the peak interval, the flux in nonthermal electrons was calculated to be between $(5.99 \pm 0.66) \times10^{10}$~erg~s$^{-1}$~cm$^{-2}$, for the larger estimate of the footpoint area (corresponding to the 10\% level of the frame maximum for \textit{IRIS} slit-jaw imaging), and $(3.07\pm0.34)\times10^{11}$~erg~s$^{-1}$~cm$^{-2}$, for the smaller estimate of the footpoint area (the 50\% level of the frame maximum). \par \cite{Lee2017} fit the \textit{RHESSI} spectrum of this event for two intervals during this flare, and calculated an energy flux of $7.7\times10^{10}$~erg~cm$^{-2}$~s$^{-1}$ during the time interval 14:05:32--14:06:32~UT. This is similar to the value of $(8.37 \pm 0.62) \times 10^{10}$~erg~cm$^{-2}$~s$^{-1}$ obtained here for the time interval 14:06:16--14:06:32~UT, using the more conservative 50\% intensity threshold to determine the area of the energy injection region.
The results presented here are not compared with results from the second interval shown by \cite{Lee2017} ($6.1\times10^{10}$~erg~cm$^{-2}$~s$^{-1}$ at 14:11~UT). The differences between these two studies are primarily due to differences in footpoint area determination and in the determination of the low-energy electron cutoff. This study used the time-varying 10\% and 50\% contours of \textit{IRIS} imaging for footpoint area determination, while \cite{Lee2017} took the 60\% contour of \textit{RHESSI} HXR imaging. This study additionally allowed the low-energy electron cutoff to vary in time, resulting in a cutoff between 5 and 8~keV higher than the 30~keV assumed by \cite{Lee2017}. The treatment presented here additionally fits for albedo effects and instrumental pileup. \par In general, the low-energy cutoff presented in this work was higher than found in other, similar, studies, particularly \cite{Milligan2014}, who studied a flare of a similar size (X2.2). The study by \cite{Warmuth2016} contained several flares of similar magnitude, all of which had low-energy cutoffs lower than found here. Most similar was the X1.3 flare of 2005 January 19, studied by \cite{Warmuth2009}, who found a low-energy cutoff between 30 and 40~keV during parts of that event. Due to the low-energy electron cutoff level, the particularly steep slope of nonthermal emission, and the choice of a multithermal plasma model, the derived electron power was, on the whole, lower than in studies of flares of a similar size. As with other studies \citep{xia2021}, a consequence of uncertainty in the low-energy cutoff is that the $4.8\times10^{30}$~erg total energy should be treated only as a lower limit \citep{Warmuth2009,aschwanden2019}. \par In Figure~\ref{fig:lc_context}, a secondary enhancement in the \textit{RHESSI} 25--50~keV band occurred around 14:24~UT. At the same time, the flux in the \textit{GOES} 1--8\AA\ band was boosted.
Taken together, this would imply the existence of a second nonthermal event at this time. Fits to the HXR spectrum were attempted from 14:15~UT through this secondary peak, until 14:30~UT, but the presence of a second nonthermal event could not be confirmed. Excess HXR emission was equally well fit by a thick-target bremsstrahlung component as by a pulse-pileup component, with both cases yielding a similar $\chi^2$. \verb|CLEAN| images formed during this interval showed no significant sources of emission above the 30~keV low-energy electron cutoff derived during the nonthermal electron event. \subsection{EIS Results} \label{sec:eis_results} \subsubsection{Lines formed below 10~MK}\label{sec:cool_lines} Fit-derived parameters from ions with temperature T~$<$~10~MK are shown in Figure~\ref{fig:eis_cool_ions} for the four rasters spanning 14:02:39--14:16:56~UT. Line intensities, Doppler velocities, and nonthermal velocities are shown as rows in Figure~\ref{fig:eis_cool_ions} for each raster time interval, with ion formation temperature increasing left to right across each row. Columns in Figure~\ref{fig:eis_cool_ions} correspond to one emission line each (labelled at the top of each column). Each parameter was scaled to the same range across each time interval and temperature, to allow direct comparison between ion species, and the location of the flare footpoint (from \textit{RHESSI} 25--50~keV \verb|CLEAN| images) is overlaid in cyan. All HXR sources are part of the eastern flare footpoint; the western footpoint lay outside the \textit{EIS} FOV. Alignment between \textit{EIS} rasters and \textit{RHESSI} imaging was performed by first determining the offset between \textit{EIS} rasters and \textit{AIA} filtergrams using the \verb|eis_aia_offsets| procedure available in SSW, then aligning \textit{AIA} filtergrams with \textit{RHESSI} \verb|CLEAN| maps.
The alignment between \textit{AIA} and \textit{EIS} is accurate to within $\approx 5$\arcsec\ \citep{EISSW}. The alignment between \textit{AIA} and \textit{RHESSI} is accurate to $2.26$\arcsec, within the minimum spatial resolution element of \textit{RHESSI} imaging. The middle column of Figure~\ref{fig:context} shows the \textit{EIS} FOV in the context of the flaring region, for comparison to the structures shown in Figure~\ref{fig:eis_cool_ions}. \par In the rasters beginning at 14:06:13~UT, 14:09:48~UT, and 14:13:22~UT, the velocity distribution found in \textit{EIS} spectral lines was typical of explosive chromospheric evaporation. Within the flare footpoint, warmer ions exhibited strong blueshifts, while cooler ions exhibited only redshifts. Given adequate temperature sampling, the Doppler velocities of \textit{EIS} spectral lines can be used to derive a range for the temperature of flow reversal. The flow reversal temperature (FRT) is the temperature at which the division between evaporative upflows and condensation-driven downflows occurs during periods of explosive chromospheric evaporation. Analysis of Doppler velocities at or near this temperature provides insight into the processes that transport energy from the corona to the chromosphere \citep{Brannon2014,Fisher1985}. With six different ion species between T=1~MK and T=6.3~MK, the \textit{EIS} observations presented in this study are adequate to place constraints on this temperature. \par Figure~\ref{fig:eis_cool_ions} shows a clear delineation in Doppler velocity, cospatial with HXR emission, between 1.35--1.82~MK, first observed in the 14:06:13~UT raster. This raster spanned the time interval with the largest nonthermal electron flux density (Figure~\ref{fig:hsi_nth}). The distribution of nonthermal electrons during this interval was characterized by a steepening power-law index.
In this, and the two following rasters, the \ion{Fe}{12} line, formed at 1.35~MK, exhibited mild downflows within the flare footpoint, on the order of $\approx10$--$40$~km~s$^{-1}$, while the \ion{Fe}{14} line, formed at 1.82~MK, was blueshifted between $\approx-20$ and $\approx-60$~km~s$^{-1}$. The FRT fell within this 0.5~MK range during this raster, and remained in this range for the remainder of the flare. This range is consistent with limits determined in previous studies \citep{Kamio2005,Milligan2009}. Above this temperature, spectral lines were observed to have increasingly strong blueshifts, peaking at nearly $-100$~km~s$^{-1}$ for the \ion{Ca}{17} line, while the cooler ions exhibited relatively consistent redshifted emission across the three cool species studied, including the weak \ion{Fe}{10} line. \par Minor evolution in the Doppler velocity distribution was found throughout the duration of the flare. The earliest raster studied, which began at 14:02:39~UT, showed markedly different behaviour compared to later observations. Nonthermal emission in \textit{RHESSI} observations was first observed at 14:04:40~UT; thus, this raster observed both the pre-flare and early-flare chromosphere. As early as 14:03:00--14:03:11~UT, ions warmer than the FRT were observed to have blueshifted velocity enhancements, 90~s before the \textit{RHESSI} instrument detected nonthermal emission. The \ion{Fe}{16} ion, in particular, displayed a blueshift of $-68.9\pm4.6$~km~s$^{-1}$ in the region that subsequently became the flare footpoint. This early velocity behaviour is more consistent with gentle chromospheric evaporation \citep{Schmieder87,Zarro88}, possibly driven by a nonthermal electron component with an energy below the \textit{RHESSI} sensitivity threshold. \par The compact kernels of blueshifted emission apparent in warm ($\geq$1.82~MK) ions during the 14:02:39~UT raster expanded to fill both lobes of the flare ribbon during the 14:06:13~UT raster.
At this time, additional blueshifted material bridged the two HXR sources. By the 14:13:22~UT raster, while significant upflows remained in these species, they were mostly contained within the eastern structure, while the larger, western structure had begun to return to rest as early as the 14:09:48~UT raster. The blueshifted material bridging the two HXR sources persisted through the 14:09:48~UT raster, but was largely absent by 14:13:22~UT. \par During the 14:02:39~UT raster, ions cooler than the FRT (\ion{He}{2}, \ion{Fe}{10}, and \ion{Fe}{12}) exhibited small Doppler velocity enhancements within the region that would later become the flare footpoint. The downflows in these species peaked during the 14:06:13~UT raster (for \ion{He}{2}, downflows peaked at $v_{max}=41.7\pm5.5$~km~s$^{-1}$ during this raster), gradually returning to rest over the remaining duration. The results presented here are broadly consistent with the results of \cite{Lee2017}, who presented selected ion species at a point in the western lobe. \par Nonthermal velocities (calculated from the line width) are shown in every third row of Figure~\ref{fig:eis_cool_ions}. The highest nonthermal velocities derived from \textit{EIS} spectral fits were found at cooler temperatures, specifically those below the FRT, and were largest for \ion{Fe}{12} and \ion{He}{2}. The ion observed by \textit{EIS} with the smallest nonthermal velocity was \ion{Fe}{14}, which is formed at a temperature just above the FRT. Ions warmer than \ion{Fe}{14} showed higher nonthermal velocities with increasing temperature. There was little evolution in nonthermal velocity after 14:06:13~UT. During the 14:02:39~UT raster, the nonthermal velocity, particularly in \ion{Fe}{14}, \ion{Fe}{15}, and \ion{Fe}{16}, was mildly enhanced across the region that would later become the flare footpoint.
Overall, the nonthermal velocities observed are markedly similar in magnitude to those observed by \cite{Milligan2011}, though the flare studied in that work was significantly smaller (C1.1). \subsubsection{Lines formed above 10~MK}\label{sec:hot_lines} Figure~\ref{fig:eis_hot_ions} shows Doppler velocity behaviour for the hottest \textit{EIS} ions, \ion{Fe}{23} and \ion{Fe}{24}, with formation temperatures of 14.13~and~18.20~MK, respectively. The \textit{EIS} instrument observed one line of \ion{Fe}{23}, and two of \ion{Fe}{24}. Of the three, the reference wavelength constraints for \ion{Fe}{24} 192.02\AA\ were the most reliable, and this line serves as the focus of this discussion. In every raster where these lines were present, the core was accompanied by strong enhancements to the blue wing. Figure~\ref{fig:eis_hot_ions} presents the fits to the core and the blue wing enhancement, with time increasing top to bottom for the same four raster time intervals presented in Figure~\ref{fig:eis_cool_ions}. The six columns correspond to: the \ion{Fe}{23} core and blue wing, the \ion{Fe}{24} 255.13\AA\ core and blue wing, and the \ion{Fe}{24} 192.02\AA\ core and blue wing, while the rows alternate between intensity and Doppler velocity for these components. For the blue wing, Doppler velocity was measured relative to the same reference wavelength as the line core. Where the detector saturated while observing \ion{Fe}{24} 192.02\AA, or where there was insufficient signal to fit the emission line, as was often the case outside the flare ribbon, the fits were replaced with a null value. \par During the 14:02:39~UT raster, no emission was detected from the \ion{Fe}{23} line or the \ion{Fe}{24} 255.13\AA\ line. The \ion{Fe}{24} 192.02\AA\ line, while faint, was present during this time interval in locations that later became part of the footpoint. \par An example of this early, low-intensity emission is shown in the top panel of Figure~\ref{fig:fexxiv_complex_shift}.
Where it is present at 14:02:39~UT, the magnitude of the Doppler velocity for \ion{Fe}{24} is small for the core, and the separation of the wing is approximately constant. \par Significant \ion{Fe}{23} and \ion{Fe}{24} 255.13\AA\ emission first appeared during the 14:06:13~UT raster, and grew in intensity with each successive raster. All three lines exhibited core blueshifts within the footpoint at this time, with further blue-wing enhancement. The Doppler velocity of the blue wing peaked during the 14:06:13~UT raster, and decreased thereafter. \par Generally, these hot ions are expected to display a stationary core, with an enhanced blue wing \citep{Milligan2009}. During the raster covering the flare peak (14:06:13~UT), however, the entire line complex for both the \ion{Fe}{23} line and the \ion{Fe}{24} line pair was significantly blueshifted. Within non-saturated footpoint pixels, the \textit{core} of the \ion{Fe}{24} 192.02\AA\ line was found to have blueshifts as high as $-240$~km~s$^{-1}$, while maintaining a blue wing enhancement. For the same profile, the blue wing velocity was as high as $-480$~km~s$^{-1}$, relative to the same reference wavelength. By the 14:09:48~UT raster, while core blueshifts were still found within the flare ribbon, their magnitude and extent were far less than found one raster prior, and by 14:13:22~UT the cores of these lines had mostly returned to rest. Significant Doppler velocities observed in the ``rest'' component of this line complex are not expected. An example of this atypical behaviour is shown in Figure~\ref{fig:fexxiv_complex_shift}, which shows the \ion{Fe}{24} 192.02\AA\ complex across three rasters from the same location.
\subsubsection{Correlations between Doppler and Nonthermal Velocity, and Electron Density}\label{sec:eis_dens} \begin{table} \centering \caption{Density, Velocity Correlation Coefficients} \begin{tabular}{p{0.15\linewidth} p{0.18\linewidth} p{0.235\linewidth} p{0.235\linewidth}} \toprule & \textbf{Raster:} & \textbf{Pearson $\lvert r \lvert$:} & \textbf{Pearson $\lvert r \lvert$:} \\ \textbf{Ion:} & (UT) & n$_e$, v$_{nth}$ & v$_{nth}$, v$_{Dopp}$ \\ \midrule \ion{Fe}{12} & 14:02:39 & \textbf{---} & 0.151 \\ \ion{Fe}{12} & 14:06:13 & \textbf{---} & 0.624 \\ \ion{Fe}{12} & 14:09:48 & \textbf{---} & 0.502 \\ \ion{Fe}{12} & 14:13:22 & \textbf{---} & 0.526 \\ \midrule \ion{Fe}{14}$^a$ & 14:02:39 & 0.037 & 0.377 \\ \ion{Fe}{14} & 14:06:13 & 0.069 & 0.360 \\ \ion{Fe}{14} & 14:09:48 & 0.095 & 0.114 \\ \ion{Fe}{14} & 14:13:22 & 0.040 & 0.032 \\ \midrule \ion{Fe}{14}$^b$ & 14:02:39 & 0.014 & 0.287 \\ \ion{Fe}{14} & 14:06:13 & 0.092 & 0.063 \\ \ion{Fe}{14} & 14:09:48 & 0.114 & 0.045 \\ \ion{Fe}{14} & 14:13:22 & 0.112 & 0.058 \\ \midrule \ion{Fe}{15} & 14:02:39 & \textbf{---} & 0.710 \\ \ion{Fe}{15} & 14:06:13 & \textbf{---} & 0.102 \\ \ion{Fe}{15} & 14:09:48 & \textbf{---} & 0.089 \\ \ion{Fe}{15} & 14:13:22 & \textbf{---} & 0.273 \\ \midrule \ion{Fe}{16} & 14:02:39 & \textbf{---} & 0.714 \\ \ion{Fe}{16} & 14:06:13 & \textbf{---} & 0.506 \\ \ion{Fe}{16} & 14:09:48 & \textbf{---} & 0.579 \\ \ion{Fe}{16} & 14:13:22 & \textbf{---} & 0.545 \\ \bottomrule \end{tabular} \footnotesize{ $^a$ 264.81\AA, $^b$ 274.23\AA } \label{tab:dens_pears} \end{table} Density maps formed from the \ion{Fe}{14} 264.81/274.23 \AA\ line pair are presented in Figure~\ref{fig:eis_dens} (left column) for the same four raster times as shown in Figures~\ref{fig:eis_cool_ions}~and~\ref{fig:eis_hot_ions}, with extracted slices in the Solar-Y direction shown in the right column, in order to provide a density cross section of the flare ribbon. 
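The density maps are obtained by inverting the monotonic theoretical ratio-density curve; a minimal sketch, with a toy analytic curve standing in for the CHIANTI tabulation (the curve and function name below are illustrative assumptions):

```python
import numpy as np

# Toy monotonic tabulation of an intensity ratio versus log10 electron
# density over 1e8-1e12 cm^-3; a real analysis would tabulate the
# Fe XIV 264.81/274.23 ratio from the CHIANTI database.
log_ne_grid = np.linspace(8.0, 12.0, 41)
ratio_grid = 1.0 + 2.0 / (1.0 + np.exp(-(log_ne_grid - 10.0)))

def density_from_ratio(observed_ratio):
    """Invert the monotonic ratio-density curve by interpolation.

    Returns log10(n_e [cm^-3]); ratios outside the tabulated range are
    clipped, mirroring the saturation of the diagnostic noted in the text.
    """
    r = np.clip(observed_ratio, ratio_grid.min(), ratio_grid.max())
    return np.interp(r, ratio_grid, log_ne_grid)
```

Pixels whose observed ratio saturates at the top of the tabulated range correspond to the regions reported as exceeding $10^{12}$~cm$^{-3}$.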
The three slices selected for plotting are the same at every time, and are color-coordinated (such that the purple points on the left image denote the start and end of the purple curve at right). The density along the flare ribbon (identified by cyan \textit{RHESSI} \verb|CLEAN| contours) exceeded the upper limit of the line-ratio diagnostic at various times: several regions within the ribbon reached electron densities greater than $10^{12}$~cm$^{-3}$, with the highest densities over the largest areas found in the 14:06:13~UT raster. \cite{Lee2017} focused on a particular kernel of density enhancement, the peak of which coincided with the SXR emission peak, with only a smaller enhancement found at 14:06:13~UT. However, when the entire field of view is considered, the density enhancement is greatest during the peak of the nonthermal electron event, with much of the field exceeding the limits of the density relation. \par Potential mechanisms responsible for excess line broadening within the flare ribbon can be investigated through correlations of density, nonthermal velocity, and Doppler velocity. A strong correlation between Doppler and nonthermal velocity within the flare ribbon may be indicative of a blend of unresolved plasma flows. Conversely, a stronger correlation between electron density and nonthermal velocity would indicate that other effects, such as opacity, pressure, or potentially even turbulent broadening, are dominant. Measured correlations between these quantities within the flare ribbon are presented in Table~\ref{tab:dens_pears} for \ion{Fe}{12}, \ion{Fe}{14}, \ion{Fe}{15}, and \ion{Fe}{16}, which span a 1.15~MK range. No correlations with density are presented for \ion{Fe}{12}, \ion{Fe}{15}, or \ion{Fe}{16}, as there are no reliable density measurements in these lines. \par For the entire duration studied, neither \ion{Fe}{14} line exhibited any correlation between electron density and nonthermal velocity.
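The coefficients in Table~\ref{tab:dens_pears} are absolute Pearson correlations computed over the masked ribbon pixels; a minimal sketch of such a computation, using synthetic, partially correlated arrays (the data here are illustrative only):

```python
import numpy as np

def abs_pearson(x, y):
    """|r| between two parameter maps, using only finite (fitted) pixels."""
    good = np.isfinite(x) & np.isfinite(y)
    return abs(np.corrcoef(x[good], y[good])[0, 1])

# Synthetic ribbon-pixel samples: widths partially correlated with
# Doppler shifts, and a density array independent of both.
rng = np.random.default_rng(0)
w_nth = rng.normal(40.0, 10.0, 500)               # nonthermal velocity [km/s]
v_dopp = 0.6 * w_nth + rng.normal(0.0, 12.0, 500)  # correlated component
log_ne = rng.normal(9.5, 0.5, 500)                 # uncorrelated density
```

With these inputs, `abs_pearson(w_nth, v_dopp)` is moderate while `abs_pearson(w_nth, log_ne)` is near zero, mirroring the pattern reported for the \ion{Fe}{14} pair.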
There is a weak correlation between nonthermal velocity and the Doppler shift of the line core, with a peak correlation of $\lvert r \rvert=0.377$ in \ion{Fe}{14} 264.81\AA\ during the early-flare 14:02:39~UT raster. The two hotter lines exhibited a correlation between nonthermal velocity and Doppler velocity during the 14:02:39~UT raster. By the 14:06:13~UT raster, this correlation was found only in \ion{Fe}{16}. The cooler \ion{Fe}{12} 195.12\AA\ line only exhibits a correlation between nonthermal and Doppler velocities from the peak of energy injection onward. During the 14:06:13~UT raster, coincident with the peak of nonthermal electron injection, this correlation peaked at $\lvert r \rvert = 0.624$, indicating significant unresolved flow structure. \par The behaviour of the \ion{Fe}{14} line pair stands in contrast with that reported by \cite{Milligan2011}, who found a strong correlation between nonthermal velocities and densities within this line pair. The low correlations are more consistent with the findings of \cite{Doschek2007}, who studied plasma in a quiescent active region and also found no evidence of such a correlation. \par These correlations, taken from temperatures surrounding the FRT, are a signature of explosive chromospheric evaporation, as observed in the vicinity of a major energy deposition layer. At temperatures above and below the FRT, the nonthermal widths are likely due to a superposition of unresolved flows. Near the FRT, both nonthermal and Doppler velocities were small, implying that the Doppler velocity structure was well resolved. \subsection{IRIS Results}\label{sec:iris_results} Line intensities, Doppler velocities, and nonthermal velocities from \textit{IRIS} spectral fitting are shown in Figure~\ref{fig:iris_vs} for each of the three ions, fit for pixels lying within the flare ribbon. Points lying outside the flare ribbon were masked.
The left column shows the integrated line intensity for \ion{C}{2}~1335.71\AA, \ion{Si}{4}~1402.81\AA, and \ion{Fe}{21}~1354.07\AA, while the middle shows the Doppler velocity, and the right shows nonthermal velocities. For the bright \ion{C}{2} and \ion{Si}{4} lines, the running mean of each parameter is overlaid in orange, with the 1$\sigma$ error in the running mean overlaid as filled contours in the same color. \par The cool \ion{Si}{4} and \ion{C}{2} ions exhibit small Doppler shifts. Over the duration of the event, 81\% of \ion{Si}{4} profiles and 96\% of \ion{C}{2} profiles were redshifted, with peak velocities of 47.9~$\pm$~9.6~km~s$^{-1}$ at 14:09:19~UT and 59.6~$\pm$~5.6~km~s$^{-1}$ at 14:07:16~UT, respectively. This cool chromospheric condensation provides context for the \textit{EIS} observations of \ion{He}{2}. For example, \ion{He}{2} exhibited a maximum redshift of 41.7~$\pm$~5.5~km~s$^{-1}$ during the 14:06:13~UT raster. At this time (14:06:42~UT), \ion{Si}{4} redshifts peaked at 27.6~$\pm$~9.4~km~s$^{-1}$. \par More notable is the behaviour of the calculated nonthermal velocity for \ion{Si}{4}. The running mean of this quantity peaks at 14:05:49~UT, with a mean nonthermal velocity of 31.9$\pm$1.0~km~s$^{-1}$. This is coincident with the time of the hardest electron distribution, with a power-law index less than 6. As the nonthermal velocities in \ion{Si}{4} level off later in the flare, finally flattening at 14:10~UT, the power-law index increases, until the nonthermal electron event ceases shortly before 14:11~UT. In the case of \ion{Si}{4}, at least, the excess widths calculated from spectral fitting may be linked to line opacity changes, driven by the deposition of energy from a particularly hard distribution of nonthermal electrons. \par For the hot, low-emissivity \ion{Fe}{21} line, there are comparatively few spectra with significant observable emission, particularly at earlier times.
The earliest instance of an \ion{Fe}{21} profile that could be reasonably fit was at 14:04:44~UT, and was already highly blueshifted to $-122.5\pm11.6$~km~s$^{-1}$, with a nonthermal width of $128.2\pm15.4$~km~s$^{-1}$. Most of the emission from this line during the flare impulsive phase was obscured by high levels of noise in the continuum, a consequence of shorter exposure times. At later times in the flare, \ion{Fe}{21} was observed to have Doppler velocities mostly between 0 and $-80$~km~s$^{-1}$, with outliers observed in excess of $-150$~km~s$^{-1}$ ($\lvert v_{max} \rvert = 166.67 \pm 11.4$~km~s$^{-1}$ at 14:09:19~UT). \par The Doppler shifts presented here appear earlier, and reach larger values, than those of the profiles fit by \cite{Lee2017}, who found no \ion{Fe}{21} Doppler velocities in excess of $-60$~km~s$^{-1}$, measured at 14:10~UT for a particular kernel of emission. \cite{Li2015} were able to fit velocities as early as 13:45~UT; however, their measured Doppler velocities were, overall, smaller. Comparable Doppler velocities were found by \cite{OtherLi2015}, who studied an X1.0 flare that occurred on 2014 March 29, and found \ion{Fe}{21} Doppler velocities of $-214$~km~s$^{-1}$. \cite{Tian2015} also found similar blueshifts for the X1.6 flare on 2014 September 10, reaching a maximum of $-240$~km~s$^{-1}$, while \cite{graham2015} found velocities of up to $-300$~km~s$^{-1}$ for the same event. \par \ion{Fe}{21} nonthermal velocities were high for the entire duration of the flare, with a mean of 54.5~km~s$^{-1}$ and a maximum nonthermal velocity of $128.2\pm15.1$~km~s$^{-1}$. These measurements are larger than those of other studies of this flare. \cite{Lee2017} found no nonthermal velocities greater than $\approx 54$~km~s$^{-1}$ (0.6\AA\ FWHM) within the kernel chosen by that study.
The nonthermal velocities presented here are some of the highest observed for this ion, comparable to observations by \cite{graham2015}, \cite{Polito2015}, and \cite{Polito2016}. \par All parameters in Figure~\ref{fig:iris_vs} exhibited a large amount of scatter. As only pixels within the flare ribbon were selected, the remaining scatter must be due to differences across the field-of-view. The spatial context for these measurements is shown in Figure~\ref{fig:IRIS_context}, which shows the line intensities, Doppler velocities, and nonthermal velocities along each raster. For the \ion{C}{2} and \ion{Si}{4} lines, the raster beginning at 14:08:21~UT was selected for the high spatial coverage and low levels of saturation. For the weak \ion{Fe}{21} line, the entire time span was stacked to provide a coherent depiction of the region of interest. Where multiple \ion{Fe}{21} profiles were present, the parameter of greatest magnitude was selected for display. \par When displayed in this manner, it is apparent that enhancements in \ion{C}{2} and \ion{Si}{4} intensity, Doppler velocity, and nonthermal velocity track the structure of the flare ribbon. The \ion{Fe}{21} intensities, Doppler velocities, and nonthermal velocities, however, do not appear to track the flare ribbon. Rather, enhancements in these parameters appear to trace the edges of a loop structure connecting the two flare ribbons visible in slit-jaw imaging. A similar structure appears in the hottest \textit{EIS} ions (\ion{Fe}{23} and \ion{Fe}{24}) during the 14:09:48~UT and 14:13:22~UT rasters. As this structure is not visible in any other \textit{EIS} emission lines, the minimum temperature of this structure must be between 6.31~MK (\ion{Ca}{17}) and 11.48~MK (\ion{Fe}{21}). 
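The nonthermal widths quoted above are obtained by removing the thermal (and, where known, instrumental) width from the observed Gaussian width in quadrature. A minimal sketch of that conversion, assuming a Gaussian profile, negligible instrumental broadening, and the $\sim$11.5~MK \ion{Fe}{21} formation temperature quoted above, reproduces the $\approx 54$~km~s$^{-1}$ (0.6\,\AA\ FWHM) figure cited from \cite{Lee2017}:

```python
import math

KB = 1.380649e-16                   # Boltzmann constant, erg/K
M_FE = 55.845 * 1.6605390e-24       # mass of an Fe atom, g
C_KMS = 299792.458                  # speed of light, km/s

def nonthermal_velocity(fwhm_obs_A, wavelength_A, t_form_K, fwhm_instr_A=0.0):
    """1/e nonthermal velocity (km/s) from an observed Gaussian FWHM.

    Instrumental and thermal widths are removed in quadrature;
    for a Gaussian, FWHM = 2*sqrt(ln 2) * (1/e width).
    """
    to_kms = C_KMS / wavelength_A                       # Angstrom -> km/s
    fwhm_v = math.sqrt(max(fwhm_obs_A**2 - fwhm_instr_A**2, 0.0)) * to_kms
    w_obs = fwhm_v / (2.0 * math.sqrt(math.log(2.0)))   # 1/e width, km/s
    v_th = math.sqrt(2.0 * KB * t_form_K / M_FE) / 1e5  # thermal speed, km/s
    return math.sqrt(max(w_obs**2 - v_th**2, 0.0))

# 0.6 A FWHM for Fe XXI 1354.07 A at ~11.5 MK gives roughly 54 km/s
print(nonthermal_velocity(0.6, 1354.07, 1.148e7))
```

The exact value depends on the adopted formation temperature and on the instrumental width, which is neglected here.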
\subsection{Evolution of Doppler and Nonthermal Velocity as a function of Temperature}\label{sec:vel_evol_temp} Figure~\ref{fig:eis_ftpt} shows Doppler and nonthermal velocities as a function of temperature and time for a region within the primary flare ribbon. The \textit{IRIS} data are cospatial with \textit{EIS} data, and approximately co-temporal to the extent that the differing cadences could be matched. For ions that exhibited a strong blue-wing enhancement (\ion{Fe}{23} and \ion{Fe}{24}), the Doppler velocities of the core and wing components are included, as well as the nonthermal velocity of the core component only. \par The FRT was clearly located between \ion{Fe}{12} and \ion{Fe}{14}. \textit{IRIS} observations showed redshifts from chromospheric condensation, which continued to the coolest temperatures studied. Within the blueshifted lines, the \ion{Fe}{21} line observed with \textit{IRIS} appeared to be more consistent with the blueshifts observed in the cores of the \ion{Fe}{23} and \ion{Fe}{24} lines, rather than the blue-wing enhancements (which were noticeable outliers in Doppler velocity). \par Nonthermal velocities increased with temperature from the cool \textit{IRIS} lines through the \ion{Fe}{12} line observed by \textit{EIS}. There was a sudden drop in nonthermal velocity at this temperature, observed to some extent in all time bins studied. Within the blueshifted lines, the nonthermal velocity again increased approximately linearly with increasing temperature through \ion{Fe}{24}. The \ion{Fe}{21} \textit{IRIS} line fit well into this linear relation. By the 14:13:22~UT raster, while the break in nonthermal velocities was still present, the relation had become a great deal shallower than in the peak raster at 14:06:13~UT. \section{Discussion and Conclusions}\label{sec:Conclusions} In this work, a notably complete set of observations was used to relate the flare-driven nonthermal energy release with the response of the chromosphere. 
The results presented here place an emphasis on the time-resolved profile of nonthermal electron-driven emission in conjunction with the evolving chromosphere. The nonthermal electron distribution was provided via \textit{RHESSI} spectral fitting, while emission lines observed with the \textit{EIS} and \textit{IRIS} instruments probed the response of the event in intensity, Doppler velocity, nonthermal velocity, and density. As the nonthermal electron event began and proceeded, the chromosphere was observed to transition from gentle to explosive chromospheric evaporation, with densities and high-temperature velocities peaking during the interval identified as the peak of nonthermal electron energy deposition. \par Solar flares are true multiwavelength events in every sense of the word, with telltale signatures across spectral bands from radio to HXR. Events covered with a wide range of instrumentation across a wide spectral range are exceedingly rare \citep{MilliganIreland}. A holistic understanding of the generation, transport, and deposition of flare energies may be composed by integration of the many spectral windows provided by numerous instruments in the current state-of-the-art. These connected observations of the response of the chromosphere to the call of electron injection are critical to initializing models and guiding the results of numerical simulations. \par The injection of nonthermal electrons lasted 352 seconds and deposited more than 4.8~$\times 10^{30}$~erg into the chromosphere. Prior to the onset of nonthermal emission, gentle chromospheric evaporation was observed in \textit{EIS} rasters, characterized by compact blueshifted regions observed in ions with T~$\geq$~1.35~MK. After this time, the chromosphere responded explosively, with upflows in excess of -50~km~s$^{-1}$ in \ion{Fe}{16}, -65~km~s$^{-1}$ in \ion{Ca}{17}, and a core blueshift of $-242$~km~s$^{-1}$ in the \ion{Fe}{24} line. 
\par During the period of explosive chromospheric evaporation, several unique behaviours were observed in \textit{EIS} rasters. Most notable was the monolithic shift of the \ion{Fe}{24} complex. Typically, the \ion{Fe}{24} line can be well characterized by a stationary core with a strong enhancement to the blue wing, modelled as a blend of Gaussian profiles. While this behaviour is observed at various times during the event, the \textit{EIS} raster covering the peak of the flare (14:06:13~UT) exhibited monolithic shifts of the entire \ion{Fe}{24} line complex, with little to no stationary emission. This behaviour is greatly diminished by the start of the next raster, and absent by the following one. \par The presence of blue-wing-enhanced spectral lines at hot temperatures was first noted in observations of \ion{Ca}{19} using the Bragg Crystal Spectrometer (BCS) aboard \textit{Yohkoh} by \cite{Doschek2005}. \cite{Milligan2009} found similar profiles in \textit{EIS} observations of \ion{Fe}{23} and \ion{Fe}{24} lines during a solar flare. This behaviour was theorized to be a consequence of the low spatial resolution of these instruments (BCS, in particular, was a disk-integrated instrument). The low resolution had the effect of superimposing stationary looptop emission with blueshifted footpoint emission. Confirmation seemingly came with observations of the \ion{Fe}{21} line utilizing the higher-resolution \textit{IRIS} instrument: \cite{graham2015,Polito2015,Polito2016} found that this line exhibited no notable asymmetry. \cite{Doschek2013} and \cite{Brosius2013} both found instances of symmetric, blueshifted \ion{Fe}{23} profiles in an M1.8 and a C1 flare, respectively. The behaviour exhibited by the \ion{Fe}{24} line here, where the core of the line was found to be highly blueshifted while maintaining an enhanced blue wing, is not expected. \par This behaviour may be attributed to a superposition of unresolved flows. 
During the peak of this flare, several atmospheric strata with temperatures $\geq 14.1$~MK could have formed. However, the absence of a stationary population of 14.1~MK plasma until late in the flare remains unexplained. That it is present later in the event could indicate either that the stationary plasma was initially heated beyond the 18.2~MK \ion{Fe}{24} formation temperature, or that the looptop heating lagged behind the heating of the flare footpoint. \textit{RHESSI} spectral fitting showed the presence of plasma as hot as 70~MK during the peak of the flare, and as hot as 40~MK by the time a strong stationary core was observed at 14:13:22~UT. \par The temperature sampling provided by the \textit{EIS} instrument allowed constraints on the FRT, which was found to be in the range 1.35--1.82~MK. This is comparable to the FRT presented by \cite{Milligan2009}, between 1.5--2.0~MK, despite the differences in flare size (\textit{GOES} C1.1 versus X1.6). It is also similar to values presented by several other studies, including \cite{Graham2011} (1.25--1.6~MK for a C6.6 flare), \cite{Young2013} (1.1--1.6~MK for an M1.1 flare), and \cite{Watanabe2020}, who found two FRTs during an X1.8 flare: T$<$1.3~MK in one region and 1.3$<$T$<$1.8~MK in another. \cite{Brannon2014}, however, modeled flow-reversal properties in flares driven by thermal conduction, and found FRTs ranging from 0.526--4.78~MK, with some evolution in time. While the FRT range found for this event is similar to the range found in much smaller events, the area affected by the energy input is significantly larger, with a second flare ribbon well outside the \textit{EIS} field of view for this event. It may be that flow reversal always, or nearly always, occurs around this temperature, independent of the deposited energy. \par Every emission line studied exhibited line broadening. In \textit{EIS} rasters, the smallest nonthermal velocities are found just above the FRT in the \ion{Fe}{14} emission line pair. 
The nonthermal velocities of \textit{EIS} emission lines increase up to the FRT, with a sudden drop in nonthermal velocity just above this temperature, before increasing again to the highest temperatures. Two particular emission lines, \ion{Si}{4} and \ion{Fe}{21}, both observed by the \textit{IRIS} instrument, are of note. The \ion{Fe}{21} line exhibited broad, symmetric profiles that were often of low intensity. While the magnitude of the nonthermal widths of these profiles is not unprecedented \citep{young2015,Lee2017,Kerr2020}, they are among the broadest yet observed \citep{graham2015,Polito2015,Polito2016}. Broad, highly shifted profiles in this line appeared early in the flare, prior to the peak of electron injection, implying that even relatively weak electron precipitation is sufficient to generate profiles with large nonthermal widths, raising further questions as to their generation \citep{Polito2019}. The cool \ion{Si}{4} line also exhibited enhanced nonthermal widths, albeit at a much lower level. These enhancements are notable owing to their similarity with the evolution of the nonthermal electron spectral index, implying that the nonthermal velocity enhancement at cool temperatures may be linked directly to the deposition of energy in the lower atmosphere by nonthermal electrons. \par The electron density within the flare footpoint, as measured by the \ion{Fe}{14} 264.81/274.23~\AA\ ratio, increased by nearly two orders of magnitude in the minutes following the onset of the electron injection event. Enhancements in nonthermal velocity in \ion{Fe}{14} were found to be small and not correlated with the density or Doppler velocity, standing in contrast to the findings of \cite{Milligan2011}. The \ion{Fe}{16} emission line exhibited a correlation between Doppler and nonthermal velocity, in agreement with the findings of \cite{Milligan2011} and \cite{Doschek2013} for this emission line. 
A significant correlation was also observed between the Doppler and nonthermal \ion{Fe}{12} velocities, suggesting that nonthermal velocities in lines formed above and below the FRT originated from unresolved velocity flow structures along the line of sight, similar to the findings of \cite{Young2013}. \par This study combined temporally, spatially, and spectrally resolved observations of a large number of distinct emission lines. This set of flare parameters combined a time-dependent electron injection profile with a time-dependent chromospheric response, including Doppler and nonthermal velocities, electron densities, and emission line intensities, with multiple rasters covering the nonthermal electron event. In addition to providing a detailed profile of this large solar flare, the derived parameters can be used to guide and interpret modeling of the atmosphere, using state-of-the-art hydrodynamic flare simulation codes, such as HYDRAD \citep{HYDRAD1,HYDRAD2}, RADYN \citep{RADYN1,RADYN2,RADYN3}, or FLARIX \citep{FLARIX1,FLARIX2}. Time-dependent parameters of nonthermal electron energy injection from \textit{RHESSI} would provide the electron-beam input, while the chromospheric response across temperatures from 10$^{4}$--10$^{7}$~K provides guidance for the correlation of simulation outputs. Together, this allows for both a deeper understanding of the dynamic response of the chromosphere to an impulsive injection of energy, as well as the ability to constrain the numerical simulations to the underlying physics. 
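The \ion{Fe}{14} 264.81/274.23~\AA\ density diagnostic discussed above works by inverting a theoretical ratio-versus-density curve. The sketch below is illustrative only: the grid values are placeholders, not CHIANTI results, and a real analysis would interpolate atomic-database curves at the measured ratio:

```python
import numpy as np

# Illustrative (NOT CHIANTI-derived) theoretical curve: the Fe XIV
# 264.81/274.23 A intensity ratio rises monotonically with electron
# density over roughly n_e = 1e9 - 1e11 cm^-3.
log_ne_grid = np.array([9.0, 9.5, 10.0, 10.5, 11.0])   # log10(n_e / cm^-3)
ratio_grid = np.array([1.7, 1.8, 2.0, 2.3, 2.6])       # placeholder values

def log_density_from_ratio(observed_ratio):
    """Invert the ratio-density curve by linear interpolation."""
    return float(np.interp(observed_ratio, ratio_grid, log_ne_grid))

# An observed ratio of 2.0 maps back onto log n_e = 10 on this toy grid
print(log_density_from_ratio(2.0))
```

Ratios outside the tabulated range saturate at the grid edges, so measured ratios should be checked against the diagnostic's sensitivity range before inversion.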
\par Several unanswered questions remain: the specifics of energy and mass transport in the post-flare atmosphere as driven by relatively weak nonthermal electron heating; the origin and nature of symmetric, nonthermally broadened emission-line profiles, especially in the extreme case of \ion{Fe}{21}; the origin of the oft-observed blue-wing asymmetry in the hottest \ion{Fe}{23} and \ion{Fe}{24} emission lines; the unexpected shift of the typically stationary \ion{Fe}{24} line core; and the transition from gentle to explosive chromospheric evaporation. Such questions can be answered in part by detailed simulations constrained by coupled datasets containing both the coronal call and the chromospheric response. In the era of multiwavelength solar observations, coordinated observations of large solar flares have become not only a possibility but a necessity. As activity increases during the rise of Solar Cycle 25, a new suite of instrumentation will enable similar studies to be conducted in unprecedented detail. Joining the venerable \textit{IRIS} and \textit{EIS} instruments, whose capabilities have not yet been exhausted, are the new \textit{STIX} \citep{STIX} and \textit{SPICE} \citep{SPICE} instruments aboard Solar Orbiter \citep{SolarOrbiter}, whose capabilities and unique heliocentric positioning will enable a new era of observations and will be vital for studying the activity that accompanies the rise of this new solar cycle. \begin{acknowledgments} We thank the referee for the careful reading of our manuscript, as well as the extremely productive and helpful comments, which have improved the quality of the work immensely. 
This work was funded by NASA grant NNX17AD31G and support from DoD Research and Education Program for Historically Black Colleges and Universities and Minority-Serving Institutions (HBCU/MI) Basic Research Funding Opportunity Announcement W911NF-17-S-0010, Proposal Number: 72536-RT-REP, Agreement Number: W911NF-18-1-0484. ROM would like to thank Science and Technologies Facilities Council (UK) for the award of an Ernest Rutherford Fellowship (ST/N004981/2). \end{acknowledgments} \bibliography{call_and_response_0803.bib}{} \bibliographystyle{aasjournal}
Title: SN 2016iyc: A Type IIb supernova arising from a low-mass progenitor
Abstract: In this work, photometric and spectroscopic analyses of a very low-luminosity Type IIb supernova (SN) 2016iyc have been performed. SN 2016iyc lies near the faint end among the distribution of similar supernovae (SNe). Given lower ejecta mass ($M_{\rm ej}$) and low nickel mass ($M_{\rm Ni}$) from the literature, combined with SN 2016iyc lying near the faint end, one-dimensional stellar evolution models of 9 - 14 M$_{\odot}$ zero-age main-sequence (ZAMS) stars as the possible progenitors of SN 2016iyc have been performed using the publicly available code MESA. Moreover, synthetic explosions of the progenitor models have been simulated using the hydrodynamic evolution codes STELLA and SNEC. The bolometric luminosity light curve and photospheric velocities produced through synthetic explosions of ZAMS stars of mass in the range 12 - 13 M$_{\odot}$ having a pre-supernova radius $R_{\mathrm{0}} =$ (240 - 300) R$_{\odot}$, with $M_{\rm ej} =$ (1.89 - 1.93) M$_{\odot}$, explosion energy $E_{\rm exp} = $ (0.28 - 0.35) $\times 10^{51}$ erg, and $M_{\rm Ni} < 0.09$ M$_{\odot}$, are in good agreement with observations; thus, SN 2016iyc probably exploded from a progenitor near the lower mass limits for SNe IIb. Finally, hydrodynamic simulations of the explosions of SN 2016gkg and SN 2011fu have also been performed to compare intermediate- and high-luminosity examples among well-studied SNe IIb. The results of progenitor modelling and synthetic explosions for SN 2016iyc, SN 2016gkg, and SN 2011fu exhibit a diverse range of mass for the possible progenitors of SNe IIb.
https://export.arxiv.org/pdf/2208.07377
\label{firstpage} \pagerange{\pageref{firstpage}--\pageref{lastpage}} \begin{keywords} supernovae: general -- supernovae: individual: SN~2016iyc, SN~2016gkg, SN 2011fu -- techniques: photometric -- techniques: spectroscopic \end{keywords} \section{Introduction} \label{sec:Introduction} Type IIb supernovae (SNe) are a subclass of catastrophic core-collapse SNe (CCSNe). These SNe form a transition class of objects that link hydrogen (H)-rich Type II and H-deficient Type Ib SNe \citep[][]{Filippenko1988, Filippenko1993, Smartt2009}; see \citet{Filippenko1997} for a review. Their early-phase spectra show strong H features, and distinct helium (He) lines start to appear a few weeks later; thus, the progenitors of these SNe are thought to be only partially stripped, retaining a significant H envelope, with the He core exposed once the envelope becomes optically thin. The predominant powering mechanisms in SNe~IIb are the radioactive decay of $^{56}$Ni and the deposition of internal energy by the shock in the ejecta \citep[e.g.,][]{Arnett1980, Arnett1982, Arnett1996, Nadyozhin1994, Chatzopoulous2013, Nicholl2017}. In a few cases, the SN progenitors are also surrounded by dense circumstellar material (CSM) which may interact violently with the SN ejecta. The interaction of the CSM with the SN ejecta results in the formation of a two-component shock structure: a forward shock moving into the CSM and a reverse shock moving back into the SN ejecta. Both of these shocks deposit their kinetic energy into the material, which is then released radiatively, powering the light curves of the SNe \citep[e.g.,][]{Chevalier1982, Chevalier1994, Moriya2011, Ginzberg2012, Chatzopoulous2013, Nicholl2017}. Understanding the possible progenitors of stripped or partially-stripped CCSNe is a challenging task. 
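For the radioactive-decay powering mentioned above, the instantaneous energy deposition from the $^{56}$Ni $\rightarrow$ $^{56}$Co $\rightarrow$ $^{56}$Fe chain is often written with the \citet{Nadyozhin1994} coefficients. The sketch below assumes full gamma-ray trapping, so it is an upper bound on the deposited power at late times:

```python
import math

MSUN_NI_COEFF_NI = 6.45e43   # erg/s per Msun of 56Ni (Nadyozhin 1994)
MSUN_NI_COEFF_CO = 1.45e43   # erg/s per Msun of 56Ni
TAU_NI_DAYS = 8.8            # 56Ni e-folding time
TAU_CO_DAYS = 111.3          # 56Co e-folding time

def decay_luminosity(t_days, m_ni_msun):
    """Instantaneous radioactive power (erg/s), full gamma-ray trapping."""
    return m_ni_msun * (MSUN_NI_COEFF_NI * math.exp(-t_days / TAU_NI_DAYS)
                        + MSUN_NI_COEFF_CO * math.exp(-t_days / TAU_CO_DAYS))

# e.g. 0.09 Msun of 56Ni (the upper limit quoted for SN 2016iyc) at t = 20 d
print(f"{decay_luminosity(20.0, 0.09):.3e} erg/s")
```

Comparing this deposition curve with the observed bolometric light curve is what allows the nickel mass to be constrained from the peak and tail luminosities.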
Methods to investigate the SN progenitors and their properties include (a) direct detections of objects in pre-explosion images, and (b) modelling of certain mass zero-age main-sequence (ZAMS) stars as the possible progenitors, based on the observed photometric and spectroscopic properties of the SNe. Direct detections of progenitors are rare owing to the uncertainty associated with the spatial positions and the infrequent occurrence of these transient phenomena. One has to be very lucky to get such pre-explosion images. However, for SNe~IIb, four cases of the direct detection of objects in pre-explosion images have been claimed. These include SN~1993J \citep[][]{Filippenko1993-IAUC,Aldering1994}, SN~2008ax \citep[][]{Crockett2008}, SN~2011dh \citep[][]{Maund2011,Van2011}, and SN~2013df \citep[][]{Van2014}, indicating either massive Wolf-Rayet (WR) stars ($M_{\rm ZAMS} \approx 10$--28\,M$_\odot$; \citealt{Crockett2008}) or more extended yellow supergiants (YSGs) with $M_{\rm ZAMS} = 12$--17\,M$_\odot$ \citep[][]{Van2013, Folatelli2014, Smartt2015} as SN~IIb progenitors. Following \citet[][]{Smartt2009b} and \citet[][]{Van2017}, there have only been $\sim 34$ cases of direct CCSNe progenitor detections. With these direct detections, the progenitors of SNe~IIP are red supergiants (RSGs); SN~IIn progenitors are luminous blue variables; the progenitors of SNe~IIL are still debated, with only the case of SN~2009kr suggesting RSG or yellow supergiant progenitors; and the progenitors of SNe~Ib/c are either low-mass stars in a binary system \citep[][]{Podsiadlowski1992, Nomoto1995, Smartt2009} or a single massive WR star \citep[e.g.,][]{Gaskell1986, Eldridge2011, Groh2013}. 
The second method, progenitor modelling using stellar evolution codes to constrain the nature of the possible progenitors of stripped or partially-stripped CCSNe, identified either via direct imaging as in the case of iPTF13bvn \citep[][]{Cao2013} or via indirect methods including nebular-phase spectral modelling \citep[][]{Jerkstrand2015, Uomoto1986}, and simulating the synthetic explosions of their pre-SN models, is also vital to understand their nature, physical conditions, circumstellar environment, and chemical compositions. However, progenitor modelling of such objects using various stellar evolution codes is difficult owing to the complicated stages of shell burning. Another problem associated with such modelling is the obscure nature of the mixing-length-theory parameter ($\alpha_{\rm MLT}$), which has no physical basis \citep[][]{Joyce2018,Viani2018}. Furthermore, \citet[][]{Joyce2018} note that $\alpha_{\rm MLT}$ is neither a physical constant nor a computational one; it is rather a free parameter, so the value of $\alpha_{\rm MLT}$ must be determined individually in each stellar evolution code. Owing to the above-mentioned difficulties, only a handful of such studies including progenitor modelling followed by synthetic explosions have been performed for stripped or partially-stripped CCSNe, including the Type Ib SN~iPTF13bvn \citep[][]{Cao2013, Bersten2014, Paxton2018}, the famous Type IIb SN~2016gkg \citep[][]{Bersten2018}, a few other Type IIb SNe including SN~2011dh \citep[][]{Bersten2012}, SN~2011fu \citep[][]{Morales2015}, two Type Ib SNe~2015ap and 2016bau \citep[][]{Aryan2021}, and another Type Ib SN~2012au \citep[][]{Pandey2021}. Considering these limited studies, our work goes one step further, as we perform one-dimensional stellar evolution modelling of the possible progenitors of the low-luminosity Type IIb SN~2016iyc and also simulate their synthetic explosions. 
Our studies in this work point toward SN~2016iyc originating from the lower-mass end of the ZAMS progenitor systems observed for Type IIb CCSNe. This paper is divided into eight sections, including an introduction in Sec.~\ref{sec:Introduction}. Sec.~\ref{sec:Data_red} provides details about various telescopes and reduction procedures, including the discovery of SN~2016iyc using the Katzman Automatic Imaging Telescope (KAIT) at Lick Observatory as well as recalibrated photometry of SN~2016gkg. In Sec.~\ref{sec:Photometric}, methods to correct for the extinction, photometric properties including the bolometric light curve, black-body temperature, and radius evolutions are discussed. We present the analyses describing the spectral properties and comparisons with other similar and well-studied SNe in Sec.~\ref{sec:Spectral}; we also model the spectra of these SNe using {\tt SYN++}. The assumptions and methods for modelling the possible progenitor of SN~2016iyc and the evolution of the models until the onset of core collapse using {\tt MESA} are presented in Sec.~\ref{sec:mesa_snec}. Further, in this section, we discuss the assumptions and methods for simulating the synthetic explosions using {\tt SNEC} and {\tt STELLA}. Here, comparisons between the parameters obtained through synthetic explosions and observed ones are presented. We also perform hydrodynamic modelling of the synthetic explosions of SN~2016gkg and SN~2011fu in Sec.~\ref{sec:SN2016gkg_model}. In Sec.~\ref{sec:Discussions}, we discuss our major results and findings. We summarise our work and provide concluding remarks in Sec.~\ref{sec:Conclusions}. \section{Data acquisition and reduction } \label{sec:Data_red} SN~2016iyc was discovered \citep[][]{de2016} in an 18\,s unfiltered image taken at 03:28:00 on 2016~Dec.~18 (UT dates are used throughout this paper) by the 0.76\,m KAIT as part of the Lick Observatory Supernova Search \citep[LOSS;][]{Filippenko2001}. 
Its brightness was $17.81\pm0.11$\,mag, and the object was not detected earlier on Dec. 04.14 with an upper limit of 19.0\,mag. We measure its J2000.0 coordinates to be $\alpha=22^{\mathrm{h}}09^{\mathrm{m}}14\fs29$, $\delta=+21^{\circ}31\arcmin17\farcs3$, with an uncertainty of $0\farcs5$ in each coordinate. SN~2016iyc is $14\farcs0$ west and $10\farcs4$ north of the nucleus of the host galaxy UGC~11924, which has redshift $z=0.012685 \pm 0.000017$ \citep[][]{Giovanelli1993} and a spiral (Sd) morphology. $B$, $V$, $R$, and $I$ multiband follow-up images of SN~2016iyc were obtained with both KAIT and the 1\,m Nickel telescope at Lick Observatory; KAIT also obtained additional unfiltered ({\it Clear (C)}-band) images. Although unfiltered and thus nonstandard, $C$ is most similar to the $R$ band \citep[][]{Li2003}, and has been widely used for SN observations by KAIT \citep[e.g.,][]{Stahl2019,dejager2019,Zheng2022}. All images were reduced using a custom pipeline\footnote{https://github.com/benstahl92/LOSSPhotPypeline} detailed by \citet[][]{Stahl2019}. Here we briefly summarise the photometric procedure. Image subtraction was conducted in order to remove the host-galaxy contribution, using additional images obtained after the SN had faded below the detection limit. Point-spread-function (PSF) photometry was obtained using DAOPHOT \citep[][]{Stetson1987} from the IDL Astronomy User’s Library\footnote{http://idlastro.gsfc.nasa.gov/}. Several nearby stars were chosen from the Pan-STARRS1\footnote{http://archive.stsci.edu/panstarrs/search.php} catalogue for calibration purposes; their magnitudes were first transformed into the \citet{Landolt1992} system using the empirical prescription presented by \citet[][Eq.~6]{Torny2012}, and then into the KAIT/Nickel natural system. All apparent magnitudes were measured in the KAIT4/Nickel2 natural system. 
The final results were transformed to the standard system using local calibrators and colour terms for KAIT4 and Nickel2 \citep[][]{Stahl2019}. The same method was adopted to reprocess the LOSS data of SN~2016gkg (originally published by \citealt{Bersten2018}), except that no subtraction procedure was applied to SN~2016gkg; the calibration source was also chosen from the Pan-STARRS1 catalog. Photometry of SN~2016gkg at two epochs was also obtained with the 3.6\,m Devasthal optical telescope (DOT) using the 4K$\times$4K CCD Imager \citep{Pandey2018, Kumar2021a}. SN~2016gkg was the first SN detected by the 3.6\,m DOT during its initial commissioning phases. For the data obtained from the 3.6\,m DOT, the \citet{Landolt1992} photometric standard fields PG 0918, PG 1633, and PG 1657 were observed on 2021 Feb. 07 along with the SN field in the $UBVRI$ bands under good photometric conditions. These three Landolt fields have standard stars with a $V$-band magnitude range of 12.27--15.26\,mag and a $B-V$ colour range of $-$0.27 to +1.13\,mag. The SN fields observed in 2021 were used for template subtraction to remove the host-galaxy contributions from the source images. Template subtraction was performed with standard procedures by matching the full width at half-maximum intensity (FWHM) and flux values of respective images. The optical photometric data reduction and calibration were made with a standard process discussed by \citet[][]{Kumar2021} and Python scripts hosted on \textsc{RedPipe} \citep[][]{Singh2021}. The average atmospheric extinction values in the $U$, $B$, $V$, $R$, and $I$ bands for the Devasthal site were adopted from \citet{Kumar2021a}. The recalibrated KAIT data of SN~2016gkg along with those observed at later epochs using the 4K$\times$4K CCD Imager mounted at the axial port of the 3.6\,m DOT were used for the construction of bolometric light curves as described in the following sections. A single optical spectrum of SN~2016iyc was obtained on 2016 Dec. 
23 with the Kast double spectrograph \citep[][]{MillerStone1993} mounted on the 3\,m Shane telescope at Lick Observatory. The 2700\,s exposure was taken at or near the parallactic angle to minimise slit losses caused by atmospheric dispersion \citep[][]{Filippenko1982}. The observations were conducted with a $2''$-wide slit, 600/4310 grism on the blue side, and 300/7500 grating on the red side. This instrument configuration has a combined wavelength range of $\sim 3500$--10,400\,\si{\angstrom} and spectral resolving power of $R \approx 800$. Data reduction followed standard techniques for CCD processing and spectrum extraction \citep[][]{Silverman2012} utilising IRAF\footnote{IRAF is distributed by the National Optical Astronomy Observatory, which is operated by AURA, Inc., under a cooperative agreement with the U.S. NSF.} routines and custom Python and IDL codes\footnote{https://github.com/ishivvers/TheKastShiv}. Low-order polynomial fits to comparison-lamp spectra were used to calibrate the wavelength scale, and small adjustments derived from night-sky lines in the target frames were applied. Observations of appropriate spectrophotometric standard stars were used to flux calibrate the spectrum. \section{Photometric Properties} \label{sec:Photometric} In this section, we discuss the photometric properties of SN~2016iyc, including the colour evolution, extinction, bolometric light curves, and various black-body parameters. The $BVRI$- and $C$-band photometric data of SN~2016iyc are presented in Table~\ref{tab:optical_observations_2016iyc}. Most of the analyses in this paper have been performed with respect to the phase of $V$-band maximum brightness. The photometric data of SN~2016iyc lack dense coverage near peak brightness; thus, to find the phase of $V$-band maximum, we used the $V$-band light curve of SN~2013df as a template having a rising timescale similar to that of SN~2016iyc (Figure~\ref{fig:V_max}). 
We fit a fourth-order polynomial to the template light curve and find the date of $V$ maximum to be MJD $57752.7 \pm 0.2$. The left panel of Figure~\ref{fig:BVRIC_LC} shows the $BVRI$- and $C$-band light curves of SN~2016iyc. The characteristic extended shock-breakout (hereafter, extended-SBO) feature typically observed in SNe~IIb is seen in all of the bands. Multiple mechanisms and/or ejecta/progenitor properties have been theorised to explain such an enhancement in the luminosity before the primary peak, including an extended progenitor radius of up to a few hundred R$_{\odot}$ \citep[e.g.][]{Nomoto1993,Podsiadlowski1993,Woosley1994}; interaction with CSM similar to the case of Type IIn SNe \citep[][]{Schlegel1990}; interaction with a companion in a close binary system \citep[][]{Kasen2010,Moriya2015}; and sometimes enhanced $^{56}$Ni mixing into the outer ejecta \citep[e.g.,][]{Arnett1989}. The right-hand panel of Figure~\ref{fig:BVRIC_LC} shows the $BVRI$ and $C$ light curves along with the late-time upper limits in each band. These upper limits are very useful in constraining the upper limit on $M_{\rm Ni}$. 
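The peak-date estimate above comes from a polynomial fit to a template light curve. A minimal sketch using hypothetical (phase, magnitude) points, not the actual SN~2013df template, is:

```python
import numpy as np

# Hypothetical V-band points around peak (phase in days, magnitude);
# the real analysis fits the SN 2013df template light curve.
phase = np.array([-6.0, -4.0, -2.0, 0.0, 2.0, 4.0, 6.0])
vmag = np.array([16.90, 16.55, 16.35, 16.30, 16.38, 16.60, 16.95])

# Fourth-order polynomial fit, as in the text.
coeffs = np.polyfit(phase, vmag, 4)

# Evaluate on a fine grid; for magnitudes, the minimum is the brightest point.
fine = np.linspace(phase.min(), phase.max(), 2001)
t_peak = fine[np.argmin(np.polyval(coeffs, fine))]
print(t_peak)
```

Adding the fitted peak phase to the reference epoch of the template then yields the MJD of $V$-band maximum; the quoted $\pm 0.2$\,d uncertainty would come from the scatter of the fit, which this sketch does not reproduce.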
\subsection{Distance estimation of SN~2016iyc} \begin{table*} \caption {The adopted total extinction values, distances, and corresponding distance moduli of a subset of SNe considered here.} \label{tab:comparison_Sample} \begin{center} {\scriptsize \begin{tabular}{ccccccccccccc} \hline\hline SN name & $E(B-V)_{\rm tot}$ & Adopted distance & Distance modulus & $V_{\rm max}$ & log\,$(L_{BVRI})_p$ \\ & (mag) & (Mpc) & (mag) & (mag) & (erg\,s$^{-1}$) \\ \hline SN~1987A & 0.16 \citep[][]{Bose2021} & 0.05 \citep[][]{Bose2021} & 18.44 & $-15.52 \pm 0.02$ & 42.555 $\pm$ 0.004\\\\ SN~1993J & 0.18 \citep[][]{Richmond1996} & 3.68 \citep[][]{Bose2021} & 27.82 & $-16.97 \pm 0.03$ & 42.01 $\pm$ 0.01\\\\ SN~2003bg & 0.02 \citep[][]{Mazzali2009} & 24 \citep[][]{Mazzali2009} & 31.90 & -17.8$\pm$0.2 & 42.31 $\pm$ 0.03\\\\ SN~2008ax & 0.3 \citep[][]{Tsvetkov2009} & 9.6 \citep[][]{Pastorello2008} & 29.92 & $-16.35 \pm 0.05$ & 42.07 $\pm$ 0.03\\\\ SN~2011dh & 0.035 \citep[][]{Sahu2013} & 8.4 \citep[][]{Sahu2013} & 29.62 & $-17.06 \pm 0.02$ & 41.99 $\pm$ 0.03\\\\ SN~2011fu & 0.22 \citep[][]{Kumar2013} & 77.0 \citep[][]{Kumar2013} & 34.46 & $-17.51 \pm 0.03$ & 42.426 $\pm$ 0.006\\\\ SN~2011hs & 0.17 \citep[][]{Bufano2014} & 23.44$^{*}$ & 31.85 & $-16.03 \pm 0.03$ & 41.74 $\pm$ 0.02\\\\ SN~2013df & 0.098 \citep[][]{Morales2015} & 16.6 \citep[][]{Van2014} & 31.1 & $-16.47 \pm 0.05$ & 41.87 $\pm$ 0.01\\\\ SN~2016gkg & 0.017 \citep[][]{Bersten2018} & 26.4 \citep[][]{Kilpatrick2017} & 32.11 & $-17.03 \pm 0.05$ & 41.98 $\pm$ 0.02\\\\ SN~2016iyc & 0.137 & 46.0 & 33.31 & $-15.32 \pm 0.05$ & 41.44 $\pm$ 0.01\\ \hline\hline \end{tabular}} \end{center} {The sources for the $BVRI$ light curves for SNe in the comparison sample are as follows. 
SN~1987A, \citet[][]{Menzies1987} and \citet[][]{Makino1987}; SN~1993J, \citet[][]{Zhang2004}; SN~2003bg, \citet[][]{Hamuy2009}; SN~2008ax, \citet[][]{Pastorello2008}; SN~2011dh, \citet[][]{Sahu2013}; SN~2011fu, \citet[][]{Kumar2013}; SN~2011hs, \citet[][]{Bufano2014}; SN~2013df, \citet[][]{Morales2015}. Adopted distances have been used to calculate the distance moduli. The total extinction corrections and distance moduli for all the SNe in the comparison sample have been taken into account when calculating the bolometric light curves. $^*$For SN~2011hs, the distance modulus is 31.85\,mag \citep[][]{Bufano2014}, which is used to back-calculate a distance of 23.44\,Mpc.} \end{table*} Distance determinations from redshifts ($z$) are severely biased for nearby SNe because the peculiar velocities of nearby galaxies are comparable to their Hubble-flow velocities; redshift-based distance estimates are therefore reliable only for SNe with $z > 0.1$. Hence, the redshift-based distance for SN~2016iyc published by \citet[][]{Planck2016} could be inaccurate. Since SN~2016iyc is nearby ($z = 0.012685$), we cross-checked the redshift-based distance estimate (56.6837\,Mpc; \citealt[][]{Planck2016}) with the Distance--Velocity ($D$--$V$) Calculator of \citet[][]{Kourkchi2020}\footnote{http://edd.ifa.hawaii.edu/CF3calculator/} (Figure~\ref{fig:distance}). For a heliocentric velocity $V_{\rm h} \approx 3804$\,km\,s$^{-1}$, the observed velocity ($V_{\rm ls}$) at the location of SN~2016iyc is $\sim 4089$\,km\,s$^{-1}$, obtained using Eq.~5 of \citet[][]{Kourkchi2020}. For $V_{\rm ls} = 4089$\,km\,s$^{-1}$, the $D$--$V$ Calculator gives a distance of $\sim 46$\,Mpc, which is $\sim 20$\% smaller than the \citet[][]{Planck2016} value. The corresponding distance modulus of 33.31\,mag is adopted for all further analyses in this paper. 
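The adopted distance translates to the quoted distance modulus via $\mu = 5\log_{10}(d/10\,{\rm pc})$; a minimal check (the helper name is ours):

```python
import math

def distance_modulus(d_mpc):
    """mu = 5 log10(d / 10 pc) = 5 log10(d_Mpc) + 25, in mag."""
    return 5.0 * math.log10(d_mpc) + 25.0

mu = distance_modulus(46.0)          # ~33.31 mag, as adopted for SN 2016iyc

# Fractional offset from the redshift-based distance of 56.6837 Mpc:
frac = 1.0 - 46.0 / 56.6837          # ~0.19, i.e. ~20% smaller
```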
The distances for the SNe used as a comparison sample are well established in the literature and are used as such to estimate their respective bolometric luminosities. The distance of each SN in the comparison sample, along with the corresponding distance modulus, is presented in Table~\ref{tab:comparison_Sample}. \subsection{Colour evolution and extinction correction} For SN~2016iyc, we corrected for Milky Way (MW) extinction using NED, following \citet[][]{Schlafly2011}. In the direction of SN~2016iyc, $E(B-V)_{\rm MW} = 0.067$\,mag, so the MW extinction corrections for the $B$, $V$, $R$, and $I$ bands are 0.278, 0.207, 0.155, and 0.099\,mag, respectively. Only one spectrum of SN~2016iyc is available, and it does not exhibit a clear Na~I~D absorption line from gas in the host galaxy, suggesting negligible host-galaxy extinction. However, neglecting host-galaxy extinction based solely on the absence of obvious Na~I~D absorption could be misleading. In a recently published Lick/KAIT data-release paper for stripped-envelope SNe, \citet[][]{Zheng2022} performed a comprehensive analysis of host-galaxy contamination and found $E(B-V)_{\rm host} = 0.07$\,mag for SN~2016iyc. We also performed a simple analysis to place an upper limit on the host-galaxy extinction. Five early epochs were selected, and the spectral energy distribution (SED) was fitted with black-body curves assuming different values of $E(B-V)_{\rm host}$ (Figure~\ref{fig:extinction_trial}). We found that host-galaxy extinction beyond 0.07\,mag results in black-body temperatures exceeding 11,200\,K. Such high temperatures are generally not seen in SNe~IIb: following \citet[][]{Ben2015}, the early-time black-body temperatures of SN~1993J, SN~2011dh, and SN~2013df are 8200\,K, 8200\,K, and 7470\,K, respectively. 
There have been only a few cases where the early black-body temperature exceeds 11,000\,K; one such example is SN~2001ig \citep[][]{Ben2015}, but this SN may have come from a compact WR binary progenitor system \citep[][]{Ryder2004}. Based on the above analyses and the results of \citet[][]{Zheng2022}, we adopt a host-galaxy extinction of 0.07\,mag throughout this paper. Thus, a total (Milky Way + host galaxy) extinction of $E(B-V)_{\rm tot} = 0.137$\,mag is adopted for SN~2016iyc. Figure~\ref{fig:color_curve} compares the total-extinction-corrected $(B-V)_{0}$ colour of SN~2016iyc with that of other similar SNe. \subsection{Bolometric light curves} \label{subsec3.3} Before computing the bolometric light curves, the absolute $V$-band light curve of SN~2016iyc is compared with those of a few other similar SNe~IIb. The left panel of Figure~\ref{fig:abs_bol} shows that SN~2016iyc lies toward the fainter end of the distribution. To obtain the quasibolometric light curve, we make use of the {\tt SUPERBOL} code \citep{Nicholl2018}. The extinction-corrected $B$, $V$, $R$, and $I$ data are provided as input to {\tt SUPERBOL}. The light curve in each filter is then mapped to a common set of times through interpolation and extrapolation. Thereafter, {\tt SUPERBOL} fits black-body curves to the SED at each epoch and integrates trapezoidally over the observed wavelength range (4000--9000\,\si{\angstrom}) to give the quasibolometric light curve. The right-hand panel of Figure~\ref{fig:abs_bol} compares the quasibolometric light curve of SN~2016iyc with those of other well-studied SNe~IIb listed in Table~\ref{tab:comparison_Sample}. The peak quasibolometric luminosity, log\,$(L_{BVRI})_p$, of each SN has been calculated by fitting a third-order polynomial to the quasibolometric light curve. 
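The per-epoch SED step described above can be sketched as follows (a minimal stand-in, not {\tt SUPERBOL} itself: the effective wavelengths, fluxes, and scale parameter are synthetic/assumed):

```python
import numpy as np
from scipy.optimize import curve_fit

h, c, k = 6.626e-27, 2.998e10, 1.381e-16          # cgs units

def planck(lam, T, scale):
    """Scaled blackbody flux density ~ scale * B_lambda(T); lam in cm."""
    return scale * (2.0 * h * c**2 / lam**5) / np.expm1(h * c / (lam * k * T))

# Approximate BVRI effective wavelengths, in cm.
lam_obs = np.array([4400.0, 5500.0, 6600.0, 8000.0]) * 1e-8

# Synthetic "observed" fluxes drawn from an 8000 K blackbody.
f_obs = planck(lam_obs, 8000.0, 1.0e-20)

# Fit the temperature and scale (dilution) factor to the 4-point SED.
(T_fit, s_fit), _ = curve_fit(planck, lam_obs, f_obs, p0=(7000.0, 2.0e-20))

# Trapezoidal integration of the fitted blackbody over 100-25,000 A gives
# the pseudo-bolometric flux, analogous to the SUPERBOL integration step.
grid = np.linspace(100e-8, 25000e-8, 20000)
y = planck(grid, T_fit, s_fit)
F_bol = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(grid))
```

Restricting the integration grid to 4000--9000\,\AA\ instead would yield the quasibolometric flux.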
As indicated by the right-hand panel of Figure~\ref{fig:abs_bol}, SN~2016iyc lies toward the faint limit of the SNe~IIb in the comparison sample. It is also worth mentioning that low-luminosity SNe with low $^{56}$Ni yields are thought to arise from progenitors having masses near the threshold mass for producing a CCSN \citep[][]{Smartt2009b}. Furthermore, the bolometric light curve of SN~2016iyc is produced by applying black-body corrections to the observed $BVRI$ quasibolometric light curve: a single black body is fitted to the observed fluxes at each epoch, and the fluxes are integrated trapezoidally over the wavelength range 100--25,000\,\AA\ using {\tt SUPERBOL}. Figure \ref{fig:bb_fits} shows the black-body fits to the SED of SN~2016iyc. The top panel of Figure~\ref{fig:BB_param_SN2016iyc} shows the resulting quasibolometric and bolometric light curves of SN~2016iyc. \subsection{Temperature and radius evolution} From {\tt SUPERBOL}, the black-body temperature ($T_{\rm BB}$) and radius ($R_{\rm BB}$) evolution of SN~2016iyc are also obtained. During the initial phases, the photospheric temperature is high, reaching $\sim 10,900$\,K at $-10.63$\,d. The SN then evolves very rapidly: its temperature drops to $\sim 7600$\,K within a few days, around $-7.7$\,d, and then remains nearly constant (Figure~\ref{fig:BB_param_SN2016iyc}, second panel from top). The temperature evolution of a few similar SNe~IIb is also shown in this panel. The black-body temperature of SN~2016iyc follows the typical temperature evolution seen in SNe~IIb. A conventional evolution in radius is also seen. Initially, at an epoch of $-10.63$\,d, the black-body radius is $2.64 \times 10^{14}$\,cm. 
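The black-body radius follows from the Stefan--Boltzmann law, $R = \sqrt{L/(4\pi\sigma T^{4})}$. A quick check with an assumed early-time luminosity of $\sim 7\times10^{41}$\,erg\,s$^{-1}$ (an illustrative value, not a measurement from this work) reproduces a radius close to the quoted $2.64\times10^{14}$\,cm:

```python
import math

SIGMA = 5.6704e-5   # Stefan-Boltzmann constant, erg cm^-2 s^-1 K^-4

def bb_radius(L, T):
    """R = sqrt(L / (4 pi sigma T^4)) for a spherical blackbody."""
    return math.sqrt(L / (4.0 * math.pi * SIGMA * T**4))

# Assumed L ~ 7e41 erg/s at T ~ 10,900 K gives R ~ 2.6e14 cm.
R = bb_radius(7.0e41, 10900.0)
```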
Thereafter, the SN expands and its radius increases, reaching a maximum of $\sim$\,$5.8 \times 10^{14}$\,cm, beyond which the photosphere appears to recede into the SN ejecta (Figure~\ref{fig:BB_param_SN2016iyc}, third panel from top). The black-body radius evolution of a few similar SNe~IIb is also shown. SN~2016iyc exhibits anomalous behaviour, with its black-body radii at various epochs being the smallest among the similar SNe~IIb; this can be attributed to the low ejecta velocity of SN~2016iyc. \section{Spectral studies of SN~2016iyc} \label{sec:Spectral} In this section, we identify the signatures of various lines by modelling the only available spectrum of SN~2016iyc using {\tt SYN++} \citep[][]{Branch2007, Thomas2011}. We discuss various spectral features of SN~2016iyc, and the spectrum is also compared with those of other similar SNe. \subsection{Spectral modelling} A single optical spectrum of SN~2016iyc was obtained on 2016 Dec. 23 with the Kast double spectrograph \citep[][]{MillerStone1993} mounted on the 3\,m Shane telescope at Lick Observatory. Figure~\ref{fig:syn++} shows the {\tt SYN++} model of this spectrum, corresponding to a phase of $-6.6$\,d. The individual lines corresponding to various elements and ions are indicated for easier identification of the features. The profiles of H$\alpha$ near 6563\,\AA, He\,I near 5876\,\AA, and Ca\,II~H\&K are well reproduced by the {\tt SYN++} model. The very strong H$\alpha$ feature near 6563\,\AA\ classifies SN~2016iyc as an SN~IIb. The observed velocities obtained from the H$\alpha$, He\,I, and Fe\,II features in the spectrum are $\sim 10,000$\,km\,s$^{-1}$, $\sim 6700$\,km\,s$^{-1}$, and $\sim 6100$\,km\,s$^{-1}$ (respectively), while the corresponding velocities from the {\tt SYN++} model are 10,100\,km\,s$^{-1}$, 6800\,km\,s$^{-1}$, and 6100\,km\,s$^{-1}$, very close to the observed values. 
The parameterisation velocity and photospheric velocity used to produce the {\tt SYN++} model are 6000\,km\,s$^{-1}$ and 6100\,km\,s$^{-1}$, respectively, and a photospheric temperature of 6300\,K is employed. \subsection{Spectral comparison} Figure~\ref{fig:spec_comparison} compares the normalised spectrum of SN~2016iyc with those of other well-studied SNe~IIb. The top panel displays the comparison with the spectra of SN~1993J at +2.3\,d and $-7.0$\,d; the spectral features of SN~2016iyc closely resemble those of SN~1993J. In the second panel from the top, the spectrum of SN~2016iyc is compared with spectra of SN~2011fu at +3.4\,d and $-7.0$\,d; the match is close, except for the H$\alpha$ feature, where the spectra of SN~2011fu are slightly off. The third panel from the top shows the comparison with spectra of SN~2013df at +2.7\,d and $-11.0$\,d, revealing a good match with the $-11.0$\,d spectrum. The progenitor of SN~2013df is also thought to arise from the lower-mass end. In the bottom panel, the spectrum of SN~2016iyc is compared with spectra of SN~2016gkg at $-0.7$\,d and +16\,d. The $-0.7$\,d spectrum of SN~2016gkg resembles the spectrum of SN~2016iyc toward the bluer side, while features in the redder part of the spectrum are slightly off; the +16\,d spectrum does not show a good resemblance. From Figure~\ref{fig:spec_comparison}, we conclude that the spectrum of SN~2016iyc closely resembles the spectra of other well-studied SNe~IIb, providing good evidence for its classification as an SN~IIb. \section{Possible progenitor modelling and the results of synthetic explosions for SN~2016iyc} \label{sec:mesa_snec} To constrain the physical properties of the possible progenitor of SN~2016iyc, we attempted several progenitor models. 
Following the available literature, SN~2016iyc lies near the faint limit (see Table~\ref{tab:comparison_Sample}), with $M_{\rm ej}$ also close to the lowest limit (Table~\ref{tab:mejecta_comparison}). As mentioned earlier, low-luminosity SNe with low $^{56}$Ni production are thought to arise from progenitors having masses near the threshold mass for producing CCSNe \citep[][]{Smartt2009}. Given the low $M_{\rm ej}$ among typical SNe~IIb and the intrinsically low luminosity, we started with a ZAMS progenitor mass of 9\,M$_{\odot}$, nearly the lowest possible for a Type IIb SN. Starting from the pre-main sequence, the model could be evolved up to the onset of core collapse. However, the 9\,M$_{\odot}$ model at the pre-SN phase in our simulation is very compact, with a radius of only 0.14\,R$_{\odot}$; such a compact progenitor cannot generate the generic extended-SBO feature of typical SNe~IIb. Furthermore, no direct observational signatures have been found for an SN~IIb arising from a progenitor having ZAMS mass $\leq 11$\,M$_{\odot}$, so we make no further attempt to model progenitors with masses $\leq 11$\,M$_{\odot}$. We therefore select models having ZAMS masses of 12, 13, and 14\,M$_{\odot}$ and evolve them up to the onset of core collapse. Models at the low end of the progenitor mass range lack sufficiently strong winds to undergo much stripping; thus, the models are artificially stripped to mimic the effect of a binary companion. A brief description of our models is provided below. 
\begin{table} \caption{Ejecta masses of various SNe~IIb and SN~2016iyc.} \label{tab:mejecta_comparison} {\scriptsize \begin{tabular}{ccc} \hline\hline SN name & $M_{\rm ej}$ (M$_{\odot}$) & Source \\ \hline SN~1993J & 1.9--3.5 & \citet[][]{Young1995} \\\\ SN~2003bg & 4 & \citet[][]{Mazzali2009} \\\\ SN~2008ax & 2--5 & \citet[][]{Taubenberger2011} \\\\ SN~2011dh & 1.8--2.5 & \citet[][]{Bersten2012} \\\\ SN~2011fu & 3.5 & \citet[][]{Morales2015} \\\\ SN~2011hs & 1.8 & \citet[][]{Bufano2014} \\\\ SN~2013df & 0.8--1.4 & \citet[][]{Morales2014} \\\\ SN~2016gkg & 3.4 & \citet[][]{Bersten2018} \\\\ SN~2016iyc & 1.2 & \citet[][]{Zheng2022} \\ \hline\hline \end{tabular}} \end{table} We first evolve the nonrotating 9, 12, 13, and 14\,$M_{\odot}$ ZAMS stars until the onset of core collapse using the one-dimensional stellar evolution code {\tt MESA} \citep[][]{Paxton2011,Paxton2013,Paxton2015,Paxton2018}. For the 9\,$M_{\odot}$ model, $\alpha_{\rm MLT} = 2.0$ is used throughout the evolution, except for the phase when the model evolves to the beginning of core-Si burning (i.e., in the {\tt inlist\_to\_si\_burn} file), where $\alpha_{\rm MLT} = 0.01$ is used. At this phase, the evolution of the models is extremely sensitive to $\alpha_{\rm MLT}$: even a slight change (to, say, 0.02) prevents core-Si burning from igniting. Although $\alpha_{\rm MLT} = 0.01$ seems very low, it is required for the successful evolution of the considered models through the last phases of their evolution. Moreover, as noted by \citet[][]{Joyce2018}, $\alpha_{\rm MLT}$ is neither a physical constant nor a computational one, but rather a free parameter, so its value must be determined on an individual basis in each stellar evolution code. Since $\alpha_{\rm MLT} = 0.01$ allows the models to evolve beyond the beginning of core-Si burning, we consider its use acceptable. 
For the 12, 13, and 14\,$M_{\odot}$ models, $\alpha_{\rm MLT} = 3.0$ is used throughout the evolution. Convection is modelled using the mixing-length theory of \citet[][]{Henyey1965}, adopting the Ledoux criterion. Semiconvection is modelled following \citet[][]{Langer1985} with an efficiency parameter $\alpha_{\mathrm{sc}} = 0.01$. For thermohaline mixing, we follow \citet[][]{Kippenhahn1980} and set the efficiency parameter $\alpha_{\mathrm{th}} = 2.0$. We model convective overshoot with the diffusive approach of \citet[][]{Herwig2000}, with $f = 0.001$ and $f_0 = 0.007$ for all the convective cores and shells. We use the \say{Dutch} scheme for the stellar wind, with a scaling factor of 1.0. The \say{Dutch} wind scheme in {\tt MESA} combines results from several papers: when $T_{\mathrm{eff}} > 10^4$\,K and the surface mass fraction of H is greater than 0.4, the results of \citet[][]{Vink2001} are used; when $T_{\mathrm{eff}} > 10^4$\,K and the surface mass fraction of H is less than 0.4, the results of \citet[][]{Nugis2000} are used; and when $T_{\mathrm{eff}} < 10^4$\,K, the \citet[][]{dejager1988} wind scheme is used. SNe~IIb are considered to originate from massive stars that have retained a significant amount of their hydrogen envelope. We adopt a mass-loss rate of $\dot{M} \gtrsim 10^{-4}$\,M$_{\odot}\,\mathrm{yr}^{-1}$ to artificially strip the models until the final $M_{\rm H}$ reaches the range 0.013--0.055\,M$_{\odot}$. Such high mass-loss rates are supported by the study of \citet[][]{Ouchi2017}, who note that the binary scenario for the progenitors of SNe~IIb leads to such rates. Furthermore, \citet[][]{VanLoon2005} have reported mass-loss rates reaching $\dot{M} = 10^{-4}$\,M$_{\odot}\,\mathrm{yr}^{-1}$ from a single stellar wind alone. 
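The branching of the \say{Dutch} wind scheme described above can be summarised in a small helper (the function is ours, written for illustration, not part of {\tt MESA}):

```python
def dutch_wind_source(teff, surface_h_fraction):
    """Return which mass-loss prescription the 'Dutch' scheme selects,
    following the branching described in the text."""
    if teff > 1.0e4:
        if surface_h_fraction > 0.4:
            return "Vink et al. (2001)"        # hot, H-rich surface
        return "Nugis & Lamers (2000)"         # hot, H-poor (WR-like)
    return "de Jager et al. (1988)"            # cool stars

# A cool supergiant falls back to the de Jager prescription:
scheme = dutch_wind_source(3800.0, 0.7)
```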
Once the models have been stripped down to the specified H-envelope mass, we switch off the artificial mass loss and evolve the models further until the onset of core collapse. The amount of remaining H varies with ZAMS mass: more-massive progenitors stripped at a similar rate to less-massive ones retain a larger amount of H. In our simulations, the specified limit on the H mass depends primarily on (a) the model's ability to evolve up to core collapse while retaining the specified amount of H (there is also a limit on the stripping itself), and (b) the radius of the pre-SN progenitor: a compact progenitor requires less retained H, whereas a pre-SN progenitor with an extended envelope requires more. The evolution of the models in {\tt MESA} proceeds in several steps. The models start on the pre-main sequence and reach the main sequence; arrival on the main sequence is marked when the ratio of the luminosity from nuclear reactions to the total luminosity of the model reaches 0.8. The models then evolve along the main sequence and become giants or supergiants. Next, the artificial stripping of the models is performed, after which they are allowed to settle. Once the stripping reaches the specified H-envelope mass limit and the models have settled, they evolve further until the ignition of Si burning in the core. Once core-Si burning has started, the models develop inert iron ($^{56}$Fe) cores, which ultimately collapse. The evolution of the models having ZAMS masses of 9, 12, 13, and 14\,M$_{\odot}$ with metallicity $Z = 0.0200$ on the Hertzsprung--Russell (HR) diagram is illustrated in the left panel of Figure~\ref{fig:HR_Rho_T}. 
We simulated a total of 9 models covering progenitor masses of 9--14\,M$_{\odot}$ and subsolar to supersolar metallicity wherever necessary. The pre-explosion properties from {\tt MESA} and the explosion properties from {\tt STELLA}/{\tt SNEC} are listed in Table~\ref{tab:MESA_MODELS}. The models are named to encode the ZAMS mass, metallicity, $M_{\rm Ni}$, and $E_{\rm exp}$: for example, the model {\tt M9.0\_Z0.0200\_Mni0.034\_E0.56} has a ZAMS mass of 9\,M$_{\odot}$, $Z = 0.0200$, $M_{\rm Ni} = 0.034$\,M$_{\odot}$, and $E_{\rm exp} = 0.56\times10^{51}$\,erg. The right-hand panel of Figure~\ref{fig:HR_Rho_T} shows the variation of core temperature ($T_{\rm core}$) with core density ($\rho_{\rm core}$) as the models evolve through the phases on the HR diagram; the core-He and core-Si burning phases are marked. The onset of core collapse is marked by $T_{\rm core}$ and $\rho_{\rm core}$ exceeding $\sim10^{10}$\,K and $10^{10}$\,g\,cm$^{-3}$, respectively. The left panel of Figure~\ref{fig:mass_Kipp} shows the mass fractions of various species when the 12\,M$_{\odot}$ model (with $Z = 0.0200$) has reached Fe-core infall. The core is composed mainly of $^{56}$Fe with negligible fractions of other species, while significant fractions of heavier metals are seen toward the surface of the pre-explosion progenitor. The right-hand panel of Figure~\ref{fig:mass_Kipp} shows the Kippenhahn diagram for the 12\,M$_{\odot}$ model (with $Z = 0.0200$) from the beginning of main-sequence evolution to the stage when the model is ready for envelope stripping. In this figure, the convective regions are marked by hatching, the logarithm of the specific nuclear energy generation rate ($\epsilon_{\rm nuc}$) inside the stellar interior is marked in blue, and the dark-yellow regions indicate where thermohaline mixing is occurring. 
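The naming convention can be unpacked with a small helper (a convenience parser written for this description, not part of the modelling codes):

```python
import re

def parse_model_name(name):
    """Parse names such as 'M9.0_Z0.0200_Mni0.034_E0.56' into
    ZAMS mass, metallicity, Ni mass, and explosion energy (1e51 erg)."""
    m = re.fullmatch(r"M([\d.]+)_Z([\d.]+)_Mni([\d.]+)_E([\d.]+)", name)
    if m is None:
        raise ValueError(f"unrecognised model name: {name}")
    mzams, z, mni, eexp = map(float, m.groups())
    return {"M_ZAMS": mzams, "Z": z, "M_Ni": mni, "E_exp_1e51": eexp}

params = parse_model_name("M9.0_Z0.0200_Mni0.034_E0.56")
```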
\begin{table*} \caption{{\tt MESA} model and {\tt STELLA/SNEC} explosion parameters of various models for SN~2016iyc.} \label{tab:MESA_MODELS} \begin{center} {\scriptsize \begin{tabular}{ccccccccccccc} \hline\hline Model Name & $M_{\rm ZAMS}$ & $Z$ & $M_{\mathrm{H}}^{a}$ & $R_{\mathrm{0}}^{b}$ & $f_{ov}^{c}$ & $M_{\mathrm{f}}^{d}$ & $M_\mathrm{ci}^{e}$ & $M_\mathrm{cf}^{f}$ & $M_{\mathrm{ej}}^{g}$ & $M_{\mathrm{Ni}}^{h}$ & $E_{\mathrm{exp}}^{i}$ \\ & (M$_{\odot}$) & & (M$_{\odot}$) & (R$_{\odot}$) & & (M$_{\odot}$) & (M$_{\odot}$) & (M$_{\odot}$) & (M$_{\odot}$) & (M$_{\odot}$) & ($10^{51}$\,erg) \\ \hline \hline M9.0\_Z0.0200\_Mni0.034\_E0.56 & 9.0 & 0.0200 & 0.013 & 0.14 & 0.007 & 2.17 & 1.4 & 1.4 & 0.77 & 0.034 & 0.56 \\ M12.0\_Z0.0215\_Mni0.02\_E0.33 & 12.0 & 0.0215 & 0.035 & 596 & 0.007 & 3.96 & 1.54 & 1.54 & 2.42 & 0.02 & 0.33 \\ M12.0\_Z0.0185\_Mni0.03\_E0.35 & 12.0 & 0.0185 & 0.055 & 315 & 0.007 & 3.49 & 1.46 & 1.46 & 2.03 & 0.03 & 0.35 \\ M12.0\_Z0.0200\_Mni0.025\_E0.35 & 12.0 & 0.0200 & 0.05 & 300 & 0.007 & 3.45 & 1.52 & 1.52 & 1.93 & 0.025 & 0.35 \\ M12.0\_Z0.0200\_Mni0.09\_E0.35 & 12.0 & 0.0200 & 0.05 & 300 & 0.007 & 3.45 & 1.52 & 1.52 & 1.93 & 0.09 & 0.35 \\ M13.0\_Z0.0200\_Mni0.024\_E0.28 & 13.0 & 0.0200 & 0.04 & 204 & 0.007 & 3.79 & 1.64 & 1.90 & 1.88 & 0.024 & 0.28 \\ M13.0\_Z0.0200\_Mni0.01\_E0.32 & 13.0 & 0.0200 & 0.04 & 204 & 0.007 & 3.79 & 1.64 & 1.64 & 2.15 & 0.01 & 0.32 \\ M13.0\_Z0.0185\_Mni0.02\_E0.35 & 13.0 & 0.0185 & 0.06 & 318 & 0.007 & 3.92 & 1.53 & 1.56 & 2.36 & 0.02 & 0.35 \\ M13.0\_Z0.0215\_Mni0.03\_E0.40 & 13.0 & 0.0215 & 0.015 & 10 & 0.007 & 3.81 & 1.61 & 1.62 & 2.19 & 0.03 & 0.40 \\ M14.0\_Z0.0200\_Mni0.03\_E0.50 & 14.0 & 0.0200 & 0.03 & 55 & 0.007 & 4.23 & 1.54 & 1.54 & 2.69 & 0.03 & 0.50 \\ \hline\hline \end{tabular}} \end{center} {$^a$ Amount of hydrogen retained after stripping. $^b$Pre-SN progenitor radius. $^c$Overshoot parameter. $^d$Final mass of pre-SN model. $^e$Initial mass of the central remnant. 
$^f$Final mass of the central remnant. $^g$Ejecta mass. $^h$Nickel mass. $^i$Explosion energy.}\\ \end{table*} Using the progenitor models on the verge of core collapse obtained with {\tt MESA}, we carried out radiation-hydrodynamic calculations to simulate the synthetic explosions. For this purpose, we used {\tt STELLA} \citep[][]{Blinnikov1998, Blinnikov2000, Blinnikov2006} and {\tt SNEC} \citep[][]{Morozova2015}. {\tt STELLA} solves the radiative transfer equations in the intensity-momentum approximation in each frequency bin, while {\tt SNEC} is a one-dimensional Lagrangian hydrodynamics code that solves the radiation energy transport equations in the flux-limited diffusion approximation. Both {\tt STELLA} and {\tt SNEC} generate the bolometric light curve and the photospheric velocity evolution of the SN, along with a few other observed parameters. The radioactive decay of $^{56}$Ni to $^{56}$Co is considered one of the prominent mechanisms for powering the primary peak of SNe~IIb, and both codes incorporate this model by default. Thus, in this section, we model the entire bolometric light curve of SN~2016iyc assuming this powering mechanism. The setups for simulating the synthetic explosions using {\tt SNEC} and {\tt STELLA} closely follow \citet[][]{Ouchi2019} and \citet[][]{Aryan2021}, respectively; here, we briefly summarise the important parameters and our modifications to those setups. We simulate the synthetic explosion of the 9\,M$_{\odot}$ model using {\tt SNEC}. First, the innermost 1.4\,M$_{\odot}$ is excised before the explosion, assuming that this material collapses to form a neutron star. The number of grid cells is set to 1000 so that the light curves and photospheric velocities of the SNe from the synthetic explosions are well converged in the time domain of interest. 
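For reference, the $^{56}$Ni$\,\rightarrow\,^{56}$Co decay heating that powers the primary peak is commonly approximated by the two-exponential form of Nadyozhin (1994); the sketch below is illustrative and is not the internal implementation of either code:

```python
import math

def decay_luminosity(t_days, m_ni):
    """Instantaneous radioactive-decay power (Nadyozhin 1994):
    L(t) = M_Ni [6.45e43 exp(-t/8.8 d) + 1.45e43 exp(-t/111.3 d)] erg/s,
    with M_Ni in solar masses and t in days since explosion."""
    return m_ni * (6.45e43 * math.exp(-t_days / 8.8)
                   + 1.45e43 * math.exp(-t_days / 111.3))

# Decay power for M_Ni = 0.025 Msun (the best-matching model) at ~20 d:
L20 = decay_luminosity(20.0, 0.025)
```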
For the M9.0\_Z0.0200\_Mni0.034\_E0.56 model, the synthetic explosion is carried out using {\tt SNEC}. The explosion is simulated as a {\tt Thermal\_Bomb} by adding $0.56 \times 10^{51}$\,erg of energy to the inner 0.1\,M$_{\odot}$ of the model for a duration of 0.1\,s. {\tt SNEC} does not include a nuclear-reaction network, so the amount of $^{56}$Ni is set by hand: a total of 0.034\,M$_{\odot}$ of $^{56}$Ni is distributed from the inner boundary up to the mass coordinate $m(r) = 2.0$\,M$_{\odot}$. For the models having ZAMS masses of 12, 13, and 14\,M$_{\odot}$, we used {\tt STELLA} to simulate the synthetic explosions. The pre-SN masses of the 12\,M$_{\odot}$ models lie in the range 3.45--3.96\,M$_{\odot}$, while those of the 13\,M$_{\odot}$ models lie in the range 3.79--3.81\,M$_{\odot}$. Furthermore, the 14\,M$_{\odot}$ model has a pre-SN mass of 4.23\,M$_{\odot}$ and is thus prone to produce a much higher $M_{\rm ej}$ than required for SN~2016iyc. The hydrodynamic simulations of the synthetic explosions are performed using {\tt Thermal\_Bomb}-type explosions. The explosion parameters, including the ejecta masses, synthesised nickel masses, and explosion energies corresponding to the different models, are presented in Table~\ref{tab:MESA_MODELS}. The results of the hydrodynamic simulations are shown in Figure~\ref{fig:hydrodynamical_lc}. The left panel of Figure~\ref{fig:hydrodynamical_lc} compares the {\tt SNEC}- and {\tt STELLA}-calculated bolometric light curves with the observed bolometric light curve (see Sec.~\ref{subsec3.3} for details) produced by fitting black bodies to the SEDs and integrating the fluxes over the wavelength range 100--25,000\,\AA, while the right-hand panel compares the corresponding photospheric velocities with the photospheric velocity obtained from the only available spectrum, as indicated by the Fe~II line velocity. 
The M9.0\_Z0.0200\_Mni0.034\_E0.56 model matches the observed stretch factor and peak of the bolometric light curve, but it fails to reproduce the early extended-SBO feature. This failure can be attributed to the compactness of the pre-SN model, which has a radius of only 0.14\,R$_{\odot}$. All of the remaining models generate the generic extended-SBO feature, but only M12.0\_Z0.0200\_Mni0.025\_E0.35 reproduces both the extended SBO and the overall bolometric light curve of SN~2016iyc. Another model that nearly matches the SN~2016iyc bolometric light curve is M13.0\_Z0.0200\_Mni0.024\_E0.28. The remaining models deviate substantially from the observed bolometric light curve of SN~2016iyc (left panel, Figure~\ref{fig:hydrodynamical_lc}). From the right-hand panel of Figure~\ref{fig:hydrodynamical_lc}, we find that the photospheric velocity evolution of the models M13.0\_Z0.0200\_Mni0.01\_E0.32 and M12.0\_Z0.0200\_Mni0.025\_E0.35 passes close to the observed Fe~II line velocity, which is a good indicator of the photospheric velocity. We also have an upper limit on the bolometric luminosity of SN~2016iyc at a phase of nearly 220\,d since the explosion. To produce the luminosity at that epoch, the model M12.0\_Z0.0200\_Mni0.025\_E0.35 requires $M_{\rm Ni} = 0.09$\,M$_{\odot}$ (inset plot in the left panel of Figure~\ref{fig:hydrodynamical_lc}; M12.0\_Z0.0200\_Mni0.09\_E0.35 is the corresponding model), which serves as an upper limit on the synthesised nickel in SN~2016iyc. \citet[][]{Anderson2019} estimated a median nickel mass of 0.102\,M$_{\odot}$ from a sample of 27 SNe~IIb. 
Moreover, \citet[][]{Afsariardchi2019} estimated the Ni mass for eight SNe~IIb and found that, except for SN\,1996cb ($M_{\rm Ni} = 0.04\pm0.01$\,M$_{\odot}$) and SN\,2016gkg ($M_{\rm Ni} = 0.09\pm0.02$\,M$_{\odot}$), each SN~IIb has $M_{\rm Ni}$ higher than 0.09\,M$_{\odot}$. These comparisons show that SN~2016iyc indeed produced little nickel. Thus, the one-dimensional stellar evolution of the various models, together with the hydrodynamic simulations of their explosions, suggests that a ZAMS progenitor with mass in the range 12--13\,M$_{\odot}$, $M_{\rm ej}$ in the range 1.89--1.93\,M$_{\odot}$, $M_{\rm Ni} < 0.09$\,M$_{\odot}$, and $E_{\rm exp} = (0.28$--$0.35) \times 10^{51}$\,erg could be the progenitor of SN~2016iyc. Recent studies suggest that the masses of possible progenitors of Type IIb CCSNe are usually higher than 9\,M$_{\odot}$, lying in the range 10--18\,M$_{\odot}$ \citep[][]{Van2013, Folatelli2014, Smartt2015}. However, there has been no direct observational evidence of an SN~IIb arising from a ZAMS progenitor of $\lesssim 12$\,M$_{\odot}$. The present analysis indicates that SN~2016iyc arises from the lower-mass end of the SN~IIb progenitor channel. As part of our study, we also performed one-dimensional stellar evolution of the possible progenitors of SN~2016gkg and SN~2011fu and simulated their hydrodynamic explosions (next section) to cover the faintest (SN~2016iyc), intermediate (SN~2016gkg), and most luminous (SN~2011fu) SNe in the comparison sample. \section{Stellar modelling and synthetic explosions for SN 2016gkg and SN 2011fu} \label{sec:SN2016gkg_model} In this section, we perform hydrodynamic simulations of explosions from the possible progenitors of the intermediate-luminosity SN~2016gkg and the most luminous SN~2011fu in the comparison sample to cover the higher end of the progenitor masses of SNe~IIb. 
After modelling their progenitors using {\tt MESA}, we simulate the synthetic explosions using {\tt SNEC} and match the {\tt SNEC}-produced bolometric light curves with the observed ones. To construct the bolometric light curve of SN~2016gkg, we used the recalibrated $BVRI$ KAIT data along with data from the 3.6\,m DOT at two epochs, again using {\tt SUPERBOL}. The photometric data of SN~2016gkg in this work are presented in Table~\ref{tab:optical_observations_2016gkg}. Previously, \citet[][]{Bersten2018} used KAIT data calibrated with an older KAIT reduction pipeline; Figure~\ref{fig:SN2016gkg_comparison} shows the comparison between the KAIT data used by \citet[][]{Bersten2018} and the recalibrated KAIT data. To construct the bolometric light curve of SN~2011fu, we again make use of {\tt SUPERBOL}, incorporating the $BVRI$ data from \citet[][]{Kumar2013}. To model the possible progenitor of SN~2016gkg, we closely follow the HE5 model of \citet[][]{Bersten2018}; \citet[][]{Morales2015} suggest a nearly identical model for the possible progenitor of SN~2011fu. An 18\,M$_{\odot}$ ZAMS progenitor mass is employed for both SNe. The modelling and explosion parameters are listed in Table~\ref{tab:MESA_MODELS_11fu_n_16gkg}. Starting from the ZAMS, the model is evolved up to the stage where the core starts to collapse. The evolution of the model on the HR diagram is shown in the left panel of Figure~\ref{fig:HR_Rho_T}, with various physical processes during the evolution indicated. The right-hand panel displays the variation of $T_{\rm core}$ with $\rho_{\rm core}$: during the last evolutionary phases, the core density and temperature reach over $10^{10}$\,g\,cm$^{-3}$ and $10^{10}$\,K, respectively, marking the onset of core collapse. 
The left panel of Figure~\ref{fig:mass_Kipp_2} shows the mass fractions of various elements at the stage when the model has just reached Fe-core infall; as further evidence of the onset of core collapse, the core is mainly composed of inert $^{56}$Fe. The right-hand panel of Figure~\ref{fig:mass_Kipp_2} shows the Kippenhahn diagram of the model from the beginning of main-sequence evolution to the stage when the model is ready to be stripped. Models He5\_A and He5\_B are used for SN~2016gkg and SN~2011fu, respectively. Although the ZAMS mass, metallicity, rotation, and overshoot parameter are the same for these two models, different explosion parameters are employed in {\tt SNEC} to simulate the synthetic explosions. The left panel of Figure~\ref{fig:snec_2} compares our hydrodynamic simulation of the synthetic explosion of SN~2016gkg with the results of \citet[][]{Bersten2018}; the differences between the bolometric light curve of \citet[][]{Bersten2018} and that calculated using the revised KAIT photometry are within the error bars. Our model explains the bolometric light curve of SN~2016gkg very well. Furthermore, the right-hand panel of Figure~\ref{fig:snec_2} compares the {\tt SNEC}-calculated bolometric light curve with the observed quasibolometric light curve of SN~2011fu. The one-dimensional stellar modelling of the possible progenitors using {\tt MESA}, together with the hydrodynamic simulations of their explosions using {\tt SNEC}, explains the observed light curves of SN~2016gkg and SN~2011fu very well. We have thus performed stellar modelling of the possible progenitors and hydrodynamic explosions of SN~2016iyc, SN~2016gkg, and SN~2011fu, covering the faintest (SN~2016iyc), intermediate (SN~2016gkg), and most luminous (SN~2011fu) SNe in the comparison sample. 
\section{Discussion} \label{sec:Discussions} Detailed photometric and spectroscopic analyses of the low-luminosity Type IIb SN~2016iyc are performed in this work. The extinction-corrected data of SN~2016iyc are used to construct the quasibolometric and bolometric light curves using {\tt SUPERBOL}. Comparisons of the absolute $V$-band and quasibolometric light curves of SN~2016iyc with other well-studied SNe~IIb indicate that SN~2016iyc lies toward the faint limit of this subclass. Low-luminosity SNe~IIb with low $^{56}$Ni production are thought to arise from progenitors having masses near the threshold mass for producing a CCSN. Our study indicates that among the comparison sample in this work, SN~2016iyc has the smallest black-body radius at any given epoch. This anomalous behaviour could be attributed to its low ejecta velocity. Based on the low $M_{\rm ej}$ and the lowest intrinsic brightness among SNe in the comparison sample, 9--14\,M$_{\odot}$ ZAMS progenitors are modelled as the possible progenitor of SN~2016iyc using {\tt MESA}. The results of synthetic explosions simulated using {\tt STELLA} and {\tt SNEC} are in good agreement with the observations. The one-dimensional stellar modelling of the possible progenitor using {\tt MESA} and simulations of hydrodynamic explosions using {\tt SNEC}/{\tt STELLA} indicate that SN~2016iyc originated from a (12--13)\,M$_{\odot}$ ZAMS progenitor, near the lower end of progenitor masses for SNe~IIb. The models show a range of parameters for SN~2016iyc, including $M_{\rm ej} =$ (1.89--1.93)\,M$_{\odot}$ and $E_{\rm exp} =$ (0.28--0.35) $\times 10^{51}$\,erg. We also put an upper limit of 0.09\,M$_{\odot}$ on the amount of nickel synthesised by the SN. The pre-SN radius of the progenitor of SN~2016iyc is (240--300)\,R$_{\odot}$. 
Stellar evolution of the possible progenitors and hydrodynamic simulations of the synthetic explosions of SN~2016gkg and SN~2011fu have also been performed with {\tt MESA} and {\tt SNEC}, to compare the intermediate- and high-luminosity ends among well-studied SNe~IIb. The results of stellar modelling and synthetic explosions for SN~2016iyc, SN~2016gkg, and SN~2011fu exhibit a diverse range of possible progenitor masses for SNe~IIb. \section{Conclusions} \label{sec:Conclusions} We performed detailed photometric and spectroscopic analyses of SN~2016iyc, a Type IIb SN discovered by LOSS/KAIT. The observed photometric properties of SN~2016iyc were unique in many ways: low luminosity, low ejecta mass, and small black-body radius. Attempts to model the possible progenitor were made using the one-dimensional stellar-evolution code {\tt MESA}. As a part of the present work, hydrodynamic modelling of the synthetic explosion of the intermediate-luminosity Type IIb SN~2016gkg, using recalibrated KAIT data and late-time data from the 3.6\,m DOT, along with the optically very luminous Type IIb SN~2011fu, was also performed. The main results based on the present analysis are as follows. \begin{enumerate} \item{Based on the low value of $M_{\rm ej}$, ZAMS stars having masses of 9--14\,M$_{\odot}$ were adopted to model the possible progenitor of SN~2016iyc using {\tt MESA}. The results of synthetic explosions simulated using {\tt SNEC} and {\tt STELLA} were in good agreement with observed properties for ZAMS progenitor masses of 12--13\,M$_{\odot}$ having a pre-SN radius of (240--300)\,R$_{\odot}$. 
Thus, SN~2016iyc likely had a progenitor arising from the lower end of the progenitor mass channel of an SN~IIb.}\\ \item{Overall, the detailed hydrodynamic simulations of the explosions from various models showed a range of parameters for SN~2016iyc, including an $M_{\rm ej}$ of (1.89--1.93)\,M$_{\odot}$, an $E_{\rm exp}$ of (0.28--0.35) $\times 10^{51}$\,erg, and an upper limit of 0.09\,M$_{\odot}$ on the amount of nickel synthesised by SN~2016iyc.}\\ \item{Finally, one-dimensional stellar-evolution modelling of the possible progenitors and hydrodynamic simulations of the explosions of SN~2016gkg and SN~2011fu were also performed to compare intermediate- and high-luminosity examples among well-studied SNe~IIb. The results for SN~2016iyc, SN~2016gkg, and SN~2011fu showed a diverse range of progenitor masses [(12.0--18.0)\,M$_{\odot}$] for the SNe~IIb considered in this work. Discovery of more such events through survey projects in the near future should provide additional data with which to establish the lower mass limits of such explosions.} \end{enumerate} \section*{Acknowledgements} We thank the anonymous referee for providing very useful and constructive comments that helped to improve the manuscript significantly. A.A. acknowledges funds and assistance provided by the Council of Scientific \& Industrial Research (CSIR), India, with file no. 09/948(0003)/2020-EMR-I. A.A., S.B.P., and R.G. also acknowledge BRICS grant DST/IMRCD/BRICS/Pilotcall/ProFCheap/2017(G). R.G. and S.B.P. acknowledge the financial support of ISRO under the AstroSat archival data utilization program (DS$\_$2B-13013(2)/1/2021-Sec.2). We sincerely acknowledge the extensive use of the High Performance Computing (HPC) facility at ARIES. Support for A.V.F.'s supernova research group has been provided by the TABASGO Foundation, the Christopher R. Redlich Fund, the U.C. Berkeley Miller Institute for Basic Research in Science (where A.V.F. 
was a Senior Miller Fellow), and numerous individual donors. Additional support was provided by NASA/{\it HST} grant GO-15166 from the Space Telescope Science Institute (STScI), which is operated by the Association of Universities for Research in Astronomy, Inc. (AURA), under NASA contract NAS 5-26555. J.V. is supported by the project ``Transient Astrophysical Objects'' (GINOP 2.3.2-15-2016-00033) of the National Research, Development, and Innovation Office (NKFIH), Hungary, funded by the European Union. We thank Prof. Keiichi Maeda for useful scientific discussions. The ``Open Supernova Catalog'' is acknowledged for providing spectroscopic data. Lick/KAIT and its ongoing operation were made possible by donations from Sun Microsystems, Inc., the Hewlett-Packard Company, AutoScope Corporation, Lick Observatory, the U.S. National Science Foundation, the University of California, the Sylvia \& Jim Katzman Foundation, and the TABASGO Foundation. Research at Lick Observatory is partially supported by a generous gift from Google. Some of the data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California, and NASA; the observatory was made possible by the generous financial support of the W. M. Keck Foundation. The Lick and Keck Observatory staff provided excellent assistance with the observations. This work makes use of observations from the Las Cumbres Observatory Global Telescope Network. The authors sincerely acknowledge M. Garcia and R. Case for performing the Nickel observations. \section*{Data availability} The photometric and spectroscopic data used in this work, as well as the {\tt inlist} files to create the {\tt MESA} models, can be made available on reasonable request to the corresponding author. 
\appendix \section{Tables and figures} \begin{table*} \caption{Photometry of SN~2016iyc} \centering \smallskip \begin{tabular}{c c c c c c c} \hline \hline MJD & $B$ & $V$ & $R$ & $C$ & $I$ & Telescope \\ &(mag) & (mag) & (mag) & (mag) & (mag) \\ \hline 57740.144 & ... & ... & ... & 17.807 $\pm$ 0.111 & ... & KAIT \\ 57741.116 & 18.473 $\pm$ 0.075 & 18.262 $\pm$ 0.074 & 18.081 $\pm$ 0.091 & 17.985 $\pm$ 0.048 & 17.875 $\pm$ 0.103 & KAIT \\ 57742.089 & 18.774 $\pm$ 0.149 & 18.484 $\pm$ 0.118 & 18.102 $\pm$ 0.147 & 18.057 $\pm$ 0.149 & 17.832 $\pm$ 0.148 & KAIT \\ 57743.109 & 18.875 $\pm$ 0.114 & 18.484 $\pm$ 0.111 & 18.105 $\pm$ 0.117 & 17.987 $\pm$ 0.135 & 17.949 $\pm$ 0.142 & KAIT \\ 57744.087 & 18.796 $\pm$ 0.139 & 18.451 $\pm$ 0.113 & 17.957 $\pm$ 0.113 & 17.957 $\pm$ 0.143 & 17.731 $\pm$ 0.126 & KAIT \\ 57744.091 & 18.787 $\pm$ 0.054 & 18.299 $\pm$ 0.050 & 17.933 $\pm$ 0.067 & -- $\pm$ -- & 17.656 $\pm$ 0.119 & Nickel \\ 57745.092 & 18.865 $\pm$ 0.129 & 18.237 $\pm$ 0.084 & 18.038 $\pm$ 0.106 & 17.855 $\pm$ 0.115 & 17.658 $\pm$ 0.114 & KAIT \\ 57749.111 & 18.568 $\pm$ 0.105 & 18.090 $\pm$ 0.081 & 17.728 $\pm$ 0.098 & 17.742 $\pm$ 0.131 & 17.524 $\pm$ 0.130 & KAIT \\ 57750.090 & 18.582 $\pm$ 0.100 & 18.050 $\pm$ 0.075 & 17.735 $\pm$ 0.080 & 17.693 $\pm$ 0.092 & 17.433 $\pm$ 0.084 & KAIT \\ 57751.095 & 18.567 $\pm$ 0.085 & 18.037 $\pm$ 0.063 & 17.693 $\pm$ 0.070 & 17.665 $\pm$ 0.055 & 17.439 $\pm$ 0.073 & KAIT \\ 57753.099 & 18.578 $\pm$ 0.083 & 18.009 $\pm$ 0.061 & 17.679 $\pm$ 0.063 & 17.635 $\pm$ 0.091 & 17.408 $\pm$ 0.082 & KAIT \\ 57754.091 & 18.724 $\pm$ 0.137 & 17.947 $\pm$ 0.066 & 17.678 $\pm$ 0.067 & 17.653 $\pm$ 0.090 & 17.409 $\pm$ 0.093 & KAIT \\ 57768.107 & ... 
& 18.638 $\pm$ 0.059 & 18.582 $\pm$ 0.395 & 18.570 $\pm$ 0.242 & 18.069 $\pm$ 0.321 & KAIT \\ 57769.105 & 19.295 $\pm$ 0.464 & 19.111 $\pm$ 0.214 & 18.666 $\pm$ 0.194 & 18.784 $\pm$ 0.282 & 18.466 $\pm$ 0.232 & KAIT \\ 57956.471 & $>$21.490 & $>$21.336 & $>$21.542 & ... & $>$20.631 & Nickel\\ \hline \end{tabular} \label{tab:optical_observations_2016iyc} \end{table*} \begin{table*} \caption{Revised KAIT photometry of SN~2016gkg along with 3.6\,m DOT data} \centering \smallskip \begin{tabular}{c c c c c c c c} \hline \hline MJD & $B$ & $V$ & $R$ & $C$ & $I$ & Telescope \\ &(mag) & (mag) & (mag) & (mag) & (mag) \\ \hline 57653.315 & 15.949 $\pm$ 0.357 & 15.799 $\pm$ 0.234 & 15.642 $\pm$ 0.214 & 15.661 $\pm$ 0.069 & 15.518 $\pm$ 0.286 & KAIT \\ 57654.322 & 17.039 $\pm$ 0.079 & 16.569 $\pm$ 0.113 & 16.257 $\pm$ 0.053 & 16.272 $\pm$ 0.030 & 16.056 $\pm$ 0.068 & KAIT \\ 57658.373 & 16.626 $\pm$ 0.054 & 15.990 $\pm$ 0.038 & 15.696 $\pm$ 0.045 & 15.745 $\pm$ 0.045 & 15.652 $\pm$ 0.050 & KAIT \\ 57659.444 & 16.383 $\pm$ 0.049 & 15.854 $\pm$ 0.041 & 15.557 $\pm$ 0.053 & 15.630 $\pm$ 0.042 & 15.539 $\pm$ 0.063 & KAIT \\ 57660.444 & 16.282 $\pm$ 0.047 & 15.690 $\pm$ 0.018 & 15.450 $\pm$ 0.022 & 15.429 $\pm$ 0.083 & 15.401 $\pm$ 0.045 & KAIT \\ 57661.316 & 16.106 $\pm$ 0.066 & 15.566 $\pm$ 0.052 & 15.249 $\pm$ 0.061 & 15.482 $\pm$ 0.071 & 15.183 $\pm$ 0.079 & KAIT \\ 57662.408 & 16.024 $\pm$ 0.060 & 15.466 $\pm$ 0.053 & 15.213 $\pm$ 0.059 & 15.238 $\pm$ 0.062 & 15.182 $\pm$ 0.085 & KAIT \\ 57663.334 & 15.861 $\pm$ 0.074 & 15.378 $\pm$ 0.068 & 15.125 $\pm$ 0.162 & 15.145 $\pm$ 0.068 & 15.049 $\pm$ 0.109 & KAIT \\ 57666.368 & 15.748 $\pm$ 0.039 & 15.217 $\pm$ 0.034 & 15.013 $\pm$ 0.045 & 14.992 $\pm$ 0.033 & 14.887 $\pm$ 0.049 & KAIT \\ 57667.373 & 15.676 $\pm$ 0.037 & 15.162 $\pm$ 0.038 & 14.969 $\pm$ 0.051 & 14.931 $\pm$ 0.025 & 14.868 $\pm$ 0.050 & KAIT \\ 57668.362 & 15.504 $\pm$ 0.035 & 15.009 $\pm$ 0.012 & 14.838 $\pm$ 0.011 & ... 
& 14.763 $\pm$ 0.013 & KAIT \\ 57668.375 & 15.605 $\pm$ 0.117 & 15.093 $\pm$ 0.060 & 14.916 $\pm$ 0.132 & 14.830 $\pm$ 0.084 & 14.805 $\pm$ 0.146 & KAIT \\ 57669.369 & 15.565 $\pm$ 0.117 & 15.011 $\pm$ 0.139 & 14.845 $\pm$ 0.128 & 14.758 $\pm$ 0.180 & 14.671 $\pm$ 0.024 & KAIT \\ 57671.420 & 15.536 $\pm$ 0.058 & 15.024 $\pm$ 0.043 & 14.760 $\pm$ 0.051 & 14.759 $\pm$ 0.063 & 14.621 $\pm$ 0.069 & KAIT \\ 57672.330 & 15.576 $\pm$ 0.048 & 15.003 $\pm$ 0.074 & 14.760 $\pm$ 0.072 & 14.723 $\pm$ 0.054 & 14.590 $\pm$ 0.090 & KAIT \\ 57683.307 & 16.856 $\pm$ 0.023 & 15.699 $\pm$ 0.016 & 15.187 $\pm$ 0.014 & ... & 14.951 $\pm$ 0.014 & KAIT \\ 57687.298 & 17.165 $\pm$ 0.040 & 15.995 $\pm$ 0.017 & 15.383 $\pm$ 0.016 & ... & 15.076 $\pm$ 0.016 & KAIT \\ 57694.279 & 17.545 $\pm$ 0.096 & 16.266 $\pm$ 0.049 & 15.645 $\pm$ 0.054 & 15.692 $\pm$ 0.053 & 15.226 $\pm$ 0.069 & KAIT \\ 57696.255 & 17.544 $\pm$ 0.021 & 16.302 $\pm$ 0.016 & 15.716 $\pm$ 0.023 & ... & ... & 3.6\,m DOT \\ 57697.300 & 17.560 $\pm$ 0.863 & 16.379 $\pm$ 0.076 & 15.703 $\pm$ 0.020 & ... & 15.363 $\pm$ 0.019 & KAIT \\ 57697.350 & 17.562 $\pm$ 0.106 & 16.286 $\pm$ 0.106 & 15.697 $\pm$ 0.118 & 15.815 $\pm$ 0.016 & 15.372 $\pm$ 0.120 & KAIT \\ 57701.256 & 17.669 $\pm$ 0.081 & 16.440 $\pm$ 0.052 & 15.809 $\pm$ 0.059 & 15.891 $\pm$ 0.040 & 15.387 $\pm$ 0.059 & KAIT \\ 57702.253 & 17.617 $\pm$ 0.038 & 16.487 $\pm$ 0.018 & 15.833 $\pm$ 0.018 & ... & ... 
& KAIT \\ 57703.289 & 17.532 $\pm$ 0.108 & 16.463 $\pm$ 0.095 & 15.847 $\pm$ 0.071 & 15.884 $\pm$ 0.055 & 15.426 $\pm$ 0.053 & KAIT \\ 57706.262 & 17.805 $\pm$ 0.156 & 16.562 $\pm$ 0.071 & 15.894 $\pm$ 0.059 & 15.965 $\pm$ 0.086 & 15.543 $\pm$ 0.071 & KAIT \\ 57707.237 & 17.592 $\pm$ 0.329 & 16.486 $\pm$ 0.232 & 15.829 $\pm$ 0.186 & 16.023 $\pm$ 0.037 & 15.469 $\pm$ 0.178 & KAIT \\ 57710.259 & 17.893 $\pm$ 0.085 & 16.612 $\pm$ 0.055 & 16.011 $\pm$ 0.063 & 16.050 $\pm$ 0.046 & 15.535 $\pm$ 0.070 & KAIT \\ 57710.312 & 16.692 $\pm$ 0.084 & 16.689 $\pm$ 0.038 & 16.077 $\pm$ 0.035 & ... & 15.617 $\pm$ 0.021 & KAIT \\ 57744.149 & 18.032 $\pm$ 0.109 & 17.277 $\pm$ 0.061 & 16.678 $\pm$ 0.035 & ... & 16.143 $\pm$ 0.027 & KAIT \\ 57753.135 & 18.208 $\pm$ 0.039 & 17.371 $\pm$ 0.089 & 16.854 $\pm$ 0.029 & ... & 16.282 $\pm$ 0.026 & KAIT \\ 58080.168 & 22.541 $\pm$ 0.258 & 21.751 $\pm$ 0.225 & 20.821 $\pm$ 0.03 & ... & 20.137 $\pm$ 0.055 & 3.6\,m DOT \\ \hline \end{tabular} \label{tab:optical_observations_2016gkg} \end{table*} \begin{table*} \caption{{\tt MESA} model and {\tt SNEC} explosion parameters of SN~2011fu and SN~2016gkg.} \label{tab:MESA_MODELS_11fu_n_16gkg} \begin{center} {\scriptsize \begin{tabular}{ccccccccccccc} \hline\hline SN name &Model Name & $M_{\rm ZAMS}$ & $Z$ & $f_{ov}^{a}$ & $M_{\mathrm{f}}^{b}$ & $M_\mathrm{c}^{c}$ & $M_{\mathrm{ej}}^{d}$ & $M_{\mathrm{Ni}}^{e}$ & $E_{\mathrm{exp}}^{f}$ \\ & & (M$_{\odot}$) & & & (M$_{\odot}$) & (M$_{\odot}$) & (M$_{\odot}$) & (M$_{\odot}$) & ($10^{51}$\,erg) \\ \hline \hline SN~2016gkg & He5\_A & 18.0 & 0.0200 & 0.01 & 5.00 & 1.6 & 3.40 & 0.087 & 1.30 \\ SN~2011fu & He5\_B & 18.0 & 0.0200 & 0.01 & 5.00 & 1.5 & 3.50 & 0.140 & 1.25 \\ \hline\hline \end{tabular}} \end{center} {$^a$Overshoot parameter. $^b$Final mass of the pre-SN model. $^c$Final mass of the central remnant $^d$Ejecta mass. 
$^e$Nickel mass. $^f$Explosion energy.}\\ \end{table*} \renewcommand{\thefigure}{A\arabic{figure}} \setcounter{figure}{0} \bsp % \label{lastpage}
Title: Dark Grand Unification in the Axiverse: Decaying Axion Dark Matter and Spontaneous Baryogenesis
Abstract: The quantum chromodynamics axion with a decay constant near the Grand Unification (GUT) scale has an ultralight mass near a neV. We show, however, that axion-like particles with masses near the keV - PeV range with GUT-scale decay constants are also well motivated in that they naturally arise from axiverse theories with dark non-abelian gauge groups. We demonstrate that the correct dark matter abundance may be achieved by the heavy axions in these models through the misalignment mechanism in combination with a period of early matter domination from the long-lived dark glueballs of the same gauge group. Heavy axion dark matter may decay to two photons, yielding mono-energetic electromagnetic signatures that may be detectable by current or next-generation space-based telescopes. We project the sensitivity of next-generation telescopes including $\textit {Athena,}$ AMEGO, and e-ASTROGAM to such decaying axion dark matter. If the dark sector contains multiple confining gauge groups, then the observed primordial baryon asymmetry may also be achieved in this scenario through spontaneous baryogenesis. We present explicit orbifold constructions where the dark gauge groups unify with the SM at the GUT scale and axions emerge as the fifth components of dark gauge fields with bulk Chern-Simons terms.
https://export.arxiv.org/pdf/2208.10504
\title{ Dark Grand Unification in the Axiverse: Decaying Axion Dark Matter and Spontaneous Baryogenesis } \author{Joshua W. Foster} \email{jwfoster@mit.edu} \affiliation{Center for Theoretical Physics, Massachusetts Institute of Technology, Cambridge, MA 02139, U.S.A.} \author{Soubhik Kumar} \email{soubhik@berkeley.edu} \affiliation{Berkeley Center for Theoretical Physics, University of California, Berkeley, CA 94720, U.S.A.} \affiliation{Theoretical Physics Group, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, U.S.A.} \author{Benjamin R. Safdi} \email{brsafdi@berkeley.edu} \affiliation{Berkeley Center for Theoretical Physics, University of California, Berkeley, CA 94720, U.S.A.} \affiliation{Theoretical Physics Group, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, U.S.A.} \author{Yotam Soreq} \email{soreqy@physics.technion.ac.il} \affiliation{Physics Department, Technion – Israel Institute of Technology, Haifa 3200003, Israel} \date{\today} \preprint{MIT-CTP/5458} \tableofcontents \section{Introduction} The quantum chromodynamics~(QCD) axion was originally introduced to explain the strong {\it CP} problem connected to the absence of the neutron electric dipole moment~\cite{Peccei:1977hh,Peccei:1977ur,Weinberg:1977ma,Wilczek:1977pj}. The axion is naturally realized as the pseudo-Nambu Goldstone boson of a symmetry, the Peccei-Quinn~(PQ) symmetry, which is spontaneously broken at a high scale $f_a$. The axion $a$ would be exactly massless but for its interactions with QCD through the dimension-5 operator $a G \tilde G / f_a$, where $G$ is the QCD field strength and $\tilde G$ is its dual. Below the QCD confinement scale instantons generate a potential for the axion; when the axion minimizes this potential, it dynamically removes the neutron electric dipole moment. 
The axion acquires a mass ${m^{\rm QCD}_a} \approx \frac{\sqrt{m_u m_d}}{m_u+m_d}\frac{m_\pi f_\pi}{f_a}$, with $m_\pi$ ($f_\pi$) the pion mass (decay constant) and $m_u$ ($m_d$) the up quark (down quark) mass. Coherent fluctuations of the axion field about its minimum may explain the observed dark matter~(DM) abundance~\cite{Preskill:1982cy,Abbott:1982af,Dine:1982ah}. If the PQ symmetry is broken after inflation, then the correct DM abundance is achieved for $m_a^{\rm QCD} \sim 100$ $\mu$eV~\cite{Gorghetto:2020qws,Buschmann:2021sdq}, while if the PQ symmetry is broken before inflation the final DM abundance depends on the initial misalignment angle, and much lower axion masses may still explain the DM~\cite{Tegmark:2005dy,Hertzberg:2008wr,Co:2016xti,graham2018stochastic,takahashi2018qcd}. The idea of the `axiverse' naturally emerges in the context of String Theory constructions~\cite{Green:1984sg,Witten:1984dg,Svrcek:2006yi,Arvanitaki:2009fg,Halverson:2019cmy,Conlon:2006tq,Acharya:2010zx,Ringwald:2012cu,Cicoli:2012sz}, whereby there is a large number $N$ of ultralight axion-like particles~(ALPs). One linear combination of the ALPs couples to QCD and becomes the QCD axion mass eigenstate. It is commonly assumed that the remaining $N-1$ ALPs stay ultralight, with masses much less than the mass of the QCD axion mass eigenstate; the non-zero masses of these ultralight ALPs could arise, {\it e.g.}, from string or gravitational instantons. Indeed, in String Theory constructions ALPs arise as the zero modes of higher-dimensional gauge fields compactified on the internal manifold, and the gauge invariance protects the masses of the ALPs from perturbative contributions~\cite{Demirtas:2021gsq}. These ultralight ALPs may still interact with matter and with the SM gauge fields other than QCD. 
The ALP decay constants may range, roughly, from $f_a \sim 10^{9} - 10^{18}$ GeV, which means that the ALP-matter interactions are heavily suppressed since they arise through higher-dimensional operators suppressed by this scale. The upper bound on $f_a$ arises from the theoretical assumption that the decay constant is smaller than the Planck scale, while the lower bound on $f_a$ is determined by stellar cooling and laboratory searches~\cite{Adams:2022pbo}. There is currently significant effort dedicated to searching for ultralight ALPs and the QCD axion in the laboratory and in astrophysical environments (see~\cite{Adams:2022pbo,Baryakhtar:2022obg,Boddy:2022knd} for recent reviews). In this work, we consider the possibility that at least one of the axion\footnote{Throughout the rest of this Article we refer to all ALPs as `axions' for simplicity, with the axion that solves the strong {\it CP} problem distinguished as the QCD axion.} mass eigenstates may be much heavier than the eV scale because of instantons from a dark gauge group. In particular, through a coupling $a G_d \tilde G_d / f_a$ to the dark gauge group with gauge field $G_d$, the heavy axion acquires a mass $m_a \sim \Lambda_D^2 / f_a$, with $\Lambda_D$ being the dark confinement scale. If the dark gauge group unifies with the Standard Model~(SM) near the Grand Unified Theory~(GUT) scale then we show it is natural to expect $\Lambda_D \sim 10^4 - 10^{10}$ GeV, for low-dimension dark gauge groups such as $SU(2)$ or $SU(3)$.\footnote{See~\cite{Demirtas:2018akl,Cvetic:2020fkd}, however, which claim that in certain F-theory constructions lower confinement scales may be preferred.} Assuming $f_a$ is also around the GUT scale ($\sim$ $10^{15} - 10^{17}$ GeV) this then implies $m_a\sim {\rm eV}-0.1\,{\rm PeV}$, though the heavy axion mass could be beyond this range depending on the dark confinement scale and decay constant. 
We show that if the axion is in the keV-MeV mass range it may explain the observed DM abundance. A crucial ingredient to this story, however, is a period of early matter domination induced by the dark glueballs that dilutes the otherwise over-abundant population of cold axions. The axion DM may decay today to two photons to give rise to monochromatic $X$-ray and gamma-ray lines. Such signatures can be searched for with existing telescopes such as {\it XMM-Newton}, {\it Chandra}, {\it NuSTAR}, INTEGRAL, and COMPTEL~\cite{Boddy:2022knd}. Here, we reinterpret existing limits from these telescopes on decaying DM in the context of the heavy axion DM model. We further show that future telescopes in the keV-MeV range offer significant discovery potential for heavy axions. In particular, we project sensitivity to heavy axion DM from the possible e-ASTROGAM~\cite{e-ASTROGAM:2017pxr} and AMEGO~\cite{Kierans:2020otl} missions in the MeV band that are currently being proposed (see~\cite{Engel:2022bgx} for additional proposals). These telescopes will improve the sensitivity to MeV sources by orders of magnitude, and we show that this will provide significant discovery potential for decaying heavy axion DM. The recently-launched eROSITA $X$-ray telescope may provide an incremental increase in sensitivity in the $X$-ray band~\cite{eROSITA:2020emt,Dekker:2021bos}, while larger leaps in sensitivity will arise from next-generation $X$-ray telescopes such as {\it Athena}~\cite{Barcons:2015dua, Piro:2021oaa} and THESEUS~\cite{THESEUS:2017qvx} that will probe natural parameter space where we may expect a decaying axion DM model to appear. The keV-MeV axions proposed in this work provide strong motivation for pursuing next-generation telescopes across this energy range. 
The observable signature of axion DM decay is similar to that of keV-scale mass sterile neutrino DM~\cite{Drewes:2016upu}, but such models have been increasingly in tension with data~\cite{Perez:2016tcq,Roach:2019ctw,Foster:2021ngm}. On the other hand, much of the best-motivated parameter space for heavy axions has yet to be covered experimentally. As we show, achieving the correct DM abundance without significant fine-tuning of the initial axion misalignment angle requires the reheat temperature from glueball decay to be ${\cal O}({\rm MeV}-{\rm GeV})$. With such low values of the reheat temperature, it is interesting to ask whether successful baryogenesis can occur. We demonstrate that in addition to having a keV-MeV mass axion explaining the DM abundance, an even heavier axion state may give rise to the observed baryon asymmetry through spontaneous baryogenesis~\cite{Cohen:1987vi}. Spontaneous baryogenesis proceeds through leptogenesis, with the Weinberg operator~\cite{Weinberg:1979sa} providing lepton number violation and the oscillating heavy axion field providing a time-dependent chemical potential for lepton number~\cite{Kusenko:2014uta, Co:2020jtv}. Thus a lepton asymmetry can develop in thermal equilibrium. This lepton asymmetry is then converted to an initially too large baryon asymmetry through electroweak sphalerons. The baryon asymmetry is subsequently diluted to the observed value by the entropy dilution induced from the glueballs of the same dark sector that gives rise to the lighter keV-MeV scale axion DM. Therefore, we can explain both the primordial DM and baryon abundances in this scenario with two heavy axions. 
As for the case of the axiverse, motivation for considering dark gauge groups arises from String Theory constructions, which may give rise to non-abelian dark gauge sectors, including in the heterotic string, type II string theory models, M-theory, and F-theory (see, {\it e.g.},~\cite{Gross:1984dd,Dixon:1985jw,Dixon:1986jc,Ibanez:1986tp,Lebedev:2006kn,Blaszczyk:2009in,Braun:2005ux,Bouchard:2005ag,Anderson:2012yf,Cvetic:2004ui,Gmeiner:2005vz,Blumenhagen:2008zz,Acharya:1998pm,Halverson:2015vta,Grassi:2014zxa,Halverson:2015jua,Taylor:2015ppa,Acharya:2017szw}). In particular, at energy scales well below the GUT scale, the gauge group may be written as $G_{\rm SM} \times G_{\rm dark}$, where $G_{\rm SM}$ is the SM gauge group and $G_{\rm dark}$ is the dark gauge group. No SM matter is charged under $G_{\rm dark}$. In this work we show, through explicit constructions based on, {\it e.g.}, an $SU(8)$ group, that such dark gauge sectors can also arise in orbifold GUT models where $G_{\rm SM}$ and $G_{\rm dark}$ unify into a single non-abelian gauge group. Note that while dark glueballs may themselves be the DM, as discussed first in~\cite{Carlson:1992fn} and further elaborated upon in {\it e.g.}~\cite{Halverson:2016nfq,Faraggi:2000pv,Feng:2011ik,Boddy:2014yra,Soni:2016gzf,Kribs:2016cew,Acharya:2017szw,Yamanaka:2019aeq,Halverson:2018olu,Halverson:2018vbo,Acharya:2017kfi,Soni:2017nlm,Cohen:2016uyg,Asadi:2021pwo}, in this work we assume the glueballs decay before big bang nucleosynthesis~(BBN). Scalar (moduli) DM may also arise in String Theory constructions with similar phenomenology to the heavy axions discussed in this work~\cite{Kusenko:2012ch}. For previous discussions of the unification of dark gauge groups with the SM see, {\it e.g.}, Ref.~\cite{Gherghetta:2016fhp, Gaillard:2018xgk, Murgui:2021eqf}. The remainder of this Article is organized as follows. 
In Sec.~\ref{sec:axiverse} we discuss the field theory of multiple axions connected to non-abelian, confining dark sectors. We describe the axion masses, decay constants, and couplings to matter that would naturally arise in the presence of such dark sectors. In Sec.~\ref{sec:DM} we describe the cosmology in the presence of the associated glueballs of the confining dark sector. We show that such glueballs give rise to an early matter dominated era and naturally avoid the DM overclosure problem, making the heavy axions a suitable DM candidate. The resulting axion DM can decay into a pair of photons, and we show that the existing and proposed $X$-ray and gamma-ray missions are capable of probing much of the motivated parameter space. In Sec.~\ref{sec:bary} we first show that an even heavier axion can lead to successful baryogenesis through the mechanism of spontaneous baryogenesis. Subsequently, we discuss a scenario in which the presence of two heavy axions can explain both the observed DM and baryon abundances through their connected cosmological evolution. To give an example of how such confining dark sectors might arise, in Sec.~\ref{sec:model} we construct an extra-dimensional orbifold GUT model, describing a breaking pattern $SU(8)\rightarrow SU(3)_D \times G_{\rm SM}$. The dark axion naturally emerges in this scenario from a higher-dimensional gauge field. We conclude in Sec.~\ref{sec:conclu}. \section{keV - PeV axions from confining dark sectors} \label{sec:axiverse} In this section we motivate keV-PeV scale axions from confining dark sectors. We claim that such heavy axion states arise generically in axiverse models with dark gauge groups. In particular, we assume that well below the GUT scale, the gauge group of nature may be written as $G_{\rm SM} \times G_{\rm dark}$. For simplicity we assume that the dark sector has no light matter content. 
Importantly, all interactions between the dark sector and the visible sector occur through higher-dimensional operators. We consider a scenario where $G_{\rm SM}$ and $G_{\rm dark}$ are unified at some high scale $M_{\rm GUT} \sim 10^{15}-10^{17}$~GeV, while below that scale they still interact through operators suppressed by an intermediate scale $\Lambda \lesssim M_{\rm GUT}$. Explicit example constructions of $G_{\rm SM}$ and $G_{\rm dark}$ unification in an extra-dimensional framework are given in Sec.~\ref{sec:model}. \subsection{Field theory considerations in the axiverse with confining dark sectors} The confinement scale of the dark sector $\Lambda_D$ is related to the ultraviolet~(UV) coupling $\alpha_{\rm UV}$ at the energy scale $\Lambda_{\rm UV}$ via the relation \es{eq:dark_confinement}{ \Lambda_D = \Lambda_{\rm UV} \, {\rm exp} \left({- \frac{2 \pi}{ 3 T_{G_{\rm dark}}\alpha_{\rm UV}}} \right)\,, } where $T_{G_{\rm dark}}$ is the dual Coxeter number, which is $N$ for $G_{\rm dark} = SU(N)$, assuming that the dark sector is ${\mathcal N} = 1$ supersymmetric. If the dark gauge group is not supersymmetric then we may use the one-loop $\beta$-function to estimate the dark confinement scale, which leads to an expression analogous to~\eqref{eq:dark_confinement} but with $T_{G_{\rm dark}} \to {11 } C_2 /9$, with $C_2$ the quadratic Casimir ($C_2 = N$ for $SU(N)$). We assume no light matter is charged under $G_{\rm dark}$. Let us suppose that the dark sector unifies with the visible sector at $\Lambda_{\rm GUT} \approx 10^{16}$ GeV, with $\alpha_{\rm UV} \approx 1/24$, as motivated by supersymmetric grand unification~\cite{Mohapatra:1997sp}. Then, taking $G_{\rm dark} = SU(2)$ we find $\Lambda_D \approx 10^5$ GeV ($\Lambda_D \approx 10^7$ GeV) for the supersymmetric (non-supersymmetric) theory; if instead $G_{\rm dark} = SU(3)$, the dark confinement scale rises to $5 \times 10^8$ GeV and $10^{10}$ GeV for the supersymmetric and non-supersymmetric theories, respectively. 
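As a quick numerical check (an illustrative sketch, not part of the analysis itself), evaluating Eq.~\eqref{eq:dark_confinement} with $\Lambda_{\rm UV} = 10^{16}$\,GeV and $\alpha_{\rm UV} = 1/24$ reproduces the confinement scales quoted above:

```python
import math

def lambda_confine(alpha_uv, T_G, lambda_uv=1e16):
    """Dimensional-transmutation estimate of the dark confinement
    scale in GeV, following Eq. (eq:dark_confinement)."""
    return lambda_uv * math.exp(-2.0 * math.pi / (3.0 * T_G * alpha_uv))

alpha_uv = 1.0 / 24.0  # GUT-scale coupling assumed in the text

# SUSY case: T = dual Coxeter number N; non-SUSY: T -> 11*C2/9, C2 = N.
scales = {
    ("SU(2)", "susy"):     lambda_confine(alpha_uv, 2),
    ("SU(2)", "non-susy"): lambda_confine(alpha_uv, 11 * 2 / 9),
    ("SU(3)", "susy"):     lambda_confine(alpha_uv, 3),
    ("SU(3)", "non-susy"): lambda_confine(alpha_uv, 11 * 3 / 9),
}
for (group, case), lam in scales.items():
    print(f"{group:6s} {case:9s} Lambda_D ~ {lam:.1e} GeV")
```

The non-supersymmetric entries use the one-loop substitution $T_{G_{\rm dark}} \to 11 C_2/9$ described in the text.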
Matter content in the dark sector may further lower the confinement scales. Moreover, the assumed value $\alpha_{\rm UV} \approx 1/24$ is suggestive but may deviate in any particular GUT model, which broadens the possible confinement scales. Thus, for most of this work we remain somewhat agnostic as to the scale $\Lambda_D$, though high confinement scales $\sim 10^5 - 10^{10}$ GeV appear to be natural expectations. We assume that there are $N$ axions $a_i$, with $i = 1, \cdots, N$, that have ultralight bare masses (much lighter than the QCD axion mass). The axions will acquire non-trivial potentials through their couplings to $G_{\rm dark}$ and to the visible $SU(3)_{c}$. In principle, $G_{\rm dark}$ may have multiple confining sub-sectors. For the moment, however, we take $G_{\rm dark}$ to be a simple Lie group, whose confinement scale is assumed to be much larger than $\Lambda_{\rm QCD}$. (Later, we consider the scenario where $G_{\rm dark}$ is the product of two simple Lie groups, one of which gives rise to the DM axion and the other produces the heavy axion that leads to baryogenesis.) We denote the field strength as $G_{\rm d, \mu\nu}^a$, with $a$ a dark color index. Then, the relevant terms in the Lagrangian are \es{}{ {\mathcal L} = \sum_i {\alpha_d \over 8 \pi}{c_{\rm d}^i a_i \over f_a} G_{{\rm d} \, {\mu \nu}}^a \tilde G_{{\rm d}\,a}^{\mu \nu} \,, } where $f_a$ is the decay constant giving the scale of the ultraviolet completion to the axion sector, $\alpha_d$ is the dark gauge coupling, and the $c_{\rm d}^i$ are dimensionless coefficients that describe the magnitude of the coupling of each axion to the dark gauge group. Effectively, we treat the axions $a_i$ as having decay constants $f_a / |c_d^i|$, but we choose to factor out the common scale $f_a$; in particular, we assume that in the UV completion each axion field $a_i$ is periodic with period $2 \pi f_a / |c_d^i|$. 
At energies well below the dark confinement scale, instantons in the dark sector generate a potential for the axions, which for small displacements is of the form \es{eq:pot}{ V \approx &\Lambda_{D}^4 \left( \sum_i {c_{\rm d}^i a_i \over f_a} + \bar \theta_D \right)^2 \,, } where ${\bar \theta_D}$ is the CP-violating theta-angle of the dark sector. The canonically normalized axion mass eigenstate is given by \es{}{ a_D = {\sum_i c_{\rm d}^i a_i \over \sqrt{\sum_i (c_{\rm d}^i)^2 }} \,, } and the axion mass is \es{eq:axion_mass}{ m_a \approx {\Lambda_D^2 \over \tilde f_a} \approx 1 \, \, {\rm MeV} \left( {\Lambda_D \over 10^6 \, \, {\rm GeV}} \right)^2 \left( {10^{15} \, \, {\rm GeV} \over \tilde f_a}\right) \,. } We define $\tilde f_a \equiv f_a / \sqrt{\sum_i (c_{\rm d}^i)^2 }$, such that the axion coupling to the dark gauge group is \es{}{ {\mathcal L} = {\alpha_d \over 8 \pi}{a_D \over \tilde f_a} G_{{\rm d} \, {\mu \nu}}^a \tilde G_{{\rm d}\,a}^{\mu \nu} \,. } If we assume that all of the $c_{\rm d}^i$ are order unity, then $\tilde f_a \sim f_a / \sqrt{N}$. The axion $a_D$ has domain wall number $N$ in this construction with respect to the dark gauge group; that is, $a_D$ is periodic with period $N \times 2 \pi \tilde f_a$, but the dark-gauge-group-induced potential is periodic with period $2 \pi \tilde f_a$. Let us now consider the couplings of the axion $a_D$ to other gauge sectors. In particular, we assume that the ensemble of axions, $a_i$, interact with a gauge group specified by field strength $G_{\mu\nu}$ and coupling strength $\alpha$ by the terms \es{eq:other_sector}{ {\mathcal L} = \sum_i {\alpha \over 8 \pi} {d^i a_i \over f_a} G_{\mu \nu} \tilde G^{\mu \nu} = {\alpha \over 8 \pi} {d_D a_D \over \tilde f_a} G_{\mu \nu} \tilde G^{\mu \nu} + \cdots \,, } where the $d^i$ are dimensionless constants. 
Note that this gauge group may represent an additional confining dark gauge group, the visible QCD sector, or $U(1)_{\rm EM}$; the point we make about this coupling is generic assuming that the confinement scale for $G_{\mu \nu}$, if it confines, is much lower than $\Lambda_D$. In the second equality in~\eqref{eq:other_sector} we have isolated the interaction of $a_D$ and left off the other $N-1$ axion states. The dimensionless coupling $d_D$ can be written as \es{}{ d_D = {\sum_i c_d^i d^i \over {\sum_i (c_d^i)^2}} \,. } Under the assumptions $\langle c_d^i \rangle = \langle d^i \rangle = \langle c_d^i d^i \rangle = 0$ and $\langle (c_d^i)^2 \rangle = \langle (d^i)^2 \rangle = 1$, with brackets denoting averages over statistical realizations of the couplings, we find $\langle d_D^2 \rangle \approx 2 / N$. This is important because it suggests that in an axiverse with $N$ axions the couplings of massive axion states to gauge groups with lower confinement scales will be suppressed by $\sim$ $1/\sqrt{N}$. Of course, the exact suppression depends upon the distributions of axion couplings to the various gauge groups, but generically we may expect the couplings of the massive axion state to the other gauge groups to be suppressed. (See~\cite{Halverson:2019kna} for a similar observation in the context of axion reheating through couplings to gauge sectors in F-theory.) As an aside from the heavy axion discussion, consider the IR coupling of the QCD axion to electromagnetism, at energy scales below the QCD confinement scale in the context of the axiverse: \es{}{ {\mathcal L} = C_{a\gamma\gamma} {\alpha_{\rm EM} \over 8 \pi} {a_{\rm QCD} \over \tilde f_a} \tilde F_{\mu \nu} F^{\mu \nu} \,, } where we have identified $\tilde f_a$ with the decay constant of the QCD axion mass eigenstate, which is generically a factor of $1 / \sqrt{N}$ smaller than the decay constants of the $a_i$ axion states, assuming the appropriate $c_d^i$ coefficients are order unity and uncorrelated. 
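The $1/N$ suppression of $d_D$ can be checked with a quick Monte Carlo; in this sketch we draw the couplings from a standard normal distribution (an assumption on our part — the precise order-one prefactor of $\langle d_D^2 \rangle$ depends on the assumed coupling distribution, while the $1/N$ scaling does not):

```python
import random
import statistics

def dD_squared(N, rng):
    """One draw of d_D^2 = (sum_i c_i d_i)^2 / (sum_i c_i^2)^2 for random
    order-unity couplings c_i, d_i (here standard normal)."""
    c = [rng.gauss(0, 1) for _ in range(N)]
    d = [rng.gauss(0, 1) for _ in range(N)]
    num = sum(ci * di for ci, di in zip(c, d))
    den = sum(ci * ci for ci in c)
    return (num / den) ** 2

rng = random.Random(0)
N = 100
mean = statistics.fmean(dD_squared(N, rng) for _ in range(20000))
print(f"N={N}: <d_D^2> ~ {mean:.4f}, i.e. ~{mean * N:.2f}/N")
```

The mean sits at an order-one multiple of $1/N$, confirming the $\sim 1/\sqrt{N}$ suppression of the heavy-axion couplings to lower-confinement-scale gauge groups.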
Note that for this particular discussion the presence of a possible dark gauge sector does not play an important role. The IR coefficient $C_{a\gamma\gamma} = C_{a\gamma\gamma}^{\rm UV} + C_{a\gamma\gamma}^{\rm QCD}$ has an ultraviolet contribution $C_{a\gamma\gamma}^{\rm UV}$ and a contribution from mixing of the neutral pion, $C_{a\gamma\gamma}^{\rm QCD} \approx -1.92(4)$~\cite{diCortona:2015ldu}. The UV contribution is typically written as $C_{a\gamma\gamma}^{\rm UV} = E_{\rm QCD}/ N_{\rm QCD}$, where $E_{\rm QCD}$\,($N_{\rm QCD}$) is the electromagnetic\,(QCD) anomaly coefficient. The argument above suggests that in an axiverse with $N$ nearly degenerate (in decay constant) axions, we expect the QCD mass eigenstate to have $N_{\rm QCD} \sim N$ while $E_{\rm QCD} \sim \sqrt{N}$, and thus $E_{\rm QCD} / N_{\rm QCD} \sim 1 / \sqrt{N}$. This then implies that the infrared observer should measure $C_{a\gamma\gamma} \approx C_{a \gamma\gamma}^{\rm QCD}$, which is the expectation for the KSVZ field theory axion model~\cite{Kim:1979if,Shifman:1979if}. In contrast, in models where the QCD axion couples to the SM in a way compatible with Grand Unification we expect $E_{\rm QCD} / N_{\rm QCD} = 8/3$ (see, {\it e.g.},~\cite{DiLuzio:2020wdo}), leading to the DFSZ-type expectation $C_{a\gamma\gamma} \approx 0.75$~\cite{Dine:1981rt,Zhitnitsky:1980tq}. Note that the axiverse scenario could still lead to the DFSZ-type $C_{a\gamma\gamma}$ if all of the axions share the same GUT-compatible coupling to the SM gauge groups, as that would violate our assumption that the $c^i$ are uncorrelated. For a recent discussion along these lines see~\cite{Agrawal:2022lsp}. Moreover, we note that the QCD axion decay constant, as defined through the coupling of the axion to QCD, is reduced by a factor $\sim$$ \sqrt{N}$ from the naive expectation in the axiverse, assuming uncorrelated coupling coefficients. (The decay constant would be reduced by $\sim$$N$ if the couplings are correlated.) 
This has important implications for axion laboratory experiments such as ABRACADABRA and DM Radio~\cite{Ouellet:2018beu,Salemi:2021gck,Brouwer:2022bwo,DMRadio:2022pkf}, as it suggests that decay constants as low as, {\it e.g.}, $\sim 10^{13} - 10^{14}$\,GeV could be directly connected with GUT models in the context of the axiverse with a large number $N$ of axions. Returning to the heavy axion story, we note that the same logic applied above to the QCD axion also suggests that axion-photon coupling coefficients $C_{a\gamma\gamma} \sim 1 /\sqrt{N}$ might be expected for the heavy axions in the axiverse, as the heavy axions have only UV contributions to the electromagnetic coupling. For example, $C_{a\gamma\gamma} \sim 0.1$ could be expected for $N \sim 10^2$ axions. In addition to the axion-photon coupling we also consider the axion-electron coupling, which for an axion $a$ is \es{}{ {\mathcal L} = C_{aee}{ \partial_\mu a \over 2 \,f_a} \bar e \gamma^\mu \gamma_5 e \,, } where $C_{aee}$ is the dimensionless coupling coefficient and $e$ is the electron field. Depending on the UV completion this coefficient may be zero or non-zero in the UV, though given an axion-photon coupling it is generated at one loop under the renormalization group. We use this operator when considering axion decays to electron-positron pairs, where kinematically allowed. \section{Axion cosmology with early matter domination from dark glueballs} \label{sec:DM} In this section we discuss heavy axion cosmology and show, in particular, that the correct DM abundance may naturally arise if there is a period of early matter domination. The early matter domination, ending with a low reheat temperature $T_{\text{RH}}$, can naturally arise in the context of the heavy axion theory, with no additional ingredients beyond the heavy axion and its associated dark gauge group, because of the dark glueballs. 
\subsection{Signatures of heavy axion dark matter} Let us first suppose that there is a heavy axion in the keV-MeV mass range, whose mass is generated from a confining dark sector as described previously, that makes up the observed DM. If the axion mass is less than twice the electron mass then the only kinematically-allowed option for the axion to decay is into two photons. (Note that heavy axion decays to lighter axions will generically be suppressed relative to axion decays to two photons and to electron-positron pairs.) The decay rate of the axion to two photons is given by \es{eq.lifetime}{ \tau_{a\to \gamma\gamma}=\frac{256\pi^3}{\alpha^2 C_{a\gamma\gamma}^2}\frac{f_a^2}{m_a^3}\approx \, &9.6\times 10^{27}~\text{s}\left(\frac{0.1}{C_{a\gamma\gamma}}\right)^2 \left(\frac{0.1~\text{MeV}}{m_a}\right)^3 \left(\frac{f_a}{10^{15}~\text{GeV}}\right)^2 \,. } Above, and in the remainder of this Article, we depart from the notation in the previous section for simplicity and take $\tilde f_a \to f_a$ to be the axion decay constant such that $m_a \approx \Lambda_D^2 / f_a$ (see~\eqref{eq:axion_mass}). Interestingly, while much longer than the age of the Universe, DM lifetimes on the order of those in~\eqref{eq.lifetime} are at the edge of sensitivity of present-day $X$-ray and gamma-ray telescopes, as we discuss later in this Article. When $m_a > 2 m_e$, with $m_e$ the electron mass, the axion may also decay to $e^+ e^-$ pairs, with partial lifetime (see, {\it e.g.},~\cite{Bauer:2019gfk}) \es{}{ \tau_{a \to e^+e^-} &= {8 \pi f_a^2 \over m_a m_e^2} {1 \over C_{aee}^2} \left[ 1 - 4 {m_e^2 \over m_a^2}\right]^{-1/2}\\ &\approx {4 \times 10^{18}} \, \, {\rm s} \left({0.1 \over C_{aee}}\right)^2 \left( {2 \, \, {\rm MeV} \over m_a} \right) \left(\frac{f_a}{10^{15}~\text{GeV}}\right)^2 \,. } Thus, the axion to $e^+e^-$ pair decay channel may dominate the total lifetime. 
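The two partial lifetimes above, together with the relation $m_a \approx \Lambda_D^2 / f_a$, can be cross-checked numerically; a minimal sketch:

```python
import math

HBAR = 6.582e-25  # GeV * s

def tau_gg_sec(m_a_GeV, f_a_GeV, C_agg, alpha_em=1/137.036):
    """a -> 2 gamma partial lifetime in s:
    tau = 256 pi^3 f_a^2 / (alpha^2 C_agg^2 m_a^3)."""
    rate = alpha_em**2 * C_agg**2 * m_a_GeV**3 / (256 * math.pi**3 * f_a_GeV**2)
    return HBAR / rate

def tau_ee_sec(m_a_GeV, f_a_GeV, C_aee, m_e_GeV=0.511e-3):
    """a -> e+ e- partial lifetime in s, valid for m_a > 2 m_e."""
    rate = (C_aee**2 * m_a_GeV * m_e_GeV**2 / (8 * math.pi * f_a_GeV**2)
            * math.sqrt(1 - 4 * m_e_GeV**2 / m_a_GeV**2))
    return HBAR / rate

m_a = (1e6) ** 2 / 1e15  # m_a ~ Lambda_D^2 / f_a, in GeV (~1 MeV benchmark)
print(f"m_a ~ {m_a * 1e3:.1f} MeV")
print(f"tau(a->gg) = {tau_gg_sec(0.1e-3, 1e15, 0.1):.1e} s")
print(f"tau(a->ee) = {tau_ee_sec(2e-3, 1e15, 0.1):.1e} s")
```

The outputs land at $\sim 10^{28}$\,s and $\sim 4 \times 10^{18}$\,s for the benchmark parameters, matching the scalings quoted above to within rounding.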
In fact, this may be true even if $C_{aee}$ vanishes in the UV and is only generated under the renormalization group. The IR value of $C_{aee}$ in that case depends on the relative coupling of the axion to $U(1)_Y$ versus $SU(2)_L$, but one generically expects $|C_{aee} / C_{a\gamma\gamma}| \sim 10^{-4} - 10^{-3}$~\cite{Srednicki:1985xd,Chang:1993gm,Dessert:2021bkv}. Thus, depending on $|C_{aee} / C_{a\gamma\gamma}|$, the total lifetime for $m_a \gtrsim 2 m_e$ could be dominated by the decay to $e^+e^-$ pairs. The total lifetime must be sufficiently long compared to the age of the Universe for the axion to be a DM candidate. This requirement itself limits the $f_a$ that may be realized for tree-level $C_{aee}$ and $m_a \sim {\mathcal O}(1)$ MeV. However, the constraints on $\tau_{a\to\gamma\gamma}$ are much stronger than those on $\tau_{a\to e^+e^-}$. For example, for $m_a\sim2\,\MeV$, the lower bound on the axion partial lifetime to photons is $\tau_{a\to\gamma\gamma} \gtrsim {\rm few} \times 10^{27}\,$s, while for electrons it is $\tau_{a\to e^+e^-} \gtrsim 10^{24}$\,s~\cite{Liu:2020wqz}. Thus, we conclude that for $m_a > 2 m_e$, DM decays to $e^+e^-$ pairs generically rule out axion DM with tree-level couplings to electrons, for $f_a$ all the way up towards the Planck scale, while in the case of loop-induced axion-electron couplings the probes using decays to two photons are more powerful. For this reason, throughout the rest of this Article we assume that for $m_a > 2 m_e$ the axion-electron coupling is loop induced, so that we may focus solely on the decay channel to $\gamma\gamma$. \subsection{Cosmology of heavy axion dark matter with and without early matter domination} A central impediment, however, to the possibility of keV-MeV scale axion DM is that, assuming the standard cosmology, DM is overproduced by orders of magnitude. 
From the misalignment mechanism, assuming the axion field starts with a constant initial field value $a_D(t=0) = \theta_i f_a$ and its mass $m_a$ is temperature independent, the DM abundance is determined to be \es{eq.relicab_std}{ \Omega_a h^2\rvert_{\rm RD} \approx 0.12\left( \frac{\theta_if_a}{2\times10^{13}~\text{GeV}} \right)^2 \left(\frac{m_a}{1~\mu\text{eV}}\right)^{\frac{1}{2}} \left(\frac{90}{g_{*}(T_{\rm osc})}\right)^{\frac{1}{4}} \,, } in the limit $|\theta_i| \lesssim 1$ where anharmonicities of the axion potential may be ignored. Here $g_{*}(T_{\rm osc})$ is the effective number of degrees of freedom in the radiation bath when the axion starts to oscillate at $m_a=q_0 H(T_{\rm osc})$, with $q_0 \approx 1.6$ (see {\it e.g.}~\cite{Blinov:2019rhb}). (Here we ignore possible temperature dependence of the axion mass, though we will return to this possibility later in this Article.) Given that cosmic microwave background~(CMB) measurements indicate $\Omega_a h^2 \approx 0.12$~\cite{Planck:2018vyg}, masses $m_a \sim$~keV-MeV appear heavily disfavored, unless the initial misalignment angle is severely tuned. One possibility is that the tuning could appear for anthropic reasons, as has been discussed, {\it e.g.}, for the QCD axion with GUT-scale decay constant~\cite{Tegmark:2005dy}. However, in this Article we explore dynamical mechanisms that may create the correct DM abundance without requiring anthropic tuning of $\theta_i$. A key point of this work is that we may naturally match the observed abundance of DM for such massive axions by assuming that the Universe went through a period of early matter domination. We will later show that the early matter domination may arise from the dark glueballs. 
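Inverting~\eqref{eq.relicab_std} makes the required tuning explicit; a short numerical sketch (the benchmark values $m_a = 0.1$\,MeV and $f_a = 10^{15}$\,GeV are our choices, matching later examples in the text):

```python
def theta_required_std(m_a_MeV, f_a_GeV, g_star=90.0):
    """|theta_i| giving Omega_a h^2 = 0.12 in the standard radiation-dominated
    cosmology, from inverting
    Omega h^2 = 0.12 (theta f/2e13 GeV)^2 (m/1 ueV)^(1/2) (90/g*)^(1/4)."""
    m_a_microeV = m_a_MeV * 1e12  # 1 MeV = 1e12 micro-eV
    return (2e13 / f_a_GeV) * m_a_microeV ** -0.25 * (g_star / 90.0) ** 0.125

theta = theta_required_std(0.1, 1e15)
print(f"required |theta_i| ~ {theta:.1e}")
print(f"overproduction factor for theta_i = 1: ~{theta ** -2:.0e}")
```

For these values $|\theta_i|$ must be tuned to a few $\times 10^{-5}$, i.e.\ an order-one misalignment angle overproduces DM by roughly nine orders of magnitude.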
The DM abundance from the misalignment mechanism may generically be written as~\cite{Blinov:2019rhb} \es{eq:gen_ev}{ \Omega_a = {1 \over 2} {m_a^2 f_a^2 \theta_i^2 \over \rho_c} \left( {R_{\rm osc} \over R_{\rm RH}} \right)^3 \left( {T_0 \over T_{\rm RH}} \right)^3 {g_{*S}(T_0) \over g_{*S}(T_{\rm RH})} \,, } where $T_{\rm RH}$ is the reheat temperature after early matter domination, assuming instantaneous reheating, $T_0$ is the temperature today, $\rho_c$ is the critical density, and $R_{\rm osc}$ ($R_{\rm RH}$) is the scale factor at $T_{\rm osc}$ ($T_{\rm RH}$). The evolution is assumed to be adiabatic below $T_{\rm RH}$, and we assume for now that $m_a$ is temperature independent. The scale factor ratio appearing in~\eqref{eq:gen_ev} may be simplified by using the Friedmann relation $H^2 \propto R^{-3(w+1)}$, with equation-of-state parameter $w = 0$ ($w = 1/3$) for matter (radiation) domination. Assuming a standard radiation dominated cosmology in~\eqref{eq:gen_ev}, in which case $T_{\rm RH}$ is any intermediate reference temperature, then leads to the result quoted in~\eqref{eq.relicab_std}. On the other hand, if we suppose that the axion starts to oscillate during a period of early matter domination, with instantaneous reheating at $T_{\rm RH}$, then the dependence of $\Omega_a$ on $m_a$ and $g_{*S}(T_{\rm RH})$ cancels, leading to the result~\cite{Blinov:2019rhb} \begin{align} \label{eq.relicab} \left.\Omega_a h^2\right|_{\rm EMD}\approx0.12\left(\frac{\theta_if_a}{10^{15}~\text{GeV}}\right)^2\left(\frac{T_{\text{RH}}}{10~\text{MeV}}\right). \end{align} Note that if the axion starts to oscillate during radiation domination, with the Universe subsequently going through a period of early matter domination, the expression for $\Omega_a h^2$ in~\eqref{eq.relicab} is enhanced by the ratio $(T_{\rm osc} / T_{\rm EMR})$, where $T_{\rm EMR}$ is the temperature of matter-radiation equality at the beginning of the early matter domination epoch. 
Thus, we see that heavy axions can indeed constitute all of DM for GUT-scale $f_a$ without fine tuning of the initial misalignment $\theta_i$, provided $T_{\text{RH}}$ is sufficiently low. In the next subsection we show that such a period of early matter domination, with low reheating temperature, may naturally arise from glueballs in the dark sector. Note that successful BBN requires $T_{\rm RH}>4$~MeV~\cite{Kawasaki:2000en, Hannestad:2004px}; {\it i.e.}, the glueballs must have decayed to give rise to a radiation-dominated cosmology below this temperature. This determines the lower limit of $T_{\rm RH}$ in the subsequent analyses. \subsection{Early matter domination from dark glueballs} In the absence of any fermionic states in the dark sector, the glueballs arising from confinement of $G_{\rm dark}$ would typically be long-lived. They can still decay into SM states through higher-dimensional couplings to the SM Higgs, which we parameterize by \es{eq:glueball_dim6}{ {\mathcal L} \supset &{c_6 \alpha_D \over 4 \pi} G^a_{d \, \mu \nu} {G}_{d \, a}^{ \mu \nu} {H^\dagger H \over \Lambda^2} +{\tilde c_6 \alpha_D \over 4 \pi} G^a_{d \, \mu \nu} {\tilde G}_{d \, a}^{ \mu \nu} {H^\dagger H \over \Lambda^2} \,. } Here $\Lambda$ is generically a high scale, which can be of order the GUT scale or lower. The dimensionless coefficients $c_6$ and $\tilde c_6$ will generically both be of order unity if the UV theory has CP violation and if the dimension-6 operators are generated at one loop by couplings to heavy particles that interact with the SM Higgs and are charged under the dark gauge group. We provide an explicit construction of this operator along these lines in Sec.~\ref{sec:model}. Let us assume that the lightest glueball, which is the $J^{PC} = 0^{++}$ state, has mass $m_{0^+}$. 
We focus on the $c_6$ operator since it is the relevant one for the decay of the $0^{++}$ glueball in the limit of vanishing $\theta$-angle, which is accomplished by the dark axion.\footnote{A residual, oscillating $\theta$ angle may be present from the oscillating axion field about its minimum, but this possibility does not affect the arguments below, as it would simply introduce a small, time-dependent mixing between the CP even and odd glueball states.} Furthermore, in a CP-violating theory, the heavier glueball states would be even more unstable, and therefore we consider only the $0^{++}$ glueball from here on.\footnote{Note that some of the higher-spin glueballs, such as the $1^{+-}$ state, may require higher-dimensional operators to decay; however, the relic DM abundance of these states is subdominant in the parameter space we consider~\cite{Forestell:2016qhc}. } It is convenient to define the matrix element $ \langle 0 | G^a_{d \, \mu \nu} {G}_{d \, a}^{ \mu \nu} | 0^{++} \rangle \equiv 2 F_{0^+}$ and the dimensionless constant $f_{0^+}$ through the relation $4 \pi \alpha_D F_{0^+} = f_{0^+} m_{0^+}^3$, in order to factor out renormalization group and scale dependence. The quantity $f_{0^+}$ has been computed in lattice QCD for pure $SU(3)$ gauge theory to be $f_{0^+} \approx 3.06$, with the dependence on the number of colors $N_c$ expected to be minor for $N_c \sim 3$~\cite{Chen:2005mg}. We may then parameterize the glueball decay rate, in the limit $m_{0^+} \gg m_h$, with $m_h$ the Higgs mass, as~\cite{Juknevich:2009gg} \es{eq:glueball_decay}{ \Gamma_{0^{++} \to {\rm SM}} \approx &~9\, \times 10^{-2} \, \, {\rm s}^{-1} c_6^2 \left( {m_{0^+} \over 10^7 \, \, {\rm GeV} } \right)^5 \left( { 10^{14} \, \, {\rm GeV}} \over \Lambda \right)^4 \,. } Note that the glueball decays to pairs of SM Higgs bosons, pairs of $Z$ bosons, and $W^+ W^-$ with relative rates $1:1:2$, respectively. 
The ratio $x \equiv m_{0^+} / \sqrt{m_a f_a}$ is expected to be order unity and is independent of the dark confinement scale $\Lambda_D$, though it may have minor dependence on the number of dark colors. This ratio is important for determining the cosmological history for the simple reason that the glueball decay rate in~\eqref{eq:glueball_decay} depends on $m_{0^+}$ to the fifth power, so factors of order unity in the relation between the confinement scale and the glueball mass become amplified. At a more precise level, the axion mass is related to the topological susceptibility $\chi_t$ through the relation $m_a^2 = \chi_t / f_a^2$. The topological susceptibility has been computed in lattice QCD for $SU(2)$ and larger $N_c$ gauge theories~\cite{Athenodorou:2021qvs}. The glueball mass spectrum has also been computed in lattice QCD~\cite{Bonanno:2022yjr}. Combining these results we estimate $x \approx 7.92$ ($x \approx 8.35$) for $SU(2)$ ($SU(N_c\rightarrow\infty)$) gauge theory. Given that the dependence on $N_c$ is minor, in the following calculations we simply take $x = 8$. We assume that after inflation both the dark sector and the visible sector are reheated; we denote the ratio of entropy densities between the two sectors as $B \equiv S_D / S_{\rm vis}$, with $S_D$ ($S_{\rm vis}$) the dark (visible) sector entropy density. In this section we work in the limit $B \gg 1$, though our results are not sensitive to $B$ so long as $B \gtrsim 1$ ($B \ll 1$ is a qualitatively different regime). We take the dark confinement phase transition to occur at the critical temperature $T_c$. For $T > T_c$, with $T$ the temperature in the dark sector, the dark-sector energy density, which is the dominant energy density in the Universe by assumption, redshifts as radiation. We work in the approximation where at $T = T_c$ the dark gluons are instantaneously converted to glueballs, which then redshift like non-relativistic matter. 
This approximation is justified because the duration of the phase transition is expected to be much less than a Hubble time with a small degree of supercooling (see, {\it e.g.},~\cite{Halverson:2020xpg}). See, {\it e.g.},~\cite{Carlson:1992fn, Forestell:2016qhc,Soni:2016gzf} for more careful computations of ${\cal O}(1)$ corrections to this approximation accounting for glueball freezeout and number changing processes. Thus, for $T < T_c$ the Universe is matter dominated. Matter domination ends when the $0^{++}$ glueball decays to SM final states, leading, in the instantaneous decay approximation, to a visible-sector reheat temperature $T_{\rm RH}$ determined by: \begin{align} \frac{\pi^2}{30}g_{*}(T_{\rm RH})T_{\rm RH}^4 = {4 \over 3} \mpl^2 \Gamma_{0^{++} \to {\rm SM}}^2 \,, \end{align} with $g_*(T_{\rm RH})$ the degrees of freedom in the visible sector at $T_{\rm RH}$. To derive the right hand side above, we assume that the glueballs instantaneously decay at $t = \Gamma_{0^{++} \to {\rm SM}}^{-1}$, and we use the fact that glueball energy density determines the Hubble parameter during the period of early matter domination. Note that this implies a reheating temperature \es{eq:TRH_glue}{ T_{\rm RH} \approx &~5 \, \, {\rm MeV} \left( {10.8 \over g_*(T_{\rm RH}) }\right)^{1/4} c_6 \left( {m_{0^+} \over 3 \times 10^7 \, \, {\rm GeV}} \right)^{5/2} \left({10^{14} \, \, {\rm GeV} \over \Lambda} \right)^2 \,. } Referring back to~\eqref{eq.relicab}, we see that this reheating temperature is sufficiently low such that we may have $\Lambda$ and $f_a$ in the range $\sim$ $10^{14}$-$10^{16}\,$GeV and produce axions that make up the observed DM, without the need for (much) fine-tuning of the initial misalignment angle. However, in the above discussion we crucially make the assumption that the heavy axion starts oscillating {\it during} matter domination. Let us revisit this assumption to see under what conditions it holds. 
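The reheat temperature in~\eqref{eq:TRH_glue} follows from~\eqref{eq:glueball_decay} and the Friedmann equation; as a numerical cross-check:

```python
import math

HBAR = 6.582e-25   # GeV * s
M_PL = 2.435e18    # reduced Planck mass, GeV

def glueball_rate(m0_GeV, lam_GeV, c6=1.0):
    """Gamma(0++ -> SM) in 1/s, using the parameterization
    9e-2 s^-1 * c6^2 * (m0/1e7 GeV)^5 * (1e14 GeV/Lambda)^4."""
    return 9e-2 * c6**2 * (m0_GeV / 1e7) ** 5 * (1e14 / lam_GeV) ** 4

def T_RH_MeV(m0_GeV, lam_GeV, c6=1.0, g_star=10.8):
    """Reheat temperature from (pi^2/30) g* T^4 = (4/3) Mpl^2 Gamma^2,
    i.e. instantaneous glueball decay at t = 1/Gamma."""
    gamma_GeV = glueball_rate(m0_GeV, lam_GeV, c6) * HBAR  # 1/s -> GeV
    T4 = (4.0 / 3.0) * M_PL**2 * gamma_GeV**2 * 30 / (math.pi**2 * g_star)
    return T4 ** 0.25 * 1e3  # GeV -> MeV

print(f"T_RH = {T_RH_MeV(3e7, 1e14):.1f} MeV")
```

For $m_{0^+} = 3 \times 10^7$\,GeV and $\Lambda = 10^{14}$\,GeV this returns $T_{\rm RH} \approx 5$\,MeV, as quoted.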
The ratio of the critical temperature to $\sqrt{m_a f_a} = \chi_t^{1/4}$, the fourth root of the topological susceptibility, may be computed using lattice QCD results~\cite{Lucini:2012wq} as \es{}{ {T_c \over \sqrt{m_a f_a}} \approx 1.6 - {0.8 \over N_c^2} \label{eq:suscep}\,, } for $N_c$ dark colors. Recall that the axion field begins to oscillate at the temperature $T_{\rm osc}$ where $m_a = q_0 H(T_{\rm osc})$; for $T > T_c$, $H \approx {\pi \over \sqrt{90}} \sqrt{g_*} T^2 / \mpl$, where $g_* = 2(N_c^2 -1)$ is the number of degrees of freedom in dark gluons. Thus, we see that the dark axion begins to oscillate at $T<T_c$ if $f_a \gtrsim 9.4 \times 10^{17}$ GeV ($f_a \gtrsim 1.2 \times 10^{18} / N_c$ GeV) for $SU(2)$ ($SU(N_c)$ with $N_c \gg 1$), in which case the axion begins to oscillate during matter domination. Let us consider a benchmark scenario for which the axion begins to oscillate during matter domination. We take $N_c = 2$ and saturate $f_a = 9.4 \times 10^{17}$\,GeV, such that $T_{\rm osc} = T_c$. Then, requiring a reheat temperature of $T_{\rm RH} \approx 5$\,MeV means that the correct DM density is only achieved if the initial misalignment angle is tuned such that $|\theta_i| \approx 1.5 \times 10^{-3}$ (see~\eqref{eq.relicab}). The tuning may be partially alleviated by having the axion begin its oscillation before $T_c$, as we discuss now. For $T_{\rm osc} > T_c$, we need to account for the temperature-dependent axion mass to determine $T_{\rm osc}$, since $m_a(T) \to 0$ for $T$ much larger than $T_c$, while $m_a(T)$ asymptotes to its zero-temperature value $m_a$ at $T \approx T_c$. For $T/T_c \gg 1$ we may reliably use the dilute instanton gas approximation (DIGA) to calculate~\cite{Callan:1977gz}: \begin{align} m_a(T) \approx \begin{cases} m_a \left(\frac{T_c}{T}\right)^b,~ T>T_c\\ m_a,~ T\leq T_c \,, \end{cases} \end{align} where $b = 11 N_c/6 - 2$ for pure $SU(N_c)$. 
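The threshold $f_a \gtrsim 9.4 \times 10^{17}$\,GeV and the benchmark tuning $|\theta_i| \approx 1.5 \times 10^{-3}$ can be reproduced from the relations above; a numerical sketch:

```python
import math

M_PL = 2.4e18  # reduced Planck mass, GeV

def f_a_threshold(Nc, q0=1.6):
    """f_a above which oscillation starts below T_c (during early matter
    domination): requiring m_a < q0*H(T_c), with
    H = (pi/sqrt(90)) sqrt(g*) T^2 / Mpl, g* = 2(Nc^2-1) dark gluons,
    and T_c = (1.6 - 0.8/Nc^2) sqrt(m_a f_a); the m_a dependence cancels."""
    g_star = 2 * (Nc**2 - 1)
    tc_ratio = 1.6 - 0.8 / Nc**2
    return math.sqrt(90) * M_PL / (q0 * math.pi * math.sqrt(g_star) * tc_ratio**2)

def theta_i_EMD(f_a_GeV, T_RH_MeV):
    """|theta_i| for Omega h^2 = 0.12 from the EMD relic abundance,
    0.12 (theta f_a/1e15 GeV)^2 (T_RH/10 MeV)."""
    return (1e15 / f_a_GeV) * math.sqrt(10.0 / T_RH_MeV)

fa = f_a_threshold(2)
print(f"f_a threshold (Nc=2): {fa:.2e} GeV")
print(f"|theta_i| at T_RH = 5 MeV: {theta_i_EMD(fa, 5.0):.1e}")
```
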
We then solve for $T_{\rm osc}$ by setting $m_a(T_{\rm osc}) = q_T H(T_{\rm osc})$, with $q_T = (4/5)(2+b)$~\cite{Blinov:2019rhb}, which yields the ratio \es{eq:Tosc_over_Tc}{ {T_{\rm osc} \over T_c }&\approx \left( {3 \sqrt{10} \over q_T \pi \sqrt{g_*} \left(1.6 - {0.8 \over N_c^2} \right)^2} {\mpl \over f_a} \right)^{ 1 \over 2 + b} \approx \begin{cases} 5.5 \left( {10^{15} \, \, {\rm GeV} \over f_a}\right)^{0.27},~N_c = 2\\ 2.6 \left( {10^{15} \, \, {\rm GeV} \over f_a}\right)^{0.18},~N_c = 3 \end{cases} \,. } The DM abundance is computed starting from the number density of axions present at $T_{\rm osc}$: $n_a(T_{\rm osc}) = {1 \over 2} m_a(T_{\rm osc}) f_a^2 \theta_i^2$. The number density is diluted over time, such that at reheating the energy density in axions is $\rho_a(T_{\rm RH}) = {1 \over 2} m_a(T_{\rm osc}) m_a f_a^2 \theta_i^2 \left( R_{\rm osc} / R_{\rm RH}\right)^3$. The scale-factor ratio is given by \begin{align} \left(\frac{R_{\rm osc}}{R_{\rm RH}}\right)^3 = \left(\frac{T_c}{T_{\rm osc}}\right)^3\left(\frac{H_{\rm RH}}{H_c}\right)^2 \,, \end{align} where $H_c$ is the Hubble rate at $T_c$. Further evolving $\rho_a$ down to today and comparing to $\rho_c$ yields the result \es{eq:mod_relic_ab}{ \Omega_a h^2 \approx 0.12 \, \theta_i^2 \begin{cases} \left(\frac{f_a}{10^{13}~\text{GeV}}\right)^{1.27 }\left(\frac{T_{\rm RH}}{10~\text{MeV}}\right) \,,~N_c = 2 \\ \left(\frac{f_a}{4.3\cdot 10^{12}~\text{GeV}}\right)^{1.18}\left(\frac{T_{\rm RH}}{10~\text{MeV}}\right) \,,~N_c = 3. \end{cases} } For $N_c=2$ this implies that for an order-one initial misalignment angle and a reheat temperature $T_{\rm RH} \approx 10$\,MeV the correct DM abundance is obtained for $f_a \approx 10^{13}$\,GeV. If we require $f_a \approx 10^{15}$\,GeV and allow $T_{\rm RH} \approx 5$\,MeV, then the correct DM abundance may be obtained by tuning the initial misalignment angle to $|\theta_i| \approx 0.1$. 
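The numerical coefficients in~\eqref{eq:Tosc_over_Tc} and the benchmark misalignment angles can be verified directly:

```python
import math

M_PL = 2.4e18  # reduced Planck mass, GeV

def Tosc_over_Tc(Nc, f_a_GeV):
    """T_osc/T_c from the DIGA mass m_a(T) = m_a (T_c/T)^b with
    b = 11 Nc/6 - 2, q_T = (4/5)(2+b), g* = 2(Nc^2-1), and
    T_c/sqrt(m_a f_a) = 1.6 - 0.8/Nc^2."""
    b = 11 * Nc / 6 - 2
    qT = 0.8 * (2 + b)
    g_star = 2 * (Nc**2 - 1)
    tc_ratio = 1.6 - 0.8 / Nc**2
    pre = 3 * math.sqrt(10) / (qT * math.pi * math.sqrt(g_star) * tc_ratio**2)
    return (pre * M_PL / f_a_GeV) ** (1 / (2 + b))

print(f"Nc=2, f_a=1e15 GeV: T_osc/T_c = {Tosc_over_Tc(2, 1e15):.2f}")
print(f"Nc=3, f_a=1e15 GeV: T_osc/T_c = {Tosc_over_Tc(3, 1e15):.2f}")

# |theta_i| giving Omega h^2 = 0.12 for the Nc = 2 line of the relic abundance
f_a, T_RH = 1e15, 5.0  # GeV, MeV
theta = ((f_a / 1e13) ** 1.27 * (T_RH / 10.0)) ** -0.5
print(f"Nc=2, f_a=1e15 GeV, T_RH=5 MeV: |theta_i| ~ {theta:.2f}")
```

The prefactors come out at $\approx 5.5$ and $\approx 2.6$, and the benchmark misalignment angle at $|\theta_i| \approx 0.08$, consistent with the values quoted in the text.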
If instead $N_c = 3$, then $f_a \approx 10^{15}$\,GeV and $T_{\rm RH} \approx 10$\,MeV would require $|\theta_i| \approx 0.04$. To discuss the properties of the glueballs, we now consider the scenario where $N_c = 2$, $f_a \approx 10^{15}$\,GeV, $\theta_i \approx 0.1$ and $T_{\rm RH} \approx 5$\,MeV to have the correct DM abundance. Then for $m_a \approx 0.1$\,MeV (to have a sufficiently large DM lifetime) the dark $0^{++}$ glueball mass is $m_{0^+} \approx 2.5 \times 10^6$\,GeV. This implies that the dark confinement scale is $\Lambda_D \approx 4 \times 10^5$\,GeV (using the relation of the $\bar {\rm MS}$ perturbatively-computed (at 3-loop order) confinement scale to the string tension from lattice QCD in~\cite{Athenodorou:2021qvs}). To achieve the above reheat temperature we then need $\Lambda \approx 4 \times 10^{12}$\,GeV for $c_6 = 1$. Referring to~\eqref{eq.lifetime}, the partial lifetime of the axion DM to two photons would then be $\sim 10^{28}$ s for $C_{a\gamma\gamma} \sim 0.1$. As this example illustrates, decaying heavy axion DM, with lifetimes slightly beyond current probes, may be naturally achieved with minimal tuning due to the period of early matter domination that is necessarily generated by the dark glueballs. In Fig.~\ref{fig:lifetime_plot} we extend this example to illustrate how the DM partial lifetime to two photons depends on the axion mass for different constraints on the initial misalignment angle, reheat temperature, axion-photon coupling, and scale $\Lambda$ that induces glueball decay. Across the entire parameter space shaded in green we require $T_{\rm RH} > 5$ MeV, $\Lambda > 10^{13}$ GeV, $|\theta_i| > 10^{-2}$, and $0.05 < |C_{a\gamma\gamma}| < 1$, in addition to $f_a > 10^{14}$ GeV. Note that we additionally allow $|c_6| \leq 1$, since this operator is expected to be generated at one loop. The darker shaded green regions impose more stringent constraints, as indicated. 
Note that if not otherwise stated, the constraint is the same as that described above. The blue line, on the other hand, shows the lifetime obtained by fixing $f_a = 10^{15}$ GeV and $|C_{a\gamma\gamma}| = 0.1$, while varying $|\theta_i|$ and $\Lambda$ to obtain the correct DM abundance at every $m_a$. In Fig.~\ref{fig:lifetime_plot} we compare our lifetime predictions to existing constraints and projected reaches from space-based $X$-ray and gamma-ray telescopes. The existing constraints are shaded in grey and arise from searches for $X$-ray and gamma-ray lines from (from left to right) {\it XMM-Newton}~\cite{Boyarsky:2006fg,Foster:2021ngm}, NuSTAR~\cite{Perez:2016tcq,Ng:2019gch,Roach:2019ctw}, INTEGRAL~\cite{Laha:2020ivk}, and COMPTEL~\cite{Essig:2013goa}. In the $X$-ray band these searches were primarily performed to search for sterile neutrino DM, which may decay into a monochromatic $X$-ray line in addition to an (unobserved) active neutrino, by looking for $X$-ray lines from DM decay in the ambient halo of the Milky Way. Above $\sim$ 200 keV the sensitivity to decaying DM will increase substantially in the coming years with instruments such as the AMEGO~\cite{Kierans:2020otl} and e-Astrogam~\cite{e-ASTROGAM:2017pxr} missions, which are in their planning stages. In Fig.~\ref{fig:lifetime_plot} we show the projected reach of AMEGO to DM decaying to two gamma-ray lines from $\sim$200\,keV to $\sim$5\,MeV. AMEGO (or e-Astrogam) will improve the DM lifetime sensitivity by up to four orders of magnitude, depending on the DM mass. Our computation of the AMEGO projections is described in App.~\ref{sec:astro}. Note that our AMEGO projections only account for statistical uncertainties to show the maximal possible science reach, though systematic uncertainties could be important and further limit the achievable lifetimes~\cite{Bartels:2017dpb}. 
In the $X$-ray band we show the projected sensitivity of the planned {\it Athena} mission, which may launch in the mid-2030s~\cite{Nandra:2013jka}, though the instrument specifications may evolve before this date. {\it Athena} will have two instruments: the Wide Field Imager (WFI) and the $X$-ray Integral Field Unit (X-IFU). The X-IFU will have excellent spectral resolution ($\sim$ 5 eV versus $\sim$ 100 eV for WFI) but a smaller field of view ($\sim$ 0.014 deg$^2$ versus $\sim$ 0.7 deg$^2$ for WFI). Both instruments will have similar effective areas (nearly $1$\,m$^2$), which are approximately an order of magnitude above those of {\it XMM-Newton}. In fact, the WFI is comparable to the instruments onboard {\it XMM-Newton} except for the effective area. For a search for DM decay in the ambient Milky Way halo, the signal and background fluxes are proportional to the angular size of the field of view and to the effective area, while the background flux decreases linearly with the energy resolution. The Z-score associated with an axion signal may be estimated as $S / \sqrt{B}$ for a background-dominated search, where $S$ ($B$) is the number of signal (background) counts. Thus, we estimate that the WFI and X-IFU instruments will have comparable sensitivity. The sensitivity of the WFI instrument to DM decay may be roughly projected by taking the {\it XMM-Newton} DM-decay sensitivity and re-scaling the lifetimes by $\sqrt{10}$ to account for the increase in effective area (assuming the same total data taking time as in the {\it XMM-Newton} analysis, which is around 30 Ms~\cite{Foster:2021ngm}). We show this rough, projected {\it Athena} sensitivity in Fig.~\ref{fig:lifetime_plot}. In Fig.~\ref{fig:lifetime_plot} we also show the projected sensitivity of the THESEUS mission concept~\cite{Thorpe-Morgan:2020rwc}. THESEUS~\cite{THESEUS:2017qvx} is not an approved mission at this point but represents what may be possible in the future. 
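The scaling argument behind these estimates, $Z \sim S/\sqrt{B}$ with signal $\propto A\,\Omega_{\rm FOV}/\tau$ and background $\propto A\,\Omega_{\rm FOV}\,\Delta E$, can be made explicit in a toy calculation (instrument numbers as quoted above; a sketch of ours, not the analysis pipeline):

```python
import math

def lifetime_reach(area, fov, e_res):
    """Relative decaying-DM lifetime reach for a background-dominated line
    search at fixed significance Z = S/sqrt(B): since S ~ A*fov/tau and
    B ~ A*fov*dE, the lifetime reach scales as sqrt(A*fov/dE)."""
    return math.sqrt(area * fov / e_res)

# 10x effective area at fixed FOV and resolution -> sqrt(10) longer lifetimes,
# the rescaling applied to the XMM-Newton projection above
print(f"10x area gain: x{lifetime_reach(10, 1, 1):.2f} in lifetime")

# WFI vs X-IFU at equal area: FOV 0.7 vs 0.014 deg^2, resolution 100 vs 5 eV
wfi = lifetime_reach(1, 0.7, 100)
xifu = lifetime_reach(1, 0.014, 5)
print(f"WFI/X-IFU reach ratio ~ {wfi / xifu:.2f}")
```

The WFI's larger field of view and the X-IFU's finer energy resolution nearly cancel, giving reach ratios of order one, consistent with the statement that the two instruments should have comparable sensitivity.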
THESEUS is proposed to carry three instruments relevant for axion searches -- SXI, XGIS-X, and XGIS-S -- which would collectively cover an energy range from below a keV to above an MeV. The advantage of these instruments over, {\it e.g.}, those on {\it Athena} is the large field of view, which for THESEUS is around 1 sr across most of the energy range. Given the comparable effective area to {\it Athena}, THESEUS would provide superior sensitivity in the mass range where the two instruments can be compared. THESEUS would also provide a transformative improvement in sensitivity near the keV scale and extend to higher masses where, {\it e.g.}, AMEGO would operate, though at reduced sensitivity. On the other hand, we note that the THESEUS instruments do not have improved energy resolution (with $\sim$ 200 eV resolution at a few keV). This means that systematic uncertainties may be important for THESEUS and could ultimately limit the sensitivity in certain mass ranges. (Systematic uncertainties related to background mismodeling already limited the sensitivity of the {\it XMM-Newton} search for decaying DM in~\cite{Foster:2021ngm}, and THESEUS would have far smaller statistical uncertainties than that analysis.) Improved energy resolution is useful in part because it reduces the total number of photon counts needed to achieve a target sensitivity, so that such a search remains statistics-limited rather than systematics-limited, in contrast to searches that reach the same sensitivity with worse energy resolution. \section{Baryogenesis from heavy axions}\label{sec:bary} The axion decay rate scales rapidly with the dark confinement scale $\Lambda_D$, as noted in {\it e.g.}~\eqref{eq.lifetime}; for $\Lambda_D \gtrsim 10^{10}$ GeV and GUT-scale $f_a$ the axions would decay so quickly that their cosmological abundance would be depleted before BBN.
In this section we explore the possibility that such a heavy, rapidly-decaying axion could be responsible for baryogenesis. An axion coupling to the gauge bosons or to the $B$ or $L$ current can generate a non-negligible baryon asymmetry in the presence of $B$- or $L$-violation, through the mechanism of {\it spontaneous baryogenesis}~\cite{Cohen:1987vi}. Such a scenario with $L$-violation can naturally arise in the presence of the Weinberg operator, $(H \ell)^2/\Lambda_W +{\rm h.c.}$, which can explain the observed neutrino masses at the same time. Here $\ell$ is the left-handed lepton doublet of the SM and $\Lambda_W \sim 10^{15}~{\rm GeV}$, for which we get $m_\nu \sim 0.05$~eV (dropping flavor indices), consistent with lower bounds on the sum of neutrino masses~\cite{ParticleDataGroup:2020ssz}. A crucial ingredient of the spontaneous baryogenesis mechanism is coherent oscillations of (pseudo-)scalar fields, which give rise to an `effective chemical potential' for the SM fermions. Due to this effect, the thermal abundances of fermions and anti-fermions differ, and as a result an asymmetry between them can develop in the presence of $B$ or $L$ violation. In the limit of small chemical potential, $\mu_i\ll T$, the asymmetry for a species $i$ is given by $\Delta n_i = n_i - \bar{n}_i \approx g_i \mu_i T^2/6$, where $g_i$ is the multiplicity of that species. The chemical potential induced by the scalar field, which is an axion in our applications, is determined by its coherent velocity: $\mu_i \sim \dot{a}/f_a$. Thus, the lepton asymmetry at a temperature $T$ is given by $\eta_L \propto \sum_{i=L}\Delta n_i / T^3 \sim \sum_{i=L}g_i\mu_i/T \sim \dot{a}/(T f_a)$, where the sum is over all the leptons. The above estimate assumes that when the axion begins to oscillate, the processes mediated by the Weinberg operator are in thermal equilibrium.
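The small-chemical-potential limit used above can be checked for self-consistency: at the onset of oscillations $\dot a \sim m_a \theta_i f_a$ and $3H(T_{\rm osc}) \sim m_a$, so $\mu/T \sim m_a \theta_i / T_{\rm osc}$. A rough numerical sketch (all inputs illustrative):

```python
import math

M_PL = 1.22e19    # Planck mass [GeV]
G_STAR = 106.75   # SM relativistic degrees of freedom (illustrative)

def t_osc(m_a):
    """Oscillation temperature from 3*H(T_osc) ~ m_a during radiation
    domination, with H = 1.66 * sqrt(g*) * T**2 / M_pl."""
    return math.sqrt(m_a * M_PL / (3.0 * 1.66 * math.sqrt(G_STAR)))

def mu_over_t(m_a, theta_i=1.0):
    """Effective chemical potential mu/T ~ adot/(f_a T), with
    adot ~ m_a * theta_i * f_a at the onset of oscillations."""
    return m_a * theta_i / t_osc(m_a)

print(mu_over_t(1e8))  # ~2e-5: the mu << T expansion is self-consistent
```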
However, if axion oscillations start at temperatures $T_{\rm osc}$ lower than $T_L$, the temperature at which the Weinberg operator decouples from the bath, then the above estimate is modified to $\eta_L \sim (\Gamma_W/H(T_{\rm osc}))\times \dot{a}/(f_a T_{\rm osc})$, where $\Gamma_W \sim T^3/\Lambda_W^2$ is the rate for scattering processes through the Weinberg operator. In particular, in this case the produced asymmetry is suppressed by a `freeze-in'-like factor of order $\Gamma_W/H(T_{\rm osc})$. Given this suppression, it is clear that the produced asymmetry is maximized if the onset of axion oscillations happens at $T_L$. After this initial production, electroweak sphalerons convert the initial lepton asymmetry into a baryon asymmetry at the electroweak phase transition, though this process is accompanied by a small-but-calculable efficiency factor. \subsection{Baryogenesis without heavy axion dark matter} We begin by considering the possibility that there is a single heavy axion that decays before BBN and leads to baryogenesis. In the following subsection we generalize from this scenario to consider the possibility that the dark sector contains two confining gauge groups, leading to two massive axion states: one axion will be responsible for baryogenesis while the other will explain the DM. To track lepton asymmetries, we study the time evolution of the chemical potential vector $\mu_i$ via the Boltzmann equation~\cite{Domcke:2020kcp}: \begin{equation} \begin{aligned} &\frac{d}{dt}\left(\frac{\mu_i}{T}\right) = \frac{dT}{dt}\frac{1}{g_i T} \times \sum_\alpha {\cal C}_{i\alpha}\frac{\Gamma_\alpha}{H} \left(\sum_j \left(\frac{\mu_j}{T}\right){\cal C}_{j\alpha} - n_{S\alpha} \left(\frac{\dot{a}}{f_a T}\right)\right). \end{aligned} \label{eq:bary_boltz} \end{equation} Here $i = \tau, L_{12}, L_3, q_{12}, t, b, Q_{12}, Q_3, H$ runs over all the SM species, with numbers referring to SM generations.
Due to the smallness of the Yukawa couplings of the first two generations, the corresponding interactions are out of thermal equilibrium at the time of asymmetry generation. Therefore they interact only through flavor-universal gauge interactions. Thus we can assume that the SM left-handed lepton doublets $L_1, L_2$ have the same chemical potential and denote them together as $L_{12}$. The same is also done for SM left-handed quark doublets $Q_1, Q_2$ and right-handed (RH) quarks $q_1, q_2$. Along similar lines, the RH leptons of the first two generations cannot interact with the bath given the absence of $SU(3)_c$ and $SU(2)_L$ interactions and the smallness of their Yukawa couplings. Thus, we need not include them. The vector $g_i$ counts the number of degrees of freedom for different species and is given by $g_i = (1, 4, 2, 12, 3, 3, 12, 6, 4)$.\footnote{As a side note, since the physical processes described in this section take place at a high energy scale $\sim\!10^{12}$\,GeV or even higher, it is possible that additional states beyond those of the SM could be present in the thermal plasma. In particular, if nature realizes any form of supersymmetry below $T_{\rm osc}$ then this could lead to important quantitative and qualitative modifications to the results in this section. } Returning to~\eqref{eq:bary_boltz}, the matrix ${\cal C}_{i\alpha}$ describes the charges of various SM species $i$ under interactions $\alpha$ and is given by, \begin{align} {\cal C}_{i\alpha} = \begin{pmatrix} 0 & 0 & -1 & 0 & 0 & 0 & 0 \\ 2 & 0 & 0 & 0 & 0 & 2 & 0 \\ 1 & 0 & 1 & 0 & 0 & 0 & 2 \\ 0 & -4 & 0 & 0 & 0 & 0 & 0 \\ 0 & -1 & 0 & -1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 & -1 & 0 & 0 \\ 6 & 4 & 0 & 0 & 0 & 0 & 0 \\ 3 & 2 & 0 & 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 & -1 & 2 & 2 \end{pmatrix}. \label{eq:C_mat} \end{align} Here $i$ and $\alpha$ run over row and column indices, respectively.
The relevant interactions $\alpha$ run over the weak sphaleron, strong sphaleron, tau Yukawa, top Yukawa, bottom Yukawa, and the Weinberg operator for the first two generations and the third generation. As an example, consider $i=8$, which corresponds to the left-handed, third-generation quark doublet $Q_3$. This has a non-zero charge of 3 under the weak sphaleron (three colors), 2 under the strong sphaleron (weak doublet), 0 under the tau Yukawa, 1 under both the top and bottom Yukawa, and 0 under the Weinberg operator. This gives a charge vector $(3, 2, 0, 1, 1, 0, 0)$, which is the eighth row of ${\cal C}_{i\alpha}$. The coefficients $\Gamma_\alpha$ determine the rate for the interaction $\alpha$. As examples, for the dimension-5 Weinberg operator ($\alpha = 6,7$), $\Gamma_\alpha \propto T^3/\Lambda_W^2$, whereas for the marginal top Yukawa interaction ($\alpha = 4$), $\Gamma_\alpha \propto y_t^2 T$. (See App.~\ref{sec:rates} for explicit formulae for all the $\Gamma_\alpha$.) The axion source vector $n_{S\alpha}$ depends on how the axion couples to the SM. For simplicity, we assume \begin{align} c_{aG}\left(\frac{\alpha_s}{8\pi f_a}a G\tilde{G} + \frac{\alpha_2}{8\pi f_a}a W\tilde{W} + \frac{\alpha_1}{8\pi f_a}a B\tilde{B} \right) + c_{af}\left(\sum_i \frac{\partial_\mu a}{f_a}J_i\right), \label{eq:axion_couplings} \end{align} where $J_i = \bar{f}_i\gamma^\mu f_i$ with $i$ running over all left- and right-handed SM Weyl fermions. Here we chose to have a single coefficient $c_{aG}$ determining all the gauge boson couplings, motivated by grand unification, and a flavor-universal coefficient $c_{af}$ for all the fermionic couplings. We consider two benchmark choices corresponding to $c_{aG} = c_{af} = 1$ (main Article) and $c_{aG} = 1, c_{af} = 0$ (App.~\ref{app:alt}). With this choice, we may write $n_{S\alpha} = c_{aG}( n_s + n_2) + c_{af}(\sum_i {\cal C}_{i\alpha})$.
Here $n_2 = (-1, 0, 0, 0, 0, 0, 0)$ and $n_s = (0, -1, 0, 0, 0, 0, 0)$ are determined by the $aW\tilde{W}$ and $aG\tilde{G}$ couplings, respectively. The term $\sum_i {\cal C}_{i\alpha}$ originates from summing over all the fermion contributions for a given interaction $\alpha$ and is determined via~\eqref{eq:C_mat} to be $(12, 0, 1, 1, -1, 4, 4)$. This has a vanishing entry under the strong sphaleron since QCD is a vector-like theory. On the other hand, the three generations of left-handed quark doublets with three colors each, and lepton doublets, have a charge of $3\times 3 + 3 = 12$ under the weak sphaleron. Combining all these contributions, we find $n_S = c_{aG} (-1, -1, 0, 0, 0, 0, 0) +c_{af} (12, 0, 1, 1, -1, 4, 4)$. It is useful to understand the physical effects of the various terms in~\eqref{eq:bary_boltz}. First we focus on the homogeneous contribution. The chemical potential of a species $i$ is affected by any $\alpha$ under which the species is charged. Furthermore, since all SM states are in thermal equilibrium, a chemical potential of species $j\neq i$ can also affect that of $i$, if $i$ and $j$ can communicate via interaction $\alpha$. As a toy example, if ${\cal C}_{i\alpha}$ were a diagonal $n\times n$ matrix, then~\eqref{eq:bary_boltz} would reduce to a set of decoupled homogeneous equations for each species $i$ under its exclusive interaction $\alpha$. The factor of $\Gamma_\alpha/H$ is a standard one denoting the efficiency of the interaction compared to the Hubble scale. Next, we focus on the source term $n_{S\alpha}$. This inhomogeneous term is the one responsible for giving rise to particle-antiparticle asymmetries. In the absence of this term and assuming there are no initial asymmetries after inflation, we see from~\eqref{eq:bary_boltz} that $\mu_i=0$ continues to be a solution at later times; {\it i.e.}, no asymmetries can develop.
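The bookkeeping in ${\cal C}_{i\alpha}$ and $n_{S\alpha}$ is easy to get wrong, so a quick numerical cross-check is useful. The sketch below transcribes the charge matrix and verifies the quoted $Q_3$ row and the column sums over the listed species:

```python
import numpy as np

# Charge matrix C[i, alpha] transcribed from the text. Rows i run over
# (tau, L12, L3, q12, t, b, Q12, Q3, H); columns alpha over (weak sphaleron,
# strong sphaleron, tau Yukawa, top Yukawa, bottom Yukawa, Weinberg 1-2,
# Weinberg 3).
C = np.array([
    [0,  0, -1,  0,  0, 0, 0],
    [2,  0,  0,  0,  0, 2, 0],
    [1,  0,  1,  0,  0, 0, 2],
    [0, -4,  0,  0,  0, 0, 0],
    [0, -1,  0, -1,  0, 0, 0],
    [0, -1,  0,  0, -1, 0, 0],
    [6,  4,  0,  0,  0, 0, 0],
    [3,  2,  0,  1,  1, 0, 0],
    [0,  0,  1,  1, -1, 2, 2],
])

# The Q3 row quoted in the text:
assert list(C[7]) == [3, 2, 0, 1, 1, 0, 0]

# Column sums reproduce the quoted source vector (12, 0, 1, 1, -1, 4, 4):
# the strong-sphaleron entry vanishes because QCD is vector-like, and the
# weak-sphaleron entry is 3*3 + 3 = 12.
print(C.sum(axis=0))  # [12  0  1  1 -1  4  4]
```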
Finally, we comment on the role of the Weinberg operator, which is crucial in seeding the asymmetries in leptons in the first place. We consider the source term $\sum_\alpha {\cal C}_{i\alpha} (\Gamma_\alpha/H) n_{S\alpha}$ for the vector $\mu_i$. From this term we may derive the final $B-L$ asymmetry by first noting that \es{}{ \mu_{B-L} = - & (\mu_\tau + 4 \mu_{L_{12}} + 2 \mu_{L_3}) + (12 \mu_{q_{12}} + 3 \mu_t + 3 \mu_b + 12 \mu_{Q_{12}} + 6 \mu_{Q_3} )/3 \,. } Using the above and writing $\Gamma_\alpha = (\Gamma_{WS}, \Gamma_{SS}, \Gamma_\tau, \Gamma_t, \Gamma_b, \Gamma_{W_{12}}, \Gamma_{W_3})$, we find the source term for $\mu_{B-L}$ to be $-8( \Gamma_{W_{12}} + \Gamma_{W_3})$. In other words, when the Weinberg operator is absent, a $B-L$ asymmetry does not get sourced, as expected. To solve~\eqref{eq:bary_boltz} we need to know the evolution of $\dot a$ as a function of time. The axion dynamics, however, depend on the temperature evolution of the dark sector, since if the dark sector has an appreciable temperature then the dark axion mass may acquire non-trivial time dependence. We begin by considering the simpler scenario where the axion mass is temperature independent (equivalently, the dark gluons are not thermalized) before we consider the case of a temperature-dependent axion mass, as we did in Sec.~\ref{sec:DM}. \subsubsection{Heavy axion mass without temperature dependence} As described above, we begin by considering the scenario where $m_a$ remains constant as the Universe evolves. This would be the case if the dark $SU(N)$ sector giving rise to $m_a$ was never reheated after inflation and never came into thermal equilibrium with the SM. In this case, the dark glueballs are not important for cosmology. However, along with the cold, misaligned heavy axion population with energy density $\rho_a$, there can be a relativistic axion population. 
This is because through the axion-gluon coupling, the axions can come into thermal equilibrium with the plasma if $f_a < 5\times 10^{15}{~\rm GeV}(\alpha_s/(2\pi))\left(\frac{T_{\rm RH, inf}}{10^{14}~{\rm GeV}}\right)^{1/2}$~\cite{Baumann:2016wac}. Here $T_{\rm RH, inf}$ is the reheat temperature after inflation. Note that if $f_a$ is larger than this critical value there will still be a suppressed, freeze-in contribution of relativistic axions. Such a relativistic population, with energy density $\rho_{\rm th}$, can also originate from inflaton decay. For example, if the inflaton has similar couplings to all SM particles and to the axion, then given the differences in degrees of freedom between the SM and the axion, we would expect $\rho_{\rm th}/\rho_{\rm SM}\sim 1/100$. This effectively translates into the relativistic axions having a `temperature' comparable to that of the SM, even if the two populations were never in thermal contact. Therefore, to be conservative, we assume $\rho_{\rm th} / \rho_{\rm SM} \simeq 1/100$, while noting that freeze-in-only production would typically give an even smaller abundance for $\rho_{\rm th}$ for large enough $f_a$, as mentioned above. As the SM temperature $T$ falls below $m_a$, the relativistic heavy axion population starts diluting like matter and eventually decays at the same time as the cold heavy axion population. To track the initially generated baryon asymmetry, we therefore numerically solve~\eqref{eq:bary_boltz} along with \es{}{ \ddot{a} + 3 H \dot{a} + m_a^2 a = 0 \,, } in addition to evolution equations for the SM plasma and $\rho_{\rm th}$ with $\rho_{\rm th} \ll \rho_{\rm SM}$ initially. Note that the equation above is valid only for times $t$ much less than the heavy axion lifetime $\tau_a \approx (32 \pi^3 f_a^2) / (\alpha_s(m_a)^2 m_a^3 c_{aG}^2)$. (Even in the presence of tree-level axion-matter couplings, the heavy axion with mass $m_a \gg {\rm GeV}$ would preferentially decay to gluons.)
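To see why large $\Lambda_D$ with GUT-scale $f_a$ implies decay before BBN, the lifetime formula above can be evaluated directly (a sketch; $\alpha_s(m_a)$ is fixed to an illustrative value and $m_a \sim \Lambda_D^2/f_a$ is assumed):

```python
import math

HBAR = 6.582e-25  # GeV * s

def tau_axion(m_a, f_a, alpha_s=0.05, c_ag=1.0):
    """Heavy axion lifetime tau_a ~ 32 pi^3 f_a^2 / (alpha_s^2 m_a^3 c_aG^2),
    converted to seconds. alpha_s(m_a) is held fixed for illustration."""
    return 32.0 * math.pi**3 * f_a**2 / (alpha_s**2 * m_a**3 * c_ag**2) * HBAR

# GUT-scale decay constant with Lambda_D = 1e11 GeV, taking m_a ~ Lambda_D^2/f_a:
f_a = 1e16
m_a = (1e11)**2 / f_a  # 1e6 GeV
print(tau_axion(m_a, f_a))  # ~3e-5 s: decays safely before BBN (~1 s)
```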
Note also that the back-reaction of the axion-SM interactions onto the axion dynamics, predominantly arising as friction from the $SU(3)$ sphalerons, is negligible~\cite{McLerran:1990de}. If the SM plasma were to always dominate the energy density of the Universe then the SM energy density and the Hubble parameter would evolve as \es{eq:SM_density_evolution}{ \dot{\rho}_{\rm SM} + 4 H \rho_{\rm SM} \approx 0,\\ \rho_{\rm SM} \approx 3 H^2 \mpl^2 \,. } However, the axions can come to dominate the energy density at later times, and their eventual decays would in this case dilute the initially-generated baryon asymmetry. To compute this dilution we do not solve~\eqref{eq:SM_density_evolution} but rather the more general set of equations \es{}{ &\dot{\rho}_a + 3H \rho_a + \frac{\rho_a}{\tau_a} = 0 \,, \\ &\dot{\rho}_{\rm th} + 4H \Theta(T - m_a) \rho_{\rm th} + 3H \Theta(m_a -T) \rho_{\rm th} + \frac{\rho_{\rm th}}{\tau_a} = 0,\\ &\dot{\rho}_{\rm SM} + 4 H \rho_{\rm SM} - \frac{\rho_a}{\tau_a} - \frac{\rho_{\rm th}}{\tau_a} = 0,\\ &3 H^2 \mpl^2 = \rho_{\rm SM} + \rho_a +\rho_{\rm th},\\ &\dot{\Delta n_B} + 3 H \Delta n_B = 0. } In the second line we use the approximation that the relativistic axion population $\rho_{\rm th}$ instantaneously transitions from $1/R^4$ dilution to $1/R^3$ dilution at $T=m_a$ and denote this with the unit step function $\Theta(x)$. Before the entropy dilution, the `initial' baryon asymmetry $\Delta n_B$ is given by \begin{align} Y_B = \frac{\Delta n_B}{s} = -c_{\rm sph}\frac{\Delta n_{B-L}}{s} = -c_{\rm sph}\frac{\mu_{B-L} T^2}{6s}. \end{align} Here we normalize the baryon asymmetry with respect to the entropy density $s$, and we use a sphaleron conversion factor $c_{\rm sph} \approx 0.35$ to convert the $B-L$ asymmetry into a $B$ asymmetry at the electroweak phase transition~\cite{Harvey:1990qw}.
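Before integrating the full fluid system, the size of the entropy dilution can be previewed in the sudden-decay approximation, where comoving asymmetries are reduced by roughly the ratio $T_{\rm dom}/T_{\rm RH}$. A sketch with illustrative numbers:

```python
import math

M_PL = 2.435e18   # reduced Planck mass [GeV]
G_STAR = 106.75   # illustrative

def t_reheat(gamma):
    """Sudden-decay reheat temperature,
    T_RH ~ (90 / (pi^2 g*))**0.25 * sqrt(Gamma * M_pl)."""
    return (90.0 / (math.pi**2 * G_STAR))**0.25 * math.sqrt(gamma * M_PL)

def dilution_factor(t_dom, gamma):
    """Comoving asymmetries such as Y_B are diluted by roughly the entropy
    ratio T_dom / T_RH between matter domination onset and reheating."""
    return t_dom / t_reheat(gamma)

# Illustrative: domination sets in at 30 TeV with decay rate 1e-17 GeV.
print(dilution_factor(3e4, 1e-17))  # ~1e4
```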
Since the heavy axion would have its own quantum fluctuations during inflation, it would source baryon isocurvature fluctuations, which are constrained by the Planck mission~\cite{Planck:2018vyg,Planck:2018jri}. Those constraints translate to \begin{align} \frac{H_{\rm inf}}{|\theta_i| f_a} \lesssim 3.2\times 10^{-4} \,, \label{eq:bary_iso} \end{align} where $\theta_i$ is the initial misalignment angle of the heavy axion and $H_{\rm inf}$ is the Hubble scale during inflation. We also need to have $m_a < H_{\rm inf}$ so that the axion misalignment angle is not driven to zero during inflation. Combining the two equations above we find the constraint \begin{align} m_a \lesssim 3.2\times 10^{-4} |\theta_i| f_a \,. \end{align} The resulting constraint is labelled as `Baryon Isocurvature' in Fig.~\ref{fig:heavy_ax_T_ind}. We also require that the energy density in the misaligned heavy axion population is smaller than that of the SM bath at the inflation reheat temperature, $T_{\rm RH,inf}$, as otherwise axions would dominate the energy density immediately after inflation. This requirement translates to \begin{align} \frac{1}{2}m_a^2 (\theta_i f_a)^2 \ll \frac{\pi^2}{30}g_* T_{\rm RH, inf}^4. \label{eq:axion_subdom} \end{align} The requirement that the effective description of the dimension-5 Weinberg operator is valid implies $T_{\rm RH, inf}<\Lambda_W$, where, as mentioned, $\Lambda_W \sim 10^{15}$~GeV to achieve $m_\nu \sim 0.05$~eV. Thus, we require \begin{align} T_{\rm RH, inf} < 10^{15}\,\GeV \,, \end{align} though if $T_{\rm RH,inf}$ is near or above this scale it may serve to increase the baryon asymmetry by the standard thermal leptogenesis mechanism of decaying right-handed neutrinos~\cite{Davidson:2008bu}. A more constraining requirement for lighter masses comes from demanding that the PQ symmetry that produces the heavy axion is not restored after inflation; {\it i.e.}, $T_{\rm RH, inf} < f_a$.
If the PQ symmetry is restored then $\langle \dot a \rangle = 0$, averaged over large (super-horizon) scales, in which case no coherent baryon asymmetry is generated. We require \es{}{ T_{\rm RH, inf} > 6 \times 10^{12} \, \, {\rm GeV} \,, } as otherwise the Weinberg operator would never be in thermal equilibrium~\cite{Domcke:2020kcp}, which could significantly suppress the generated baryon asymmetry. To map this constraint onto the $m_a$-$f_a$ parameter space, we assume efficient reheating after inflation and set \begin{align} H_{\rm inf} \mpl \approx \frac{\pi}{\sqrt{90}}\sqrt{g_*}T_{\rm RH, inf}^2. \end{align} Then combining the restrictions $m_a < H_{\rm inf}$ and $T_{\rm RH, inf} < f_a$, we arrive at \begin{align} m_a < \frac{\pi}{\sqrt{90}} \sqrt{g_*} \frac{f_a^2}{\mpl}. \end{align} This constraint is labelled as `PQ Restoration' in Fig.~\ref{fig:heavy_ax_T_ind}. The other constraint that is important for the validity of the axion effective field theory (EFT) is $T < f_a$, where $T$ is the temperature at which the baryon asymmetry is dominantly generated. However, with the stronger constraint of $T_{\rm RH, inf} < f_a$, this restriction is already obeyed. This also ensures that the backreaction of the produced charges on the axion dynamics is small. Contours for various values of the present-day baryon abundance, subject to the above constraints, are illustrated in Fig.~\ref{fig:heavy_ax_T_ind} (left) for $c_{aG} = c_{af} = 1$ along with $\theta_i = 1$. Note that $\theta_i$ may be larger than unity, which would increase the baryon asymmetry, though then anharmonic effects may become important, as we discuss further below. For large regions (colored white) of parameter space we produce the correct, observed baryon asymmetry. Blue regions underproduce the baryon asymmetry, while red regions correspond to overproduction.
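The $6\times 10^{12}$\,GeV threshold can be understood parametrically by equating $\Gamma_W \sim T^3/\Lambda_W^2$ with the Hubble rate; the sketch below drops the $\mathcal{O}(1)$ factors in $\Gamma_W$, so only the order of magnitude is meaningful:

```python
import math

M_PL = 2.435e18    # reduced Planck mass [GeV]
G_STAR = 106.75
LAMBDA_W = 1e15    # Weinberg operator scale [GeV]

def t_weinberg_decoupling():
    """Solve Gamma_W ~ T^3 / Lambda_W^2 = H = pi sqrt(g*/90) T^2 / M_pl
    for T. Below this temperature the Weinberg operator falls out of
    equilibrium; O(1) factors in Gamma_W are dropped."""
    return math.pi * math.sqrt(G_STAR / 90.0) * LAMBDA_W**2 / M_PL

print(t_weinberg_decoupling())  # ~1e12 GeV, the order of the quoted 6e12 GeV
```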
The red regions will play an important role when adding in the second, DM axion, as this sector will add additional entropy dilution that can dilute the red regions to the observed baryon asymmetry. Note that for $m_a \sim 10^{10}$\,GeV the baryon asymmetry contours acquire a sharp dip. This dip arises partially because of a cancellation between the asymmetry produced by the axion-gauge couplings and that produced by the axion-matter couplings; the dip, while still present, is less pronounced in the figures in App.~\ref{app:alt} that have $c_{aG} = 1, c_{af} = 0$. We can also intuitively understand the shapes of the constant $|Y_B|$ contours for smaller values of $m_a$, to the left of the dip. In this regime, the asymmetry production typically happens in the `freeze-in' regime with the initial lepton asymmetry $\eta_L \sim (\Gamma_W/H(T_{\rm osc}))\times \dot{a}/(f_a T_{\rm osc}) \propto m_a$, with no dependence on $f_a$. If not for the heavy axion domination, the baryogenesis contours would then have been horizontal. However, in this parameter space, initially thermal heavy axions do come to dominate the energy density of the Universe at $T_{\rm dom} \sim m_a$, while they decay at $T_{\rm RH} \propto m_a^{3/2}/f_a$. Therefore, the final abundance scales as $m_a\times m_a^{1/2}/f_a = m_a^{3/2}/f_a$. On the other hand, for larger values of $m_a$, to the right of the dip, the axion is already oscillating when the Weinberg operator decouples. As a result the initial asymmetry is only mildly dependent on $m_a$. However, the entropy dilution is the same as before and hence the final abundance scales as $m_a^{1/2}/f_a$. We show these two parametric expectations, for small and large $m_a$, by solid and dashed purple lines, respectively. \subsubsection{Heavy axion mass with temperature dependence} Now we consider the scenario where all the relevant degrees of freedom are reheated after inflation.
This implies that, along with the SM bath, there are also thermal populations of heavy axions and dark gluons and, as before, a cold, misaligned heavy axion population. We focus on the part of parameter space where dark gluons decay into the SM soon after dark confinement. This ensures that the generated baryon asymmetry is not diluted due to heavy glueball domination. To ensure that the glueballs decay promptly we require $\Gamma_{0^{++} \to {\rm SM}} > H(T_c)$, where $T_c$ is the confinement temperature and $\Gamma_{0^{++} \to {\rm SM}}$ the glueball decay rate~\eqref{eq:glueball_decay}. This implies that \begin{equation} \begin{aligned} & 6\times 10^{-5} c_6^2 \frac{x^5 (m_a f_a)^{5/2}}{\Lambda^4} > \frac{\pi\sqrt{g_*}}{\sqrt{90}\mpl}m_a f_a\left(1.6 - \frac{0.8}{N_c^2}\right)^2 \\ & \Rightarrow m_a f_a > {3\times 10^{25}{~\rm GeV}^2 \over c_6^{4/3}} {\left(\Lambda\over10^{14}{~\rm GeV}\right)}^{8/3}~{\rm for}~N_c = 3. \end{aligned} \label{eq:h_glueball_decay} \end{equation} In the last relation above we specialize to the case of a dark $SU(3)$ gauge group that is responsible for the heavy axion mass. We also take $x \equiv m_{0^+} / \sqrt{m_a f_a}\approx 8$, as before. The equations governing the generation and evolution of the baryon asymmetry are the same as in the previous subsection, except that now $m_a \rightarrow m_a(T)$ for $T>T_c$, with the temperature-dependent mass as given in Sec.~\ref{sec:DM} (we assume $SU(3)$ for definiteness). For simplicity, we also assume that the SM and dark sectors have the same temperature, though in principle the dark sector could be either colder or hotter than the SM if the two were not in equilibrium or were reheated differently after inflation. The result is shown in Fig.~\ref{fig:heavy_ax_T_ind} (right). The baryon isocurvature constraint in~\eqref{eq:bary_iso} applies as before.
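The second line of~\eqref{eq:h_glueball_decay} follows numerically from the first; a quick check (with an illustrative value of $g_*$):

```python
import math

M_PL = 2.435e18   # reduced Planck mass [GeV]
G_STAR = 110.0    # illustrative (SM plus dark gluons)
X, C6, N_C = 8.0, 1.0, 3

def prompt_glueball_decay(mf, lam):
    """Check Gamma(0++ -> SM) > H(T_c) per the displayed condition,
    with mf = m_a * f_a [GeV^2] and lam the Higgs-portal scale [GeV]."""
    lhs = 6e-5 * C6**2 * X**5 * mf**2.5 / lam**4
    rhs = (math.pi * math.sqrt(G_STAR) / (math.sqrt(90.0) * M_PL)) \
          * mf * (1.6 - 0.8 / N_C**2)**2
    return lhs > rhs

# Reproduces the quoted threshold m_a f_a ~ 3e25 GeV^2 at Lambda = 1e14 GeV:
print(prompt_glueball_decay(1e25, 1e14))  # False: glueballs long-lived
print(prompt_glueball_decay(1e26, 1e14))  # True: prompt decay
```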
However, since during inflation the dark sector is deconfined, we have $m_a(T)\ll H_{\rm inf}$, and thus the isocurvature constraint becomes independent of the axion mass. The vanishing axion mass also ensures that the energy density in axions is always subdominant during inflation. The restriction due to $T_{\rm RH, inf} < f_a$ continues to apply and is also independent of $m_a$. We also show the region labeled $m_a > f_a$ where the axion cannot be treated as a light Goldstone boson. Finally, the constraint from~\eqref{eq:h_glueball_decay} shows that in the shaded region labeled `Glueball', the heavy glueballs do not decay promptly. Consequently, the entropy dilution coming from their decay needs to be taken into account in this parameter space, which we have not done for simplicity. Therefore in that region, our computation of $Y_B$ does not apply. We also have chosen $\Lambda = 10^{13}$\,GeV with $c_{aG} = c_{af} = 1$ as an illustration, along with $\theta_i = 1$ (though see App.~\ref{app:alt}). Increasing $\Lambda$ makes the glueballs longer lived, which may further dilute the baryon asymmetry if the glueballs come to dominate the energy density. Lastly, note that in all of the parameter space illustrated in Fig.~\ref{fig:heavy_ax_T_ind} the $\mu/T$ values are smaller, by at least a few orders of magnitude, compared to those recently constrained in~\cite{Domcke:2022uue} by helical magnetic field generation. \subsection{Heavy axion baryogenesis and light axion dark matter} We now focus on a scenario where there are two axions in the spectrum: a heavy axion $a_h$ and a light axion $a_l$. They get their masses from dark $SU(N_h)$ and $SU(N_l)$ groups, respectively. We consider $N_l < N_h$, such that $a_h$ is heavier than $a_l$ for similar decay constants. We assume both the sectors are reheated after inflation. 
Therefore, at reheating we have the following populations: (a)~a cold, misaligned population of both $a_h$ and $a_l$ ($\rho_a^h, \rho_a^l$); (b)~a relativistic population of both $a_h$ and $a_l$ ($\rho_{\rm th}^h, \rho_{\rm th}^l$); and (c)~deconfined $SU(N_h)$ and $SU(N_l)$ gluons ($\rho_G^h, \rho_G^l$). The goal of this subsection is to explore the parameter space for which the two axions can explain both the DM relic density and the primordial baryon asymmetry. This is non-trivial because the same early matter-dominated era that is required to avoid DM overclosure dilutes the already-generated baryon asymmetry. As in the previous section, we consider the parameter space where the $SU(N_h)$ glueballs decay soon after their confinement, since they would otherwise give rise to a very early matter domination with a subsequent entropy dump that would dilute the initial baryon abundance. This requirement is the same as in~\eqref{eq:h_glueball_decay}. The early cosmological history in this scenario proceeds as follows. After inflation, the Universe becomes radiation dominated with the thermal bath consisting of the relativistic axion populations, the deconfined dark plasmas, and the SM. We assume all of these to have the same temperature for simplicity. When $H \sim m_h$, the field $a_h$ starts to oscillate, and this generates a lepton asymmetry in the presence of the Weinberg operator. At $T < m_h$, $\rho^h_{\rm th}$ starts diluting like matter. Together with $\rho_a^h$, these cold populations can give rise to matter domination if they are sufficiently long lived. Then at the heavy axion lifetime $\tau_a(m_h, f_h)$, both $\rho_a^h$ and $\rho^h_{\rm th}$ decay. We assume that heavy axion decay contributes equally to the SM and $SU(N_l)$ gluons in terms of energy density. At times immediately after the heavy axion decay the Universe remains radiation dominated with $\rho_{\rm SM}, \rho^l_{\rm th}, \rho_{G}^l$.
At $T<T_{c}^l$, $SU(N_l)$ confinement takes place, and subsequently $\rho_G^l$ gives rise to glueballs that soon start dominating the energy density. This gives rise to a matter-dominated epoch. These glueballs eventually decay before BBN and reheat the Universe. Following this point the evolution is the same as in standard cosmology. When $T<m_l$, $\rho_{\rm th}^l$ also starts diluting like matter, and these warm axions can potentially form a sub-component of DM. With this cosmology in mind, we now ask for which parameter space we get the correct DM and baryon abundances. Consider a heavy axion with $m_h = 10^{11}$\,GeV and $f_h = 5 \times 10^{13}$\,GeV, along with $\theta_i^h = 1$, $c_{aG} = c_{af} = 1$ and $T_{\rm RH, inf} = 10^{13}$~GeV. This implies that gluons of the heavy sector confine around $T\approx 4\times 10^{12}$\,GeV to form heavy glueballs. However, for $\Lambda \sim 10^{13}$~GeV, these glueballs decay promptly after their production, as implied by~\eqref{eq:h_glueball_decay}. Since the dark gluon sector is assumed to have the same temperature as the SM, its energy density is $2(N_c^2-1)/g_* \sim 1/10$ that of the SM. Consequently, heavy glueball formation and their prompt decay do not affect the thermal bath significantly. We now take the light axion parameters to be $m_l = 7$\,keV and $f_l = 7\times 10^{13}$\,GeV. This implies that the second dark confinement transition happens around $T\approx 30$\,TeV, following which lighter dark glueballs with mass $2 \times 10^5$\,GeV form and soon come to dominate the energy density. Through the dimension-6 Higgs portal coupling, these glueballs eventually decay. Taking $c_6 =1$ and $\Lambda = 5\times 10^{9}$\,GeV, we compute the corresponding reheat temperature to be $T_{\rm RH}\approx 3$\,GeV using~\eqref{eq:TRH_glue}.
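The benchmark numbers in this paragraph can be reproduced in a few lines; the sketch below uses the glueball decay rate from~\eqref{eq:h_glueball_decay} and the standard sudden-decay reheat temperature (illustrative $g_*$):

```python
import math

M_PL = 2.435e18   # reduced Planck mass [GeV]
G_STAR = 106.75
X, C6 = 8.0, 1.0

m_l, f_l = 7e-6, 7e13   # light axion benchmark [GeV]
lam = 5e9               # Higgs-portal scale [GeV]

lam_d = math.sqrt(m_l * f_l)   # dark confinement scale ~ sqrt(m_a f_a)
m_glueball = X * lam_d         # lighter dark glueball mass

# Glueball decay rate through the Higgs portal, then the sudden-decay
# reheat temperature T_RH ~ (90/(pi^2 g*))**0.25 * sqrt(Gamma * M_pl):
gamma = 6e-5 * C6**2 * X**5 * (m_l * f_l)**2.5 / lam**4
t_rh = (90.0 / (math.pi**2 * G_STAR))**0.25 * math.sqrt(gamma * M_PL)

print(lam_d)       # ~2e4 GeV: confinement near the quoted ~30 TeV
print(m_glueball)  # ~2e5 GeV glueballs
print(t_rh)        # ~3 GeV: reheating safely before BBN
```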
The entropy dilution caused by the glueball decay dilutes the initial value of the baryon asymmetry, and with the above choices of parameters we find $|Y_B| \approx 10^{-10}$, consistent with current observations. Using~\eqref{eq:Tosc_over_Tc} for $N_c=2$, we obtain $T_{\rm osc} \sim 3.5 \times 10^{5}$\,GeV. Given that the onset of matter domination happens around $T_{\rm EMD}\sim 30$\,TeV, the observed DM density can be explained for $|\theta_i| \sim 10^{-2}$ using~\eqref{eq:mod_relic_ab}. Lastly, for $C_{a\gamma\gamma} \sim 0.05$, the DM lifetime is determined to be $6\times 10^{29}$\,sec using~\eqref{eq.lifetime}, consistent with current searches for decaying DM but within reach of {\it Athena} or THESEUS. In Fig.~\ref{fig:tau_bary} we extend the above argument to a broader parameter space, highlighting the regions for which the correct baryon and DM abundances are achieved. We fix $\theta_i^h = 1$, constrain $|\theta_i^l| > 10^{-2}$, and consider the two cases $\Lambda > 5 \times 10^9$ GeV and $\Lambda > 2 \times 10^{10}$ GeV, fixing $c_6 = 1$. We allow the two axions to have different decay constants so long as they are above $10^{13}$ GeV. As in Fig.~\ref{fig:lifetime_plot}, we vary $0.05 < |C_{a\gamma\gamma}| < 1$. In the left (right) panel we illustrate the light (heavy) axion parameter space where the correct DM and baryon abundances are simultaneously obtained. In the left panel we show the lifetime to photons, since this is directly observable, instead of $f_a$, as a function of the light axion mass, while in the right panel we show $f_a$ as a function of the heavy axion mass. Note that the preferred mass range for the DM axion is lower than in Fig.~\ref{fig:lifetime_plot}. We also note that the viable parameter space in the left panel of Fig.~\ref{fig:tau_bary} is not strictly nested as $\Lambda$ is increased, unlike in the right panel. We label the left panel for fixed, illustrative values of $\Lambda$.
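The quoted lifetime follows from the standard $a \to \gamma\gamma$ width with $g_{a\gamma\gamma} = C_{a\gamma\gamma}\alpha_{\rm em}/(2\pi f_a)$, i.e.\ $\Gamma = C_{a\gamma\gamma}^2 \alpha_{\rm em}^2 m_a^3/(256\pi^3 f_a^2)$; this sketch assumes that normalization for~\eqref{eq.lifetime}:

```python
import math

HBAR = 6.582e-25          # GeV * s
ALPHA_EM = 1.0 / 137.036

def tau_to_photons(m_a, f_a, c_agamma):
    """DM lifetime for a -> gamma gamma assuming the standard rate
    Gamma = C^2 alpha^2 m_a^3 / (256 pi^3 f_a^2), converted to seconds."""
    gamma = c_agamma**2 * ALPHA_EM**2 * m_a**3 / (256.0 * math.pi**3 * f_a**2)
    return HBAR / gamma

# Benchmark from the text: m_l = 7 keV, f_l = 7e13 GeV, C_agamma ~ 0.05.
print(tau_to_photons(7e-6, 7e13, 0.05))  # ~6e29 s
```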
In Fig.~\ref{fig:tau_bary} we fix $\theta_i^h = 1$, though in principle $\theta_i^h$ could be larger, which may enhance the baryon abundance and thus open up more of the DM parameter space where the simultaneous DM and baryon abundances may be reproduced. In particular, it is possible that $\theta_i^h$ could be near $\pi$, in which case anharmonicities in the heavy axion equation of motion become important. Specifically, for $\theta_i^h = \pi - \delta_i$, with $\delta_i \ll 1$ a small, positive number, it is known that the heavy axion field value becomes logarithmically enhanced in $\delta_i$ at late times (see, {\it e.g.},~\cite{Visinelli:2009zm}). However, since the baryon abundance is at most logarithmically enhanced as $\theta_i^h$ is tuned towards $\pi$, anthropic selection of $\theta_i^h$ near $\pi$ to enhance the baryon abundance may not be efficient, though this deserves further consideration. We note that the scale $\Lambda$ controlling the glueball decay rate needs to be much smaller than $M_{\rm GUT}$ for successful baryogenesis to occur. In the next section, we describe an example UV completion that achieves $\Lambda \ll M_{\rm GUT}$. \section{Orbifold construction} \label{sec:model} So far in this Article we have motivated the scale $\Lambda_D$ by assuming a unified gauge group at some scale $M_{\rm GUT}\gtrsim 10^{16}$\,GeV that breaks to $G_{\rm SM}\times G_{\rm dark}$ below that scale. We now give an example extra-dimensional construction that achieves such a breaking pattern. To be concrete, we focus on orbifold GUTs and consider unification of $G_{\rm SM}$ with $SU(3)_D$. Constructions with more general $SU(N)_D$ or $SU(N)_D \times SU(M)_D$ can be carried out in a similar way. Orbifold GUTs are extra dimensional constructions that explain grand unification in a simple and elegant way. The basic idea is that in the presence of compact extra dimensions, one needs to specify boundary conditions to completely describe the theory.
It is these boundary conditions that can break the unified gauge group and also project out the unwanted zero-modes of various fields, avoiding issues such as proton decay and the doublet-triplet splitting problem. We now briefly review some necessary aspects of an orbifold construction while referring the reader to~\cite{Kawamura:2000ev,Hall:2001pg,Hebecker:2001wq} for more details. We consider the spacetime to be $M_4\times S_1/(Z_2\times Z_2')$ where $M_4$ denotes the 4D Minkowski spacetime, with coordinates denoted by $x$. The extra dimensional circle $S_1$ with radius $R$ is reduced to an interval due to the quotienting by $(Z_2\times Z_2')$. Here the first $Z_2$ implements an identification $y\rightarrow -y$ where $y$ is the coordinate along the extra dimension. The second identification, $Z_2'$, acts as $y'\rightarrow-y'$ where $y'=y-\pi R/2$, or equivalently, $y\rightarrow \pi R-y$. The action of both of these parity transformations restricts the original $y$ coordinate, which ranges over $0\leq y < 2\pi R$, to the interval $0\leq y \leq \pi R/2$, with the rest of the circle identified with this segment. In particular, the end points $y=0$ and $y=\pi R/2$ act as orbifold fixed points where other fields, such as those in the SM, can be located. We denote the parity transformations associated with $Z_2$ and $Z_2'$ as $\cal P$ and $\cal P'$, respectively. In particular, focusing on an $SU(N)$ gauge field in the bulk, $\cal P$ has the action, \begin{equation} \begin{aligned} {\cal P}: A_\mu(x,y) &\rightarrow A_\mu(x,-y) = P A_\mu(x,y) P^{-1},\\ {\cal P}: A_5(x,y) &\rightarrow A_5(x,-y) = -P A_5(x,y) P^{-1},\\ \end{aligned} \end{equation} where $P$ is an $N\times N$ matrix with eigenvalues $\pm 1$. The action of $\cal P'$ is defined analogously via a matrix $P'$. We note that under the action of a given parity operation, $A_\mu$ and $A_5$ transform oppositely, as needed for invariance of the Lagrangian.
For a field $\Phi$ in the fundamental of $SU(N)$, the actions of $\cal P, \cal P'$ are given by, \begin{equation} \begin{aligned} {\cal P}: \Phi(x,y) &\rightarrow \Phi(x,-y) = P \Phi(x,y),\\ {\cal P'}: \Phi(x,y') &\rightarrow \Phi(x,-y') = P' \Phi(x,y').\\ \end{aligned} \end{equation} To determine the action of $\cal P, \cal P'$ it is useful to recall the mode expansion of a bulk field $\phi(x,y)$ that has specific parity properties (see, {\it e.g.}, \cite{Hall:2001pg}), \begin{equation} \label{eq:extra_dim_mode} \begin{aligned} \phi_{++}(x,y) & = \sum_{m=0}^{\infty}\frac{1}{\sqrt{2^{\delta_{m,0}}\pi R}}\phi_{++}^{(2m)}(x)\cos(2my/R),\\ \phi_{+-}(x,y) & = \sum_{m=0}^{\infty}\frac{1}{\sqrt{\pi R}}\phi_{+-}^{(2m+1)}(x)\cos((2m+1)y/R),\\ \phi_{-+}(x,y) & = \sum_{m=0}^{\infty}\frac{1}{\sqrt{\pi R}}\phi_{-+}^{(2m+1)}(x)\sin((2m+1)y/R),\\ \phi_{--}(x,y) & = \sum_{m=0}^{\infty}\frac{1}{\sqrt{\pi R}}\phi_{--}^{(2m+2)}(x)\sin((2m+2)y/R). \end{aligned} \end{equation} Here the notation, $\phi_{++}$ for example, implies that the field is even under both $\cal P, P'$. The fields $\phi_{++}^{(2m)}, \phi_{+-}^{(2m+1)}, \phi_{-+}^{(2m+1)}, \phi_{--}^{(2m+2)}$ have masses $2m/R, (2m+1)/R, (2m+1)/R, (2m+2)/R$, implying only $\phi_{++}$ has a zero-mode (setting $m=0$) and is present in the low-energy EFT below the scale $1/R$. To recall how gauge coupling unification works in this scenario, consider the action for a bulk gauge theory in flat spacetime, \begin{align} S \supset \int_0^{\pi R} dy \int d^4 x \left(\frac{1}{g_5^2}F_{AB}F^{AB} + \delta(y) \sum_i \epsilon_i F_{i,\mu\nu} F_i^{\mu\nu}\right) \,. \end{align} The 5D gauge coupling is $g_5$, and we assume that the bulk gauge invariance is broken at the $y=0$ boundary. Consequently, we can write non-GUT symmetric contributions to individual gauge groups parameterized by $\epsilon_i$. The indices $A,B$ run over all the dimensions whereas $\mu,\nu$ run over only 4D. 
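The stated parities of the Kaluza-Klein mode functions can be checked numerically under $\mathcal{P}: y \to -y$ and $\mathcal{P}': y \to \pi R - y$. A quick sketch ($R=1$ and the sample points are arbitrary choices):

```python
import math

R = 1.0  # radius in arbitrary units

modes = {  # parity label -> mode function at mode number m
    '++': lambda m, y: math.cos(2 * m * y / R),
    '+-': lambda m, y: math.cos((2 * m + 1) * y / R),
    '-+': lambda m, y: math.sin((2 * m + 1) * y / R),
    '--': lambda m, y: math.sin((2 * m + 2) * y / R),
}

def parity(f, transform):
    """Return '+' ('-') if f is even (odd) under y -> transform(y)."""
    ys = [0.1 + 0.2 * k for k in range(5)]
    if all(abs(f(transform(y)) - f(y)) < 1e-12 for y in ys):
        return '+'
    assert all(abs(f(transform(y)) + f(y)) < 1e-12 for y in ys)
    return '-'

for label, make in modes.items():
    for m in range(3):
        f = lambda y, m=m, make=make: make(m, y)
        got = parity(f, lambda y: -y) + parity(f, lambda y: math.pi * R - y)
        assert got == label, (label, m, got)
print("all KK mode parities as claimed")
```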
The zero modes of the gauge bosons have a flat profile in the extra dimension, as can be seen from~\eqref{eq:extra_dim_mode}. Integrating over the extra dimension we then find at the unification scale, \begin{align}\label{eq:couling_gut} \frac{1}{\alpha_i(\mu \simeq 1/ R)} = \frac{4\pi^2 R}{g_5^2} + 4\pi \epsilon_i \,, \end{align} where we match the value of $\alpha_i$ at the renormalization scale $\mu = 1/R$ to the 5D coupling. This implies that as long as the size of the extra dimension is large, {\it i.e.}, $\pi R/g_5^2 \gg \epsilon_i$, all the gauge couplings $\alpha_i$ are unified at the scale $1/R$, while below that scale, each $\alpha_i$ has its own evolution.\footnote{Above the compactification scale $1/R$, there can also be some small differential running of the gauge couplings since Kaluza-Klein modes of bulk fields may not fill an entire gauge multiplet. In this case \eqref{eq:couling_gut} would approximately hold with a precise unification taking place somewhat above $1/R$. See, {\it e.g.}~\cite{Hall:2001pg, Nomura:2001mf}.} \subsection{Orbifold construction of $SU(6)\rightarrow SU(3)_D \times SU(3)_c$} First we consider a warm-up example in which only QCD is unified with $SU(3)_D$ but $SU(2)_L\times U(1)_Y$ does not unify. We imagine an extra dimensional scenario with $S_1/(Z_2\times Z_2')$ geometry, as described above. The bulk gauge group is $SU(6) \times SU(2)_L \times U(1)_Y$. For the boundary at $y=0$, we choose $P={\rm diag}(1, 1, 1, -1, -1, -1)$, whereas for $y=\pi R/2$, we choose $P'={\rm diag}(1, 1, 1, 1, 1, 1)$. With this choice, $SU(6)\rightarrow SU(3)_D \times SU(3)_c \times U(1)$ on the $y=0$ boundary, whereas the bulk gauge invariance remains intact on the $y=\pi R/2$ boundary. This shows that the low energy theory has a $SU(3)_c\times SU(3)_D\times U(1)$ symmetry. We index the unbroken generators by $a$ and the broken ones by $\hat{a}$.
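The counting behind this breaking pattern can be verified directly: of the 35 generators of $su(6)$, exactly $8+8+1=17$ commute with $P$, matching the $SU(3)_D \times SU(3)_c \times U(1)$ residual symmetry. A sketch using a generic Hermitian traceless basis:

```python
import numpy as np

# Count the su(6) generators commuting with P = diag(1,1,1,-1,-1,-1); these
# survive on the y = 0 boundary and should number 8 + 8 + 1 = 17.
P = np.diag([1, 1, 1, -1, -1, -1]).astype(float)

def su_n_basis(n):
    """Hermitian traceless basis of su(n): symmetric, antisymmetric, Cartan."""
    basis = []
    for i in range(n):
        for j in range(i + 1, n):
            S = np.zeros((n, n), dtype=complex); S[i, j] = S[j, i] = 1
            A = np.zeros((n, n), dtype=complex); A[i, j] = -1j; A[j, i] = 1j
            basis += [S, A]
    for k in range(1, n):  # Cartan generators ~ diag(1,...,1,-k,0,...,0)
        D = np.zeros((n, n), dtype=complex)
        D[:k, :k] = np.eye(k); D[k, k] = -k
        basis.append(D)
    return basis

basis = su_n_basis(6)
# P = P^{-1}, so [T, P] = 0 is equivalent to P T P = T
unbroken = [T for T in basis if np.allclose(P @ T @ P, T)]
print(len(basis), len(unbroken))  # 35 generators in total, 17 unbroken
```

The 18 broken generators are the off-diagonal blocks, transforming as a bifundamental of $SU(3)_D\times SU(3)_c$ plus its conjugate, consistent with the $A_5^{\hat a}$ scalars discussed next.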
While $A_\mu^a$ give rise to the low-energy gauge theory, $A_5^{\hat{a}}$ are 4D scalars (transforming as bifundamentals of $SU(3)_D\times SU(3)_c$) and their masses are $\sim {\cal O}(1/R)$ due to quantum corrections from other bulk fields. Now we discuss how to break the residual $U(1)$. For this purpose, we can introduce a three-index antisymmetric scalar $\phi_{[ijk]}$ under $SU(6)$. When $i,j,k \in \{1,2,3\}$, then $\phi_{[ijk]}$ transforms as a singlet under both $SU(3)_D$ and $SU(3)_c$, but not under $U(1)$. To see this, consider a general set of indices $i,j,k,l$ for which \begin{align} D_\mu \phi_{ijk} \supset A_\mu^a\left[{T^a}_{i}^l \phi_{ljk} + {T^a}_{j}^l \phi_{ilk} + {T^a}_{k}^l \phi_{ijl}\right], \end{align} where $T^a$ are various $SU(6)$ generators. For $i=1, j=2, k=3$, the above becomes, \begin{align} A_\mu^a\left[{T^a}_{1}^1 \phi_{123} + {T^a}_{2}^2 \phi_{123} + {T^a}_{3}^3 \phi_{123}\right]. \end{align} This implies $\phi_{123}$ is charged under the $U(1)$ since it couples to the diagonal generators. Correspondingly, if $\langle \phi_{123}\rangle \neq 0 $, the $U(1)$ gets broken, leaving only $SU(3)_D\times SU(3)_c$. In this scenario, the SM Higgs is a singlet as far as orbifolding is concerned and we can put it in the bulk. We put SM leptons and quarks on the $y=0$ boundary. Since $SU(6)$ is broken into $SU(3)_D\times SU(3)_c$ on this boundary, SM quarks need not fill up a whole multiplet of $SU(6)$, and we take them to be singlets under $SU(3)_D$. Next, we have to choose parities of SM fermions under ${\cal P}$ and ${\cal P}'$ since the entire Lagrangian must have a definite parity. Under both ${\cal P}$ and ${\cal P}'$ we take all the SM fermions and SM Higgs to have + parity. Then all the SM Yukawa terms are manifestly parity invariant. Next, we discuss how to generate the intermediate scale $\Lambda \ll M_{\rm GUT}$, which we rely upon for our Higgs portal coupling that allows the dark glueballs to decay.
We consider vector-like fermions $\chi_L = (3, 1, 2, -1/2)$ and $\chi_e^c = (\bar{3}, 1, 1, +1)$ under $SU(3)_D \times SU(3)_c \times SU(2)_L \times U(1)_Y$, and their partners, $\chi_L^c = (\bar{3}, 1, 2, +1/2)$ and $\chi_e = (3, 1, 1, -1)$ located on the $y=0$ boundary. They couple to the Higgs via, \begin{align} y_\chi \chi_L H \chi_e^c + m_{\chi_L} \chi_L \chi_L^c + m_{\chi_e} \chi_e \chi_e^c + {\rm h.c.} \,, \label{eq:vector-like} \end{align} where $m_{\chi_L}$ and $m_{\chi_e}$ are vector-like mass parameters. Then, $\chi_L$ and $\chi_e$ mediate a one loop interaction between the $SU(3)_D$ gluons and the Higgs. The effective dimension-6 operator may be computed as~\cite{Juknevich:2009gg} \begin{align} \frac{\alpha_D}{6\pi}\frac{y_\chi^2}{m_{\chi_L} m_{\chi_e}} |H|^2 G_{d,\mu \nu} G_d^{\mu\nu} \,. \end{align} Therefore the scale $\Lambda$ controlling the glueball decay rate in~\eqref{eq:glueball_decay} corresponds to the masses of the heavy vector-like fermions: $\Lambda^2 / c_6 \sim m_{\chi_L} m_{\chi_e} / y_\chi^2$. Consequently, $\Lambda \ll M_{\rm GUT}$ may be achieved by arranging vector-like masses $m_{\chi_L} , m_{\chi_e} \ll M_{\rm GUT}$. Recall that in the discussion of~\eqref{eq:glueball_dim6} we rely on the $\tilde c_6$ coupling to $|H|^2 G_{d,\mu \nu} \tilde G_{d}^{\mu \nu}$ to induce the decay of the CP-odd glueballs. In the theory above, this operator is not generated because the theory is CP conserving. However, the theory may be made CP violating by having at least two non-degenerate generations of vector-like fermions, with the associated mass and Yukawa matrices appearing in~\eqref{eq:vector-like} being complex. For two generations there is one surviving CP-violating phase that may not be transformed away, while more CP-violating phases survive for a larger number of generations. 
In the presence of at least a single CP-violating phase the $\tilde c_6$ operator appearing in~\eqref{eq:glueball_dim6} is generated, in addition to the $c_6$ operator, as the result of CP violation. \subsection{Orbifold construction of $SU(8)\rightarrow SU(3)_D \times SU(3)_c \times SU(2)_L \times U(1)_Y$} \label{sec:orbifold_SU8} We now describe how the $SU(6)$ group described in the previous subsection can also be unified with $SU(2)_L\times U(1)_Y$ into an $SU(8)$ group. Since $SU(8)$ has rank 7 and $SU(3)_D \times G_{\rm SM}$ has rank 6, to obtain the above breaking pattern we consider a scalar VEV, such as $\langle \phi_{123}\rangle \neq 0$ in the previous subsection, to reduce the rank. We first discuss the orbifold parities of the gauge fields. We again consider a $S_1/(Z_2\times Z_2')$ geometry and choose, \begin{equation} \begin{aligned} P = {\rm diag}(-1, -1, -1, +1, +1, +1, +1, +1), \\ P' = {\rm diag}(-1, -1, -1, -1, -1, -1, +1, +1). \end{aligned} \end{equation} The choice of $P$ breaks $SU(8)\rightarrow SU(3)_D \times SU(5) \times U(1)_X$. On the other hand, $P'$ breaks $SU(8)\rightarrow SU(6) \times SU(2) \times U(1)_Z$. Here $U(1)_X$ is generated by ${\rm diag} (r, r, r, s, s, s, s, s)$ with $3r+5s=0,3r^2+5s^2=1/2$, while $U(1)_Z$ is generated by ${\rm diag} (p, p, p, p, p, p, q, q)$ with $6p+2q=0,6p^2+2q^2=1/2$. These are the tracelessness and normalization constraints, respectively. With their combined action, however, the gauge group is broken to \begin{align} SU(8)\rightarrow SU(3)_D \times SU(3)_c \times SU(2)_L \times U(1)_G \times U(1)_H. \end{align} Here we can choose the $U(1)_G$ generator to be ${\rm diag} (r, r, r, s, s, s, t, t)$ with $3r+3s+2t = 0$ (zero trace) and $(3r^2 + 3s^2+2t^2)=1/2$ (normalized), and the $U(1)_H$ generator to be ${\rm diag} (0, 0, 0, p, p, p, q, q)$ with $3p+2q=0$ and $3p^2+2q^2 = 1/2$. These conditions determine $p=1/\sqrt{15}$ and $q = -3/(2\sqrt{15})$. Let us now discuss the embedding of the SM Higgs. 
We put the Higgs in the bulk and in the antifundamental of $SU(8)$, since we can remove the unwanted components by orbifold projection. Under $P$, we assume $(+,+,+,+,+,+,+,+)$ parity, while under $P'$, we assume $(-,-,-,-,-,-,+,+)$. This implies only the $SU(2)_L$ doublet has $+$ parity under both $P$ and $P'$, and we can identify the corresponding zero mode as the SM Higgs. All the other components are heavy. Focusing on the SM fermions, we note that we can put them on the $y=0$ boundary since they fit in a multiplet of $SU(5)$, and then we take them as singlets under $SU(3)_D$. Next, we need to assign them proper parities such that we can construct Yukawa-invariant terms. We choose all the fermions to have + parity under $P$ and $\{+,-,-,+,-\}$ under $P'$, for $q, u^c, d^c, l, e$, respectively. Along with the parity requirement on the Higgs, this lets us write appropriate SM Yukawa terms. To generate the intermediate scale $\Lambda$ that controls the glueball decay rate, we require heavy fermions $\psi_L$ and $\psi_e^c$ on the $y=\pi R/2$ boundary. Under the residual $SU(6)\times SU(2)_L \times U(1)_Z$, $\psi_L$ and $\psi_e^c$ have charges $(6, 2, q_1)$ and $(\bar{6}, 1, q_2)$, where $q_1+q_2 = - \sqrt{3}/4$. Here the $U(1)_Z$ charge of the SM Higgs, embedded into an $SU(8)$ antifundamental, is taken to be $\sqrt{3}/4$. We also have vector-like partners $\psi_L^c$ and $\psi_e$ having charges $(\bar{6}, 2, -q_1)$ and $(6, 1, -q_2)$, respectively. With these charge assignments, we can write down the Higgs coupling and the vector-like mass terms for the heavy fermions: \begin{align} y_\psi \psi_L H \psi_e^c + m_{\psi_L}\psi_L\psi_L^c + m_{\psi_e}\psi_e \psi_e^c + {\rm h.c.} \,. \end{align} Choosing $+$ parity under both $P$ and $P'$ for these fermions makes the above terms parity invariant. 
Just as in the previous subsection, these heavy fermions mediate an interaction between $SU(3)_D$ and the Higgs, and also between $SU(3)_c$ and the Higgs: \begin{align} \frac{\alpha_D}{6\pi}\frac{y_\psi^2}{m_{\psi_L} m_{\psi_e}} |H|^2 G_{d,\mu \nu} G_d^{\mu\nu} + \frac{\alpha_3}{6\pi}\frac{y_\psi^2}{m_{\psi_L} m_{\psi_e}} |H|^2 G_{\mu \nu} G^{\mu\nu}. \end{align} To break $U(1)_G \times U(1)_H \rightarrow U(1)_Y$, we consider a three-index, totally anti-symmetric scalar of $SU(6)$, $\phi_{[ijk]}$. Among its elements, $\phi_{123}$ is a singlet under $SU(3)_D \times SU(3)_c \times SU(2)_L \times U(1)_H$. However, it is charged under $U(1)_G$. To see this, consider the covariant derivative for general indices $i,j,k,l$ as before, \begin{align} D_\mu \phi_{ijk} \supset A_\mu^a\left[{T^a}_{i}^l \phi_{ljk} + {T^a}_{j}^l \phi_{ilk} + {T^a}_{k}^l \phi_{ijl}\right]. \end{align} Focusing on $\phi_{123}$ in particular, we see, \begin{align} D_\mu \phi_{123} \supset A_\mu^a\left[{T^a}_{1}^1 \phi_{123} + {T^a}_{2}^2 \phi_{123} + {T^a}_{3}^3 \phi_{123}\right] = A_\mu^a ({T^a}_{1}^1 + {T^a}_{2}^2 + {T^a}_{3}^3) \phi_{123}. \end{align} Thus it is charged under only those generators for which $({T^a}_{1}^1 + {T^a}_{2}^2 + {T^a}_{3}^3) \neq 0$. Given our choices of $U(1)_G$ and $U(1)_H$, we see that it is charged only under $U(1)_G$. Therefore for $\langle \phi_{123}\rangle\neq 0$, the $U(1)_G$ gauge boson gets a mass and $U(1)_H$ survives in the low energy theory. Since $U(1)_H$ coincides with the $T_{24}$ generator of $SU(5)$, we can identify this as $U(1)_Y$ along with a multiplicative factor, $Y = c T_{24}$ with $c = -\sqrt{5/3}$. This implies $Y = {\rm diag}(0, 0, 0, -1/3, -1/3, -1/3, 1/2, 1/2)$. We note that SM fermions need not have any charge under $U(1)_X$ and they inherit their hypercharge from embedding in $SU(5)$, as in the minimal $SU(5)$ model~\cite{Georgi:1974sy}.
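The charge assignments above admit a quick symbolic check. Note that the orthogonality of the $U(1)_G$ and $U(1)_H$ generators is imposed here as a natural basis choice, not stated explicitly in the text:

```python
import sympy as sp

# U(1)_H generator diag(0,0,0,p,p,p,q,q): verify the quoted p, q values
pH = 1 / sp.sqrt(15)
qH = -3 / (2 * sp.sqrt(15))
assert sp.simplify(3 * pH + 2 * qH) == 0                            # traceless
assert sp.simplify(3 * pH**2 + 2 * qH**2 - sp.Rational(1, 2)) == 0  # normalized
# Its first three entries vanish, so the phi_123 charge T^1_1 + T^2_2 + T^3_3
# is zero: phi_123 is neutral under U(1)_H.

# U(1)_G generator diag(r,r,r,s,s,s,t,t): tracelessness, normalization and
# orthogonality to T_H (an assumed basis choice) force s = t and r = -5s/3.
s = sp.sqrt(sp.Rational(3, 80))
t = s
r = -5 * s / 3
assert sp.simplify(3 * r + 3 * s + 2 * t) == 0                            # traceless
assert sp.simplify(3 * r**2 + 3 * s**2 + 2 * t**2 - sp.Rational(1, 2)) == 0
assert sp.simplify(3 * s * pH + 2 * t * qH) == 0                          # Tr(T_G T_H) = 0
assert 3 * r != 0  # phi_123 carries the nonzero U(1)_G charge 3r
print("phi_123: charged under U(1)_G, neutral under U(1)_H")
```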
Similarly, the SM Higgs, a part of the antifundamental of $SU(8)$, also obtains the correct hypercharge. We summarize the various particle contents and gauge group structure in Fig.~\ref{fig:5D}. In Fig.~\ref{fig:GUT}, we show the renormalization group evolution of the SM gauge couplings along with that of pure $SU(3)_D$, for a dark confinement scale of $10^5$~GeV. Such a dark confinement scale corresponds to an axion with $m_l \sim 10$~keV and $f_l \sim 10^{15}$~GeV, relevant for the decaying DM parameter space. As is well known, the SM gauge couplings do evolve to get close to each other but they do not unify perfectly. However, the running of $SU(3)_D$ coupling does indicate unification with $SU(3)_c$ and $SU(2)_L$. This raises the interesting possibility of achieving a better unification, especially with supersymmetry. To take into account the effect of vectorlike fermions on the gauge coupling running, we need to know the quantum numbers of the vectorlike fermions under the gauge group $SU(3)_D\times SU(3)_c\times SU(2)_L\times U(1)_Y$. We can write the hypercharge operator $Y$ in terms of the $U(1)_Z$ generator $T_Z =(1/\sqrt{48}) {\rm diag}(1, 1, 1, 1, 1, 1, -3, -3)$ and one diagonal generator $T_6 = (1/\sqrt{12}) {\rm diag}(1, 1, 1, -1, -1, -1)$ of $SU(6)$, \begin{align} Y = -\frac{1}{6}\left(\sqrt{48}T_Z - \sqrt{12} T_6 \right). \end{align} This gives $Y = {\rm diag}(0, 0, 0, -1/3, -1/3, -1/3, 1/2, 1/2)$. Thus the fermion representation under the bigger group $SU(6)\times SU(2)_L \times U(1)_Z$ splits under $SU(3)_D\times SU(3)_c\times SU(2)_L\times U(1)_Y$ as, \begin{align} (6, 2, q_1) \rightarrow (3, 1, 2, Y_{\psi_L}) + (1, 3, 2, Y_{\psi_L}'),\\ (\bar{6}, 1, q_2) \rightarrow (\bar{3}, 1, 1, Y_{\psi_e^c}) + (1, \bar{3}, 1, Y_{\psi_e^c}'), \end{align} with $Y_{\psi_L} = (-1/6)(\sqrt{48} q_1- \sqrt{12}), Y_{\psi_L}' = (-1/6)(\sqrt{48}q_1+\sqrt{12})$, and $Y_{\psi_e^c} = (-1/6)(\sqrt{48}q_2+\sqrt{12}), Y_{\psi_e^c}' = (-1/6)(\sqrt{48}q_2-\sqrt{12})$. 
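The hypercharge decomposition above can be verified numerically. Here $T_6$ is assumed to be embedded in the upper $SU(6)$ block of $SU(8)$, and the split of $q_1+q_2$ between the two fermions is arbitrary since only the sum is fixed:

```python
import numpy as np

# Check Y = -(1/6)(sqrt(48) T_Z - sqrt(12) T_6) gives the quoted diagonal.
T_Z = np.diag([1, 1, 1, 1, 1, 1, -3, -3]) / np.sqrt(48)
T_6 = np.diag([1, 1, 1, -1, -1, -1, 0, 0]) / np.sqrt(12)  # SU(6) block of SU(8)

Y = -(np.sqrt(48) * T_Z - np.sqrt(12) * T_6) / 6
expected = np.diag([0, 0, 0, -1/3, -1/3, -1/3, 1/2, 1/2])
assert np.allclose(Y, expected)

# Vector-like fermion hypercharges: only q1 + q2 = -sqrt(3)/4 is fixed,
# so the split below is an arbitrary illustrative choice.
q1 = 0.1
q2 = -np.sqrt(3) / 4 - q1
Y_psiL = -(np.sqrt(48) * q1 - np.sqrt(12)) / 6
Y_psiec = -(np.sqrt(48) * q2 + np.sqrt(12)) / 6
assert np.isclose(Y_psiL + Y_psiec, 0.5)  # as required by the Higgs Yukawa
print("hypercharge embedding consistent")
```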
We have $Y_{\psi_L} + Y_{\psi_e^c} = 1/2$, following from $q_1+q_2 = -\sqrt{3}/4$, necessary for the Higgs Yukawa couplings. \subsection{Axions from extra dimensional gauge fields} Having discussed the SM sector, we can now include an axion also using the extra dimension. We model the axion as the fifth component of a bulk $U(1)$ gauge field, following the construction in {\it e.g.}~\cite{Choi:2003wr}. We can choose the following parity action on the gauge field, \begin{equation} \begin{aligned} {\cal P}: B_\mu(x,y) &\rightarrow B_\mu(x,-y) = - B_\mu(x,y),\\ {\cal P}: B_5(x,y) &\rightarrow B_5(x,-y) = B_5(x,y),\\ \end{aligned} \end{equation} with an identical action for $\cal P'$, with $y$ replaced by $y'$. In other words, while $B_\mu$ has $--$ parity, $B_5$ has $++$ parity and only the latter survives in the low energy theory. In the presence of this new gauge field, we can write down a Chern-Simons~(CS) term in the bulk. The Lagrangian involving $B_M$ then reads \begin{equation} \begin{split} \int d^4 x \int_0^{\pi R} dy {} \Big(\frac{1}{4 g_{5B}^2}B_{MN}B^{MN} + \kappa_B \epsilon^{MNPQR}B_M \text{Tr}(F_{NP}F_{QR})\Big). \end{split} \end{equation} In the 4D effective theory, this reduces to \begin{align} \frac{\pi R}{2g_{5B}^2} (\partial_\mu B_5)^2 + 2\pi R \kappa_B B_5 G\tilde{G}. \end{align} Here, $G$ contains all the $SU(8)$ gauge bosons, which implies that the axion will couple both to the dark $SU(3)$ and to the SM gauge groups. Denoting $B_5\equiv a$ and canonically normalizing the kinetic term, we arrive at an axion coupling \begin{align} \frac{a}{32\pi^2 f_a}G\tilde{G} \,, \qquad f_a \equiv \frac{1}{64\pi^2 \sqrt{\pi R}\kappa_B g_{5B}} \,. \end{align} To estimate $f_a$ relative to the unification scale $M_{\rm GUT} \sim 1/R$, we use the relation $\pi R/g_5^2 = 1/g_4^2$ and $4\pi/g_4^2 \approx 25$ at the unification scale, to compute \begin{align} f_a \sim \frac{1}{64\pi^3 \kappa_B} \frac{5 M_{\rm GUT}}{2\sqrt{\pi}}.
\end{align} If we suppose that the 5D CS term arises at one loop, such that $\kappa_B \sim \alpha / (4 \pi)$, then numerically $f_a\sim M_{\rm GUT}$. Thus, the orbifold model discussed in this section, while by no means unique, contains all of the features needed for the heavy DM axion and baryogenesis stories -- a dark, confining gauge group that unifies with the SM but that contains Higgs portal interactions that allow the dark glueballs to decay, suppressed by an intermediate scale $\Lambda < M_{\rm GUT}$, along with an axion that couples to the SM and to the dark gauge group. \section{Discussion} \label{sec:conclu} In this Article we introduce keV - MeV axions as a decaying DM candidate that may naturally obtain the correct relic abundance through the period of early matter domination brought about by dark glueballs. These glueballs are associated with the dark gauge group whose instantons give rise to the axion mass. Such a scenario may naturally arise in an axiverse, where there are multiple axions, in addition to dark gauge groups that decouple from the SM near the GUT scale. While such scenarios may emerge in the context of String Theory constructions, which are known to produce decoupled dark gauge groups and axions, we provide an explicit construction in the context of a 5D orbifold theory where the SM and a dark $SU(3)$ unify into a 5D $SU(8)$ theory, which also produces a 4D axion as the zero mode of the fifth component of a 5D gauge field. We also show that the heavy axions could be responsible for the primordial baryon asymmetry, through the process of spontaneous baryogenesis, and if the dark sector contains multiple confining sub-sectors the correct baryon and DM abundances can simultaneously be produced, as we demonstrate. The presence of the heavy axions does not spoil the possibility of an additional axion solving the strong {\it CP} problem.
The clearest signature of heavy axion DM is the decay to two photons, which may be detected by current or near-term $X$-ray and gamma-ray telescopes, as we discuss. As illustrated in {\it e.g.} Figs.~\ref{fig:lifetime_plot} and~\ref{fig:tau_bary}, much of the best-motivated parameter space where dark-sector axions may naturally make up the observed DM abundance and also explain the primordial baryon asymmetry could be probed by future instruments, providing strong motivation for missions that increase the reach to the DM lifetime over the keV - MeV energy range. The dark-sector DM axion cosmology considered in this Article is associated with a period of early matter domination caused by the dark glueballs. The fact that low reheat temperatures, near the BBN limit, are favored for mitigating fine tuning of the initial axion misalignment angle may itself lead to observational signatures. This is because density perturbations grow linearly during matter-dominated epochs, as opposed to logarithmically during radiation-domination~\cite{Erickcek:2011us,Barenboim:2013gya,Fan:2014zua,Nelson:2018via,Visinelli:2018wza}. This implies that small-scale structure could be enhanced because of the period of early matter domination, potentially leading to large numbers of ultra-compact sub-halos that survive until today. For reheating temperatures near the BBN bound this implies an enhancement of DM substructure today at masses near Jupiter's mass and below~\cite{Erickcek:2011us}. Interestingly, these ultra-compact sub-halos may be directly observable with future Pulsar Timing Array measurements~\cite{Dror:2019twh,Lee:2020wfn} and photometric microlensing surveys~\cite{Dai:2019lud} if $T_{\rm RH} \lesssim 100$ MeV -- GeV, as is the case for most of the parameter space considered in this work. It would be interesting to also investigate the possible observational signatures of the ultra-compact mini-halos in the Galactic DM decay morphology. 
The period of early matter domination is brought about by the confining phase transition in the dark non-abelian gauge sector, and depending on the dark gauge group the phase transition could be first order and associated with an efficient production of gravitational waves, see, {\it e.g.}, ~\cite{Schwaller:2015tja, Huang:2020crf, Halverson:2020xpg}. The detectability of these gravitational waves at future observatories depends on the efficiency of their production, the temperature of the phase transition, and the amount of subsequent entropy dilution; this would be a useful direction to explore in future work. In this Article we have not assumed high-scale supersymmetry, except for roughly motivating the gauge couplings that we may expect at the GUT scale. Supersymmetry, even if broken at a high scale, would quantitatively and potentially qualitatively modify most of the arguments presented in this work. It would be interesting to investigate the supersymmetric completion of the models presented in this work. In summary, heavy axions connected to hidden sectors are motivated extensions of the SM that could be responsible for baryogenesis and DM. A number of upcoming astrophysical missions should shed light onto their existence, providing strong science motivation for continuing deeper explorations of the cosmos. \section*{Acknowledgements} We thank Pouya Asadi, Valerie Domcke, Lawrence Hall, Jim Halverson, Keisuke Harigaya, Simon Knapen, Nadav Outmezguine, Nick Rodd, and Raman Sundrum for useful discussions. We also thank Pouya Asadi, Valerie Domcke, Jim Halverson, and Nick Rodd for useful comments on the manuscript. J.W.F. was supported by a Pappalardo Fellowship. B.R.S. was supported in part by the DOE Early Career Grant DE-SC0019225. S.K., B.R.S., and Y.S. were supported in part by a grant from the United States-Israel Binational Science Foundation (BSF No.~2020300), Jerusalem, Israel. Y.S.
is also supported by grants from the ISF (No.~482/20), NSF-BSF (No.~2018683) and by the Azrieli foundation. J.W.F. and S.K. thank the Mainz Institute of Theoretical Physics of the Cluster of Excellence PRISMA+ (Project ID 39083149) for its hospitality while this work was in progress. This research used resources from the Lawrencium computational cluster provided by the IT Division at the Lawrence Berkeley National Laboratory, supported by the Director, Office of Science, and Office of Basic Energy Sciences, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. \appendix \section{Sensitivity projections for future gamma ray observatories} \label{sec:astro} In this Appendix we give the details for deriving the sensitivity projections for AMEGO. Projections for other missions in the $\sim$ MeV energy range can be obtained in a similar way. For an observation of an on-sky region $\Sigma$ for duration $T$ using an instrument with energy-dependent effective area $\mathcal{E}$, the expected number of observed photons produced by decay of an axion with mass $m_a$ to two photons with energy $m_a / 2$ is given by \begin{equation} N(m_a, \tau_a) = \frac{\mathcal{D}_{\Sigma} \mathcal{E}(m_a/2) T}{2 \pi m_a \tau_{a}} \,, \end{equation} where $\tau_a$ is the lifetime for axion decay to photons, and $\mathcal{D}_\Sigma$ is the DM line-of-sight density integrated over the region of interest. Assuming the axion comprises all the DM, we consider axion decays in the Milky Way halo in the vicinity of the GC with $\Sigma$ defined by $|b|,\, |l| \leq 5^\circ$. We take the Milky Way DM density profile to be described by an NFW profile \cite{Navarro:1995iw, Navarro:1996gj}, though more motivated and better constrained modeling choices for the Milky Way DM density profile may be possible in the future through improved simulation and observational efforts \cite{deSalas:2020hbh}. 
We take our NFW profile to have DM density $0.4\,\mathrm{GeV}/\mathrm{cm}^3$ in the solar neighborhood at $r_\odot = 8.23\,\mathrm{kpc}$ from the galactic center (GC) and a scale radius of $r_s = 20\,\mathrm{kpc}$ \cite{deSalas:2020hbh, 2022arXiv220412551L}. The integrated line-of-sight density is then calculated by \begin{gather} \rho(r) =\rho_\odot \frac{ r_\odot (r_s+ r_\odot)^2}{r (r_s+r)^2} \\ \mathcal{D}_\Sigma = \int_\Sigma d\Omega \int ds \rho_{a}(s, \Omega) \,, \end{gather} such that $\mathcal{D}_\Sigma \approx 4 \times 10^{24}\, \mathrm{MeV}/\mathrm{cm}^2$. We assume the region of interest $\Sigma$ is observed for $T = 1\,\mathrm{yr}$ and adopt AMEGO's projected energy-dependent effective area. For the energy range relevant for our axion DM scenario, AMEGO will observe incident photons as Compton scattering events with two different classifications: tracked and untracked. The Tracked Compton (TC) and Untracked Compton (UC) event classifications cover complementary energy ranges, with differences in effective area and energy resolution \cite{Kierans:2020otl}. In projecting AMEGO sensitivities, we independently consider both event types. To estimate our statistical power in constraining an axion decay line, we calculate the expected number of background photons contributed by astrophysical processes within the energy range over which the signal appears using flux spectra for bremsstrahlung, inverse-Compton, and $\pi^0$ emission in the $|b|,\, |l| \leq 5^\circ$ region developed in \cite{Bartels:2017dpb} using the cosmic ray modeling code \texttt{GALPROP}\cite{Strong:1998pw}. Since AMEGO will not resolve the decay line-width, which has relative width of $\Delta E / E \approx 10^{-3}$, the relevant energy range is the instrumental energy resolution, which is roughly $\Delta E /E \approx 5\%$, evaluated at $E = m_a / 2$. 
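The quoted $\mathcal{D}_\Sigma$ can be reproduced with a direct numerical integral over the NFW profile. The line-of-sight cutoff at 100 kpc and the angular grid are illustrative choices:

```python
import numpy as np
from scipy import integrate

rho_sun = 0.4e3    # local DM density, MeV/cm^3
r_sun = 8.23       # solar distance to the GC, kpc
r_s = 20.0         # NFW scale radius, kpc
KPC_CM = 3.086e21  # cm per kpc

def rho(r):
    """NFW density in MeV/cm^3, with r in kpc."""
    return rho_sun * r_sun * (r_s + r_sun)**2 / (r * (r_s + r)**2)

def los(l, b):
    """Line-of-sight integral of rho in MeV/cm^2; l, b in radians."""
    def integrand(s):
        r = np.sqrt(s**2 + r_sun**2 - 2 * s * r_sun * np.cos(l) * np.cos(b))
        return rho(r)
    # the integrand peaks near s = r_sun for sightlines close to the GC
    return integrate.quad(integrand, 0.0, 100.0, points=[r_sun],
                          limit=200)[0] * KPC_CM

deg = np.pi / 180
# even number of grid points avoids the (log-divergent) exact-GC sightline
angles = np.linspace(-5 * deg, 5 * deg, 20)
vals = np.array([[los(l, b) * np.cos(b) for l in angles] for b in angles])
D_sigma = vals.mean() * (10 * deg)**2  # \int dOmega \int ds rho
print(f"D_Sigma ~ {D_sigma:.1e} MeV/cm^2")  # near the quoted ~4e24 MeV/cm^2
```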
The number of background photons $N_B$ is then given by \begin{equation} N_B(m_a) = \int_\Sigma d\Omega\frac{d\Phi}{dE d\Omega} \mathcal{E}(m_a/2) T \Delta E(m_a/2) \,, \end{equation} where $\mathcal{E}$ and $\Delta E(E)$ are the energy-dependent effective area and energy resolution, appropriately chosen for the tracked or untracked event classifications. From $N_B$ and $N(m_a, \tau_a)$, the expected 95$^\mathrm{th}$ percentile upper limit on $\tau_a$ can be determined in the Gaussian limit relevant to these projections by solving $N(m_a, \tau_ a) \approx 1.6 \sqrt{N_B(m_a)}$ \cite{Cowan:2010js}. Note that we neglect systematic uncertainties, which may be important, especially at low energies where the photon counts are the highest~\cite{Bartels:2017dpb}, to show the maximal possible reach of the instruments from statistical uncertainties alone. The projected sensitivities of AMEGO for tracked and untracked event types are presented in Fig.~\ref{fig:lifetime_plot}, labeled `AMEGO TC' and `AMEGO UC', respectively. Note that we have neglected the finite angular resolution of the instrument, though this is a small correction as the angular resolution is comparable to or less than the extent of our region of interest and since the line-of-sight DM density is not sharply varying outside the very inner GC. \section{Rate formulae for lepton-asymmetry generating operators} \label{sec:rates} In this Appendix we summarize the interaction rates $\Gamma_\alpha$ relevant for lepton asymmetry generation via the Boltzmann equation~\eqref{eq:bary_boltz}. These interactions, if active, are responsible for maintaining chemical equilibrium between different SM species. We recall that $\alpha$ runs over the weak sphaleron ($W$), strong sphaleron ($S$), tau Yukawa ($\tau$), top Yukawa ($t$), bottom Yukawa ($b$), and the Weinberg operator for the first two generations ($W_{12}$) and the third generation ($W_3$).
Since the distinction between the first two generations is immaterial at high temperatures, they can be combined into a single species, and a single Weinberg operator interaction $W_{12}$ can describe them. \paragraph{Strong and weak sphaleron.} The sphaleron rates in gauge theories can be obtained from, {\it e.g.},~\cite{Moore:2010jd} \begin{align} \Gamma_{W} = 3\kappa_{W}\alpha_2^5 T,\\ \Gamma_{S} = 3\kappa_{S}\alpha_s^5 T. \end{align} We take $\kappa_{W}\sim 24$ and $\kappa_{S}\sim 270$~\cite{Domcke:2020kcp} as relevant for asymmetry generation at high temperatures, $T\gtrsim 10^{12}$~GeV. \paragraph{Yukawa couplings.} Given the sizes of the Yukawa couplings, only the third-generation fermions are relevant here, \begin{align} \Gamma_\tau = 6 \kappa_\tau y_\tau^2 T,\\ \Gamma_t = 6 \kappa_t y_t^2 T, \\ \Gamma_b = 6 \kappa_b y_b^2 T. \end{align} Here we take $\kappa_\tau \simeq 1.7\times 10^{-3}$, $\kappa_t \simeq \kappa_b \simeq 10^{-2}$~\cite{Domcke:2020kcp, Garbrecht:2014kda}. \paragraph{Weinberg operator.} This can be estimated as~\cite{Domcke:2020kcp} \begin{align} \Gamma_{W_{12}} = 2\Gamma_{W_3} = 12\kappa_W \frac{m_\nu^2 T^3}{v^4}, \end{align} with $\kappa_W \sim 3\times 10^{-3}$, $m_\nu = 0.05$~eV and $v = 174$~GeV. \section{Alternate axion-matter coupling choice for spontaneous baryogenesis} \label{app:alt} Recall that in constructing Fig.~\ref{fig:heavy_ax_T_ind} for the parameter space that produces the correct baryon asymmetry we make the choice $c_{aG} = c_{af} = 1$. In this Appendix we consider the alternate choice $c_{aG} = 1$, $c_{af} = 0$, which implies that the axion couples to gauge fields but has no direct coupling to fermions (see~\eqref{eq:axion_couplings}). In Fig.~\ref{fig:ax_gauge_only_T_ind} we show the analogue of Fig.~\ref{fig:heavy_ax_T_ind} for this alternate choice of heavy axion couplings. 
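As a rough illustration of why the sphaleron rates above matter at $T \gtrsim 10^{12}$~GeV, one can compare them against the radiation-era Hubble rate $H = 1.66\sqrt{g_*}\,T^2/M_{\rm Pl}$: since both scale as simple powers of $T$, the equilibration temperature follows directly. The fixed coupling values below are our own illustrative choices (running of $\alpha_2$ and $\alpha_s$ at these scales shifts the answer by an $\mathcal{O}(1)$ factor); only $\kappa_W$ and $\kappa_S$ are taken from the text.

```python
import math

M_PL = 1.22e19        # Planck mass, GeV
G_STAR = 106.75       # SM relativistic degrees of freedom

def hubble(T):
    """Radiation-era Hubble rate in GeV for temperature T in GeV."""
    return 1.66 * math.sqrt(G_STAR) * T**2 / M_PL

# sphaleron rates from the Appendix; coupling values are assumptions
ALPHA_2, ALPHA_S = 1.0 / 30.0, 1.0 / 30.0
def gamma_ws(T): return 3 * 24 * ALPHA_2**5 * T     # weak sphaleron
def gamma_ss(T): return 3 * 270 * ALPHA_S**5 * T    # strong sphaleron

def t_equilibration(gamma):
    """Temperature where gamma(T) = H(T); gamma ~ c*T and H ~ h*T^2,
    so the crossing is simply T_eq = c/h."""
    return gamma(1.0) / hubble(1.0)

print(f"weak sphaleron equilibrates below  T ~ {t_equilibration(gamma_ws):.1e} GeV")
print(f"strong sphaleron equilibrates below T ~ {t_equilibration(gamma_ss):.1e} GeV")
```

With these inputs the weak sphaleron comes into equilibrium around $T \sim 10^{12}$~GeV, consistent with the temperature range quoted in the text.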
Note that the baryon asymmetries are generically suppressed relative to the case where the axion couples at tree level to fermions; this is mostly because the axion-top coupling is the most efficient operator for baryon production, so removing it suppresses the baryon asymmetry. \bibliographystyle{JHEP} \bibliography{refs}
Title: Modulation of the solar microwave emission by sausage oscillations
Abstract: The modulation of the microwave emission intensity from a flaring loop by a standing linear sausage fast magnetoacoustic wave is considered in terms of a straight plasma slab with the perpendicular Epstein profile of the plasma density, penetrated by a magnetic field. The emission is of the gyrosynchrotron (GS) nature, and is caused by mildly relativistic electrons which occupy a layer in the oscillating slab, i.e., the emitting and oscillating volumes do not coincide. It is shown that the microwave response to the linear sausage wave is highly non-linear. The degree of the non-linearity, defined as a ratio of the Fourier power of the second harmonic to the Fourier power of the principal harmonic, is found to depend on the combination of the width of the GS source and the viewing angle, and is different in the optically thick and optically thin parts of the microwave spectrum. This effect could be considered as a potential tool for diagnostics of the transverse scales of the regions filled in by the accelerated electrons.
https://export.arxiv.org/pdf/2208.11345
\label{firstpage} \pagerange{\pageref{firstpage}--\pageref{lastpage}} \begin{keywords} Sun: radio radiation -- \textit{(magnetohydrodynamics)} MHD -- radiation mechanisms: non-thermal -- Sun: activity \end{keywords} \section{Introduction} Solar flares are one of the most powerful phenomena in the solar atmosphere. Physical processes operating in flares at kinetic and magnetohydrodynamic (MHD) scales, such as charged particle acceleration and magnetic reconnection, are fundamental processes of plasma astrophysics and remain subject to intensive studies. An important method for probing the plasma in flaring sites is MHD seismology, based upon oscillatory processes detected in flares \citep[e.g.][]{2020STP.....6a...3K, 2021SSRv..217...66Z}. Among the oscillations most commonly observed in flaring regions are standing fast magnetoacoustic waves of the sausage symmetry, i.e. axisymmetric oscillatory motions across the magnetic field, characterised by variations of the plasma density and the absolute value of the magnetic field \citep[see, e.g.,][for a recent comprehensive review]{2020SSRv..216..136L}. As the typical periods of sausage oscillations range from a few seconds to several tens of seconds \citep[e.g.,][]{2009SSRv..149..119N}, their detection requires instruments with high time resolution. In particular, sausage oscillations are often observed as the modulation of coherent and non-coherent radio and microwave emissions. For example, sausage oscillations have been used for the interpretation of the time variations of the gyrosynchrotron (GS) emission intensity from spatially resolved coronal loops \citep{2003A&A...412L...7N, 2005A&A...439..727M}, simultaneous oscillations of GS and soft X-ray emission along a loop \citep{2008A&A...487.1147I}, the wiggling of zebra-pattern lanes \citep{2013ApJ...777..159Y}, and the precipitation rate of the nonthermal electrons at the opposite footpoints of a loop \citep{2018ApJ...859..154N}. 
Sausage oscillations are used as seismological probes of the perpendicular profiles of the density and of the absolute value and twisting of the magnetic field in flaring plasma structures, such as plasma loops \citep[e.g.,][]{2020SSRv..216..136L}. Both regular and random perpendicular profiles could be addressed \citep[e.g.][]{2007SoPh..246..165P, 2014A&A...567A..24H, 2015ApJ...812...22C, 2015ApJ...810...87L}. Typically, the analysed plasma structures are modelled as a plasma slab or a cylinder, surrounded by a plasma with different properties \citep[see, e.g.][]{2012ApJ...761..134N, 2014A&A...567A..24H}, respectively. The consideration of sausage oscillations in a loop with a varying cross-section demonstrated that the slab and cylinder models are adequate. For example, for the fundamental parallel harmonic, an increase of the cross-section radius near the loop top by a factor of 2 causes a decrease in the oscillation period of only about 5\% \citep{2009A&A...494.1119P}. Despite intensive theoretical and observational studies of sausage oscillations, linking the theory with observational outcomes remains non-trivial. One of the difficulties is the effect of the line-of-sight (LoS) integration in the optically thin regime. Let us illustrate this for the case of the EUV or soft X-ray emission. On the one hand, a sausage oscillation is characterised by perturbations of the plasma density. On the other hand, if the LoS is perpendicular to the oscillating plasma structure, the plasma is displaced along the LoS. As the observed emission intensity is proportional to an integral of the squared density along the LoS, the observed intensity does not change much in different phases of the oscillation. This effect was first demonstrated by \citet{2012A&A...543A..12G}, and then confirmed by comprehensive forward modelling \citep{2013A&A...555A..74A}. 
A similar problem appears in radio observations \citep[see, e.g.][]{2012ApJ...748..140M}, and has been addressed by forward modelling of microwave observables by \citet{2015A&A...575A..47R, 2015SoPh..290.1173K}. In contrast with the modelling of the thermal emission, which is determined by the variation of the plasma density along the LoS, various kinds of non-thermal radio emission are also determined by the magnetic field, the non-thermal electrons, the LoS angle to the field, and their distributions along the LoS. In the majority of forward modelling studies, the spatial structure of the non-thermal electrons has been considered to coincide with the oscillatory perturbations of the plasma density and magnetic field by a sausage wave \citep{2015A&A...575A..47R, 2015SoPh..290.1173K}. However, this assumption could be incorrect, as the volumes occupied by the non-thermal emission and involved in the sausage oscillation are determined by different physical processes. The perpendicular structure of the sausage wave perturbation is determined by the parameters of the wave-guiding plasma non-uniformity \citep[e.g.,][]{2020SSRv..216..136L}. The magnetic flux tube filled with non-thermal electrons is determined by the acceleration mechanism. For example, a coronal structure which is observed in microwaves as a single loop appears as a filamented structure in the EUV \citep[e.g.,][]{2013AstL...39..267Z}. This structure can oscillate as a whole, being perturbed by a sausage wave, while only one of its filaments may contain energetic electrons emitting the microwaves. The mismatch of the oscillating and emitting volumes could provide us with important seismological constraints on the enigmatic non-thermal electron acceleration processes. 
The microwave emission can be considered as a tool for diagnosing the low-amplitude perturbation of the plasma and magnetic field by the sausage wave, because of the high sensitivity and the non-linear dependence of the intensity of the microwave emission on both the magnitude and direction of the magnetic field and on the parameters of the emitting electrons \citep[e.g.,][]{1982ApJ...259..350D}, resulting in a non-linear microwave response to the linear sausage wave. The aim of this paper is to study how the degree of this non-linearity depends on both the width of the emitting volume and the LoS angle to the magnetic field. The paper is organised as follows. The model and basic equations are described in Section~\ref{s:MHD}. The source of the microwave emission and details of the calculation of the microwave emission are presented in Section~\ref{s:radio}, and details of obtaining the microwave light curves are presented in Section~\ref{s:LC}. Section~\ref{s:results} presents the principal results of the simulations, which are summarized and discussed in Section~\ref{s:discussion}. \section{Plasma slab and MHD wave} \label{s:MHD} We consider a transversely inhomogeneous (along the $x$-axis) plasma slab stretched along a uniform magnetic field $B_0$ directed along the $z$-axis. The slab is perturbed by a symmetric (sausage) fast magnetoacoustic MHD mode of the slab. The system is uniform in the $y$ direction. If we place an infinitely remote observer in the $xz$ plane, then we may ignore the $y$ direction and simulate the MHD wave in the two-dimensional slab. So, the components of the unperturbed magnetic field are $B_{z0} = B_0 = \textrm{const}$ and $B_{x0} = 0$. 
The transverse profile of the plasma non-uniformity, shown by the black curve in panel (a) of Figure~\ref{f:rho_Bx_Bz}, is defined by the Epstein function \citep{2003A&A...409..325C} \begin{equation}\label{eq:Epstein} \rho_0(x) = \rho_\mathrm{max} \,\mathrm{sech}^2\!\left(\frac{x}{w}\right) + \rho_{\infty} \end{equation} with the characteristic width $w$ referred to as the slab width, and the density contrast \begin{equation}\label{eq:dc} d=\frac{\rho_\mathrm{\max}+\rho_{\infty}}{\rho_{\infty}}. \end{equation} The system is symmetric with respect to the $z$ axis (blue line in Figure~\ref{f:rho_Bx_Bz}), where the plasma density is $\rho_\mathrm{\max}+\rho_{\infty}=\rho_0(x=0)$. The plasma density at a point infinitely remote from the $z$ axis is $\rho_{\infty}=\rho_0(x\to\infty)$. Under conditions typical of the solar corona, where the plasma parameter $\beta \ll 1$, the ideal MHD equations \citep[e.g.,][]{1997JPlPh..58..315N} can be linearised, and the perturbed plasma density and the perturbed magnetic field components can be expressed via the transverse component of the perturbed speed $\tilde{V}_{x} = \tilde{V}_x(x,z,t)$: \begin{equation}\label{eq:rho} \tilde{\rho} = - \int \frac{\partial (\rho_0 \tilde{V}_{x})}{\partial x} dt, \end{equation} \vspace{-4mm} \begin{equation}\label{eq:Bx} \tilde{B}_{x} = B_0 \int \frac{\partial \tilde{V}_{x}}{\partial z} dt, \:\:\:\:\:\:\:\:\:\:\:\: \tilde{B}_{z} = - B_0 \int \frac{\partial \tilde{V}_{x}}{\partial x} dt. \end{equation} Since we consider the standing sausage wave, which is periodic in the $z$ direction with the longitudinal wave number $k$ and wavelength $\lambda = 2\pi/k$, the general expression for the transverse component $\tilde{V}_{x}$ \citep[see, e.g.,][]{1995SoPh..159..399N} can be simplified: \begin{equation}\label{eq:Vx} \tilde{V}_{x}(x,z,t) = A U(x) \sin(kz)\cos(k V_\mathrm{ph} t). 
\end{equation} Here $A$ is the amplitude of the MHD perturbation normalized by the Alfv\'en speed $C_\mathrm{A0}$ at the slab axis ($x=0$), $A \ll 1$, and $V_\mathrm{ph} = \omega / k$ is the phase speed. The analytical expression for the transverse structure $U(x)$ of the sausage wave, \begin{equation}\nonumber U(x) = \frac{ \sinh(x/w) }{ \cosh^{\zeta}(x/w)}, \:\:\:\:\:\:\:\:\: \zeta = \frac{ |k|w }{ C_\mathrm{A \infty }} \sqrt{ C_\mathrm{A \infty}^2 - V_\mathrm{ph}^2} +1, \end{equation} for the plasma density inhomogeneity defined by Equation~(\ref{eq:Epstein}) was obtained by \citet{2003A&A...409..325C}. The phase speed is defined by the dispersion equation $$ \frac{ |k|w }{ C_\mathrm{A0}^2} (V_\mathrm{ph}^2 - C_\mathrm{A0}^2) - \frac{2}{|k|w} = \frac{3}{C_\mathrm{A \infty}} \sqrt{C_\mathrm{A \infty}^2 - V_\mathrm{ph}^2}, $$ where $C_\mathrm{A \infty}$ is the Alfv{\'e}n speed at an infinitely remote point ($x \to \infty$). Following Equations (\ref{eq:rho})--(\ref{eq:Vx}), the perturbed values of the plasma density and the components of the magnetic field become \begin{equation} \rho = \rho_0 + \tilde{\rho}, \:\:\:\:\:\:\:\:\: B_{z} = B_0 + \tilde{B}_{z}, \:\:\:\:\:\:\:\:\: B_{x} = \tilde{B}_{x}. \label{eq:RhoB_Total} \end{equation} Distributions of these parameters in the $xz$ plane at the moment of the maximal perturbation are shown in panels (b), (c), and (d) of Figure~\ref{f:rho_Bx_Bz}, where the axes $x$ and $z$ are normalised by the characteristic width of the plasma non-uniformity $w$. In this paper, for simplicity, we consider a strictly monochromatic (i.e. strictly repetitive) decayless MHD wave; therefore, it is sufficient to consider one oscillation period. The wavelength of the MHD wave was chosen to be $\lambda=10w$. 
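For the parameter set adopted below ($\lambda = 10w$ and density contrast $d = 10$, so that $C_\mathrm{A\infty}/C_\mathrm{A0} = \sqrt{10}$), the dispersion equation quoted above can be solved by bisection on the trapped-mode interval $C_\mathrm{A0} < V_\mathrm{ph} < C_\mathrm{A\infty}$. The solver below is our own numerical sketch, not part of the authors' code:

```python
import math

CA0 = 1.0                      # Alfven speed on the slab axis (normalised)
CAINF = math.sqrt(10.0)        # C_Ainf/C_A0 = sqrt(d) for d = 10
KW = 2.0 * math.pi / 10.0      # |k| w for wavelength lambda = 10 w

def dispersion(vph):
    """LHS minus RHS of the dispersion equation quoted in the text."""
    lhs = KW * (vph**2 - CA0**2) / CA0**2 - 2.0 / KW
    rhs = 3.0 / CAINF * math.sqrt(CAINF**2 - vph**2)
    return lhs - rhs

# bisection: the two signs differ at the ends of the trapped-mode interval
lo, hi = CA0 + 1e-9, CAINF - 1e-9
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if dispersion(lo) * dispersion(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
vph = 0.5 * (lo + hi)
print(f"V_ph / C_A0 = {vph:.3f}")
```

The root, $V_\mathrm{ph} \approx 2.9\,C_\mathrm{A0}$, lies between the internal and external Alfv\'en speeds, as required for a trapped sausage mode.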
In addition, if there is a population of electrons accelerated up to mildly relativistic energies, such a system is a source of gyrosynchrotron emission, which is usually associated with solar radio emission in the microwave band. The GS source is populated with energetic electrons isotropically distributed over pitch-angles and power-law distributed over energy; the concentration of the electrons is assumed to be constant in the unperturbed source which, according to the above-described approach, occupies a part of the sausage-oscillating slab. The GS source is enclosed between pairs of magnetic field lines symmetric with respect to the $z$ axis. In the unperturbed slab, the magnetic field lines are straight lines at distances of $x/w=l/w$ from the $z$ axis; thus $|l/w|$ is the half-width of the GS source. We consider GS sources both narrower ($|l/w| \leq 1$) and wider ($|l/w| > 1$) than the characteristic slab width. Examples of the unperturbed electron density distributions are shown in Figure~\ref{f:rho_Bx_Bz} (panel (a)) by the grey rectangle for the narrow GS source ($|l/w| = 0.5$) and by the cross-hatched rectangle for the wide GS source ($|l/w| = 3$). Hereinafter, we omit the modulus sign from the width of the GS source, for simplicity. The sausage wave perturbs the parameters of the slab and, therefore, modifies the shape of the magnetic field lines and, thereby, curves the boundaries of the GS source along the slab. Examples of the pairs of symmetric perturbed magnetic field lines, which are the outer boundaries of the GS sources, are shown in Figure~\ref{f:rho_Bx_Bz} (panels (b), (c), (d)) by the same-coloured wavy curves. The local density of the non-thermal particles was assumed to be inversely proportional to the local width of the GS source, to ensure that the flux of the accelerated electrons through the cross-section of the slab is conserved. 
So, the modulation of the local distance between two symmetric magnetic field lines leads to the modulation of the electron density along the $z$ axis. Modulation of the energy and pitch-angle distributions by the MHD oscillations was neglected. We use the Fast Gyrosynchrotron Codes \citep{Fleishman2010, Kuznetsov2021} to calculate the radio emission from the slab. As input, these codes use the specified distributions of the emission source parameters (such as the thermal plasma density and temperature, the concentration and distribution characteristics of the nonthermal electrons, and the magnetic field strength and direction) along a line-of-sight. The codes include the contributions of both the GS (nonthermal) and free-free (thermal) emission mechanisms. However, we note that the thermal emission is several orders of magnitude weaker than the GS emission for the model parameters listed below. The codes calculate the resulting radio emission by integrating the radiation transfer equations for both the left-hand ($L$) and right-hand ($R$) polarised components, which correspond to either ordinary or extraordinary electromagnetic modes depending on the local magnetic field direction. We consider the following parameters of the unperturbed slab: magnetic field strength $B_0=200$~G, plasma temperature $T_0=10^7$~K, plasma density at the $z$ axis $\rho_\mathrm{max} = 5 \times 10^9$~cm$^{-3}$, plasma density outside the slab $\rho_\infty = \rho_\mathrm{max}/(d-1)$, where the density contrast $d = 10$ corresponds to the plasma in flaring loops, and the characteristic slab width $w=3\times 10^8$~cm. The nonthermal electrons in the unperturbed GS source are assumed to have a constant concentration of $n_{\mathrm{b}}=10^5$~$\textrm{cm}^{-3}$, a power-law energy distribution in the range from 0.1 to 10~MeV with a spectral index of $\delta = 3$, and an isotropic pitch-angle distribution. 
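These source parameters can be put in context with the standard expressions for the electron gyrofrequency, $f_B \approx 2.8\,\mathrm{MHz} \times B\,[\mathrm{G}]$, and the plasma frequency, $f_p \approx 8.98\,\mathrm{kHz} \times \sqrt{n_e\,[\mathrm{cm^{-3}}]}$. The quick consistency check below is ours, not from the paper:

```python
import math

B0 = 200.0     # magnetic field strength, G
NE = 5e9       # thermal electron density on the slab axis, cm^-3

# standard expressions for the electron gyro- and plasma frequencies
f_B = 2.8e6 * B0                 # Hz
f_p = 8.98e3 * math.sqrt(NE)     # Hz

print(f"gyrofrequency f_B ~ {f_B/1e9:.2f} GHz")
print(f"plasma freq.  f_p ~ {f_p/1e9:.2f} GHz")
print(f"17 GHz  ~ {17e9/f_B:.0f} f_B  (high harmonics)")
print(f"1.5 GHz ~ {1.5e9/f_B:.1f} f_B (low harmonics)")
```

For these values, 17~GHz corresponds to high gyroharmonics (optically thin), while 1.5~GHz corresponds to low harmonics near the spectral peak (optically thick), consistent with the two regimes analysed below.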
These parameters were chosen to provide the GS emission spectral peak at around 3--5~GHz, which is a typical value in solar flares. We consider the $z$ axis inclined relative to the line-of-sight by the viewing angle $\theta$, ranging from 40$^\circ$ to 89$^\circ$. \section{Light curves} \label{s:LC} The intensity of the GS emission depends strongly and non-linearly on both the local magnitude and direction of the magnetic field (see, for example, Equation~(1) in \citet{Fleishman2010} or the empirical approximation in \citet{1982ApJ...259..350D}). The standing sausage wave modulates the parameters of the GS source, which results in oscillations of the observed radio flux. Application of the Fast GS Codes allows one to obtain the distribution of the intensity ($I = R+L$) of the radio emission along the plane-of-sky, which is inclined relative to the $z$ axis by the angle $90^\circ - \theta$. Running the procedure for each time step during the period of the sausage wave, one obtains the temporal evolution of the intensity (the light curve). We split the period of the wave into 20 time segments, so, by definition, the period of the sausage wave is $P_\mathrm{saus} = 20$ time units (t.\,u.). We consider the light curves at two positions: $z_1$, which is a node of the sausage wave, and $z_2$, which is an anti-node (see panel (b) in Figure~\ref{f:rho_Bx_Bz}). A few representative examples of the light curves, smoothed with cubic spline interpolation, are presented in Figures~\ref{f:timeprofs_17_50}--\ref{f:timeprofs_03_50} (upper row in each figure). The light curves are coloured green for $z_1$ and blue for $z_2$. We consider both the optically thin emission at 17~GHz (Figures~\ref{f:timeprofs_17_50}--\ref{f:timeprofs_17_70}) and the optically thick emission at 1.5~GHz (Figure~\ref{f:timeprofs_03_50}). In each of these figures, the upper left panel corresponds to a narrower GS source ($l/w = 0.1$), while the upper right panel corresponds to a wider GS source ($l/w = 3$ or $l/w = 1.5$). 
Figures~\ref{f:timeprofs_17_50} and \ref{f:timeprofs_03_50} correspond to $\theta = 50^\circ$, while Figure~\ref{f:timeprofs_17_70} corresponds to $\theta = 70^\circ$. \section{Results} \label{s:results} It is clearly seen in Figures~\ref{f:timeprofs_17_50}--\ref{f:timeprofs_03_50} that the light curves deviate from a sinusoidal shape, i.e., the response of the radio emission to the linear MHD perturbation is non-linear. Moreover, this deviation depends on the width of the GS source $l/w$ and the viewing angle $\theta$. For example, for $\theta = 50^\circ$, the non-linearity is more pronounced for the narrower GS source, manifested as a flattening of the minima or maxima of the light curves (e.g., the blue curve in panel (a) of Figure~\ref{f:timeprofs_17_50}). In contrast, for $\theta = 70^\circ$, the flattening and hence the non-linearity is more pronounced for the wider GS source (see both the blue and green curves in panel (b) of Figure~\ref{f:timeprofs_17_70}). To estimate the degree of the non-linearity, we calculated the modulation depth $$ \Delta = \frac{I_\mathrm{max} - I_\mathrm{min}}{I_\mathrm{mean}}, $$ where $I_\mathrm{max}$ and $I_\mathrm{min}$ are the maximal and minimal intensities during the MHD wave period, and $I_\mathrm{mean}$ is the respective mean value. For the narrower GS source, the modulation depths are $\Delta \sim 100$--200\% and $\Delta \sim 60$--130\% in the optically thin and optically thick regimes, respectively. The high values of $\Delta$ are comparable with those usually associated with the quasi-periodic injection of electrons \citep[e.g.,][]{2016SoPh..291.3427K}. The deep modulation can be explained by the location of the narrow GS source near the $z$ axis, where the oscillation magnitude of the waveguide parameters is maximal. For the wider GS source, the modulation depth is lower: $\Delta \sim 10$--20\% for both the optically thin and optically thick emissions. 
On the other hand, the non-linear effect is much stronger: an extra peak or dip in the light curves appears either at $t = P_\mathrm{saus}/4 = 5$ (as in the green curves in panel (b) of Figure~\ref{f:timeprofs_17_50} and panel (a) of Figure~\ref{f:timeprofs_03_50}, or as in the blue curve in panel (b) of Figure~\ref{f:timeprofs_03_50}) or at $t = 3P_\mathrm{saus}/4 = 15$ (as in the blue curves in panel (b) of Figure~\ref{f:timeprofs_17_50} and panel (a) of Figure~\ref{f:timeprofs_03_50}). The amplitude of the extra peak/dip varies from around 10\% to 100\% of the above-mentioned modulation depth $\Delta$ for different combinations of $l/w$ and $\theta$. The lower panels in Figures~\ref{f:timeprofs_17_50}--\ref{f:timeprofs_03_50} demonstrate the ideal Fourier periodograms, where one of the peaks corresponds to the principal period of the sausage wave at $P_1 = P_\mathrm{saus} = 20$~t.\,u. The non-linear response of the radio emission mechanism to the MHD oscillation results in the appearance of the second harmonic with the period $P_2 = 10$~t.\,u. The principal period dominates in most of the periodograms, while the second harmonic is less intense. However, for some combinations of $l/w$ and $\theta$, the Fourier power of the second harmonic becomes higher; an example is presented in Figure~\ref{f:timeprofs_03_50} (panel (d)). We should stress that the second harmonic obtained in our study, whose period exactly equals the half-period of the principal harmonic, $P_2 = 0.5 P_1$, differs from the second harmonic of the eigen sausage mode, whose period is greater than $0.5 P_1$ because of the wave dispersion \citep[e.g.,][]{2005LRSP....2....3N}. We characterize the degree of non-linearity by the ratio ($\xi$) of the Fourier power of the second harmonic to that of the principal harmonic. 
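The harmonic power ratio $\xi$ can be illustrated with a synthetic example: a quadratic response to a purely sinusoidal perturbation generates a second harmonic, since $(1+\epsilon\cos\theta)^2 = 1 + \epsilon^2/2 + 2\epsilon\cos\theta + (\epsilon^2/2)\cos 2\theta$. The light curve below is synthetic (the actual GS response is far more complicated); only the 20 samples per period match the time sampling used in the paper:

```python
import math, cmath

def dft_power(samples, k):
    """Power of the k-th Fourier harmonic of a periodic light curve."""
    n = len(samples)
    c = sum(s * cmath.exp(-2j * math.pi * k * i / n) for i, s in enumerate(samples))
    return abs(c) ** 2

P, N = 20, 20     # period and number of samples, as in the paper
eps = 0.3         # amplitude of the linear perturbation (illustrative)

# quadratic (nonlinear) response to a linear sinusoidal perturbation
lc = [(1 + eps * math.cos(2 * math.pi * t / P)) ** 2 for t in range(N)]

xi = dft_power(lc, 2) / dft_power(lc, 1)
print(f"harmonic power ratio xi = {xi:.4f}")
```

For this toy response, the ratio of Fourier amplitudes is $(\epsilon^2/2)/(2\epsilon) = \epsilon/4$, so $\xi = (\epsilon/4)^2 \approx 0.0056$ for $\epsilon = 0.3$, which the computed value reproduces exactly.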
Examples of the $\xi$ maps, as functions of the source width and the viewing angle, are shown in Figure~\ref{f:Fourier_ratios} for both the optically thin (panel (a)) and optically thick (panel (b)) regimes. The maps correspond to the anti-node, $z_2$ (see Figure~\ref{f:rho_Bx_Bz}); note that we plot $\log_{10} \xi$ values here. In the optically thin regime, the $\xi$ values increase with increasing width $l/w$ for most angles $\theta$. In addition, there is a pronounced feature at viewing angles around $\theta = 63^\circ$, where the second harmonic dominates over the principal harmonic. In the optically thick regime, the map is more complicated. The different characteristic properties of the ratio $\xi$ in the two cases are also clearly seen along the cross-sections of the maps at selected angles $\theta$ ($\theta = 50$, 60, 70, and 80$^\circ$), marked by the vertical lines in the two upper panels of Figure~\ref{f:Fourier_ratios}. The cross-sections are presented in panels (c) and (d) of Figure~\ref{f:Fourier_ratios} for the optically thin and optically thick regimes, respectively. \section{Summary and discussion} \label{s:discussion} We performed simulations of the microwave radio response to a standing sausage wave perturbing a zero-$\beta$ plasma slab and, thereby, the source of the GS emission modulated by the sausage wave. The axis of the slab is directed along the magnetic field, and the perpendicular profile of the plasma density is smooth. The principal novel and important element which distinguishes this study from previous ones is that the GS-emitting electrons fill only a layer stretched along the axis of the slab and localised in the perpendicular direction, i.e., the oscillating and emitting volumes do not coincide. Our results confirm that the microwave response to a linear MHD wave can be highly non-linear. 
This result was expected from, for example, simplified expressions for the emission and absorption coefficients \citep{1982ApJ...259..350D}, and was found previously in microwave light curves simulated with a simplified model \citep{2012ApJ...748..140M}. The principally new result of our study is that the degree of the non-linearity depends on both the relative width of the GS source inside the oscillating volume and the viewing angle. The non-linearity appears as a deviation of the microwave intensity light curve from the sinusoidal shape of the sausage oscillation. This results in the appearance of the second and higher harmonics of the monochromatic sausage oscillation in the Fourier spectra of the time-varying GS signal. Analysis of the degree of non-linearity, defined here as the ratio of the Fourier power of the second harmonic to that of the principal harmonic, allowed us to determine the conditions under which the non-linear effect is strongest. The effect found can be considered as a potential tool for the diagnostics of the transverse scales of the regions containing accelerated electrons within sausage-oscillating plasma structures. If we neglect the effect of non-thermal electron diffusion across the magnetic field and take into account the divergence of magnetic tubes in the cusp, this transverse size can be considered as a characteristic perpendicular size of the acceleration region and, therefore, as an upper limit for the perpendicular size of the magnetic reconnection region. Note that previously this size was determined from numerical simulations and was prescribed by certain MHD models. The results presented in our paper open up the possibility of estimating the spatial scales of the reconnection process independently, by analysing the QPPs in the microwave emission. 
This independent (observational) knowledge is important for empirically constraining QPP models involving reconnection, for example, the coalescence of two magnetic loops \citep[][]{1987ApJ...321.1031T, 2016PhRvE..93e3205K}, the magnetic tuning fork model \citep[][]{2016ApJ...823..150T}, and also flapping oscillations of the macroscopic current sheet above the loop arcade \citep[][]{2012SoPh..277..283A}. In addition, the results obtained are relevant to the interpretation of multi-modal QPPs in solar \citep[e.g.,][]{2005A&A...439..727M} and stellar \citep[e.g.,][]{1983ApJ...272L..15L} flares, as the observed second, and sometimes third, time harmonics can correspond to the same monochromatic MHD mode. However, the second harmonic obtained in our study differs from the second harmonic of the eigen sausage mode: the non-linearity of the microwave response makes its period exactly equal to the half-period of the principal harmonic, while for the eigen sausage mode this is not the case, because of the wave dispersion \citep[e.g.,][]{2009A&A...493..259I}. We should stress that our model is deliberately simple, aiming to demonstrate the discussed effect. A detailed analysis needs to account for the 3D geometry \citep[see, e.g.,][]{2015ApJ...801...23L, 2015ApJ...812...22C, 2015ApJ...810...87L, 2021MNRAS.505.3505K}, the beam size of a radio instrument, the variation of the GS spectrum, wave dispersion, etc. An important research avenue is forward modelling of the manifestation of sausage oscillations in the data delivered by the future Square Kilometre Array (SKA) instrument, which is expected to resolve these oscillations well in time, space, and GS spectrum \citep{2015aska.confE.169N}. \section*{Acknowledgements} This study is supported by the Russian Science Foundation project No. 21-12-00195. \section*{Data Availability} The results obtained in the paper are theoretical; no real observations have been used. 
Distributions of parameters in the magneto-plasma slab were obtained using the equations in Section~\ref{s:MHD}, and their digital version is available upon request from the corresponding author. \bsp % \label{lastpage}
Title: High-order isospin-dependent surface tension contribution to the fourth-order symmetry energy of finite nuclei
Abstract: The relation between the fourth-order symmetry energy $E_{\rm{sym,4}}(\rho_0)$ of nuclear matter at saturation density $\rho_0$ and its counterpart $a_{\rm{sym,4}}(A)$ of finite nuclei in a semiempirical nuclear mass formula is revisited by considering the high-order isospin-dependent surface tension contribution to the latter. We derive the full expression of $a_{\rm{sym,4}}(A)$, which includes explicitly the high-order isospin-dependent surface tension effects, and find that the value of $E_{\rm{sym,4}}(\rho_0)$ cannot be extracted from the measured $a_{\rm{sym,4}}(A)$ before the high-order surface tension is well constrained. Our results imply that a large $a_{\rm{sym,4}}(A)$ value of several MeVs obtained from analyzing nuclear masses can nicely agree with the empirical constraint of $E_{\rm{sym,4}}(\rho_0)\lesssim 2$ MeV from mean-field models and does not necessarily lead to a large $E_{\rm{sym,4}}(\rho_0)$ value of $\approx 20$ MeV obtained previously without considering the high-order surface tension. Furthermore, we also give the expression for the sixth-order symmetry energy $a_{\rm{sym,6}}(A)$ of finite nuclei, which involves more nuclear matter bulk parameters and the higher-order isospin-dependent surface tension.
https://export.arxiv.org/pdf/2208.10438
\title{High-order isospin-dependent surface tension contribution to the fourth-order symmetry energy of finite nuclei} \author{Bao-Jun Cai\footnote{bjcai87@gmail.com}} \affiliation{Quantum Machine Learning Laboratory, Shadow Creator Inc., Shanghai 201208, China} \author{Rui Wang\footnote{wangrui@sinap.ac.cn}} \affiliation{Key Laboratory of Nuclear Physics and Ion-beam Application (MOE), Institute of Modern Physics, Fudan University, Shanghai 200433, China} \author{Zhen Zhang\footnote{zhangzh275@mail.sysu.edu.cn}} \affiliation{Sino-French Institute of Nuclear Engineering and Technology, Sun Yat-Sen University, Zhuhai 519082, China} \author{Lie-Wen Chen\footnote{Corresponding author; lwchen@sjtu.edu.cn}} \affiliation{School of Physics and Astronomy and Shanghai Key Laboratory for Particle Physics and Cosmology, Shanghai Jiao Tong University, Shanghai 200240, China} \date{\today} \section{Introduction}\label{S1} The fourth-order symmetry energy of nuclear matter plays an important role in determining the properties of neutron stars such as the proton fraction and the core-crust transition density/pressure\,\cite{Zha01,Ste06,Xu09,Cai12,Sei14,Gon17,PuJ17}, with the former being critical to neutron star cooling\,\cite{Lat91,Yak01,Yak04}. Mathematically, the fourth-order symmetry energy of nuclear matter is defined as $E_{\rm{sym,4}}(\rho)\equiv24^{-1} \partial^4E(\rho,\delta)/\partial\delta^4|_{\delta=0}$ where $E(\rho,\delta)$ is the equation of state (EOS) of an asymmetric nucleonic matter (ANM) at density $\rho = \rho_{\rm{n}}+\rho_{\rm{p}}$ and isospin asymmetry $\delta=(\rho_{\rm{n}}-\rho_{\rm{p}})/(\rho_{\rm{n}}+\rho_{\rm{p}})$ with $\rho_{\rm{n}}$($\rho_{\rm{p}}$) denoting the neutron(proton) density\,\cite{Dan02,ditoro,LCK08,Tesym,Col14,Bal16,Oer17,Garg18,LiBA18}. 
Most theoretical model calculations indicate that the $E_{\rm{sym,4}}(\rho)$ is much smaller than its preceding term in the ANM EOS, namely, the nuclear symmetry energy defined similarly as $E_{\rm{sym}}(\rho)\equiv2^{-1} \partial^2E(\rho,\delta)/\partial\delta^2|_{\delta=0}$, and this empirical fact can be illustrated from the free Fermi gas (FFG) model. In particular, in the relativistic FFG model, one has the ratio $\Psi\equiv E_{\rm{sym,4}}(\rho)/E_{\rm{sym}}(\rho)=108^{-1}\cdot(10\nu^4+11\nu^2+4)/(\nu^4+2\nu^2+1)$ with $\nu=k_{\rm{F}}/M$, where $M$ is the nucleon rest mass and $k_{\rm{F}}$ is nucleon Fermi momentum in the symmetric nucleonic matter (SNM). Consequently, the $\Psi$ takes the value in the range of $1/27\leq\Psi\leq 5/54$\,\cite{CaiLi2022,CaiLi2022a}. For example, the $E_{\rm{sym,4}}(\rho_0)$ in the non-relativistic FFG model at the saturation density $\rho_0\approx0.16\,\rm{fm}^{-3}$ is about 0.45\,MeV by taking $M\approx939\,\rm{MeV}$, which is much less than the value of $\sim 13$ MeV for the symmetry energy $E_{\rm{sym}}(\rho_0)$ in the non-relativistic FFG model. When considering nucleon-nucleon interactions, essentially all the predictions on the fourth-order symmetry energy using either phenomenological models\,\cite{Cai12,Sei14,Gon17,PuJ17,Che09,Agr17} or microscopic many-body theories\,\cite{Lee98,Bom91,Kai15} point to the self-consistent constraint $E_{\rm{sym,4}}(\rho_0)\lesssim2\,\rm{MeV}$. On the other hand, a systematic expansion (i.e., the ``leptodermous'' expansion) of the energy per nucleon in a finite nucleus is usually made in terms of the small quantity $I\equiv(N-Z)/A\approx\delta$ and the length parameter $A^{-1/3}$, such that $B(N,Z)/A=\sum_{i,j=0,1,2,\cdots}\overline{B}_{ij}I^{2i}A^{-j/3}$ plus terms like the Coulomb and pairing contributions\,\cite{Mye69,Brack85}, here $N$ and $Z$ are the neutron and proton numbers in a finite nucleus with mass number $A=N+Z$, respectively. 
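The FFG estimates quoted above can be reproduced with a short numerical sketch (not part of the original analysis; it assumes only $\rho_0=0.16\,\rm{fm}^{-3}$ and $M\approx939$ MeV as quoted in the text, plus the standard non-relativistic FFG expansion $E_{\rm{sym}}=k_{\rm{F}}^2/6M$ and $E_{\rm{sym,4}}=k_{\rm{F}}^2/162M$):

```python
import math

HBARC = 197.327   # conversion constant hbar*c in MeV fm
M = 939.0         # nucleon rest mass, MeV
RHO0 = 0.16       # saturation density, fm^-3

# Fermi momentum of symmetric matter: rho = 2 kF^3 / (3 pi^2)
kF = (1.5 * math.pi**2 * RHO0) ** (1.0 / 3.0)   # fm^-1, ~1.33
eF = (HBARC * kF) ** 2 / (2.0 * M)              # Fermi energy, MeV

# Non-relativistic FFG expansion coefficients in delta
e_sym  = eF / 3.0    # = kF^2/(6M),  ~12.3 MeV
e_sym4 = eF / 81.0   # = kF^2/(162M), ~0.45 MeV

def psi(nu):
    """Relativistic FFG ratio Psi = E_sym,4/E_sym with nu = kF/M."""
    return (10*nu**4 + 11*nu**2 + 4) / (108.0 * (nu**4 + 2*nu**2 + 1))

print(f"kF = {kF:.3f} fm^-1, E_sym = {e_sym:.2f} MeV, E_sym,4 = {e_sym4:.3f} MeV")
print(f"Psi(nu->0) = {psi(1e-8):.5f} (= 1/27), Psi(nu->inf) = {psi(1e8):.5f} (= 5/54)")
```

Running this recovers $E_{\rm{sym,4}}(\rho_0)\approx0.45$ MeV, $E_{\rm{sym}}(\rho_0)\approx12$--13 MeV, and the two limiting values of $\Psi$ quoted in the text.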
Specifically, the classic Bethe--Weizs\"{a}cker mass formula gives\,\cite{Mye69} \begin{align} B(N,Z) =& -a_{\rm{v}}A + a_{\rm{sur}}A^{2/3} + a_{\rm{cou}}\frac{Z^2(1 - Z^{-2/3})}{A^{1/3}}\notag\\ &+ a_{\rm{a}}I^2A + a_{\rm{p}}\frac{(-1)^{N} + (-1)^Z}{A^{2/3}}, \label{BW} \end{align} where $a_{\rm{v}}$ is the volume energy coefficient, $a_{\rm{sur}}$ is the surface energy coefficient, $a_{\rm{cou}}$ characterizes the Coulomb interaction between protons, $a_{\rm{a}}$ is the so-called symmetry energy coefficient of finite nuclei, and $a_{\rm{p}}$ is the pairing energy coefficient. By analyzing nuclear masses, these coefficients are found to be about $a_{\rm{v}}\approx15.7\,\rm{MeV},a_{\rm{sur}}\approx18.6\,\rm{MeV}, \rm{and}~ a_{\rm{cou}}\approx0.7\,\rm{MeV}$\,\cite{Mye69}; see also Ref.\,\cite{Mye96}. By considering the isospin correction to the coefficient of nuclear surface tension [see formula (\ref{st_2})], the symmetry energy $a_{\rm{a}}$ appearing in Eq.~(\ref{BW}) can be generalized to\,\cite{Dan03} \begin{equation}\label{finEsym-xx} a_{\rm{a}}\to a_{\rm{sym}}(A)=\frac{\alpha}{1+(\alpha/\beta)A^{-1/3}}, \end{equation} which depends only on the mass number $A$. Here $\alpha=E_{\rm{sym}}(\rho_0)$ and $\beta$ is the so-called surface symmetry energy; for details on $\beta$ see the relevant discussion in Sec.~\ref{S2}. Generally, even higher-order contributions can be considered in the semi-empirical Bethe--Weizs\"{a}cker mass formula in the form of $a_{\rm{a}}I^2A\to (a_{\rm{a}}I^2+a_{\rm{a,4}}I^4+\cdots)A$, where $a_{\rm{a,4}}$ is the fourth-order symmetry energy coefficient of finite nuclei, which is the main topic of the present work. The $a_{\rm{a,4}}$ coefficient was recently found to have a sizable value by analyzing the double difference of the ``experimental'' symmetry energy extracted from nuclear masses\,\cite{AME2012}. 
Specifically, $a_{\rm{a,4}}\approx 3.28\pm0.50$\,MeV was extracted in Ref.\,\cite{Jia14} and $a_{\rm{a,4}}\approx8.33\pm1.21$\,MeV in Ref.\,\cite{Tia16}. Moreover, in Ref.\,\cite{Jia15}, the fourth-order symmetry energy of finite nuclei was investigated by fitting nuclear mass data via the nuclear mass formula with two different forms of the Wigner energy, and the obtained constraint on the $a_{\rm{a,4}}$ is about $3.91\pm0.10\,\rm{MeV}$. Some questions naturally emerge: What does a sizable $a_{\rm{a,4}}$ imply for the value of $E_{\rm{sym,4}}(\rho_0)$? Is the fourth-order symmetry energy of nuclear matter also large and thus inconsistent with the empirical constraint that $E_{\rm{sym,4}}(\rho_0)\lesssim2\,\rm{MeV}$? In this work, the full formula for the fourth-order symmetry energy of finite nuclei is derived, and the conclusion is that although the $a_{\rm{a,4}}$ obtained by fitting nuclear masses is sizable, the corresponding $E_{\rm{sym,4}}(\rho_0)$ can still be small enough to be consistent with the empirical constraints, owing to the high-order isospin-dependent surface tension effects. The paper is organized as follows. Section \ref{S2} gives a brief review of the current status of the symmetry energy and the fourth-order symmetry energy of finite nuclei, starting from the general discussion of Eq.\,(\ref{finEsym-xx}). In Sec.~\ref{S3}, the formula for the fourth-order symmetry energy of finite nuclei is given together with numerical demonstrations, with emphasis on the surface contribution. Section \ref{SS6} is devoted to the analytical expression for the sixth-order symmetry energy of finite nuclei. Section \ref{S4} gives the summary of the present work. \section{Symmetry Energies of Finite Nuclei: Relevant Status Review}\label{S2} We start our discussions by reviewing the status of the symmetry energy of finite nuclei. 
For a finite nucleus with $N$ neutrons and $Z$ protons, the total difference $\Delta=N-Z$ can be decomposed into two parts, $\Delta_{\rm{v}}=N_{\rm{v}}-Z_{\rm{v}}$ and $\Delta_{\rm{s}}=N_{\rm{s}}-Z_{\rm{s}}$, representing the isospin difference in the bulk of the nucleus and that distributed on its surface, respectively. Naturally one has $\Delta=\Delta_{\rm{v}}+\Delta_{\rm{s}}$. As more isospin asymmetry moves to the surface, the nucleus physically becomes looser, indicating that the surface tension of the nucleus becomes smaller compared with the one in which no isospin asymmetry is distributed on the surface. Moreover, whether it is neutrons or protons that are distributed more on the nucleus surface, the surface tension $\sigma=a_{\rm{sur}}/(4\pi r_0^2)$, where $r_0$ is the nuclear matter radius parameter defined by $4\pi\rho_0r_0^3/3 = 1$, always decreases. Consequently, the surface tension with some isospin asymmetry distributed on the nucleus surface can be written as\,\cite{Dan03} \begin{equation}\label{st_2} \sigma=\sigma_0-\gamma\mu_{\rm{a}}^2=\sigma_0\left[1-(\gamma/\sigma_0)\mu_{\rm{a}}^2\right], \end{equation} where $\mu_{\rm{a}}=(\mu_{\rm{n}}-\mu_{\rm{p}})/2$ is the chemical potential difference between neutrons and protons, and $\gamma$ is a parameter. The leading non-trivial contribution starting at order $\mu_{\rm{a}}^2$ reflects the aforementioned symmetry between neutrons and protons. Based on the relation (\ref{st_2}), Ref.\,\cite{Dan03} derived a closed expression for the symmetry energy of finite nuclei incorporating the surface effects, see formula (\ref{finEsym-xx}), where $\alpha\equiv a_{\rm{a}}^{\rm{v}}$ is the coefficient in front of $(N_{\rm{v}}-Z_{\rm{v}})^2/A$, and $\beta\equiv1/(16\pi r_0^2\gamma)\equiv a_{\rm{a}}^{\rm{s}}$. The $a_{\rm{a}}^{\rm{v}}$ and $a_{\rm{a}}^{\rm{s}}$ are the volume (bulk) symmetry energy and the surface symmetry energy\,\cite{Dan03,Dan09}, respectively. 
In the limit of large $A$, the isospin asymmetry is primarily stored within the bulk and the coefficient $a_{\rm{sym}}(A)$ tends towards $a_{\rm{a}}^{\rm{v}}$, i.e., we also have $a_{\rm{a}}^{\rm{v}}=E_{\rm{sym}}(\rho_0)$. On the other hand, in the limit of small mass number $A$, the storage of the isospin asymmetry is shifted to the surface and the ratio $a_{\rm{sym}}(A)/A$ scales as $a_{\rm{a}}^{\rm{s}}/A^{2/3}$. In addition, a relation can be established between the coefficient $\beta$ and the ones appearing in the nuclear droplet model\,\cite{Mye69}, i.e., $\beta={4Q}/{9}=({4H}/{9})/(1-{2P}/{3J})$, where $J\equiv E_{\rm{sym}}(\rho_0)=\alpha$, $Q$ is the neutron skin stiffness coefficient\,\cite{Mye96}, and the constants $H$, $P$, and $G$ (with $G = 3JP/2Q$) describe the dependence of the surface energy on the bulk isospin asymmetry and on the normalized size of the neutron skin. The uncertainties on the symmetry energy of finite nuclei are mainly due to those on the surface symmetry energy coefficient $\beta$; e.g., Ref.\,\cite{Dan03} constrained the ranges for the parameters $\alpha$ and $\beta$ as 27\,MeV$\lesssim\alpha\lesssim$31\,MeV and $11\,\rm{MeV}\lesssim\beta\lesssim14\,\rm{MeV}$ and their ratio as $2.0\lesssim\alpha/\beta\lesssim2.8$\,\cite{Dan03}. Similarly, an earlier analysis via the Thomas--Fermi model gives $Q\approx35.4$\,MeV, corresponding to $\beta\approx15.7$\,MeV, and the still earlier droplet model gave $Q\approx16\,\rm{MeV}$ and $\beta\approx 7.1\,\rm{MeV}$\,\cite{Mye69}. Recently, based on an analysis of the isobaric analog states (IAS), the volume symmetry energy coefficient and the surface symmetry energy coefficient were found to be about $a_{\rm{a}}^{\rm{v}}=\alpha\approx35.3\,\rm{MeV}$ and $a_{\rm{a}}^{\rm{s}}=\beta\approx 9.7\,\rm{MeV}$, respectively\,\cite{Dan14}, and consequently $\alpha/\beta\approx3.6$. 
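Despite the spread in the individual $(\alpha,\beta)$ determinations quoted above, the mass-dependent symmetry energy they imply for a heavy nucleus is quite stable. A minimal sketch (not from the paper; the $(\alpha,\beta)$ pairs are the values quoted in the text, with the Danielewicz (2003) range represented by its midpoints) evaluating Eq.~(\ref{finEsym-xx}) at $A=208$:

```python
def a_sym(A, alpha, beta):
    """Mass-dependent symmetry energy of finite nuclei:
    a_sym(A) = alpha / (1 + (alpha/beta) A^(-1/3))."""
    return alpha / (1.0 + (alpha / beta) * A ** (-1.0 / 3.0))

# (alpha, beta) pairs in MeV, as quoted in the text
cases = {
    "Danielewicz (2003), midpoints": (29.0, 12.5),
    "IAS analysis":                  (35.3, 9.7),
    "IAS + neutron skin":            (33.2, 10.7),
}
for label, (al, be) in cases.items():
    print(f"{label:30s} a_sym(208) = {a_sym(208, al, be):5.2f} MeV")
```

All three parameter sets give $a_{\rm{sym}}(208)\approx21$--22\,MeV: the anticorrelation between $\alpha$ and $\beta$ largely cancels for heavy nuclei, which is precisely why the individual coefficients, and hence the ratio $\alpha/\beta$, remain poorly constrained.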
On the other hand, when combining the IAS analysis and the neutron skin thickness ($\Delta r_{\rm{np}}$) constraint, the relation between the coefficients $a_{\rm{a}}^{\rm{v}}$ and $a_{\rm{a}}^{\rm{s}}$ is found to be slightly changed. Specifically, the $a_{\rm{a}}^{\rm{v}}$ and the $a_{\rm{a}}^{\rm{s}}$ obtained in the half-infinite calculation are found to be about 30.2-33.7\,MeV and 14.8-18.5\,MeV, leading to the ratio $\alpha/\beta\approx1.92\pm0.24$, while the best values from the combined analysis of the IAS and the $\Delta r_{\rm{np}}$ are about 33.2\,MeV and 10.7\,MeV, respectively\,\cite{Dan14}, and thus $\alpha/\beta\approx3.1$. We have attempted to review the relevant investigations of the coefficients $a_{\rm{a}}^{\rm{v}}$ and $a_{\rm{a}}^{\rm{s}}$. The main point we would like to stress here is that these uncertainties may further induce essential uncertainties on the fourth-order symmetry energy of finite nuclei through the ratio $\alpha/\beta$. It should be pointed out that if the nuclear surface tension is truncated as in Eq.~(\ref{st_2}), there would be no higher-order symmetry energies from the surface contribution in finite nuclei. However, it is natural and physical that the nuclear surface tension should include higher-order contributions, i.e., \begin{equation}\label{st-kk} \sigma\approx\sigma_0-\gamma\mu_{\rm{a}}^2-\gamma'\mu_{\rm{a}}^4-\gamma''\mu_{\rm{a}}^6+\cdots, \end{equation} where $\gamma'$ and $\gamma''$, etc., are effective parameters. In this situation, one obtains higher-order terms in the binding energy per nucleon of finite nuclei, which is the starting point for the calculations of the present work. 
Recently, the bulk term $a_{\rm{sym,4}}^{\rm{v}}(A)$ of the fourth-order symmetry energy $a_{\rm{sym,4}}(A)$ of finite nuclei was derived using a variational method in Ref.\,\cite{WangRui17} based on the lowest-order isospin truncation of Eq.\,(\ref{st_2}) for the surface tension, and the analytic expression is obtained as \begin{equation}\label{a_a4v} a_{\rm{sym,4}}^{\rm{v}}(A)=\frac{1}{(1+\alpha/\beta A^{1/3})^4}\left(E_{\rm{sym},4}(\rho_0)-\frac{L^2}{2K_0}\right). \end{equation} Based on this formula, Ref.\,\cite{WangRui17} claimed that the $E_{\rm{sym,4}}(\rho_0)$ should be as large as about $20\,\rm{MeV}$ if $a_{\rm{a,4}}\approx3.28\,\rm{MeV}$ is used\,\cite{Jia14}. Such a large value of $E_{\rm{sym,4}}(\rho_0)\approx 20$ MeV is partially due to the term $L^2/2K_0$, which is about 4.2\,MeV if $L\approx45\,\rm{MeV}$\,\cite{LiBA13,Zha13,Oer17} and $K_0\approx240\,\rm{MeV}$\,\cite{You99,Shl06,Che12,Garg18} are adopted, and this puzzlingly large value significantly exceeds the empirical constraint of $E_{\rm{sym,4}}(\rho_0)\lesssim2\,\rm{MeV}$. In the present work, we show that a high-order surface contribution will appear in the fourth-order symmetry energy of finite nuclei due to the high-order isospin-dependent surface tension ($\gamma'$) as in Eq.\,(\ref{st-kk}), and thus a large $a_{\rm{a,4}}$ value of several MeVs obtained from analyzing nuclear masses does not have to lead to the large $E_{\rm{sym,4}}(\rho_0)$ value of $\approx 20$~MeV obtained without considering the high-order isospin-dependent surface tension. 
\section{Fourth-order Symmetry Energy of Finite Nuclei: Full Expression}\label{S3} Generally, one can obtain the following equations to determine the bulk and surface isospin asymmetries as well as the chemical potential difference in finite nuclei\,\cite{Dan03}, \begin{equation}\label{ther33} \Delta_{\rm{v}}+\Delta_{\rm{s}}=\Delta,~~2\alpha \Delta_{\rm{v}}/A-\mu_{\rm{a}}=0,~~\Delta_{\rm{s}}/S+\d\sigma/\d\mu_{\rm{a}}=0, \end{equation} where $S=4\pi r_0^2A^{2/3}$ is the surface area of the nucleus, with radius $R=r_0A^{1/3}$. It should be pointed out that the equations shown in (\ref{ther33}) are not complete: since higher-order terms such as $\gamma'\mu_{\rm{a}}^4$ induce higher-order symmetry energies in finite nuclei related to the surface properties, one needs to add the bulk term $4\alpha_4(\Delta_{\rm{v}}/A)^3\equiv 4E_{\rm{sym,4}}(\rho_0)(\Delta_{\rm{v}}/A)^3$, originating from the fourth-order symmetry energy, to the second equation and solve self-consistently. However, since this part was already obtained in Ref.\,\cite{WangRui17}, for the purpose of the present work we will not repeat the detailed derivations here and only focus on the surface contribution. \subsection{Effective Symmetry Energies for Finite Nuclei} By introducing the function $f=\sigma/\sigma_0$, one obtains in the situation $f=1-\theta\mu_{\rm{a}}^2\equiv 1+y$ (with $\theta\equiv\gamma/\sigma_0$ and $y\equiv -\theta\mu_{\rm{a}}^2$) the following expressions for $\Delta_{\rm{v}},\Delta_{\rm{s}}$ and $\mu_{\rm{a}}$, \begin{equation}\label{exex} \frac{\Delta_{\rm{v}}}{\Delta}=\frac{1}{1+\phi},~~\frac{\Delta_{\rm{s}}}{\Delta}=\frac{\phi}{1+\phi},~~\mu_{\rm{a}}=\frac{2\alpha}{A}\frac{\Delta}{1+\phi}, \end{equation} where $\phi=\alpha/\beta A^{1/3}$. 
The effective symmetry energy (appearing in the mass formula in the form of $a_{\rm{a}}^{\rm{eff}}(N,Z)I^2A$) in finite nuclei can be obtained as \begin{equation} a_{\rm{a}}^{\rm{eff}}(N,Z)=\left.\left[\frac{\alpha \Delta_{\rm{v}}^2}{A}+\mu_{\rm{a}}\Delta_{\rm{s}}+\sigma_0S(f-1)\right]\right/AI^2, \end{equation} which includes the effects from higher-order symmetry energies as $a_{\rm{a}}^{\rm{eff}}(N,Z)\approx a_{\rm{sym}}(A)+a^{\rm s}_{\rm{sym,4}}(A)I^2+\cdots$. In particular, for $f=1-\theta\mu_{\rm{a}}^2$ the effective symmetry energy $a_{\rm{a}}^{\rm{eff}}(N,Z)$ reduces to $a_{\rm{sym}}(A)=\alpha/(1+\alpha/\beta A^{1/3})$. Moreover, in this simple model there is only one effective parameter $\theta$, and it is determined uniquely by the surface symmetry energy coefficient $\beta$ and the surface tension $\sigma_0$. Similarly, the effective fourth-order symmetry energy of finite nuclei is defined as \begin{equation}\label{oka4} a_{\rm{a,4}}^{\rm{eff}}(N,Z)=[{a_{\rm{a}}^{\rm{eff}}(N,Z)-a_{\rm{sym}}(A)}]/{I^2}, \end{equation} which contains even higher-order contributions, e.g., the sixth-order symmetry energy. As long as the function $f$ depends on $\mu_{\rm{a}}$ only through the combination $y=-\theta\mu_{\rm{a}}^2$, i.e., $f=f(y)$, it can be shown straightforwardly that the bulk and the surface isospin asymmetries are given, by generalizing the first two relations of (\ref{exex}), as \begin{align} \frac{\Delta_{\rm{v}}}{\Delta}=&\left(1+\phi \frac{\d f}{\d y}\right)^{-1},~~ \frac{\Delta_{\rm{s}}}{\Delta}=\phi\frac{\d f}{\d y}\left(1+\phi \frac{\d f}{\d y}\right)^{-1}.\label{ckck-12} \end{align} In this situation, the effective symmetry energy of finite nuclei can be obtained as \begin{align} a_{\rm{a}}^{\rm{eff}}(N,Z) =&\frac{\alpha(1+2\phi {\d f}/{\d y})}{(1+\phi {\d f}/{\d y})^2} +\frac{4\pi r_0^2\sigma_0}{I^2A^{1/3}}\left[f(y)-1\right],\label{aNZ} \end{align} and the $\mu_{\rm{a}}$ should be obtained self-consistently by solving the three equations of (\ref{ther33}). 
As $A\to\infty$ and $N-Z\to\infty$ with the ratio $I$ fixed, the chemical potential difference $\mu_{\rm{a}}$ will also be fixed for a given $I$. In particular, all effective models for $f$ tend to be the same in the large-$A$ limit and $\mu_{\rm{a}}\approx\mu_{\rm{a}}^{\infty}\equiv2\alpha I$, or equivalently $2\mu_{\rm{a}}=\mu_{\rm{n}}-\mu_{\rm{p}}\approx 4\alpha\delta$, indicating that the $a_{\rm{a,4}}^{\rm{eff}}(N,Z)$ approaches zero in this limit. It should be remembered that only the surface contribution to the fourth-order symmetry energy is studied here; see the comments given at the beginning of this section. If $f=1+y$ is adopted, then Eq.\,(\ref{aNZ}) naturally reduces to $\alpha/(1+\phi)$. \subsection{Surface Fourth-order Symmetry Energy} Formula (\ref{aNZ}) itself can be used to derive the fourth-order symmetry energy of finite nuclei. We start from formula (\ref{aNZ}) by assuming that the function $f$ takes the form $ f\approx1+y+\kappa y^2$, where $\kappa=-\gamma'\sigma_0/\gamma^2$ is an effective parameter characterizing the fourth-order contribution to the isospin splitting of $f=\sigma/\sigma_0$ [see Eq.~(\ref{st-kk})]. Consequently one has $f'\equiv\d f/\d y=1+2y\kappa$, and the first term in Eq.~(\ref{aNZ}) becomes \begin{align}\label{dk-1} \frac{\alpha(1+2\phi f')}{(1+\phi f')^2}\approx\frac{\alpha(1+2\phi)}{(1+\phi)^2}-\frac{4\phi^2y\alpha}{(1+\phi)^3}\kappa, \end{align} to order $\kappa$. Similarly, the second term in Eq.~(\ref{aNZ}) is expanded as $ {4\pi r_0^2\sigma_0(y+y^2\kappa)}/{I^2A^{1/3}}$, where the first term here gives \begin{equation}\label{dk-2} \frac{4\pi r_0^2\sigma_0y}{I^2A^{1/3}}\approx-\frac{\alpha\phi}{(1+\phi)^2}-\frac{16\alpha^3\phi^2\theta I^2\kappa}{(1+\phi)^5}. \end{equation} Summing the first term of Eq.~(\ref{dk-1}) and the first term of Eq.~(\ref{dk-2}) gives the familiar formula for the symmetry energy of finite nuclei, i.e., $a_{\rm{sym}}(A)=\alpha/(1+\phi)$. 
Since we assume $\kappa$ is small, the $y$ in $4\pi r_0^2\sigma_0y^2\kappa/I^2A^{1/3}$ can be approximated as $y\approx-{4\theta\alpha^2I^2}/{(1+\phi)^2}$ (recalling that $\Delta/A=I$), and moreover $ \mu_{\rm{a}}\approx{2\alpha I}/({1+\phi})$. One then obtains \begin{align} -\frac{4\alpha\phi^2y}{(1+\phi)^3}\approx&\frac{16\alpha^3\phi^2\theta}{(1+\phi)^5}I^2,~~ \frac{4\pi r_0^2\sigma_0y^2}{I^2A^{1/3}}\approx\frac{4\alpha^3\phi\theta }{(1+\phi)^4}I^2, \end{align} where the relation $ {4\pi r_0^2\sigma_0y}/{I^2A^{1/3}}\approx-{\alpha\phi}/{(1+\phi)^2} $ is used for obtaining ${4\pi r_0^2\sigma_0y^2}/{I^2A^{1/3}}$ at this order. Combining all these terms gives the effective symmetry energy of finite nuclei to order $I^2\kappa$ as \begin{equation} a_{\rm{a}}^{\rm{eff}}(N,Z)\approx \frac{\alpha}{1+\phi} +\frac{4\alpha^3\phi\theta \kappa I^2}{(1+\phi)^4}, \end{equation} where the coefficient of $I^2$ in the second term gives the (surface) fourth-order symmetry energy of finite nuclei, i.e., \begin{equation}\label{a4ext} a^{\rm s}_{\rm{sym,4}}(A)=\frac{4\alpha^3\theta \kappa\phi}{(1+\phi)^4} =\left.\frac{4\alpha^4\theta\kappa}{\beta A^{1/3}}\right/\left(1+\frac{\alpha}{\beta A^{1/3}}\right)^4. \end{equation} By combining Eq.~(\ref{a4ext}) with the bulk contribution Eq.~(\ref{a_a4v}) derived in Ref.\,\cite{WangRui17}, we finally obtain the total fourth-order symmetry energy of finite nuclei as \begin{align}\label{def_asym4} a_{\rm{sym,4}}(A)=&\left(1+\frac{E_{\rm{sym}}(\rho_0)}{\beta A^{1/3}}\right)^{-4}\notag\\ &\times\left(E_{\rm{sym,4}}(\rho_0) -\frac{L^2}{2K_0}+\frac{4\theta\kappa E_{\rm{sym}}^4(\rho_0)}{\beta A^{1/3}}\right). 
\end{align} In Fig.\,\ref{fig_a4exact}, the $A$ dependence of the symmetry energy $a_{\rm{sym}}(A)$ (blue dash-dot line) and the surface fourth-order symmetry energy $a^{\rm s}_{\rm{sym,4}}(A)$ (magenta line) of finite nuclei are shown by adopting $\kappa=0.5$, and moreover $\alpha\approx 30\,\rm{MeV}, \beta\approx 15\,\rm{MeV},r_0\approx 1.12\,\rm{fm}$\,\cite{Dan03}, and $ \sigma_0\approx0.8\,\rm{MeV}/\rm{fm}^2$ are used for illustration. It is found that the fourth-order symmetry energy (due to the surface) has a weak dependence on the mass number $A$ within the given range. In addition, we have $ a_{\rm{sym}}(A)\to\beta A^{1/3}, a^{\rm s}_{\rm{sym},4}(A)\to4\kappa\theta\beta^3A$, and $a^{\rm s}_{\rm{sym},4}(A)/a_{\rm{sym}}(A)\to 4\kappa\theta\beta^2A^{2/3}$ as $A\to0$; all are independent of the bulk term $\alpha$ and approach zero as $A\to0$. For heavy nuclei the $\phi=\alpha/\beta A^{1/3}$ is generally smaller than unity since the mass number $A$ is large, leading to the approximation $ a^{\rm s}_{\rm{sym,4}}(A)\approx4\alpha^3\theta\kappa\phi= {\alpha^3A^{2/3}\kappa\phi}/{S\beta\sigma_0}$. The interesting feature of this approximate fourth-order symmetry energy is that it is linearly proportional to the factor $\phi$, which approaches zero in the infinite-matter limit, i.e., $ \lim_{A\to\infty}a^{\rm s}_{\rm{sym,4}}(A)=0\,\rm{MeV}$. It is no surprise that $a^{\rm s}_{\rm{sym},4}(A)$ approaches zero as $A\to\infty$, since it only reflects the surface part of the fourth-order symmetry energy of finite nuclei. Physically, the surface disappears as the mass number $A$ approaches infinity, i.e., the related fourth-order symmetry energy becomes zero in this limit; see the inset of Fig.\,\ref{fig_a4exact}. 
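The analytic surface term of Eq.~(\ref{a4ext}) can be cross-checked against an exact numerical solution of the conditions (\ref{ther33}) with $f=1+y+\kappa y^2$. The sketch below (an independent check, not part of the paper's calculations; it uses only the illustrative parameters quoted above, $\kappa=0.5$, $\alpha=30$\,MeV, $\beta=15$\,MeV, $r_0=1.12$\,fm, $\sigma_0=0.8\,\rm{MeV/fm^2}$) extracts the $I^2$ coefficient of $a_{\rm{a}}^{\rm{eff}}(N,Z)$ at small $I$ and compares it with the closed-form expression:

```python
import math

alpha, beta = 30.0, 15.0        # MeV
r0, sigma0 = 1.12, 0.8          # fm, MeV/fm^2
kappa = 0.5
# theta = gamma/sigma0, using beta = 1/(16 pi r0^2 gamma)
theta = 1.0 / (16.0 * math.pi * r0**2 * beta * sigma0)

def a4_surface(A):
    """Analytic surface fourth-order symmetry energy, Eq. (a4ext)."""
    phi = alpha / (beta * A ** (1.0 / 3.0))
    return 4.0 * alpha**3 * theta * kappa * phi / (1.0 + phi) ** 4

def a4_numeric(A, I=1e-3):
    """Extract the I^2 coefficient of a_a^eff(N,Z) from the exact
    solution of Eqs. (ther33) for f = 1 + y + kappa*y^2."""
    phi = alpha / (beta * A ** (1.0 / 3.0))
    # solve mu(1+phi) - 2*kappa*phi*theta*mu^3 = 2*alpha*I (fixed point)
    mu = 2.0 * alpha * I / (1.0 + phi)
    for _ in range(100):
        mu = (2.0 * alpha * I + 2.0 * kappa * phi * theta * mu**3) / (1.0 + phi)
    y = -theta * mu**2
    fp = 1.0 + 2.0 * kappa * y                      # df/dy
    f = 1.0 + y + kappa * y**2
    term1 = alpha * (1.0 + 2.0 * phi * fp) / (1.0 + phi * fp) ** 2
    term2 = 4.0 * math.pi * r0**2 * sigma0 * (f - 1.0) / (I**2 * A ** (1.0 / 3.0))
    a_eff = term1 + term2                           # Eq. (aNZ)
    return (a_eff - alpha / (1.0 + phi)) / I**2     # strip a_sym(A)

A_max = 27.0 * (alpha / beta) ** 3                  # = 216
peak = 27.0 * alpha**3 * theta * kappa / 64.0       # ~7.5 MeV
print(f"A_max = {A_max:.0f}, peak value = {peak:.2f} MeV")
for A in (60, 208, 1000):
    print(f"A = {A:4d}: analytic {a4_surface(A):.3f} MeV, "
          f"numeric {a4_numeric(A):.3f} MeV")
```

The numerical and analytic values agree to high accuracy, and the maximum indeed sits at $A_{\max}=27\alpha^3/\beta^3=216$ with $a^{\rm s}_{\rm{sym},4}(A_{\max})\approx7.5$\,MeV, as stated in the discussion that follows.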
However, it does not mean that the fourth-order symmetry energy for infinite matter should be zero, as in the above calculations the relevant term related to $E_{\rm{sym,4}}(\rho_0)$ is not included in the second equation of (\ref{ther33}), i.e., the fourth-order symmetry energy of finite nuclei obtained here is still characterized by the symmetry energy coefficients $\alpha$ and $\beta$ instead of the $E_{\rm{sym,4}}(\rho_0)$, see Ref.\,\cite{WangRui17} for the relevant discussions. Nonetheless, (\ref{a4ext}) is enough for our purpose. Moreover, the value of $A$ corresponding to the maximum of $a^{\rm s}_{\rm{sym},4}(A)$ could be found via $\partial a^{\rm s}_{\rm{sym},4}(A)/\partial A=0$, and this gives $A_{\max}=27\alpha^3/\beta^3\approx216$. Consequently, $a^{\rm s}_{\rm{sym},4}(A_{\max})=27\alpha^3\theta\kappa/64\approx7.5\,\rm{MeV}$; see the black dashed line of the inset in Fig.\,\ref{fig_a4exact}. We find that the empirical ratio $\alpha/\beta$ (near 2-3) coincidentally predicts that the fourth-order symmetry energy (due to the surface contribution) maximizes near the $^{208}\rm{Pb}$. Infinite matter (with $A\to\infty$) and finite nuclei (with $A$ being around 208) are fundamentally different from this perspective, and this explains the confusion given in Ref.\,\cite{WangRui17}. The relation $2\alpha\Delta_{\rm{v}}/A=\mu_{\rm{a}}$ could itself be solved perturbatively order by order. Under the assumption $f=1+y+\kappa y^2$, one obtains the equation for determining the chemical potential using the expression for $\Delta_{\rm{v}}/\Delta$ [see the relations (\ref{ckck-12})], \begin{equation}\label{mua-xxx} \mu_{\rm{a}}+\phi\mu_{\rm{a}}-2\kappa\phi\theta\mu_{\rm{a}}^3=2\alpha I. \end{equation} In the infinite matter limit the $\phi\to0$ and the equation gives $\mu_{\rm{a}}=\mu_{\rm{a}}^{\infty}\equiv 2\alpha I$. 
By treating both the $\phi$ and the $\kappa$ perturbatively at the same order, one writes down the $\mu_{\rm{a}}$ to second order as $ \mu_{\rm{a}}\approx \mu_{\rm{a}}^{\infty}(1+\varphi_1\phi+\varphi_2\kappa+\varphi_3\phi\kappa+\varphi_4\phi^2+\varphi_5\kappa^2)$, with $\varphi_1,\ldots,\varphi_5$ being five coefficients to be determined. Since the factor $\kappa\phi$ in front of $\mu_{\rm{a}}^3$ is already of second order, one can safely approximate the cube $\mu_{\rm{a}}^3$ as $\mu_{\rm{a}}^{\infty,3}$ at this order. Equation\,(\ref{mua-xxx}) becomes \begin{align} &\mu_{\rm{a}}^{\infty} \left(1+\varphi_1\phi+\varphi_2\kappa+\varphi_3\phi\kappa+\varphi_4\phi^2+\varphi_5\kappa^2\right)\notag\\ &+\mu_{\rm{a}}^{\infty}\phi \left(1+\varphi_1\phi+\varphi_2\kappa\right)-2\kappa\phi\theta\mu_{\rm{a}}^{\infty,3}=2\alpha I, \end{align} which can be solved order by order: 1) at zeroth order, one has the result $\mu_{\rm{a}}^{\infty}\approx2\alpha I$, or $\mu_{\rm{n}}^{\infty}-\mu_{\rm{p}}^{\infty}\approx 4\alpha I $; 2) at first order, one has $\mu_{\rm{a}}^{\infty}[(\varphi_1+1)\phi+\varphi_2\kappa]=0$, and consequently $\varphi_1=-1,\varphi_2=0$; and 3) at second order, one has the equation $\mu_{\rm{a}}^{\infty}[\varphi_3\phi\kappa+\varphi_4\phi^2+\varphi_5\kappa^2+\varphi_1\phi^2+\varphi_2\phi\kappa-2\kappa\phi\theta\mu_{\rm{a}}^{\infty,2}]=0$, and solving it gives $\varphi_4=1,\varphi_5=0$, and $\varphi_3=2\theta\mu_{\rm{a}}^{\infty,2}$. By combining the above results one obtains the chemical potential difference between neutrons and protons in finite nuclei to second order as \begin{equation}\label{mua-xyz-1} \mu_{\rm{a}}\approx\mu_{\rm{a}}^{\infty}\left(1-\phi+\phi^2+2\kappa\phi\theta\mu_{\rm{a}}^{\infty,2}\right). 
\end{equation} The relation (\ref{mua-xyz-1}) could be written in the slightly different form, $ \mu_{\rm{n}}-\mu_{\rm{p}} \approx4\alpha I(1-\phi+\phi^2)+4\kappa\phi\theta\mu_{\rm{a}}^{\infty,3}=4\alpha I(1-\phi+\phi^2)+32\alpha^3\kappa\phi\theta I^3$, via the identity $2\mu_{\rm{a}}=\mu_{\rm{n}}-\mu_{\rm{p}}$. Moreover, near the infinite matter limit we have $a^{\rm s}_{\rm{sym},4}(A)\approx 4\alpha^3\theta\kappa\phi$ (the surface contribution), and one finally obtains $ \mu_{\rm{n}}-\mu_{\rm{p}} \approx4I\alpha (1-\phi+\phi^2)+8I^3a^{\rm s}_{\rm{sym},4}(A)$. It is analogous to the relation $\mu_{\rm{n}}-\mu_{\rm{p}}\approx4\delta E_{\rm{sym}}(\rho)+8\delta^3E_{\rm{sym,4}}(\rho)$ often used in determining the proton fraction $x_{\rm{p}}=(1-\delta)/2$ in neutron stars\,\cite{Cai12,Gon17,PuJ17}. In fact a more similar relation can be found for the neutron and proton chemical potential difference, i.e., $\mu_{\rm{n}}-\mu_{\rm{p}}\approx 4I a_{\rm{sym}}(A)[1+2I^2a^{\rm s}_{\rm{sym},4}(A)/a_{\rm{sym}}(A)]$ in finite nuclei, by recalling that $1-\phi+\phi^2\approx1/(1+\phi)$ and $\alpha(1-\phi+\phi^2)\approx a_{\rm{sym}}(A)$. \subsection{Implications of the Smallness of $E_{\textmd{sym},4}(\rho_0)$} Since the derivation of the term $4\theta\kappa E_{\rm{sym}}^4(\rho_0)/\beta A^{1/3}$ is independent of the bulk contribution $E_{\rm{sym,4}}(\rho_0)-L^2/2K_0$, the two terms are additive. Here the $\theta\kappa$-term in Eq.~(\ref{def_asym4}) approaches to zero as $A\to\infty$ since the surface disappears for infinite matter. On the other hand, the bulk term approaches to a constant (independent of $A$), i.e., $E_{\rm{sym,4}}(\rho_0)-L^2/2K_0$ as $A\to\infty$. It is now clear that one could still have a $E_{\rm{sym,4}}(\rho_0)\lesssim2\,\rm{MeV}$ to be consistent with microscopic calculations, irrespective of the value of $a^{\rm s}_{\rm{sym},4}(A)$ since the surface contribution only affects the finite nuclei. 
Solving Eq.~(\ref{def_asym4}) for $E_{\rm{sym,4}}(\rho_0)$ gives the expression for $E_{\rm{sym},4}(\rho_0)$, which clearly demonstrates that a large value of $a_{\rm{a,4}}$ (as obtained from nuclear mass formula fitting) does not necessarily lead to a large $E_{\rm{sym,4}}(\rho_0)$; the balance strongly depends on the higher-order coefficient $\kappa$, which has very little influence on nuclear structure quantities such as the surface tension of typical finite nuclei. If one accepts the fact that $E_{\rm{sym},4}(\rho_0)$ is empirically smaller than about 2\,MeV\,\cite{Cai12,Gon17,PuJ17}, then Eq.~(\ref{def_asym4}) gives certain correlations among quantities with sizable magnitude. In this sense, the resulting correlations are expected to be intrinsic, like those obtained from the unbound nature of pure neutron matter\,\cite{Cai21}. As an example, we study the correlation between the surface symmetry energy coefficient $\beta$ and the slope parameter $L$, by uniformly sampling within the empirical ranges for the symmetry energy as $\alpha=E_{\rm{sym}}(\rho_0)\approx28-36\,\rm{MeV}$\,\cite{Chen17,LiBA21}, the ratio $\alpha/\beta\approx1-4$, the nucleon surface tension $\sigma_0\approx0.6-1.0\,\rm{MeV}/\rm{fm}^2$ in SNM, the coefficient $\kappa\approx0-1.0$, the slope parameter $L\approx30-120\,\rm{MeV}$ of the symmetry energy, the incompressibility coefficient $K_0\approx220-260\,\rm{MeV}$ of SNM, the fourth-order symmetry energy $a_{\rm{a},4}\approx2.78-3.78\,\rm{MeV}$\,\cite{Jia14} extracted from nuclear mass data, $E_{\rm{sym},4}(\rho_0)\approx0-2\,\rm{MeV}$, $a_{\rm{cou}}\approx0.6-0.8\,\rm{MeV}$, and $r_0\approx1.12\,\rm{fm}$. The model $f$ is simply truncated as $f\approx1+y+\kappa y^2$. The results calculated from the samples using Eq.~(\ref{def_asym4}) are then shown as open circles in Fig.\,\ref{fig_ab-La}. For comparison, the results from various Skyrme interactions reported in Ref.\,\cite{Dan09} are presented as open diamonds. 
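The inversion of Eq.~(\ref{def_asym4}) is easy to illustrate numerically. The sketch below (an illustration, not from the paper; it fixes the parameters at representative values from the quoted empirical ranges, $\alpha=30$\,MeV, $\beta=15$\,MeV, $L=45$\,MeV, $K_0=240$\,MeV, $r_0=1.12$\,fm, $\sigma_0=0.8\,\rm{MeV/fm^2}$, $A=208$) solves for $E_{\rm{sym,4}}(\rho_0)$ given the measured $a_{\rm{a,4}}\approx3.28$\,MeV and scans the higher-order surface coefficient $\kappa$:

```python
import math

def esym4_from_mass_fit(a_a4, kappa, alpha=30.0, beta=15.0, L=45.0,
                        K0=240.0, r0=1.12, sigma0=0.8, A=208):
    """Invert Eq. (def_asym4) for E_sym,4(rho0) at given a_sym,4(A)."""
    theta = 1.0 / (16.0 * math.pi * r0**2 * beta * sigma0)  # gamma/sigma0
    phi = alpha / (beta * A ** (1.0 / 3.0))
    surface = 4.0 * theta * kappa * alpha**4 / (beta * A ** (1.0 / 3.0))
    return a_a4 * (1.0 + phi) ** 4 + L**2 / (2.0 * K0) - surface

for kappa in (0.0, 0.1, 0.2, 0.3):
    e4 = esym4_from_mass_fit(3.28, kappa)
    print(f"kappa = {kappa:.1f}: E_sym,4(rho0) = {e4:6.2f} MeV")
```

With $\kappa=0$ one recovers the large $E_{\rm{sym,4}}(\rho_0)\approx15$\,MeV of the truncated treatment, while a modest $\kappa\approx0.3$ already brings $E_{\rm{sym,4}}(\rho_0)$ below the empirical bound of 2\,MeV: this is the central point of the present analysis.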
Specifically, the correlation between the ratio $\alpha/\beta$ and the ratio $L/\alpha$ is shown in the left panel of Fig.\,\ref{fig_ab-La}, which demonstrates a good linear dependence. This is consistent with the findings using the Skyrme interactions\,\cite{Dan09}. Similarly, the correlation between $\beta$ and $L$ is shown in the right panel of Fig.\,\ref{fig_ab-La}. Furthermore, if a macroscopic formula for the neutron skin thickness $\Delta r_{\rm{np}}$ for heavy nuclei is adopted, then one can similarly investigate the correlations between $\Delta r_{\rm{np}}$ and characteristic parameters such as the surface symmetry energy $\beta$, the slope parameter $L$ of the symmetry energy, the $\kappa$ parameter, and so on. In particular, if a strong correlation between $\kappa$ and $\Delta r_{\rm{np}}$ could be established, then one could use the latter from experiments such as the parity-violating electron scattering experiments (PREX, CREX)\,\cite{Abr12,Hor12,Adh21,Adh22,Rei22,Yuk22,Zha22,HuB22} to effectively constrain $\kappa$. However, before lower-order parameters such as the surface symmetry energy $\beta$ are well constrained, it is hard to investigate the real effects of $\kappa$, since $\beta$ is directly related to the $\theta$ parameter as $\theta\sim\beta^{-1}$. Nonetheless, the neutron skin thickness of neutron-rich nuclei may provide a promising probe of the $\kappa$ parameter, and this is left for future studies. 
\section{Expression for the Sixth-order Symmetry Energy of Finite Nuclei}\label{SS6} If one considers even higher order isospin dependent terms in the nuclear surface tension coefficient as $\sigma/\sigma_0\approx1+y+\kappa y^2+s y^3$ with $s$ being the effective parameter beyond $\kappa$, one can directly obtain the sixth-order symmetry energy $a_{\rm{sym},6}(A)$ of finite nuclei as a function only of $A$ including both the volume and surface contributions as \begin{align}\label{ee6} a_{\rm{sym,6}}(A) =&\left(1+\frac{E_{\rm{sym}}(\rho_0)}{\beta A^{1/3}}\right)^{-6}\Bigg[E_{\rm{sym,6}}(\rho_0) -L_{\rm{sym,4}}\left(\frac{L}{K_0}\right)\notag\\ &\hspace*{1.cm}+\frac{K_{\rm{sym}}}{2}\left(\frac{L}{K_0}\right)^2 -\frac{J_0}{6}\left(\frac{L}{K_0}\right)^3 \notag\\ &\hspace*{-1.cm}+\frac{16\theta^2E_{\rm{sym}}^6(\rho_0)}{\beta A^{1/3}}\left(\frac{4\kappa^2}{1+\beta A^{1/3}/E_{\rm{sym}}(\rho_0)}-s\right) \Bigg], \end{align} here $K_{\rm{sym}}\equiv 9\rho_0^2\d^2E_{\rm{sym}}(\rho)/\d\rho^2|_{\rho=\rho_0}$ is the curvature coefficient (see, e.g., Ref.\,\cite{ZhouY19}) of the symmetry energy, $J_0\equiv 27\rho_0^3\d^3E_0(\rho)/\d\rho^3|_{\rho=\rho_0}$ is the skewness coefficient (see, e.g., Ref.\,\cite{Cai17}) of the EOS $E_0(\rho)$ of SNM, and $L_{\rm{sym,4}}\equiv 3\rho_0\d E_{\rm{sym},4}(\rho)/\d\rho|_{\rho=\rho_0}$ is the slope parameter of the fourth-order symmetry energy $E_{\rm{sym,4}}(\rho)$ (see, e.g., Ref.\,\cite{Che09}). The surface contribution (the last line) in Eq.~(\ref{ee6}) comes from two parts: the higher-order term originating from the lower-order coefficient $\kappa$ (proportional to $\kappa^2$) and the higher-order term from the coefficient $s$. Naturally the surface term of $a_{\rm{sym},6}(A)$ approaches zero as $A\to\infty$. 
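The sensitivity of the surface part of Eq.~(\ref{ee6}) to the new coefficient $s$ can be gauged with a short sketch (an illustration, not from the paper; the bulk parameters $E_{\rm{sym,6}}(\rho_0)$, $L_{\rm{sym,4}}$, $K_{\rm{sym}}$, and $J_0$ are left out, and only the last line of Eq.~(\ref{ee6}) is evaluated with the same illustrative values as before, $\alpha=30$\,MeV, $\beta=15$\,MeV, $r_0=1.12$\,fm, $\sigma_0=0.8\,\rm{MeV/fm^2}$, $\kappa=0.5$):

```python
import math

alpha, beta = 30.0, 15.0      # MeV
r0, sigma0, kappa = 1.12, 0.8, 0.5
theta = 1.0 / (16.0 * math.pi * r0**2 * beta * sigma0)  # gamma/sigma0

def a6_surface(A, s):
    """Surface part (last line) of Eq. (ee6) for a_sym,6(A)."""
    x = beta * A ** (1.0 / 3.0) / alpha          # = 1/phi
    pref = 16.0 * theta**2 * alpha**6 / (beta * A ** (1.0 / 3.0))
    return pref * (4.0 * kappa**2 / (1.0 + x) - s) / (1.0 + 1.0 / x) ** 6

for s in (-0.5, 0.0, 0.5):
    print(f"s = {s:+.1f}: surface a_sym,6(208) = {a6_surface(208, s):+.2f} MeV")
```

Varying $s$ over an $O(1)$ range shifts the surface part of $a_{\rm{sym,6}}(208)$ by tens of MeV in either direction, which makes concrete the statement below that $s$ can be adjusted to render the inferred $E_{\rm{sym,6}}(\rho_0)$ small or large without essentially affecting finite-nucleus properties.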
Similarly to the investigations in the last section, it is very difficult to extract the value of the sixth-order symmetry energy $E_{\rm{sym,6}}(\rho_0)\equiv 720^{-1}\partial^6E(\rho_0,\delta)/\partial\delta^6|_{\delta=0}$ of infinite matter from this expression, even if the coefficient $a_{\rm{a,6}}$ appearing in the mass formula via the term $a_{\rm{a,6}}I^6$ is constrained, since one could adjust the coefficient $s$ to make $E_{\rm{sym,6}}(\rho_0)$ small or large without essentially affecting the properties of finite nuclei, such as the nuclear surface tension. For example, without including the surface term here, a ``small'' $a_{\rm{a,6}}\approx1\,\rm{MeV}$ (which can hardly be ``probed'' because of the tiny factor $I^6$ associated with it) may induce an $E_{\rm{sym,6}}(\rho_0)$ of about 7-8\,MeV, which is likely to be in conflict with the empirical constraints on the EOS of ANM. In this sense, the surface contribution to $a_{\rm{sym},6}(A)$ is fundamental. In addition, by requiring, e.g., $E_{\rm{sym},6}(\rho_0)\lesssim1\,\rm{MeV}$ and $a_{\rm{sym},6}(A)\lesssim1\,\rm{MeV}$, Eq.\,(\ref{ee6}) provides a link relating several characteristics of sizable magnitude, which could be used to establish certain correlations among them. These are left for future studies. \section{Summary and conclusions}\label{S4} We have shown that the fourth-order symmetry energy of finite nuclei can be naturally decomposed into two terms characterizing the bulk and the surface contributions, respectively. 
The surface contribution $4\theta\kappa E_{\rm{sym}}^4(\rho_0)/\beta A^{1/3}/(1+E_{\rm{sym}}(\rho_0)/\beta A^{1/3})^4\sim4\theta\kappa E_{\rm{sym}}^4(\rho_0)/\beta A^{1/3}$, characterized by the product $\theta\kappa$, to the fourth-order symmetry energy of finite nuclei is obtained by introducing the next-to-leading-order contribution to the isospin dependence of the nuclear surface tension coefficient through $\sigma/\sigma_0\approx1-\theta\mu_{\rm{a}}^2+\kappa\theta^2\mu_{\rm{a}}^4$, where $\kappa$ is an effective parameter characterizing the high-order effects. Although the $\kappa$ parameter may induce a sizable $4\theta\kappa E_{\rm{sym}}^4(\rho_0)/\beta A^{1/3}$, it has very little impact on the nuclear structure quantities of interest, such as the surface tension itself (since $y=-\theta\mu_{\rm{a}}^2$ is generally smaller than unity). Since this term is independent of $E_{\rm{sym,4}}(\rho_0)$, it characterizes the coupling between the symmetry energy $E_{\rm{sym}}(\rho_0)$ and the high-order coefficient $\kappa$, i.e., it is induced by high-order isospin-dependent surface tension effects. Although both the surface contribution and the bulk term of the fourth-order symmetry energy of finite nuclei could be large, they contribute little to the nuclear structure quantities, since generally one has $I^4\lesssim0.003$ for finite nuclei. This means that the fourth-order symmetry energy of finite nuclei is usually hard to ``probe'' and special observables are needed (see, e.g., Refs.\,\cite{Jia14,Jia15,Tia16}). On the other hand, the microscopic many-body theories and phenomenological model predictions consistently give $E_{\rm{sym,4}}(\rho_0)\lesssim2$\,MeV, indicating that $E_{\rm{sym,4}}(\rho_0)$ cannot be large even though its counterpart in finite nuclei could be sizable. One therefore needs to consider the total fourth-order symmetry energy of finite nuclei, composed of both the bulk and the surface contributions. 
In this sense, the surface contribution characterized by $\kappa$ to the fourth-order symmetry energy of finite nuclei is fundamental, i.e., it is essential for explaining a reasonable $E_{\rm{sym,4}}(\rho_0)\lesssim2\,\rm{MeV}$ and a sizable $a_{\rm{sym,4}}(A)$ simultaneously. Essentially, it is hard to constrain the parameter $\kappa$ via finite-nucleus information. In the future, unless measurements or processes that determine the coefficient $\kappa$ to within a narrow range become available, it seems that one can hardly constrain the fourth-order symmetry energy $E_{\rm{sym,4}}(\rho_0)$ of nuclear matter from nuclear mass formula fits of $a_{\rm{sym,4}}(A)$, since a finite nucleus has a surface while infinite matter does not. Finally, we also present the expression for the sixth-order symmetry energy of finite nuclei, which is related to more nuclear matter bulk parameters and the higher-order isospin-dependent surface tension. We would like to mention that although the higher-order symmetry energies of finite nuclei are difficult to measure in terrestrial nuclei, they could be potentially useful for understanding the properties of the neutron star crust or supernova explosions, where extremely neutron-rich clusters may exist. \section*{Acknowledgments} This work was supported in part by the National Natural Science Foundation of China under Grants No. 12235010, No. 11905302, and No. 11625521, the National SKA Program of China No. 2020SKA0120300, and the Fundamental Research Funds for the Central Universities, Sun Yat-Sen University (No. 22qntd1801).
Title: Signatures of impact-driven atmospheric loss in large ensembles of exoplanets
Abstract: The results of large-scale exoplanet transit surveys indicate that the distribution of small planet radii is likely sculpted by atmospheric loss. Several possible physical mechanisms exist for this loss of primordial atmospheres, each of which produces a different set of observational signatures. In this study, we investigate the impact-driven mode of atmosphere loss via N-body simulations. We compare the results from giant impacts, at a demographic level, to results from another commonly-invoked method of atmosphere loss: photoevaporation. Applying two different loss prescriptions to the same sets of planets, we then examine the resulting distributions of planets with retained primordial atmospheres. As a result of this comparison, we identify two new pathways toward discerning the dominant atmospheric loss mechanism at work. Both of these pathways involve using transit multiplicity as a diagnostic, in examining the results of follow-up atmospheric and radial velocity surveys.
https://export.arxiv.org/pdf/2208.05989
{\affiliation{Department of Astronomy, University of Florida, 211 Bryant Space Science Center, Gainesville, FL, 32611, USA}} \title{Signatures of impact-driven atmospheric loss in large ensembles of exoplanets} \author[0000-0002-9916-3517]{Quadry Chance} \affiliation{Department of Astronomy, University of Florida, 211 Bryant Space Science Center, Gainesville, FL, 32611, USA} \author[0000-0002-3247-5081]{Sarah Ballard} \affiliation{Department of Astronomy, University of Florida, 211 Bryant Space Science Center, Gainesville, FL, 32611, USA} \author[0000-0002-3481-9052]{Keivan Stassun} \affiliation{Department of Physics \& Astronomy, Vanderbilt University, Nashville, TN 37235, USA} \keywords{Exoplanets (498), Exoplanet systems (484), Exoplanet dynamics (490), Exoplanet evolution (491), Exoplanet formation (492)} \section{Introduction} The study of exoplanetary atmospheres, even in its first decades, is characterized by incredible diversity and complexity. Attempts to link the phenomenology of atmospheres to the dominant underlying processes have been correspondingly inventive. While placing individual planets under a microscope is a critical exercise, so too is studying large-scale demographic patterns among samples of many atmospheres. The study of exoplanetary demographics was transformed by NASA's \kepler{} Mission. The discoveries of thousands of super-Earths with radii between $1-4 R_{\oplus}$, the most common type of planet in the Milky Way \citep{Fressin13,Howard2013,Dressing15,Bonomo2019}, have crucially informed theories of planet formation and evolution, including the processes that shape their atmospheres. Many of the most detailed studies of atmospheric demographics await missions like the James Webb Space Telescope (hereafter JWST; Gardner et al. 2006). However, with sample sizes of several planets to dozens, many studies (\citealt{Sing2016, Crossfield2017, Guo17}, 
among others) are beginning to illuminate patterns among the rich complexity of exoplanetary atmospheres. One of the defining features of super-Earths, as they are currently understood, is the bimodal radius distribution of small, short-period exoplanets: there exists a gap in their radii at around $1.5-2 R_{\oplus}$ \citep{Fulton2017}. As our constraints on the host stars' properties have improved, so has our understanding of the radius gap \citep{Fulton18}, including whether planets reside in the gap proper. Models predicted the presence of such a gap before observations of a large enough population revealed it \citep{Ciardi2013, Owen2013}. Several possible mechanisms have emerged to explain this feature. Among them are photoevaporation of the atmospheres by stellar irradiation \citep{Owen2013,Pu2015a, Berger20}, loss due to the heating of the atmosphere by the newly-formed hot planetary cores (or ``core-powered mass loss'', \citealt{Ginzburg2018,Owen2016}), a population of primordially bare planets \citep{Lopez2018}, and atmospheric loss due to giant impacts with debris or other planets \citep{Biersteker2019}. The known sample of thousands of exoplanets is presumably shaped by a wide variety of physics, both imprinted from formation and ongoing; however, there are key ways in which the predictions diverge for different physics \citep{Owen2017}. Each of these mechanisms can produce the ``Fulton gap'' between the subset of planets with retained primordial atmospheres and those without. But the existence of adjacent planets with extremely different densities, for example, cannot be explained by a mechanism that depends \textit{only} on the planets' distance from the host star \citep{Bonomo2019,Inamdar2016, Carter2012}. The observational consequences of different processes include (among other properties) the envelope fraction, bulk density, and atmospheric composition of exoplanets. 
On a large scale, the underlying dominant processes will shape exoplanet demographics as a whole in ways we can model and predict \citep{Dawson2016,MacDonald2020}. In this paper, we investigate the giant impact mechanism for atmospheric loss from a demographic standpoint. Using a suite of N-body simulations to track impacts during formation, we examine the properties of the resulting planets. \cite{Biersteker2019} have shown that giant impacts have an erosive effect on the primordial atmospheres of terrestrial planets, with complete loss possible in some cases. This effect would be \textit{a priori} stochastic in nature: a single impactor can significantly alter the core/envelope mass ratio, resulting in a large, observable density difference even between adjacent planets. For the same suite of planets, we also apply the competing photoevaporation model, in order to compare the resulting populations of planets in a head-to-head fashion. We will not, however, consider the effect of core-powered mass loss. This manuscript is organized as follows. In Section \ref{sec:methods}, we describe the extant N-body simulations we employ for this study. We detail how we translate the simulation outcomes to observable planet properties, for comparison to the results of large-scale transit surveys (in particular, NASA's \textit{Kepler} Mission). We detail our set of assumptions for assessing whether atmospheric loss has occurred for a given planet, dependent on whether giant impacts or photoevaporation is dominant. In Section \ref{sec:analysis}, we describe the ways in which these two processes sculpt the resulting population of planets from our simulations, highlighting their differences. In Section 4, we explore the efficacy of different observational tools to differentiate between these two hypotheses, based upon our findings. In Section 5, we summarize our findings and conclude. 
\section{Methods} \label{sec:methods} \subsection{Details of N-body simulations} To interpret the effects of giant impacts on the primordial atmospheres of a large set of exoplanets, we employ a suite of late-stage planet formation N-body simulations. A set of simulations published in \cite{Dawson16}, with details included therein, are ideal for our experiment. They span the relevant timescale for planet formation, nominally 10 Myr, so that we can track each impact as planets accrete. They also demonstrably recover key observable features among mature planetary systems. \cite{Dawson16} (hereafter D16) simulated late-stage planet formation around stars with mass of 1 $M_{\odot}$ over a range of solid and gas disk surface densities. They assumed a common surface density power law ($\alpha=3/2$, per the minimum-mass solar nebula), over typical timescales of 10 Myr. For a detailed description of the simulation initial conditions, we refer the reader to D16. They went on to investigate the links between the initial conditions of these simulations and the dynamical properties of the resulting planets. We use these simulations to investigate a new, but possibly linked, phenomenon: the role of giant impacts in sculpting atmospheric loss. These simulations are useful toward that end, firstly because we can trace every impact of two bodies and their corresponding masses. Secondly, the process of accruing a primordial atmosphere happens contemporaneously with planet assembly. Our analysis of the output differs from D16 in that we do not apply any prescription for gas accretion; we make the simplifying assumption that all planets left at the gas disk's dissipation acquire a primordial H/He atmosphere. We employ a key finding of that work for this study as well, which is that a combination of disk initial conditions is necessary to recover the properties of observed \textit{Kepler} exoplanetary systems as a whole. 
No single disk model alone in that work reproduced the \textit{Kepler} distributions in transiting planets per system, transit duration ratios of adjacent planets, and period ratios of adjacent planets. The necessity for a mixture model of initial disk conditions is attributable to the wide range of ``dynamical temperatures'' \citep{Tremaine2013} among planetary systems. Simulations with different initial conditions in solid surface density and gas surface density result in variation in average dynamical temperature (see also \citealt{Moriarty16}, who reached a similar conclusion). Exoplanetary systems with wide orbit spacings, high eccentricities, and high mutual inclinations are canonically ``dynamically hotter''; the simulation ensembles that produce such systems are denoted \texttt{Ed$10^{4}$} in D16 and are initialized with a greater degree of gas depletion, and are therefore less strongly damped. In contrast, systems with tighter orbit spacings, lower orbital eccentricities, and lower mutual inclinations are ``dynamically colder''; these systems start from similar disks with less gas depletion and are denoted \texttt{Ed$10^{2}$} in that work. In the recent exoplanet literature, these latter dynamically cool and densely populated ``Systems of Tightly-packed Inner Planets'' are denoted ``STIPS'' \citep{Volk2015}; we employ this latter term throughout. \subsection{Assembly of Planetary System Sample} \label{sec:assembly} In the original D16 study, the authors determined that matching the ensemble of \textit{Kepler} planets to the output planets from disk simulations required a combination of disk types. D16 explored a large range of initial surface densities, inclination and eccentricity distributions, and spacings of the planet embryos. \citet{Moriarty2016}, who varied the surface density profile as well as the total mass available, came to similar conclusions. The necessity for this mixture model is the subject of active discussion. 
Initial studies of Kepler's multi-transiting planet systems \citep{Lissauer11, Johansen2012} argued for a so-called ``Kepler dichotomy'', but later studies showed that modifications to key assumptions about the sensitivity of the Kepler observations to multiple transiting planets \citep{Zink19}, or about the underlying relationship between number of planets and mutual inclination \citep{Zhu18, He20, Millholland21}, made a ``single mode'' model tractable. Because the mixture model, as a phenomenological descriptor, recovers important observables \citep{He19}, we adopt it as a heuristic for this study. In this paper, singly-transiting systems or ``singles'' refer to systems that have only one transiting planet, while multiple-transiting systems or ``multis'' refer to systems with multiple transiting planets. There exist multiple estimates for the underlying fraction of planetary systems that are drawn from each population: the STIPS are heavily over-represented in transit surveys. This is due to the larger number of planets overall, in addition to the likelier transit probability at shorter orbital periods. The underlying rate of compact systems of multiple planets is estimated at 5\% of the total \citep{Lissauer11, Contreras18}, though they can make up half or more of detected planets \citep{He19, Ballard16}. \citet{Moriarty16} found that the underlying rate of compact multiples around FGK stars is half that of the rate around M dwarfs; the latter lies between 10-20\% with 1$\sigma$ confidence \citep{Muirhead15, Ballard19}. For this study, we adopt an underlying compact multiple rate of 7\% for Sun-like stars; that is, 7\% of planetary systems are drawn from the ensemble \texttt{Ed$10^{2}$} (which produces most STIPS), while 93\% are drawn from \texttt{Ed$10^{4}$}. To create a synthetic population of planets to realistically compare to a Kepler-like survey, we use this 93\% ``dynamically hot''/7\% ``dynamically cool'' partition to draw $\sim$150,000 stars with randomly-oriented ecliptic angles. 
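The 93\%/7\% draw described above can be sketched as follows; the ensemble labels and the count of 80 disks per type follow the text, while the seed and variable names are our own:

```python
import numpy as np

rng = np.random.default_rng(0)
N_STARS = 150_000
F_STIPS = 0.07          # underlying fraction of dynamically cool systems

# Assign each star an ensemble, then a specific simulated disk drawn
# uniformly from the 80 runs of that type.
is_stips = rng.random(N_STARS) < F_STIPS
ensemble = np.where(is_stips, "Ed1e2", "Ed1e4")
disk_index = rng.integers(0, 80, size=N_STARS)
```

At this sample size the realized STIPS fraction is within a fraction of a percent of the underlying 7\% rate.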
The systems are drawn uniformly from the set of 80 \texttt{Ed$10^{2}$} and 80 \texttt{Ed$10^{4}$} systems. \subsection{Assigning Radii to Planets} We employ two prescriptions for atmospheric loss with each set of simulations: one in which photoevaporation is the dominant process and one in which giant impacts are dominant. The choice of which is dominant determines how we translate a planetary mass output by the N-body code used in D16, \textsc{MERCURY6} \citep{chambers12}, to a radius of that planet today. We assume that, all else being equal, a planet will accrete and retain a primordial atmosphere from the disk unless that process is inhibited or halted; we vary only the assumption about how that loss might occur. In reality, the gas accretion rate of a planet is sensitive to the mass of the planet. In practice, the 1 Myr timescale for accretion is more than sufficient for the majority of our simulated planets to accrete primordial atmospheres \citep{Ikoma2012}. In-situ accretion of H-dominated atmospheres is thought to result in envelope mass fractions from $10^{-2}$ to $10^{-1}$ for planets with masses $< 10M_{\oplus}$, with the envelope fraction increasing with the core mass \citep{Ikoma2012}: for this work, we have assumed an initial envelope mass fraction of 5\%. This assumption will increase our planet radii by roughly 10\%, which corresponds to a $\sim$20\% increase in the transiting planet yield, all else being equal. Each process for atmospheric sculpting, photoevaporation and giant impacts, has a set of necessary conditions under which the planet loses this primordial H/He atmosphere. To isolate the effects of each process, we assume first that the two processes are mutually exclusive. For planets that retain their primordial atmospheres, we use the mass-radius models of \cite{Lopez2013} for the appropriate remaining gas fraction and equilibrium temperature. For these planets, their radius is defined at 20 mbar. 
For planets that lose their atmospheres, we use the Earth-like rocky composition models of \cite{Zeng2019}. In Subsection \ref{sec:atm_loss_impact}, we first consider giant impacts and the conditions under which planets retain their primordial atmospheres at the end of the simulations. In Subsection \ref{sec:atm_loss_photo}, we turn to photoevaporation to ask the same. \subsubsection{Loss Driven by Giant Impacts} \label{sec:atm_loss_impact} We employ the formalism of \citet{Biersteker2019} to determine the erosive effect of giant impacts on the primordial atmospheres of terrestrial planets. There are two primary erosive mechanisms at work when planet embryos collide. The first is atmospheric displacement from the impact-generated shockwave. As detailed in \citet{Schlichting18}, eroding the atmosphere completely with only a few impacts requires impactors with radii comparable to the target embryo. The second mechanism is loss due to the thermal energy deposited into the planet by the impact. For planets fresh with the heat of formation, the energy required to heat the base of the envelope past the escape threshold may be very low, leading to complete envelope stripping from lower-energy impacts. The energy required to remove an envelope this way depends on the energy budget of the planet, separated into the thermal energy in its core and envelope. In the core-dominated regime, the envelope is stripped when the Bondi radius is equal to the core radius, so that gas molecules are no longer gravitationally bound to the planet. 
We consider here the most conservative scenario for atmosphere retention, in which the thermal energy of the core can be neglected in favor of the thermal energy delivered by the impactor. The entire envelope is then ejected when $\eta E_{\rm{imp}} + E_{\rm{env}} = 0$, where $\eta$ is the efficiency of energy transfer from the impactor to the core and envelope of the target, $E_{\rm{imp}} = \frac{1}{2}M_{\rm{imp}}v_{\rm{imp}}^2$ is the impact kinetic energy (from \citealt{Biersteker2019}, Equation 18) evaluated at $v_{\rm{imp}} = v_{\rm{esc}}$, and the (negative) envelope binding energy is approximated as: \begin{equation} E_{\rm{env}} = -G M_{\rm{env}} \frac{M_{c}}{R_{c}}. \end{equation} Impacts can happen at speeds reaching a few times the lower limit of $v_{\rm{esc}}$. Since the impact velocities were not tracked in the original D16 simulations, we ignore the details of each collision and make the conservative assumption that they all happen at $v_{\rm{esc}}$. In the limit where all of the impact energy goes into heating the core and envelope ($\eta = 1$) and with an envelope mass of $M_{\rm{env}} = 0.05M_{c}$, the condition required to eject an envelope is: \begin{equation} \eta G M_{\rm{imp}}\frac{M_{p}}{R_{c}} - G M_{\rm{env}}\frac{M_{c}}{R_{c}} = 0. \end{equation} Rearranging this to get an expression for the cutoff impactor mass, with $f \equiv M_{\rm{env}}/M_{c} = 0.05$ and $M_{p} \approx M_{c}$: \begin{equation} M_{\rm{imp}} \geq \frac{1}{\eta}M_{\rm{env}} = \frac{f}{\eta} M_{c}. \end{equation} In our simulations (initial $N=123$-$220$ and particle masses $\sim$$0.01M_{\oplus}$-$1M_{\oplus}$), the typical impactor mass is high. When we consider all impacts that take place after the gas disk has dissipated, we find that the masses of all impactors in our simulations \textit{exceed} this cutoff mass. Therefore, a simplified boolean scenario describes our simulations: either an impact \textit{has} occurred and the atmosphere has been boiled off of the core, or an impact \textit{has not} occurred and the primordial atmosphere is retained. We make an additional simplifying assumption: if a giant impact occurs before the gas disk has dissipated, we assume that the planet re-accretes the primordial atmosphere. 
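Because all simulated impactors clear the cutoff, the retention logic reduces to a single threshold. A minimal sketch of the $f/\eta$ scaling from the rearranged condition (the function name is ours):

```python
M_EARTH = 5.972e24  # kg

def cutoff_impactor_mass(m_core, f_env=0.05, eta=1.0):
    """Minimum impactor mass for complete envelope ejection,
    M_imp >= (f / eta) * M_c, assuming all impacts occur at v_esc."""
    return (f_env / eta) * m_core

# A 1 M_earth core with a 5% envelope is stripped by any impactor
# above 0.05 M_earth when eta = 1.
cutoff = cutoff_impactor_mass(1.0 * M_EARTH)
```

Lower transfer efficiencies $\eta<1$ raise the cutoff proportionally, but even then the simulated impactor masses remain above it.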
It is only if the giant impact occurs after gas dissipation in the disk (\textless\,1 Myr) that we assume the primordial atmosphere is lost. In Figure \ref{fig:numberofimpacts}, we show a histogram of the number of these post-dissipation impacts that occur in each simulation (that is, the number that occur per disk). \subsubsection{Loss Driven by Photoevaporation} \label{sec:atm_loss_photo} For photoevaporative atmospheric loss, we turn to the \textsc{atmesc} module of the Virtual Planet Simulator \citep{Barnes20}. \textsc{VPlanet} computes the atmospheric escape of hydrogen-dominated atmospheres in the energy-limited regime while the star is in its pre-main-sequence phase, and transitions to ballistic escape afterward. The XUV luminosity of the star over time is calculated by the \textsc{stellar} module and follows an interpolation of the evolutionary tracks from \citet{Baraffe2015}. Using a Sun analog, we evolved each system forward 5 Gyr with starting envelope mass fractions of 5\%. We consider planets that lose $>$ 90\% of their primordial envelope to be ``stripped,'' for the sake of simplicity of comparison. \subsection{Assigning Detection Probability} Once we have assembled populations of planets, and determined which planets in the simulations retain their primordial atmospheres, we then ``observe'' the planetary systems, in order to compare our synthetic planetary sample to \textit{Kepler} observations. We assume a CDPP (Combined Differential Photometric Precision) equal to the median 6-hour rms CDPP for a 12th-magnitude star in Kepler Quarter 3, as reported in \citet{Christiansen11}. 
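The detection criterion of Eq.\,(\ref{eq:cdpp}) below (transits with $b\le0.9$ and SNR above the 7.1 threshold over a 3-year baseline) can be sketched as follows; the function names are ours, and the transit depth $\delta$ and CDPP must share units (e.g., ppm):

```python
import math

def transit_snr(period_days, depth, cdpp, baseline_years=3.0):
    """SNR = sqrt(3 yr / P) * delta / CDPP, with P converted to years."""
    return math.sqrt(baseline_years * 365.25 / period_days) * depth / cdpp

def is_detected(period_days, depth, cdpp, impact_parameter):
    """Transiting (b <= 0.9) and above the SNR > 7.1 threshold."""
    return (impact_parameter <= 0.9
            and transit_snr(period_days, depth, cdpp) > 7.1)
```

The $\sqrt{3\,{\rm yr}/P}$ factor simply counts the transits accumulated over the baseline, so longer-period planets need proportionally deeper transits to cross the threshold.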
We consider transiting planets (that is, those with impact parameter $b \le 0.9$) with a total SNR $>$ 7.1 to be ``detected,'' where $P$ is the orbital period of the planet and $\delta$ is the transit depth: \begin{equation} SNR = \sqrt{\frac{\textrm{3 years}}{P}}\frac{\delta}{\textrm{CDPP}}. \label{eq:cdpp} \end{equation} We find that 4.9\% of the full sample of simulated planets are typically detected if giant impacts dominate, and 4.7\% are typically detected if photoevaporation dominates. The slightly lower latter detection rate results from the fact that photoevaporation prevents close-in planets from retaining primordial atmospheres, decreasing their radii and the resulting transit signal. \section{Results} \label{sec:analysis} With our synthetic sample of observed planets in hand, we now investigate the effect of the dominant atmospheric loss process (whether photoevaporation or giant impacts) on the properties of the resulting planets. We consider both the effects on the underlying population and on the ``observed'' population of planets. We find that $\sim 85\%$ of planets experience the same outcome under both mechanisms, whether the atmosphere is lost or retained. However, the retention of the atmospheres of $\sim 15\%$ of planets is contingent on whether giant impacts or photoevaporation dominates. We first focus in Section \ref{sec:atmosphere_occurrence} on this raw likelihood of primordial atmospheric loss for both mechanisms. We then turn to the $\sim 15\%$ of planets with divergent outcomes in Section \ref{sec:diverge}. The fates of these planets' atmospheres depend on which mechanism is dominant, and therefore these diagnostic planets highlight the conditions under which the assumption of underlying physics matters most. The different resulting radii shift the planetary positions on the resulting period/radius distribution. 
In Section \ref{sec:intrinstic_radius}, we investigate this radius distribution alone, to characterize the predicted position and relative emptiness of the ``radius gap'' \citep{Fulton11}. \subsection{Raw occurrence of primordial atmospheres} \label{sec:atmosphere_occurrence} As we describe in Section \ref{sec:atm_loss_impact}, the masses of all impactors in our simulation exceed the ``cutoff mass'' for total atmospheric loss \citep{Biersteker2019}. When we consider a giant impact, therefore, either the primordial atmosphere is left intact, or it is completely lost. We therefore characterize the likelihood of atmospheric loss using a binomial likelihood, where some fraction $f$ of atmospheres is lost to giant impacts. To compare atmosphere loss occurrence due to giant impacts to the more gradual loss driven by photoevaporation, we make a simplifying assumption. If \textless\,10\% of the initial primordial atmosphere remains after 5 Gyr of photoevaporation have elapsed, we consider the atmosphere to be ``lost''. This simplification has the advantage of being approximately true and allowing us to again employ a binomial likelihood function for easy comparison to the giant impacts sample. Assuming a uniform prior on the fraction $f$ of planets that lose their primordial atmospheres, we evaluate the posterior distributions on $f$ for planets resulting from the two sets of initial conditions, whether dynamically cool (\texttt{Ed$10^{2}$}) or dynamically hot (\texttt{Ed$10^{4}$}). We find that dynamically cool and dynamically hot planetary systems retain their atmospheres at similar rates, per Table \ref{tbl:underlying_rate}. However, the loss process itself does affect the atmospheric retention rate: the rate of atmospheric retention is $f=0.52^{+0.02}_{-0.02}$ for giant impacts and $f=0.35^{+0.02}_{-0.02}$ for photoevaporation. 
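With a uniform prior, the binomial retention fraction has a conjugate Beta posterior, so the quoted $f$ values and their widths follow in closed form; a minimal sketch (the helper function is ours):

```python
import math

def retention_posterior(n_retained, n_total):
    """Uniform prior + binomial likelihood -> Beta(k+1, n-k+1) posterior.
    Returns the posterior mean and standard deviation of f."""
    a = n_retained + 1
    b = n_total - n_retained + 1
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, math.sqrt(var)
```

For samples of several hundred planets, the posterior standard deviation is of order 0.02, consistent with the $\pm0.02$ uncertainties quoted above.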
\begin{table}[] \centering \caption{Posterior probability of a planet having an atmosphere for the underlying sample} \label{tbl:underlying_rate} \begin{tabular}{l|ll}
 & $f_P$ & $f_{GI}$ \\ \hline
\texttt{Ed$10^{2}$} & $0.71^{+0.02}_{-0.02}$ & $0.65^{+0.02}_{-0.02}$ \\
\texttt{Ed$10^{4}$} & $0.84^{+0.02}_{-0.02}$ & $0.61^{+0.02}_{-0.02}$ \\
\end{tabular} \end{table} \subsection{Cases with Divergent Atmospheric Outcomes} \label{sec:diverge} There exists a subset of planets whose atmospheric retention depends upon whether photoevaporation or giant impacts are the dominant loss mechanism. While temperature and surface gravity are critical for whether photoevaporative atmosphere loss occurs, loss due to giant impacts is necessarily stochastic in nature. There is therefore a sample of planets that retain atmospheres under one set of assumptions and lose them under the alternative set of assumptions. With this subset of planets, we can identify the parts of parameter space most sensitive to assumptions about atmospheric loss. In Figure \ref{fig:difference}, we show how these populations of planets with divergent outcomes shift in density-period space. Considering first giant impacts, we find that populations of low-density planets are present at all periods. These include hot, low-mass planets with primordial atmospheres, which become high-density cores under photoevaporation. Cool, high-density remnant cores are also present at all periods among the planets sculpted by giant impacts. In contrast, photoevaporated high-density cores reside mostly at close-in orbits. Overall, we identify three populations that are most sensitive to assumptions about atmosphere loss. These three populations lie in distinct temperature/mass regimes, which we designate Populations A, B, \& C for ease. 
Population A consists of highly-irradiated ($T_{p}\sim1000$ K) low-mass planets, likelier to lose atmospheres under photoevaporation and keep them under giant impacts. The second group, Population B, comprises higher-mass planets with intermediate irradiation ($T_{p}\sim500$ K), which are conversely likelier to keep their atmospheres under photoevaporation but lose them under giant impacts. Finally, there is a third sample of cold, very low-mass planets ($T_{p}<300$ K), designated Population C. With surface gravity weak enough for photoevaporation to strip their atmospheres, they sometimes retain these atmospheres under giant impacts, if they escape an impactor. The presence of the first set of highly-irradiated but low-density planets (Population A here, sometimes denoted ``super-puffs'' per \citealt{Lee2016}) particularly challenges the photoevaporative model. This is especially true in the case of extremely low-density exoplanets in multi-planet systems where the XUV history of the host star is well-constrained (Kepler-107, TOI-178; \citealt{Bonomo2019,Leleu2021}). In Figure \ref{fig:difference}, these correspond to the purple symbols with orbits clustering near 0.1 AU. We include in Figure \ref{fig:difference}, in grey, the densities and periods of a subset of observed ``super-puffs'' from the literature (with insolation normalized to solar for inclusion on a common axis). We find that the presence of these planets is completely consistent with a scenario in which giant impacts dominate atmospheric loss, at left; considering only photoevaporative loss, these planets should not exist. Less detectable from transit surveys are the diagnostic Populations B and C: higher-mass stripped cores at temperatures between 1000 and 300 K (present for giant impacts), and cold, small cores (present for photoevaporation), respectively. 
In Figure \ref{fig:pressure}, we show the resulting period-radius distributions for both the underlying and subsequently ``detected'' populations of planets. At left, we assume giant impacts are dominant in atmospheric loss, and at right, that photoevaporation is dominant. We can now assess the usefulness of the three diagnostic populations most sensitive to loss mechanism, based upon their likelihood of detection. Any difference between these two radius-period diagrams will result from the same set of planets with divergent outcomes, as shown in Figure \ref{fig:difference}. The same three clouds of points in this parameter space, that are apparent in density/semimajor axis space, are most sensitive to atmospheric loss mechanism. We employ this same color scheme (depicting the difference in the bulk density of the planet) in Figure \ref{fig:ABC}, circling each nexus of points and showing their changed location in radius-period space. The hot, low-mass planets of Population A (``super-puffs''), which only exist if giant impacts dominate (purple points with $P<50$ days in Figure \ref{fig:ABC}), are apparent in the upper left corner of the giant impacts period-radius diagram. Comparing to the detection rates in that part of parameter space in the top panels of Figure \ref{fig:pressure}, we see that they are nearly always detected with \textit{and} without atmospheres, making them the most useful diagnostic population. Population B are the points that are yellow in Figure \ref{fig:difference}; they have retained their atmospheres if we only consider photoevaporation, but lose them if giant impacts are most important. These are typically higher-mass planets with temperatures near 500 K. These planets, when placed on the giant impacts period-radius diagram shown in the left panel of Figure \ref{fig:pressure}, comprise the stripped cores with mean size 1.5 $R_{\oplus}$ and periods \textless 10 days. 
They are drawn approximately uniformly in log period from among the upper cloud of sub-Neptunes (reflecting the stochastic nature of giant impacts). As stripped cores if giant impacts dominate, they are detected with approximately 50\% probability. Finally, there are the Population C planets: tiny and cool stripped cores if photoevaporation dominates (with mean $R_{p}$ of 0.5 $R_{\oplus}$ and period of 1000 days, in the right panel of Figure \ref{fig:pressure}). Under giant impacts, these small planets mostly retain their atmospheres, but have low enough surface gravity that photoevaporation is sufficient to remove them. They make up the bulk of stripped planets with periods longer than 10 days in the photoevaporation-dominant scenario, and are very diagnostic in the sense that they should \textit{not} exist if only giant impacts are considered. However, they are rarely detected when stripped (indicated by their grey color in Figure \ref{fig:pressure}), and are therefore least useful as an observational diagnostic tool. \subsection{Radius distribution} \label{sec:intrinstic_radius} To understand the effect of loss mechanism on the observed radius distribution, we first examine the intrinsic underlying distribution, before turning to the ``observed'' distribution. We find key differences in the radius distribution of planets dependent upon whether giant impacts or photoevaporation is dominant, due almost entirely to the fate of planets designated as Population B in Figure \ref{fig:ABC}. We first examine the effect of loss mechanism, between the samples of cooler and warmer dynamical temperature. Because these two ensembles, \texttt{Ed$10^{2}$} and \texttt{Ed$10^{4}$}, produce different mass and Hill spacing distributions, it stands to reason that they may be differently impacted by atmospheric loss mechanism. In Figure \ref{fig:lesser}, we show the histograms of resulting planet radii for both modes of planet formation. 
The mode that produces the dynamically cooler systems of more coplanar planets (\texttt{Ed$10^{2}$}) is shown at left, and the mode that produces dynamically hotter systems with fewer planets (\texttt{Ed$10^{4}$}) at right. \texttt{Ed$10^{2}$} generates a larger number of cores of smaller average mass. Small cores that retain atmospheres tend to cluster near $2 M_{\Earth}$. \texttt{Ed$10^{4}$} generates fewer, but larger cores. The presence of larger cores in the latter sample appears as a relative shift to higher radii in both modes of the radius distribution. The effect of higher-mass planets in \texttt{Ed$10^{4}$} is also to compress the atmospheres of planets with large radii. The low-mass ($<0.5 M_{\Earth}$) planets of \texttt{Ed$10^{2}$} generate the ``super-puff'' \citep{Lee2016} planets. With high insolation and low mass, they retain their extended atmospheres only if we do not consider photoevaporation. We turn in Figure \ref{fig:intrinsic} to the resulting \textit{observed} distributions of planetary radii. These samples are now drawn from the mixture models described in Section \ref{sec:assembly}, with 7\% from the \texttt{Ed$10^{2}$} simulations and 93\% from the \texttt{Ed$10^{4}$} simulations, and we have retained only the ``detected'' planets. Since the transit detection method selects strongly for short periods, the observed planet populations are mostly drawn from this close-in population. In both cases, this translates to a higher \textit{observed} fraction of stripped cores, compared to the intrinsic underlying occurrence (that is, the observed relative heights of the two radius peaks are reversed from their true relative heights). In both cases, this is due to the higher incidence of mini-Neptunes (with retained H/He atmospheres) at longer orbital periods. 
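The strong selection of transit surveys toward short periods can be illustrated with a toy Monte Carlo (a minimal sketch, not the detection model used in this work): planets drawn log-uniformly in period around a Sun-like star are accepted with the geometric transit probability $R_\star/a \propto P^{-2/3}$.

```python
import math
import random

def toy_transit_selection(n=20000, seed=1):
    """Toy model: log-uniform periods (1-1000 d) around a Sun-like star;
    accept each planet with the geometric transit probability R_star/a."""
    random.seed(seed)
    r_star_au = 0.00465  # solar radius in AU
    underlying, detected = [], []
    for _ in range(n):
        p_days = 10 ** random.uniform(0.0, 3.0)           # 1-1000 days
        a_au = (p_days / 365.25) ** (2.0 / 3.0)           # Kepler's third law, 1 M_sun
        underlying.append(p_days)
        if random.random() < min(1.0, r_star_au / a_au):  # transit geometry
            detected.append(p_days)
    return underlying, detected

def median(v):
    s = sorted(v)
    return s[len(s) // 2]

underlying, detected = toy_transit_selection()
# The detected sample is strongly skewed toward short periods.
print(median(underlying), median(detected))
```

Even this crude geometric weighting pulls the detected median period down by an order of magnitude, which is why the observed sample over-represents close-in (and hence more frequently stripped) planets.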
\section{Discussion: Observational consequences} We identify four potential observational diagnostics, useful for distinguishing between a large population of exoplanets sculpted primarily by photoevaporation and a population sculpted primarily by giant impacts. These are (1) the morphology of the radius gap (Section \ref{sec:gap}), (2) the occurrence of H/He atmospheres as a function of transit multiplicity (Section \ref{sec:multiplicity}), (3) the size-ordering (or lack thereof) in multi-planet systems (Section \ref{sec:monotonicity}), and (4) the ease of distinction between the densities of stripped planets and planets with primordial atmospheres (Section \ref{sec:clustering}). \subsection{Radius gap} \label{sec:gap} The morphology of the exoplanet radius distribution is the subject of active investigation \citep{Fulton18,Berger20,VanEylen18}. The location of the ``gap'' separating smaller super-Earths from larger sub-Neptunes changes with spectral type, and even within the same spectral type, stars vary in XUV flux history. This complicates predictions for the gap morphology from photoevaporation. We find that the two mechanisms we consider in this manuscript, photoevaporation and giant impacts, are distinguishable by the relative emptiness of the radius gap. This is due to the divergent fates of the planets we have called ``Population B'', which retain their atmospheres under photoevaporation and lose them under giant impacts. In Figure \ref{fig:P_stats}, depicting the radius/period distribution of planets, the radius distribution is projected against the right-hand side of both panels (for both the full and observed samples). Population B accounts entirely for the change in the relative emptiness of the gap. These planets are coded in yellow in the density difference plots of Figures \ref{fig:difference} and \ref{fig:ABC}, and comprise the largest and highest-density stripped cores. 
As a group, these planets possess orbital periods $>$10 days and surface gravities large enough to retain atmospheres under photoevaporation. Their atmospheres are lost only when we consider giant impacts. We examine the intrinsic radius distributions (before observation) more closely in Figure \ref{fig:intrinsic}. While the position of the gap at $1.8-1.9 R_{\oplus}$ is unchanged, it is the \textit{relative emptiness} of the gap that depends strongly upon which loss mechanism is dominant. For photoevaporation, the gap is entirely empty, and there exists a clear separation between planets with and without atmospheres. This is because we predict few, if any, high-mass stripped cores under photoevaporation. Under the set of assumptions we have used here, only a giant impact can strip the atmosphere of the larger cores ($\ge$1.7 $R_{\oplus}$ at $P=10$ days, or $\ge$1.2 $R_{\oplus}$ at 100 days). In Figure \ref{fig:comparison}, we show the predicted gap morphology in three scenarios: only photoevaporation, only giant impacts, and a scenario with 50\% likelihood of either outcome. The latter two are associated with the presence of planets within the radius gap, consistent with the \cite{Fulton18} findings that the gap is not empty (that is, that the presence of planets with radii in the gap cannot be explained by radius uncertainty alone). \subsection{Occurrence of atmospheres versus planetary multiplicity} \label{sec:multiplicity} The ratio of \texttt{Ed$10^{4}$} planets to \texttt{Ed$10^{2}$} planets drops precipitously with increasing transit multiplicity, so we might expect to see different atmospheric outcomes as a function of transit multiplicity. In particular, while singly- and doubly-transiting systems are drawn from a mixture of populations, systems with $\ge3$ transits are drawn exclusively from \texttt{Ed$10^{2}$}. Single-transit systems are mostly drawn from \texttt{Ed$10^{4}$}, given the larger orbital spacing between planets resulting from these simulations. 
On average, these planets are more massive than planets in \texttt{Ed$10^{2}$}. However, this depends on orbital period in a similar manner to \cite{Moriarty2016} (see Figure 1 of that work): while \texttt{Ed$10^{4}$} cores grow more massive with increasing orbital period, \texttt{Ed$10^{2}$} cores tend to stay smaller and more uniform. The increased mass and corresponding increase in surface gravity make \texttt{Ed$10^{4}$} cores likelier to retain atmospheres under photoevaporation, when compared to an \texttt{Ed$10^{2}$} planet receiving the same incident flux. Figure \ref{fig:atmo} shows the resulting likelihood of atmospheric retention among \textit{transiting} planets as a function of their transit multiplicity, whether giant impacts (upper panel) or photoevaporation (lower panel) is the dominant atmospheric loss mechanism. The yellow posterior distributions correspond to the binomial likelihood $f$ of atmospheric retention of all planets, transiting and non-transiting (described in Section \ref{sec:atmosphere_occurrence}). Singly- and doubly-transiting systems are drawn from a similar mixture of \texttt{Ed$10^{2}$} and \texttt{Ed$10^{4}$} disks, which is reflected in their identical atmospheric retention rates. The retention rates for single-transit and double-transit systems are also consistent with the overall retention rate. It is in the atmospheric retention rate in systems with $\ge3$ transiting planets that we see a marked effect, in two ways. First, atmospheric retention rates among systems with 3 or more transiting planets are not reflective of the total retention rate. This is true for both the photoevaporation-dominant and giant-impacts-dominant scenarios. In the case of giant impacts, the likelihood of atmospheric retention is $0.80^{+0.04}_{-0.03}$ among systems with $\ge3$ transits, \textit{higher} than the $0.52\pm0.02$ rate among all systems. 
In the case of photoevaporation, the likelihood of atmospheric retention is $0.2\pm0.03$ among systems with $\ge3$ transiting planets, \textit{lower} than the underlying 0.35$\pm$0.03 rate among all systems. Secondly, while the atmospheric retention rates among singly- and doubly-transiting systems are similar between giant impacts and photoevaporation (0.46 versus 0.38 for single-transit systems, for example), they are very different for systems with $\ge3$ transits: in multi-transit systems under a photoevaporation-dominant scenario, only 1 in 5 detected planets has retained its primordial atmosphere, but 4 in 5 detected planets have retained it if the giant impacts mechanism is dominant. This is directly reflective of the outcome of the \texttt{Ed$10^{2}$} simulated disks. Since triply-transiting systems are all drawn from this population (with more total planets in more closely-spaced and more coplanar orbits), they are a direct reflection of the strongly divergent predicted outcomes for STIPs under photoevaporation and giant impacts. \begin{table}[] \centering \caption{Posterior probability of a planet having an atmosphere for the observed sample} \label{tab:table2} \begin{tabular}{l|l|l} & $f_P$ & $f_{GI}$ \\ \hline singles & $0.38^{+0.03}_{-0.03}$ & $0.46^{+0.03}_{-0.03}$ \\ \hline doubles & $0.36^{+0.03}_{-0.03}$ & $0.49^{+0.02}_{-0.02}$ \\ \hline multis & $0.22^{+0.03}_{-0.03}$ & $0.80^{+0.04}_{-0.03}$ \end{tabular} \label{tbl:retain_multi} \end{table} \subsection{Monotonicity} \label{sec:monotonicity} STIPs exhibit similarity between neighboring planets, with evidence for monotonically increasing planet size with period (the ``peas-in-a-pod'' phenomenon, explored in detail in \citealt{Millholland17,Weiss_2018,Weiss_2020,Murchikova_2020,Millholland21} and others). 
The typical ratio in size of super-Earths to neighboring sub-Neptunes ($R_{\textrm{SN}} \approx 1.7 R_{\textrm{SE}}$, per \citealt{Millholland21}) is consistent with multiple models for the loss of primordial atmospheres, including core-powered mass loss (e.g. \citealt{Gupta2019}) and photoevaporation. The relationship between planetary size and orbital period within the same system presents another diagnostic \citep{Ciardi2013,Kipping18,Millholland17,Weiss_2018}. An outstanding observational challenge to the photoevaporation hypothesis is the existence of neighboring planets with divergent densities. Such systems include TOI-178 \textit{d} and \textit{e} \citep{Leleu2021} and Kepler-107 \textit{b} and \textit{c}. We explore the intra-system patterns associated with giant impacts and photoevaporation here. Giant impacts, given the stochastic nature of impactors, provide a natural possible explanation for the existence of dissimilar neighbors. Monotonic increase in planet size is consistent with photoevaporative loss of primordial planet atmospheres. Closer planets are more irradiated and therefore likelier to be evaporated cores. A hypothesized giant-impacts-dominant scenario, in contrast, has competing effects in favor of and against a monotonic trend. In one sense, we would expect a giant-impacts-dominant scenario to result in the same observed smaller-to-larger size ordering as photoevaporation, since the shorter dynamical timescales of the inner systems allow for more orbit-crossing events. This would result in a modest increase in the probability of an impact for close-in planets. On the other hand, the stochastic nature of giant impacts could allow for a less ordered scenario. Populations A and B in the giant impacts scenario described in Section \ref{sec:diverge} represent close-in sub-Neptunes (with retained atmospheres) and modestly irradiated cores, neither of which exist in the photoevaporation-dominant scenario. 
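Size ordering within a system can be quantified with a rank correlation. A minimal pure-Python sketch of the Spearman rank-order coefficient (assuming no tied values; this is only the correlation ingredient, not the full monotonicity statistic used in the analysis):

```python
def spearman_rho(x, y):
    """Spearman rank-order coefficient: Pearson correlation of the ranks.
    Assumes no tied values (sufficient for continuous planet sizes)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, idx in enumerate(order, start=1):
            r[idx] = rank
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mean = (n + 1) / 2.0
    cov = sum((a - mean) * (b - mean) for a, b in zip(rx, ry))
    var = sum((a - mean) ** 2 for a in rx)  # identical for any permutation of 1..n
    return cov / var

# A perfectly size-ordered system (radius grows with period) gives rho = +1:
print(spearman_rho([1.0, 2.5, 7.1, 30.0], [1.1, 1.4, 2.0, 2.6]))  # -> 1.0
```

A giant impact that strips one mid-system planet breaks the rank ordering and pulls the coefficient away from $+1$, which is the intuition behind the population-level comparison that follows.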
To assess the predicted effect of giant-impact-dominated atmospheric loss on size ordering, we employ the monotonicity statistic as defined in \cite{Gilbert2020}: \begin{equation} \mathcal{M}=\rho_{S}\mathcal{Q}^{1/N}\,, \end{equation} where $\rho_{S}$ is the Spearman rank-order coefficient, calculated from the planet masses. This coefficient is 1 for perfectly positive monotonic systems, $-1$ for perfectly negative monotonic systems, and 0 for systems with no evidence of monotonic behavior. We calculate $\mathcal{M}$ for each of our systems with observed and detected planets. For additional description of $\mathcal{M}$, including the power dependence of the mass partition function $\mathcal{Q}$, we refer the reader to \cite{Gilbert2020}. We show the resulting cumulative distributions for $\mathcal{M}$ in Figure \ref{fig:monotone}, where systems closer to 1 exhibit more monotonicity from smallest to largest. We find that the giant-impacts-dominant scenario is associated with modestly lower monotonicity coefficients: taking $\mathcal{M}$=0.25 as a sample value, we find that 50\% of photoevaporated systems have $\mathcal{M}\ge$0.25 (that is, are more highly ordered), whereas this fraction is 35\% for giant-impacts-dominated scenarios. \cite{Gilbert2020} calculated the monotonicity coefficient for the observed sample of \textit{Kepler} planets as well. With 25\% of systems exhibiting $\mathcal{M}\ge$0.25 in that sample, it more closely resembles our giant-impacts distribution, but a detailed comparison of monotonicity distributions to the observed data requires additional study. \subsection{Density clustering} \label{sec:clustering} Our final diagnostic for differentiating between giant-impacts- or photoevaporation-dominant scenarios employs planetary bulk density. 
The populations of (1) planets with lost atmospheres and (2) planets with retained atmospheres hew to the approximate bulk densities of rock (at $\sim$6 g cm$^{-3}$) and water (at $\sim$2 g cm$^{-3}$), respectively, as shown in Figure \ref{fig:difference}. However, the distance between the two modes of this bimodal density distribution depends on whether photoevaporation or giant impacts is dominant. The two modes are further separated from one another when we consider only giant impacts. This is because a giant-impacts-dominant scenario allows for extremely high \textit{and} extremely low density planets to exist. These planets are not predicted to exist when photoevaporation is dominant, by the following reasoning. On the high-density end, giant impacts allow for high-mass, high-density stripped cores (these are the yellow ``Population B'' planets in Figures \ref{fig:difference} and \ref{fig:ABC}). Under photoevaporation, the surface gravity of these cores is high enough that their atmospheres are not lost even under heavy insolation. On the low-density end, giant impacts allow for low-mass planets to retain their primordial atmospheres, thereby allowing for small planets with very extended atmospheres (these are the close-in, purple ``Population A'' planets in Figures \ref{fig:difference} and \ref{fig:ABC}). These atmospheres are readily lost under a photoevaporation scenario. The existence of these extremely high and low density planets means that if giant impacts dominate, the resulting density distribution of planets will be more easily resolved into the two distinct modes. For this reason, the dominant mode of atmosphere loss is encoded in how easily we can distinguish the Earth-density and Neptune-density modes from one another, for a given density uncertainty. The existence of both higher \textit{and} lower density planets under a giant-impacts-dominant model makes it easier to resolve one mode from the other. 
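The separability of the two density modes can be illustrated with a simple nested-model comparison (a minimal sketch with toy parameters, not the fitting procedure applied to the simulated samples): compare the log-likelihood of a density sample under a single Gaussian against an equal-weight two-Gaussian mixture with modes fixed at the water-like and rock-like densities.

```python
import math
import random

def gauss_logpdf(x, mu, sigma):
    return -0.5 * math.log(2 * math.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

def log_likelihood_ratio(densities):
    """log L(bimodal) - log L(unimodal); positive values favor two modes.
    Toy bimodal model: equal-weight unit-width Gaussians at 2 and 6 g/cm^3."""
    # Unimodal model: single Gaussian at the sample mean/stddev (the MLE).
    n = len(densities)
    mu = sum(densities) / n
    sigma = math.sqrt(sum((d - mu)**2 for d in densities) / n)
    ll_uni = sum(gauss_logpdf(d, mu, sigma) for d in densities)
    # Bimodal model with fixed toy parameters (a real analysis would fit these).
    ll_bi = sum(
        math.log(0.5 * math.exp(gauss_logpdf(d, 2.0, 1.0))
                 + 0.5 * math.exp(gauss_logpdf(d, 6.0, 1.0)))
        for d in densities)
    return ll_bi - ll_uni

random.seed(0)
sample = [random.gauss(2.0, 1.0) if random.random() < 0.5 else random.gauss(6.0, 1.0)
          for _ in range(200)]
print(log_likelihood_ratio(sample))  # positive: the bimodal model is preferred
```

Because the unimodal model is nested inside the mixture, the sign and size of this log-ratio directly measure how confidently the sample resolves into two modes.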
To assess the diagnostic usefulness of this feature, we draw 100 samples of 100 planets each from our observed populations, to which we add a mean density error of 1 g cm$^{-3}$ (characteristic of, for example, a radius uncertainty of 3\% and a mass uncertainty of 8\% for a 5 $M_{\oplus}$ planet with a density of 8 g cm$^{-3}$, per \citealt{Malavolta18}). We then take the ratio of the likelihoods for two hypothetical models: a unimodal density distribution and a bimodal one. These comprise a set of nested models, with the unimodal distribution being merely a special case of the bimodal distribution (with the amplitude of one mode set to zero). For this reason, we can interpret the log of this likelihood ratio as a relative preference between the uni- and bi-modal models, given an uninformative prior (cf. \citealt{Feroz08}). In Figure \ref{fig:clustering}, we show a representative sample of 100 planets along with 100 representative bimodal model samples. We find that, with a typical density uncertainty of 1 g cm$^{-3}$, a giant-impacts-dominated sample emerges as bimodal with roughly twice the confidence (that the observed density distribution is described by two Gaussians rather than one) of the same sample of planets subjected to only photoevaporation. This translates to about half as many planets being required to statistically distinguish the two density modes, if giant impacts are the dominant atmospheric loss mechanism. Figure \ref{fig:clustering} highlights this finding, with the ratio of likelihood between the bimodal and unimodal distributions in the upper right of each panel. We repeated the experiment with a typical uncertainty of 2 g cm$^{-3}$, but even with 100 mass measurements, we could not meaningfully distinguish between the two density modes. We find that this density diagnostic is most effective when considering multi-transiting systems. 
This is because multi-transiting systems are likelier to contain planets from ``Population A'', which are strongly diagnostic of their atmospheric loss mechanism. These have very low densities under giant impacts and high densities under photoevaporation (see Figure \ref{fig:ABC}). Fortuitously, obtaining density measurements for planets in multi-transiting systems is made easier by the possibility of mass measurement with the transit timing variation method \citep{Holman05, Agol05}. \section{Conclusions} We conclude that the two atmospheric loss mechanisms we consider here should produce divergent demographic-scale effects, due to the 15\% of planets whose atmospheric retention depends on which loss mechanism is dominant. The predicted differences expected from these two atmospheric loss processes are hypothetically observable, from our analysis of a simulated Kepler-sized sample of exoplanets. We have employed a suite of late-stage planet formation simulations published by \cite{Dawson16} to craft this population; these were already shown in that work to reproduce important \textit{Kepler} observables such as transit multiplicity and period ratio between adjacent planets. By tracking impactors after the nominal gas dissipation phase at 1 Myr, we have modeled the effects of impact-driven atmospheric loss on a large population of planets. By then modeling photoevaporative atmospheric loss in the same set of planets, we can directly compare the fingerprints of these two processes on a common sample. Similar to the findings of \cite{MacDonald2020}, we conclude that these two processes form planets with different radii largely depending on when planets complete their assembly: before or after the gas disk dissipates. Planets that finish forming after the gas disk dissipates run the risk of having their atmosphere stripped by an impact, regardless of separation from the host star. 
Systems that finish assembly before the gas disk dissipates lose atmosphere to photoevaporation in order of distance from the host star, owing to the ``peas-in-a-pod'' phenomenon in planet masses. The 15\% of planets that experience divergent outcomes, dependent on which loss mechanism is dominant, fall into three main categories. We have designated these as Populations A, B, and C. Populations A and C are predicted to retain H/He atmospheres when giant impacts are dominant, and to lose them if photoevaporation is dominant. Population B, in contrast, is comprised of planets that are predicted to retain H/He atmospheres if photoevaporation is dominant, and to lose them only when giant impacts play an important role. The predicted detection rates of planets in these populations enable us to identify which of these diagnostic planets are most useful from an observational standpoint. At a demographic level, we identify four diagnostic tools for discerning which atmospheric loss mechanism is dominant. These are: \begin{itemize} \item The relative emptiness of the radius gap, which is completely empty when planets are subjected to photoevaporation, but not empty when giant impacts are dominant, \item The occurrence of primordial atmospheres as a function of transit multiplicity, where under giant impacts the observed number of transiting planets is correlated with the likelihood of atmospheric retention, \item The size-ordering of planets as orbital period increases (also denoted ``monotonicity''), where giant impacts produce more sets of neighboring planets with very different densities, and \item The existence of very high and low density outliers when giant impacts are dominant, which makes it easier to distinguish rocky stripped cores from sub-Neptunes at a population level. 
\end{itemize} We have not considered the core-powered mass loss scenario here, whereby primordial atmospheres are lost due to the luminosity of the cooling rocky core \citep{Ginzburg2018}. We leave a comparison between this and other hypothetical atmospheric erosion mechanisms for future work. Future observations refining the resolution of the radius gap, together with radial velocity campaigns for a few hundred planets, will be sufficient to determine which physics is dominant for atmospheric loss in the latest stages of planet formation. \bibliography{qac_refs}
Title: Simulating gravitational waves passing through the spacetime of a black hole
Abstract: We investigate how GWs pass through the spacetime of a Schwarzschild black hole using time-domain numerical simulations. Our work is based on the perturbed 3+1 Einstein's equations up to the linear order. We show explicitly that our perturbation equations are covariant under infinitesimal coordinate transformations. Then we solve a symmetric second-order hyperbolic wave equation with a spatially varying wave speed. As the wave speed in our wave equation vanishes at the horizon, our formalism can naturally avoid boundary conditions at the horizon. Our formalism also does not contain coordinate singularities and, therefore, does not need regularity conditions. Then, based on our code, we simulate both finite and continuous initially plane-fronted wave trains passing through the Schwarzschild black hole. We find that for the finite wave train, the wave zone of GWs is wildly twisted by the black hole. For the continuous wave train, in contrast, and unlike in geometric optics, GWs cannot be sheltered by the black hole. A strong beam and an interference pattern appear behind the black hole along the optical axis. Moreover, we find that the back-scattering due to the interaction between GWs and the background curvature is strongly dependent on the direction of propagation of the trailing wavefront relative to the black hole.
https://export.arxiv.org/pdf/2208.01621
\title{Simulating gravitational waves passing through the spacetime of a black hole} \author{Jian-hua He$^{1,2}$} \thanks{Corresponding author: \href{mailto:hejianhua@nju.edu.cn}{hejianhua@nju.edu.cn}} \author{Zhenyu Wu$^{1,2}$} \affiliation{$^1$School of Astronomy and Space Science, Nanjing University, Nanjing 210023, P. R. China} \affiliation{$^2$Key Laboratory of Modern Astronomy and Astrophysics (Nanjing University), Ministry of Education, Nanjing 210023, P. R. China} \section{Introduction} Stellar-mass black holes are formed from the deaths of high-mass ($> 8M_{\odot}$) stars, and their number in our Milky Way is estimated to be $10^8 \sim 10^9$~\cite{Agol:2001hb}. The majority of these black holes are expected to be isolated due to the disruption of their progenitor systems~\cite{Belczynski_2004}, which exist mostly in binaries or in multiple systems~\cite{sana_2016}. However, despite their vast population, they are difficult to detect: unlike binary black holes, isolated black holes do not produce detectable emissions of their own. At the moment, the primary tools to detect them rely on either photometric microlensing~\cite{Bennett_2002,Mao} or astrometric microlensing (e.g.~\cite{2022arXiv220201903L}). On the other hand, the discovery of gravitational waves ushered us into a new era of astronomy~\cite{Abbott:2016blz}. In the coming decades, ground- and space-based GW experiments will study GW phenomena in unprecedented detail, such as the Einstein Telescope (ET)~\cite{Einstein_tele}, 40-km LIGO~\cite{LIGO40}, eLISA~\cite{2013arXiv1305.5720E}, DECIGO~\cite{Sato_2009}, and Pulsar Timing Arrays (PTA)~\cite{2010CQGra..27h4013H}. These experiments may provide a potential way beyond the conventional astronomical methods to detect isolated black holes. 
On the theoretical aspect, since an isolated black hole does not emit detectable GWs itself, one possible way to detect it is when some external GWs pass through it and leave some detectable features in the waveforms. The key question then becomes how to accurately predict the waveforms of GWs when they pass through a black hole. In the weak field limit, such a problem has been studied in previous works~\cite{PhysRevD.34.1708,Deguchi,Schneider,Ruffa_1999,DePaolis:2002tw,Takahashi:2003ix,1999PThPS.133..137N,Suyama:2005mx,Christian:2018vsi,Zakharov_2002,Liao:2019aqq,Macquart:2004sh,PhysRevLett.80.1138,Dai:2018enj,PhysRevD.90.062003,Yoo:2013cia,Nambu:2019sqn}. In these pioneering works (e.g.~\cite{Schneider}), a thin-lens model is assumed, in which the incident GWs are assumed to be far away from the optic axis and the impact parameter is much larger than the Schwarzschild radius of the lens mass $R_s$. The gravitational field of the black hole is weak in this case, and the deflection angle due to the black hole is small. In addition to the assumption of a thin lens, to address the wave effects of GWs, the previous works also applied {\it Kirchhoff's} diffraction theory to the lensing system. When the wavelength of GWs $\lambda$ is much smaller than the Schwarzschild radius, $\lambda\ll R_s$, the diffraction integral is dominated by the stationary phase points. These points can be viewed as corresponding to distinct images. The approximation is known as the stationary phase approximation or the geometric optics approximation. Based on the thin-lens model and the geometric optics approximation, most recently, a comprehensive analysis of lensing has been performed using the data from the first half of the third LIGO–Virgo observing run~\cite{LIGOScientific:2021izm}. However, no compelling evidence of lensing has been found. 
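The boundary between the two regimes can be made concrete: the GW wavelength matches the Schwarzschild radius at a frequency $f \simeq c/R_s = c^3/(2GM)$ (a minimal sketch of this order-of-magnitude arithmetic, for illustration only):

```python
# Frequency at which the GW wavelength equals the Schwarzschild radius,
# f = c / R_s = c^3 / (2 G M); diffraction is significant well below it.
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m / s
M_SUN = 1.989e30     # kg

def critical_frequency_hz(mass_msun):
    return c**3 / (2.0 * G * mass_msun * M_SUN)

# For a 10 M_sun black hole this is ~1e4 Hz, above the LIGO band, so a
# stellar-mass lens diffracts rather than focuses LIGO-band GWs.
print(critical_frequency_hz(10.0))
print(critical_frequency_hz(1e6))   # ~0.1 Hz, in the space-based band
```

This makes explicit why the geometric optics approximation is adequate for electromagnetic lensing but can fail for GWs lensed by stellar-mass black holes.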
In the opposite limit, when the wavelength of GWs is comparable to or much greater than the Schwarzschild radius $\lambda\ge R_s$, diffraction becomes significant. In this case, even in the weak field limit, the thin-lens model and the geometric optics approximation break down for waves along the optic axis. GWs do not form caustics behind the lens at scales that are comparable to their wavelength. Instead, GWs form a strong beam along the optic axis~\cite{He:2021hhl}. In strong gravity, the situation is more complicated due to the complexity of the black hole spacetime. To fully address GWs passing through the spacetime of strong gravity, we would like to first investigate the wave properties of GWs in the spacetime of a black hole, which play an important role here. In flat spacetime, GWs are described by \begin{equation} \frac{\partial^2}{\partial t^2}h_{ij}-c^2\nabla^2 h_{ij} =c^2 Q_{ij}(t,x) \,,\label{vaccumwave1} \end{equation} where $Q_{ij}(t,x)$ is the source term and $c$ is the speed of light in vacuum. The wave equation has a fundamental solution $G$ which satisfies \begin{equation} \frac{\partial^2}{\partial t^2}G - c^2\nabla^2 G =c^2\delta(t)\delta(x) \,.\nonumber \end{equation} For an outgoing wave, the explicit form of $G$ is known as the {\it retarded Green's function} \begin{equation} G^{+}(x,t;x',t')=\frac{\delta(t'-[t-\frac{\left\|x-x'\right\|}{c}])}{4\pi\left\|x-x'\right\|}\,.\nonumber \end{equation} Unlike the Green's function in the 2D (or even-dimensional) case, the 3D (or odd-dimensional) Green's function has a notable feature, namely, the signal is ``sharp''. A perturbation at a point $\vec{x}$ is visible at another point $\vec{x}'$ exactly at the time $t = |\vec{x} - \vec{x}'|/c$. Since the wave speed of GWs equals that of light rays, GWs travel exactly on the future direct light cone. Moreover, since the speed of GWs does not depend on frequency, there is no dispersion in a wavelet of GW signals. 
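Convolving the source with the retarded Green's function gives the standard solution of Eq.~(\ref{vaccumwave1}),
\begin{equation}
h_{ij}(t,x)=\int \frac{Q_{ij}\!\left(t-\frac{\left\|x-x'\right\|}{c},\,x'\right)}{4\pi\left\|x-x'\right\|}\,d^3x'\,,\nonumber
\end{equation}
so that the field at $(t,x)$ depends on the source only through its values on the past light cone.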
This leads to the strong Huygens' principle in the vacuum of flat spacetime~\cite{1965ArRMA..18..103G}: a finite GW signal has a definite wave zone with clear leading and trailing wavefronts. However, the strong Huygens' principle does not hold in strong gravity. A strict mathematical theorem states that in four-dimensional spacetime the wave equation on an empty spacetime with a vanishing Ricci tensor satisfies Huygens' principle if and only if the spacetime is flat or is that of a plane wave \cite{1965ArRMA..18..103G,1990GReGr..22..843W,mclenaghan_1969}. As such, unlike in the case of flat spacetime, in strong gravity, besides the original signals, GWs can be scattered back due to the interaction between GWs and the background curvature~\cite{1992JMP....33..625S,1975weoc.book.....F}. As a result, a ``sharp'' signal is no longer ``sharp'' but is dispersed, which generates a ``lasting'' effect and leaves a tail of GW signals. This tail blurs the trailing wavefront. In this case, GWs propagate not only on the light cone but also inside it~\cite{1992JMP....33..625S,Poisson:2003nc}. Unlike the trailing wavefront, the leading wavefront represents the transfer of energy from one place to another, which is subject to the constraint of causality. Its propagation does not depend on the frequencies of GWs, because the leading wavefront is not necessarily continuous but can have a sharp edge (e.g. square waves). Mathematically, Fourier series converge only in the sense of the $L_2$ norm, i.e., up to functions that differ on a set of Lebesgue measure zero. They do not necessarily converge pointwise unless the function is continuously differentiable. Thus, the partial sums of the Fourier series fail to converge uniformly near a discontinuity of a piecewise smooth function and overshoot it, which is known as the Gibbs phenomenon.
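The overshoot of truncated Fourier sums near a jump can be seen in a few lines of \texttt{numpy} (a sketch for a unit square wave; the peak tends to the Wilbraham--Gibbs value $\approx 1.179$ rather than the function value $1$):

```python
import numpy as np

# Partial Fourier sum of a unit square wave f(x) = sign(sin x):
#   S_N(x) = (4/pi) * sum over odd n of sin(n x) / n
x = np.linspace(1e-4, np.pi - 1e-4, 20001)
N_terms = 200
S = np.zeros_like(x)
for n in range(1, 2 * N_terms, 2):
    S += (4 / np.pi) * np.sin(n * x) / n

# The overshoot near the jump at x = 0 does not shrink as N grows:
# the peak tends to (2/pi) * Si(pi) ~ 1.1789 instead of 1.
print(S.max())
```

Increasing \texttt{N\_terms} narrows the overshooting spike but does not reduce its height, which is the essence of the non-uniform convergence near the edge.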
Moreover, unlike in the 1D case, the wavefront in 3D space may have a complex geometric shape at a given time (e.g. the wavefront presented in~\cite{He:2021hhl}). It is difficult to apply spectral methods based on eigenfunctions (e.g. Fourier analysis) in this case. The perturbation theory of a black hole has already been studied extensively in the literature. Historically, these pioneering works assume that the geometric shape of the hypersurfaces of waves at a given time (wavefronts) is spherical. The angular dependence of waves can then be separated, which leaves a set of master equations along the radius, such as the Regge-Wheeler~\cite{PhysRev.108.1063}, Zerilli~\cite{PhysRevLett.24.737,PhysRevD.2.2141} and Bardeen-Press~\cite{1973JMP....14....7B} equations for the Schwarzschild black hole, and the Teukolsky equation~\cite{Teukolsky} for the Kerr black hole. Furthermore, by imposing boundary conditions at both the horizon and spatial infinity, the wave equations can finally be solved. Despite the success of these analyses~\cite{1992mtbh.book.....C}, it is necessary to go beyond the assumption of spherical waves. Even in the weak field limit, as shown in~\cite{He:2021hhl}, when GW signals pass through a compact object, the geometric shape of the outgoing GWs is no longer spherical but rather complicated. It is difficult for the conventional analysis based on spherical waves to describe such situations, and new methods are needed. However, due to the complexity of the perturbed equations of a black hole, it is difficult to find analytical expressions for wavefronts with generic shapes. Numerical techniques, therefore, are called for in this case. This work aims to extend our work~\cite{He:2021hhl} from simulating the propagation of GWs in a potential well in the weak field limit to the regime of strong gravity.
However, a key difference between our work and the scattering theory of black holes~\cite{futterman_handler_matzner_1988,Peters,Suyama:2005mx} is that our work focuses on a localized wave within a finite spacetime in the time domain. We treat the propagation of waves as a ``Cauchy'' problem for hyperbolic equations, which is fundamentally local. After a Fourier transform, the wave equations in the frequency domain become elliptic (e.g. Helmholtz equations), which correspond to a ``boundary-value'' problem~\cite{nla:cat-vn1414651}. The wave functions, in this case, are determined by boundary conditions and the shape of the incident wave packet. In this work, we derive the linear perturbation equations of a black hole based on the covariant 3+1 form of Einstein's equations~\cite{gourgoulhon20123}. An advantage of this formalism is that it is less coordinate-dependent: it avoids the coordinate singularities at $\theta =0\,,\pi$ of spherical polar coordinates and does not need regularity conditions to be imposed there. This is important for our 3D simulations. Our evolution scheme is similar to those used for solving wave equations in numerical relativity~\cite{PhysRevD.70.104007,PhysRevD.77.084007}. However, a key difference between our work and those presented in~\cite{PhysRevD.70.104007} is that the wave speed in our scheme is no longer a constant but varies in space, equalling the speed of light as observed by an asymptotic observer. Following our previous work~\cite{He:2021hhl}, the main numerical technique used in this work is the finite element method (FEM) (see e.g. the textbook~\cite{FEMbook} for details). Unlike conventional numerical methods such as the finite difference method (FDM), the FEM is based on the {\it weak formulation} or {\it variational formulation} of partial differential equations (PDEs). The solutions of PDEs are expanded in terms of {\it ansatz} functions. The domain of interest is then decomposed into finite elements.
On each element, a shape function is assigned. If the {\it ansatz} function is the same as the shape function, the scheme is called the Galerkin scheme, and the FEM in this case is called the Galerkin FEM. The shape function can be either continuous or discontinuous; the resulting methods are called the continuous Galerkin FEM (cGFEM) and the discontinuous Galerkin FEM (dGFEM)~\cite{osti_4491151}, respectively. Compared with the cGFEM, the dGFEM is more flexible in its local shape functions and more stable for convective problems. However, it is usually combined with an explicit time discretization and also requires a much larger number of degrees of freedom, due to the higher order shape functions used. In this work, we adopt the cGFEM. One particular reason for this is that we implement an implicit time discretization, the Crank–Nicolson scheme. In flat spacetime, this scheme is symplectic, which inherently preserves the energy of plane waves during their propagation. Throughout this paper, we adopt geometric units $c=G=1$, in which $1\,{\rm Mpc}=1.02938\times 10^{14} {\rm Hz}^{-1}$ and $1 M_{\odot}=4.92535\times 10^{-6} {\rm Hz}^{-1}\,.$ The advantage of this unit system is that time and space have the same unit. This paper is organized as follows: In section~\ref{sec::background}, we introduce the 3+1 Einstein's equations for the Schwarzschild spacetime in isotropic coordinates. In section~\ref{sec::perturbed}, we present the perturbed equations up to linear order based on the 3+1 Einstein's equations. In section~\ref{sec::symmetric}, we introduce the symmetric hyperbolic equations for GWs in Schwarzschild spacetime. In section~\ref{sec::FEM}, we introduce the weak formulation of the wave equations and the cGFEM. In section~\ref{sec::numres}, we present our numerical results. In section~\ref{sec::conclusions}, we summarize and conclude this work.
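As a minimal illustration of the energy-preserving property of the Crank--Nicolson scheme mentioned above, the following sketch evolves the 1D wave equation $u_{tt}=u_{xx}$ on a periodic grid, using a simple finite-difference discretization in space for brevity (rather than the cGFEM used in this work), and monitors the discrete energy:

```python
import numpy as np

# 1D wave equation u_tt = u_xx on a periodic grid, written as the
# first-order system w = (u, v) with u_t = v, v_t = D2 u. For this
# linear system the Crank-Nicolson (trapezoidal) update conserves the
# discrete energy up to round-off.
J = 200
dx = 2 * np.pi / J
x = np.arange(J) * dx
dt = 0.5 * dx

# Periodic second-difference operator
D2 = (np.roll(np.eye(J), 1, axis=0) - 2 * np.eye(J)
      + np.roll(np.eye(J), -1, axis=0)) / dx**2

# Crank-Nicolson update matrix for w' = A w:
# (I - dt/2 A) w^{n+1} = (I + dt/2 A) w^n
A = np.block([[np.zeros((J, J)), np.eye(J)], [D2, np.zeros((J, J))]])
I = np.eye(2 * J)
step = np.linalg.solve(I - 0.5 * dt * A, I + 0.5 * dt * A)

# Gaussian pulse, initially at rest
w = np.concatenate([np.exp(-10 * (x - np.pi)**2), np.zeros(J)])

def energy(w):
    u, v = w[:J], w[J:]
    return 0.5 * (v @ v - u @ (D2 @ u)) * dx

E0 = energy(w)
for _ in range(1000):
    w = step @ w
print(abs(energy(w) / E0 - 1))  # energy drift stays at round-off level
```

The conservation follows because the Crank--Nicolson map is a Cayley transform of the skew operator $A$, which preserves the quadratic energy of the discretized system exactly in exact arithmetic.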
\section{background spacetime\label{sec::background}} We choose the background to be Schwarzschild spacetime. We present the line element in terms of the lapse function $N$ and the shift vector $\vec{\beta}$ \begin{equation} ds^2=g_{\mu\nu}dx^{\mu}dx^{\nu}=-N^2dt^2+\gamma_{ij}(dx^i+\beta^i dt)(dx^j+\beta^j dt)\,\label{lineelement}, \end{equation} where $\gamma_{ij}$ is the spatial metric. The Greek letters $\mu$ and $\nu$ run from 0 to 3 and the Latin indices $i$ and $j$ run from 1 to 3. In isotropic coordinates, the lapse function and $\gamma_{ij}$ are given by \begin{align} N&=\frac{1-\frac{M}{2\rho}}{1+\frac{M}{2\rho}}\,,\label{metric1}\\ \gamma_{ij}&=\left(1+\frac{M}{2\rho}\right)^4\delta_{ij}\,,\label{metric2} \end{align} where $\rho=\sqrt{x^ix_i}$ is the radius in isotropic coordinates. The shift vector vanishes in this case, $\vec{\beta}=(0,0,0)$. The 3+1 Einstein equations with respect to the coordinates $(t,\vec{x})$ are given by~\cite{Baumgarte:2002jm,gourgoulhon20123+1} \begin{align} \frac{\partial}{\partial t}\gamma_{ij}&=-2NK_{ij}\,,\label{evolution1}\\ \frac{\partial}{\partial t} K_{ij} &= -D_iD_j N + N (R_{ij}+KK_{ij}-2K_{il} \tensor{K}{^l_j})\label{evolution2}\,, \end{align} where $K_{ij}$ is the extrinsic curvature tensor, $R_{ij}$ is the Ricci tensor of the 3D space and $D_i$ is the covariant derivative with respect to the 3D spatial metric $\gamma_{ij}$. Since $\gamma_{ij}$ is time independent, $\frac{\partial}{\partial t}\gamma_{ij}=0$, Eq.~(\ref{evolution1}) implies that $K_{ij}$ vanishes on all the hyper-surfaces $\Sigma_t$ \begin{align} &K_{ij}=0\,.\label{static0} \end{align} The slicing in this case, with $K=0$, is known as maximal slicing. The lapse function satisfies the following relations \begin{align} D_iD_jN&=NR_{ij}\neq 0 \,(i\neq j)\,,\label{static1}\\ D_iD^i N&=N R =0\,.\label{static2} \end{align} Equations~(\ref{evolution1}) and (\ref{evolution2}) constitute a time evolution system as a Cauchy problem.
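For reference, the background quantities above are easy to evaluate numerically (a sketch with $M=1$ for illustration; the lapse vanishes at the horizon $\rho=M/2$, where the areal radius $\rho(1+M/2\rho)^2$ equals $2M$):

```python
import numpy as np

M = 1.0  # illustrative choice of the mass

def lapse(rho):
    """Lapse N of Eq. (metric1) in isotropic coordinates."""
    return (1 - M / (2 * rho)) / (1 + M / (2 * rho))

def psi4(rho):
    """Conformal factor of the spatial metric, gamma_ij = psi4 * delta_ij, Eq. (metric2)."""
    return (1 + M / (2 * rho))**4

# The lapse vanishes at rho = M/2 and approaches 1 far from the black hole
print(lapse(M / 2), lapse(1e8))
# rho = M/2 is the horizon: the areal (Schwarzschild) radius there is 2M
print((M / 2) * (1 + M / (2 * (M / 2)))**2)
```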
Inserting Eqs.~(\ref{static0},\ref{static1},\ref{static2}) into Eqs.~(\ref{evolution1}) and (\ref{evolution2}), one can verify that the background metric is consistent with the evolution equations. In addition to the evolution equations, the 3+1 Einstein's equations are also subject to the Hamiltonian and momentum constraints, \begin{align} \mathcal{H}&=\frac{1}{2}\left(R+K^2-K_{ij}K^{ij}\right)=0\,,\label{constraint1}\\ M_i&=D_j\tensor{K}{^j_i}-D_i K = 0\,\label{constraint2}. \end{align} For the background spacetime, the above constraints are automatically satisfied since $R=0$ and $K=0$. \section{perturbed spacetime\label{sec::perturbed}} \subsection{perturbed wave equations} From Eqs.~(\ref{evolution1}) and (\ref{evolution2}), the perturbed 3+1 Einstein equations are given by \begin{align} \frac{\partial}{\partial t} h_{ij} &= -2\delta N K_{ij}-2N\delta K_{ij}\,,\label{perturbe1}\\ \frac{\partial}{\partial t} \delta K_{ij} &= -\delta (D_iD_jN)+N\delta R_{ij}+N\delta K K_{ij}\nonumber\\ &+N K \delta K_{ij} -2N\delta K_{il} \tensor{K}{^l_j}-2NK_{il}\delta \tensor{K}{^l_j}\nonumber\\ &+\delta N (R_{ij}+KK_{ij}-2K_{il} \tensor{K}{^l_j}) \,,\label{perturbe2} \end{align} where $h_{ij}$ denotes the perturbed metric \begin{equation} h_{ij} = \delta \gamma_{ij} \,.
\end{equation} $\delta R_{ij}$ is the perturbed Ricci tensor, which can be expressed in terms of the covariant derivative $D$ and $\delta \Gamma^{k}_{ij}$ \begin{align} \delta R_{ij}&=D_k\delta \Gamma^k_{ij}-D_j\delta \Gamma^k_{ik}\nonumber\\ &=\frac{1}{2}(D^lD_i h_{lj}+D^lD_jh_{il}-D^lD_lh_{ij})\nonumber\\ &-\frac{1}{2}D_jD_i(\gamma^{kl}h_{lk})\,.\label{deltaRicci} \end{align} Although the Christoffel symbol $\Gamma^{k}_{ij}$ itself is not a tensor, its perturbation $\delta \Gamma^{k}_{ij}$ is a tensor, which can be written in covariant form~\cite{PhysRev.146.938} \begin{align} \delta \Gamma^{k}_{ij} =\frac{1}{2}\gamma^{kl}(D_i h_{lj}+D_j h_{il}-D_l h_{ij})\,. \end{align} The perturbation $2N\delta(D_iD_jN)$ is given by \begin{align} 2N\delta(D_iD_jN)&=2N\left(D_iD_j\delta N-\delta \tensor{\Gamma}{^k_i_j}\partial_k N\right) \nonumber\\ &=2N\left(\frac{\partial^2 \delta N}{\partial x^i\partial x^j} -\tensor{\Gamma}{^k_i_j}\partial_k\delta N\right)\nonumber\\ &-N\partial_kN\gamma^{kl}(D_ih_{lj}+D_jh_{il}-D_lh_{ij})\,,\label{deltaN} \end{align} where \begin{align} D_iD_j\delta N=\frac{\partial^2 \delta N}{\partial x^i\partial x^j}-\tensor{\Gamma}{^k_i_j}\partial_k\delta N\,. \end{align} Next, taking the time derivative of Eq.~(\ref{perturbe1}) and using Eq.~(\ref{perturbe2}) to eliminate $\delta K_{ij}$, we obtain a second-order equation for $h_{ij}$ \begin{align} \frac{\partial^2}{\partial t^2}h_{ij}&=-2\frac{\partial \delta N}{\partial t} K_{ij}-2\delta N\frac{\partial K_{ij}} {\partial t} \nonumber\\ &+2N\delta(D_iD_jN)-2N^2\delta R_{ij}\nonumber\\ &-2N^2\delta K K_{ij}-2N^2K\delta K_{ij} + 4N^2\delta K_{il}\tensor{K}{^l_j} \nonumber\\ &+4N^2 K_{il}\delta \tensor{K}{^l_j}-2N\delta N (R_{ij}+KK_{ij}-2K_{il} \tensor{K}{^l_j}) \,.
\label{waveequation} \end{align} Noting that the background quantities vanish, $K_{ij}=0$, $K=0$ and $\tensor{K}{_i^j}=0$, the above equation reduces to \begin{align} \frac{\partial^2}{\partial t^2}h_{ij}&= 2N\delta(D_iD_jN)-2N^2\delta R_{ij}-2N\delta N R_{ij}\,.\label{waveequation2} \end{align} Inserting Eq.~(\ref{deltaRicci}) and Eq.~(\ref{deltaN}) into Eq.~(\ref{waveequation2}), we obtain \begin{align} \frac{\partial^2}{\partial t^2}h_{ij}&=2ND_iD_j\delta N-2N\delta N R_{ij}\nonumber \\ +&N^2(D_iD_jh+D^lD_lh_{ij}-D^lD_ih_{lj}-D^lD_jh_{il})\nonumber\\ -&N\partial_kN\gamma^{kl}(D_i h_{lj}+D_j h_{il}-D_l h_{ij})\,. \end{align} Using the identity \begin{align} D^lD_ih_{lj}=D_iD^l h_{lj}+\tensor{R}{^m_j_i^n}h_{mn}+\tensor{R}{_i^l}h_{lj}\,, \end{align} we finally arrive at \begin{align} \frac{\partial^2}{\partial t^2}h_{ij}&=2ND_iD_j\delta N-2N\delta N R_{ij}\nonumber \\ &+N^2D^lD_lh_{ij}+N^2D_iD_jh-N^2\left(D_i \Gamma_j+D_j\Gamma_i\right)\nonumber \\ &-N^2(2\tensor{R}{^m_i_j^n}h_{mn}+\tensor{R}{_i^l}h_{lj}+\tensor{R}{_j^l}h_{li})\nonumber\\ &-N\partial_kN\gamma^{kl}(D_i h_{lj}+D_j h_{il}-D_l h_{ij})\,, \label{wavetensor} \end{align} where \begin{align} \Gamma_j=D^{l}h_{lj}\,. \end{align} Equation~(\ref{wavetensor}) is the most general linearly perturbed equation for gravitational waves in Schwarzschild spacetime. If $\gamma_{ij}$ is flat, Eq.~(\ref{wavetensor}) is consistent with Eq.~(85) of Ref.~\cite{PhysRevD.70.104007}. \subsection{general covariance} Unlike in non-linear perturbation theory, general covariance plays a vital role in linear perturbation theory. To highlight this point, we consider an arbitrary infinitesimal coordinate transformation $\eta^i$ \begin{equation} \tilde{\rho}^{i}=\rho^{i}+\eta^i\,.
\end{equation} For a scalar field $S$ and a tensor field $T_{ij}$, the perturbed quantities transform under $\eta^i$ as \begin{align} \delta\tilde{S} &\rightarrow \delta S-\mathcal{L}_{\vec{\eta}}S\,,\\ \delta\tilde{T}_{ij} &\rightarrow \delta T_{ij}-\mathcal{L}_{\vec{\eta}}T_{ij}\,, \end{align} where $\mathcal{L}_{\vec{\eta}}$ denotes the Lie derivative. For a scalar field, the Lie derivative gives \begin{equation} \mathcal{L}_{\vec{\eta}}S= \eta^kD_k S\,, \end{equation} and for a tensor field, the Lie derivative reads \begin{equation} \mathcal{L}_{\vec{\eta}}T_{ij}=T_{ik}D_j\eta^k+T_{kj}D_i\eta^k + \eta^kD_k T_{ij}\,. \end{equation} The perturbed quantities $\delta N,\,$ $h_{ij}$ and $\delta R_{ij}$ then transform as \begin{equation} \left\{ \begin{aligned} \delta \tilde{N}&\rightarrow \delta N - \eta^kD_k N\\ \tilde{h}_{ij}&\rightarrow h_{ij} - \gamma_{ik}D_j\eta^k-\gamma_{kj}D_i\eta^k - \eta^kD_k\gamma_{ij}\\ \delta \tilde{R}_{ij}&\rightarrow \delta R_{ij} - R_{ik}D_j\eta^k-R_{kj}D_i\eta^k - \eta^kD_k R_{ij} \label{infinitrans} \end{aligned} \right.\,. \end{equation} Since a gauge transformation simply changes the coordinates, it does not alter the physics. If an equation describes real physics, it should take the same form in different coordinate systems. This principle is known as general covariance. It turns out that Eq.~(\ref{waveequation2}) does have this property: under the above gauge transformation, Eq.~(\ref{waveequation2}) keeps the same form in the new coordinate system. The detailed proof is provided in appendix~\ref{covariantproof}. A key point in the proof is that $\delta N$ does not vanish, which plays an essential role for Eq.~(\ref{waveequation2}) to be covariant. If $\Sigma_t$ is compact, or if $\Sigma_t$ is open but $\eta^i$ vanishes sufficiently rapidly at infinity, the tensor field $h_{ij}$ can be decomposed into scalar, vector and tensor components according to their gauge-transformation properties \cite{1984Psasaki}.
However, although in most cases it is possible to perform such a decomposition for $h_{ij}$, there is no guarantee that the different components evolve independently, because their evolution equations may couple to one another. To illustrate this point, we decompose $\eta^k$ as \begin{align} &\eta^k = D^k\eta + \eta^k_* \,,\nonumber\\ &D_k\eta^k_*=0\,. \end{align} $\eta$ and $\eta^k_*$ then represent the scalar and vector infinitesimal transformations, respectively. Note that these two transformations are independent of each other. $\eta^k_*$ alone, therefore, does not change the scalar components of $h_{ij}$. However, this may not be true for the second-order derivatives of $h_{ij}$ \begin{align} D^iD^j\tilde{h}_{ij}\rightarrow D^iD^j h_{ij} - 2 R_{ij}D^i\eta^j_*\,. \end{align} Although $D^iD^j h_{ij}$ itself is a scalar field, whether it changes under the vector coordinate transformation $\eta^k_*$ depends on the curvature of $\Sigma_t$. If the background is maximally symmetric, such as in the case of the Robertson--Walker metric, \begin{align} R_{ij}=2K\gamma_{ij}\,, \end{align} then $R_{ij}D^i\eta^j_*$ vanishes. The vector transformation $\eta^k_*$ in this case does not induce any change in the scalar quantity $D^iD^j h_{ij}$, and the equations for the scalar and vector components evolve independently. In cosmological perturbation theory, the explicit proof of the independence of the different components can be found in Appendix B of Ref.~\cite{1984Psasaki}, where the constant curvature of the background space plays an essential role. However, in our case $R_{ij}D^i\eta^j_*$ does not vanish, since $R_{ij}\neq 0 \, (i\neq j)$ and $R_{ij}$ cannot be written as a constant multiple of $\gamma_{ij}$. The different components in Eq.~(\ref{wavetensor}) thus couple to one another. In this case, the most general perturbed equation should be used directly.
\subsection{perturbed conservation laws\label{pconser}} In addition to the wave equation, the perturbed constraint Eq.~(\ref{constraint1}) gives \begin{align} \delta R + 2K\delta K - \delta K_{ij}K^{ij}-K_{ij} \delta K^{ij} = 0\,. \end{align} Since the background extrinsic curvature vanishes, we obtain \begin{align} \delta R = 0\,. \end{align} From Eq.~(\ref{deltaRicci}), we obtain \begin{align} \gamma^{ij} \delta R_{ij} = D^l \Gamma_l-D^lD_l h\,. \end{align} Further noting that \begin{align} \delta R = \delta \gamma^{ij} R_{ij} + \gamma^{ij} \delta R_{ij} \end{align} and \begin{align} \delta \gamma^{ij} = -\gamma^{im}\gamma^{jn} h_{mn}\,, \end{align} we obtain \begin{align} D^l \Gamma_l-D^lD_l h=\gamma^{im}\gamma^{jn} h_{mn}R_{ij}\,.\label{Dgamma} \end{align} The perturbation of Eq.~(\ref{static2}) gives \begin{align} \delta(D_iD^i N)=\gamma^{ij}\delta(D_iD_jN)-\gamma^{im}\gamma^{jn}h_{mn}D_iD_jN=0\,. \end{align} Contracting Eq.~(\ref{waveequation2}) with $\gamma^{ij}$, we obtain \begin{align} \frac{\partial^2h}{\partial t^2}&=2N\gamma^{ij}\delta(D_iD_j N)-2N^2\gamma^{ij}\delta R_{ij}\nonumber\\ &=2N\gamma^{im}\gamma^{jn}h_{mn}(D_iD_j N - N R_{ij})\nonumber \\ &=0\,.\label{wavetrace} \end{align} Therefore, we can take the trace of $h_{ij}$ to be zero, $h=0$. Equation~(\ref{Dgamma}) then gives \begin{align} D^l\Gamma_l=\gamma^{im}\gamma^{jn}h_{mn}R_{ij}=\gamma^{im}\gamma^{jn}h_{mn}\frac{D_iD_jN}{N}\,.\label{Gamma_new} \end{align} Next, contracting Eq.~(\ref{perturbe1}) with $\gamma^{ij}$ and noting that $h=0$, we obtain \begin{align} \delta K = 0\,. \end{align} The perturbation of the momentum constraint Eq.~(\ref{constraint2}) reduces to \begin{align} D^j \delta K_{ji} = 0\,.\label{mconstraint} \end{align} Taking the spatial derivative of Eq.~(\ref{perturbe1}), we obtain \begin{align} \frac{\partial}{\partial t} D^i h_{ij} = -2D^iN\delta K_{ij}-2ND^i\delta K_{ij}\,.
\end{align} Using the definition \begin{align} \Gamma_j=D^{l}h_{lj} \end{align} and Eq.~(\ref{mconstraint}), we find \begin{align} \frac{\partial}{\partial t} \Gamma_j = D^i\ln N\frac{\partial}{\partial t} h_{ij}\,. \end{align} The above equation has a general solution, which takes the form \begin{align} \Gamma_l = D^m \ln N h_{ml} +A_l \,,\label{Gammahml} \end{align} where $A_l$ is a free vector field that does not depend on time, $\frac{\partial {A_l}}{\partial t}=0$. To fix the choice of the vector field $A_l$, we take the spatial derivative of Eq.~(\ref{Gammahml}) \begin{align} D^l\Gamma_l &= (D^l D^m \ln N) h_{ml} + D^m \ln N D^l h_{ml} +D^l A_l \nonumber\\ &=(D^n D^m \ln N + D^m \ln N D^n \ln N) h_{mn} \nonumber\\ &+ D^m \ln N A_m + D^m A_m \nonumber\\ &= \frac{D^mD^n N}{N} h_{mn}\,. \label{Gammaequality} \end{align} In the last equality, we have used Eq.~(\ref{Gamma_new}). Further using the identity \begin{align} D^n D^m \ln N + D^m \ln N D^n \ln N = \frac{D^mD^n N}{N}\,, \end{align} from Eq.~(\ref{Gammaequality}) we obtain \begin{align} D^m \ln N A_m + D^m A_m = 0\,. \end{align} The above equation places a constraint on the choice of $A_m$. In this work, we take $A_m = 0$, which is an obvious solution of the above equation. Equation~(\ref{Gammahml}) reduces to \begin{align} \Gamma_l = D^m \ln N h_{ml} \,.\label{GammahmlNOa} \end{align} The above relation plays a vital role in the numerical solution of Eq.~(\ref{wavetensor}). We will return to this point in the following sections. \subsection{Choosing coordinates} As already noted, an infinitesimal coordinate transformation from the background spacetime can lead to non-zero perturbations. Although these perturbations still satisfy the covariant perturbed equation Eq.~(\ref{waveequation2}), it is obvious that they do not represent any true physics.
One way to get around this problem is to evolve the perturbed equations in a particular gauge and then extract the gauge-invariant GWs from the symmetric trace-free tensor $h_{ij}$ later on (see~\cite{gourgoulhon20123+1} for reviews). Choosing a gauge is usually done by picking particular forms for the lapse $N$ and the shift $\beta_i$. In our work, since $\beta_i=0$, we only need to choose a particular form for $N$. In this work, we choose a gauge in which $\delta N =0$. This slicing is known as maximal slicing, since the mean extrinsic curvature vanishes on the hypersurfaces $\Sigma_t$, $K=\delta K=0$. \section{symmetric hyperbolic equations\label{sec::symmetric}} For numerical reasons, it is more convenient to write the covariant derivatives $D^lD_l$ in Eq.~(\ref{wavetensor}) in terms of ordinary partial derivatives \begin{align} N^2D^lD_lh_{ij}=c^2\nabla^2 h_{ij}-\frac{32(2\rho-M)^2\rho^3M}{(2\rho+M)^7}\frac{\rho^l}{\rho}\partial_{l} h_{ij}\,, \end{align} where $\nabla^2 = \partial_i\partial^i$ and the coefficient $c^2$ is defined by \begin{align} c^2 &= \frac{16\rho^4 (2\rho-M)^2}{(2\rho+M)^6}\,.\label{definationspeed} \end{align} $c$ has a clear physical meaning: it is the speed of GWs in Schwarzschild spacetime. Note that $c$ is not a constant but varies in space. In fact, it has long been known that the major challenge of solving the wave equation Eq.~(\ref{wavetensor}) lies in the terms associated with $\partial_i \Gamma_j$, which involve the mixed second-order spatial derivatives of $h_{ij}$~\cite{Baumgarte:2010ndz}. The wave equation Eq.~(\ref{wavetensor}) in this case is no longer {\it symmetric hyperbolic} nor {\it well-posed}, as the mixed terms can generate exponentially growing modes in the solutions~\cite{Baumgarte:2010ndz}.
One way to overcome this problem is to express $\Gamma_j$ in terms of $h_{ij}$ rather than its spatial derivatives, using the momentum-constraint relation Eq.~(\ref{GammahmlNOa}) \begin{align} \Gamma_j = \gamma^{mn}D_n \ln N h_{mj}=\frac{f_{\Gamma}}{N^2}\frac{\rho^m}{\rho} h_{mj}\,. \end{align} We then obtain \begin{align} &N^2\left(\partial_i \Gamma_j+\partial_j\Gamma_i-2\tensor{\Gamma}{^k_i_j}\Gamma_k\right)\nonumber \\ =&f_{\kappa}\frac{\rho^m\rho^k}{\rho^2}(\delta_{ki}h_{mj}+\delta_{kj}h_{mi})\nonumber \\ +&f_{\Gamma}\frac{\rho^l}{\rho}\left(\partial_i h_{lj}+\partial_j h_{il}-2 \tensor{\Gamma}{^m_i_j} h_{lm}\right)\,,\label{N2Gamma} \end{align} where \begin{align} f_{\kappa}&=-\frac{64M\rho^3(3M^2-8M\rho+12\rho^2)}{(2\rho+M)^8}\nonumber\\ f_{\Gamma}&= \frac{64(2\rho-M) M\rho^4}{(2\rho+M)^7}\,. \end{align} Equation~(\ref{N2Gamma}) thus does not involve the mixed second-order spatial derivatives of $h_{ij}$. The wave equation, in this case, is {\it symmetric hyperbolic} and also {\it strongly hyperbolic} (see Chapter 11.1 in~\cite{Baumgarte:2010ndz}). The wave equation is {\it well-posed} and turns out to be stable in the numerical process. Next, we rewrite the Ricci tensor in the second term on the RHS of Eq.~(\ref{wavetensor}) as \begin{align} N^2\gamma^{lk}\tensor{R}{_i_k}h_{lj}=f_R\delta^{lk}\tilde{R}_{ik}h_{lj}\,, \end{align} where $R_{ij}$ and $\tilde{R}_{ij}$ are given by \begin{align} R_{ij}&=\frac{4M}{\rho^3(2\rho+M)^2}(\delta_{ij}\rho^2-3\rho_i\rho_j)\nonumber\\ &=\frac{4M}{\rho^3(2\rho+M)^2}\tilde{R}_{ij}\,,\\ \tilde{R}_{ij}&=\delta_{ij}\rho^2-3\rho_i\rho_j\,. \end{align} The coefficient $f_{R}$ is defined by \begin{align} f_R = \frac{64(2\rho-M)^2 M\rho}{(2\rho+M)^8}\,.
\end{align} The terms associated with the Riemann tensor can be written as \begin{align} N^2\tensor{R}{^m_i_j^n}h_{mn}=N^2\gamma^{mp}\gamma^{nq}\tensor{R}{_p_i_j_q}h_{mn}\nonumber\\ =f_R\delta^{mp}\delta^{nq}\tensor{\tilde{R}}{_p_i_j_q}h_{mn}\,, \end{align} where \begin{align} R_{pijq}&=\frac{M(2\rho+M)^2}{4\rho^7}\tilde R_{pijq}\,,\\ \tilde R_{ijij}&=2\rho^2-3\rho_i^2-3\rho_j^2\,,\quad(i\neq j)\\ \tilde {R}_{ijik}&=-3\rho_j\rho_k\,,\quad(i\neq j \neq k)\,. \end{align} The last term on the RHS of Eq.~(\ref{wavetensor}) can be simplified as \begin{align} &N\partial_kN\gamma^{kl}(D_i h_{lj}+D_j h_{il}-D_l h_{ij})\nonumber\\ =&f_{\Gamma}\frac{\rho^l}{\rho}\left[\partial_i h_{lj}+\partial_j h_{il}-\partial_l h_{ij}-2 \tensor{\Gamma}{^m_i_j} h_{lm}\right]\,. \end{align} Finally, combining all the above expressions, we obtain \begin{align} \frac{\partial^2}{\partial t^2}h_{ij}&=c^2\nabla^2 h_{ij}-f_{\kappa}\frac{\rho^m\rho^k}{\rho^2}(\delta_{ki}h_{mj}+\delta_{kj}h_{mi})\nonumber\\ &+f_{\rho}\frac{\rho^k}{\rho}\partial_{k} h_{ij}-2f_R\delta^{mp}\delta^{nq}\tensor{\tilde{R}}{_p_i_j_q}h_{mn}\nonumber\\ &-f_R(\delta^{lk}\tilde{R}_{ik}h_{lj}+\delta^{lk}\tilde{R}_{jk}h_{li})\nonumber\\ &-2f_{\Gamma}\frac{\rho^l}{\rho}\left(\partial_i h_{lj}+\partial_j h_{il}-2 \tensor{\Gamma}{^m_i_j} h_{lm}\right)\,,\label{perturbedwaveequation} \end{align} where \begin{align} f_{\rho}&=\frac{32(2\rho-M)\rho^3M^2}{(2\rho+M)^7}\nonumber\,. \end{align} The above equation is the core equation we aim to solve in this work. Before going further, we shall address several key aspects of Eq.~(\ref{perturbedwaveequation}). \subsection{isotropic wave speed} The principal part of the wave equation Eq.~(\ref{perturbedwaveequation}) is \begin{align} \frac{\partial^2}{\partial t^2}h_{ij}&=c^2\nabla^2 h_{ij}+{\rm dispersion\,terms} \,, \end{align} where $c$ is the speed of GWs measured by a static observer $(\frac{\partial}{\partial t})^a$ at spatial infinity. Its value varies in space.
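The coefficient $c$ of Eq.~(\ref{definationspeed}) can be checked numerically against the null-curve speed $N/(1+M/2\rho)^2$ and its limiting values (a sketch with $M=1$ for illustration):

```python
import numpy as np

M = 1.0  # illustrative choice of the mass

def c_gw(rho):
    """Coordinate speed of GWs: c^2 = 16 rho^4 (2rho - M)^2 / (2rho + M)^6."""
    return np.sqrt(16 * rho**4 * (2 * rho - M)**2 / (2 * rho + M)**6)

rho = np.linspace(0.51 * M, 50 * M, 1000)
N = (1 - M / (2 * rho)) / (1 + M / (2 * rho))      # lapse
psi2 = (1 + M / (2 * rho))**2                      # conformal factor psi^2

# c agrees with the null-curve speed N / psi^2
print(np.max(np.abs(c_gw(rho) - N / psi2)))        # ~ machine precision

# c vanishes at the horizon and tends to 1 far away
print(c_gw(M / 2), c_gw(1e6))
```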
The remaining terms in Eq.~(\ref{perturbedwaveequation}) serve as dispersion terms. If Eq.~(\ref{perturbedwaveequation}) admits a plane wave solution, these terms modify the dispersion relation between the phase speed and the frequency of the wave. Figure~\ref{wavespeed} shows $c$ as a function of $\rho/M$. Far away from the black hole, \begin{equation} \lim_{\rho/M\rightarrow \infty}c = 1\,, \end{equation} $c$ recovers the speed of light in vacuum, $c\sim 1$. At the horizon, $c$ vanishes, \begin{equation} \lim_{\rho\rightarrow M/2}c = 0\,. \end{equation} Moreover, it is important to note that $c$ equals the speed of light in Schwarzschild spacetime. To see this point, we consider a null curve $ds^2=0$ (not necessarily a null geodesic). From the line element Eq.~(\ref{lineelement}), we obtain \begin{align} 0=-N^2dt^2+\left(1+\frac{M}{2\rho}\right)^4dl^2\,, \end{align} where $dl^2=dx^2+dy^2+dz^2$. The above equation gives \begin{align} \frac{dl^2}{dt^2}=\frac{N^2}{\left(1+\frac{M}{2\rho}\right)^4}=\frac{16\rho^4 (2\rho-M)^2}{(2\rho+M)^6}=c^2\,.\label{wavespeed_light_rays} \end{align} This demonstrates that GWs in Schwarzschild spacetime travel at the same speed as light rays. Moreover, it is also worth noting that $c$ is isotropic in our coordinate system, namely, the wave speed does not depend on the direction of propagation. \subsection{Flat spacetime limit} Next, we demonstrate the consistency of our formalism in the limit of flat spacetime, namely, $M\rightarrow 0$ and $N\rightarrow 1$. Here we also assume that $\delta N\rightarrow 0$. In this case, Eqs.~(\ref{perturbe1},\ref{perturbe2},\ref{waveequation}) reduce to \begin{align} \frac{\partial}{\partial t} h_{ij} &= -2\delta K_{ij}\,,\\ \frac{\partial}{\partial t} \delta K_{ij} &= \delta R_{ij},\\ \frac{\partial^2}{\partial t^2}h_{ij}&=-2\delta R_{ij}\,.
\label{flatwaveequation} \end{align} From Eq.~(\ref{deltaRicci}), the perturbed Ricci tensor becomes \begin{align} \delta R_{ij}=-\frac{1}{2}\nabla^2 h_{ij}\,.\label{flatDeltaR} \end{align} Equation~(\ref{waveequation}) then becomes \begin{align} \frac{\partial^2}{\partial t^2}h_{ij}=\nabla^2 h_{ij}\,.\label{planewave} \end{align} The above equation can also be obtained from Eq.~(\ref{perturbedwaveequation}) by noting that $f_{\rho}\rightarrow 0\,,f_{R}\rightarrow 0\,,f_{\Gamma}\rightarrow 0 \,, c^2\rightarrow 1$ when $M\rightarrow 0$. Equation~(\ref{planewave}) is consistent with the well-known wave equation in Minkowski spacetime. Note that, in the flat spacetime limit, the momentum constraint gives $\Gamma_i \rightarrow 0$. Equation~(\ref{planewave}) has an analytical solution \begin{align} h_{ij}(t,\vec{\rho})=H_{ij} \cos(\omega t -k^i\rho_i)\,,\label{planewaves} \end{align} where $\vec{k}$ is the wave vector, $\vec{\rho}$ is the position vector and $\omega$ is the angular frequency. $H_{ij}$ is a constant tensor field. Taking the derivative $\nabla^i$ of $h_{ij}$, the condition $\Gamma_j=\nabla^ih_{ij}=0$ gives \begin{align} \nabla^ih_{ij}=k^i H_{ij} =0\,. \end{align} The above equation thus indicates that $H_{ij}$ is perpendicular to the wave vector $\vec{k}$, which means that the oscillation of GWs is transverse relative to the direction of their propagation. Therefore, in flat spacetime, the condition $\Gamma_j=0$ implies transverse waves. Moreover, since $h_{ij}$ is trace-less, from Eq.~(\ref{flatDeltaR}) we obtain \begin{align} \delta R = \delta \gamma^{ij}R_{ij}+\gamma^{ij}\delta R_{ij}=\gamma^{ij}\delta R_{ij}=-\frac{1}{2}\nabla^2 {\tensor{h}{^i_i}}=0\,. \end{align} Given the transverse condition, we also have \begin{align} \nabla^i\delta K_{ij}=-\frac{1}{2}\frac{\partial}{\partial t} \nabla^i h_{ij} =0\,. \end{align} As such, Eq.~(\ref{planewaves}) satisfies both the Hamiltonian and momentum constraints.
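The transversality condition can be made explicit with a concrete plane wave (a sketch; the wave vector and the polarization amplitudes below are arbitrary choices for illustration):

```python
import numpy as np

# A flat-spacetime plane wave travelling along z: k = (0, 0, 1),
# with "+" and "x" polarization amplitudes confined to the xy-plane.
k = np.array([0.0, 0.0, 1.0])
H = np.array([[0.3, 0.1, 0.0],
              [0.1, -0.3, 0.0],
              [0.0, 0.0, 0.0]])

# Transversality Gamma_j = k^i H_ij = 0 and tracelessness h = 0
print(k @ H, np.trace(H))

# For this solution the wave operator gives
# (d^2/dt^2 - laplacian) h = (|k|^2 - omega^2) h, which vanishes for omega = |k|
omega = np.linalg.norm(k)
print(omega**2 - k @ k)
```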
It is worth noting that, in general, $\Gamma_j$ does not vanish in curved spacetime, unless the GW tensor is transverse relative to the radius. This is because, from the momentum constraint, \begin{align} \Gamma_l = D^m \ln N h_{ml} \propto \rho^m h_{ml}, \end{align} a vanishing $\Gamma_l$ leads to a GW tensor transverse relative to the radius, $\rho^m h_{ml}=0$. This usually happens for spherical waves. However, in our case GWs are neither plane waves nor spherical waves; $\Gamma_l$, therefore, does not vanish. \subsection{wave equations at the horizon} At the horizon $\rho \rightarrow M/2$, the only non-vanishing coefficient in Eq.~(\ref{perturbedwaveequation}) is \begin{align} f_{\kappa}\rightarrow-\frac{1}{16M^2}\,. \end{align} Equation~(\ref{perturbedwaveequation}) then reduces to \begin{align} \frac{\partial^2}{\partial t^2}h_{ij}&=\frac{\rho^m\rho^k}{4M^4}(\delta_{ki}h_{mj}+\delta_{kj}h_{mi})\,, \end{align} where $\rho^k\rho_k=M^2/4$. It is worth noting that there are no divergent terms in our formulation. If the incident waves are spherically symmetric and transverse relative to the radius, $\rho^mh_{mi}=0$, the above equation simply gives $h_{ij}(t)|_{\rho=M/2}=0$, given that the horizon is initially at rest. This is reasonable since transverse GWs only induce oscillations that lie on the surface of the horizon, which do not actually change the horizon. Only when GWs have non-transverse components does the above equation have a non-trivial solution. For instance, if the GW tensor has a non-zero component $h_{xx}$ and hits the horizon at $(-M/2,0,0)$, the above equation reduces to \begin{align} \frac{\partial^2}{\partial t^2}h_{xx}&=\frac{1}{8M^2}h_{xx}\,, \end{align} which admits a decaying mode $h_{xx} =C e^{-\frac{\sqrt{2}}{4M}t}$ in the solution. This indicates that GWs are not frozen at the horizon but die out with the asymptotic time $t$.
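The decaying mode quoted above can be verified directly (a symbolic sketch using \texttt{sympy}):

```python
import sympy as sp

t, M, C = sp.symbols('t M C', positive=True)

# Candidate decaying mode of the horizon equation h'' = h / (8 M^2)
h = C * sp.exp(-sp.sqrt(2) * t / (4 * M))

# Residual of the equation: should vanish identically
residual = sp.diff(h, t, 2) - h / (8 * M**2)
print(sp.simplify(residual))
```

The same equation also admits a growing mode $\propto e^{+\sqrt{2}t/(4M)}$, which is excluded here by the decaying behaviour described in the text.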
However, it should be noted that since the wave speed at the horizon is zero, $c \rightarrow 0$, no information at the horizon can propagate to a distant observer. This is consistent with the usual treatment of pushing the event horizon of the black hole to $-\infty$ using the tortoise coordinate, in which information at the horizon takes an infinite asymptotic time to escape from the horizon. Our treatment, indeed, achieves a similar result, but with a more natural geometry for the black hole in 3D space. \section{Finite Element method\label{sec::FEM}} To numerically solve Eq.~(\ref{perturbedwaveequation}), we use the finite element method (FEM)\@. Unlike conventional methods such as the finite difference method, the FEM is based on the {\it weak formulation} (or {\it variational formulation}) of the PDEs. Therefore, we first introduce the {\it weak formulation} of the wave equations, which can be obtained by multiplying Eq.~(\ref{perturbedwaveequation}) by a test function $\Psi$ and then integrating over a domain $\Omega$. For convenience, we adopt the following notation for brevity \begin{equation} \langle f,g \rangle=\int_{\Omega}f(x)^*g(x)\,\mathrm{d}x\,.
\end{equation} Equation~(\ref{perturbedwaveequation}) can be presented as \begin{align} &\langle\Psi,\frac{\partial^2}{\partial t^2}h_{ij}e^i\otimes e^j\rangle\nonumber\\ =&\langle\Psi, (c^2\nabla^2 h_{ij}+f_{\rho}\partial_{\rho} h_{ij})e^i\otimes e^j\rangle \nonumber\\ -&\langle\Psi, f_{\kappa}\frac{\rho^m\rho^k}{\rho^2}(\delta_{ki}h_{mj}+\delta_{kj}h_{mi})e^i\otimes e^j\rangle \nonumber\\ -&\langle\Psi, (f_R\delta^{mp}\tilde{R}_{im}h_{pj}+f_R\delta^{mp}\tilde{R}_{jp}h_{mi})e^i\otimes e^j\rangle \nonumber\\ -&2\langle\Psi,f_R\delta^{mp}\delta^{nq}\tensor{\tilde{R}}{_p_i_j_q}h_{mn}e^i\otimes e^j\rangle \nonumber\\ -&2\langle\Psi,f_{\Gamma}\frac{\rho^p}{\rho}\left(\partial_i h_{pj}+\partial_j h_{ip}-2 \tensor{\Gamma}{^m_i_j} h_{pm}\right)e^i\otimes e^j\rangle \,, \label{weakwaveequation} \end{align} where $e^i\otimes e^j$ is the tensor basis. As pointed out previously, $h_{ij}$ is trace-less. Due to symmetry, $h_{ij}$ only has $5$ independent components. We denote the basis for these $5$ independent components as $\epsilon^{\alpha}$, which is related to $e^i\otimes e^j$ by \begin{align} e^i\otimes e^j=\tensor{C}{^i^j_{\alpha}}\epsilon^{\alpha}\,.\label{tensorbasis} \end{align} The non-vanishing components of $\tensor{C}{^i^j_{\alpha}}$ are \begin{align} &\tensor{C}{^2^2_{0}}=\tensor{C}{^1^1_{2}} = 1\nonumber\,,\\ &\tensor{C}{^2^3_{1}}=\tensor{C}{^3^2_{1}} = \tensor{C}{^2^1_{3}}= \tensor{C}{^1^2_{3}}= \tensor{C}{^3^1_{4}}= \tensor{C}{^1^3_{4}}=1/2\,. \end{align} Given such a basis, the tensor field $h_{ij}e^i\otimes e^j$ can be presented in terms of $\epsilon^{\sigma}$ \begin{align} h_{ij}e^i\otimes e^j=H_{\sigma}\epsilon^{\sigma}\,. \end{align} The components of $H_{\sigma}$ are explicitly given by \begin{equation} \left\{ \begin{aligned} &H_0=h_{22}\nonumber\\ &H_1=h_{\times}=h_{23}=h_{32}\nonumber\\ &H_2=h_{11}\nonumber\\ &H_3=h_{12}=h_{21}\nonumber\\ &H_4=h_{13}=h_{31}\nonumber\\ &h_{33}=-H_2-H_0 \end{aligned} \right.\,. 
\end{equation} $H_{\sigma}$ is then related to $h_{ij}$ by \begin{align} h_{ij}=\tensor{C}{_i_j^{\sigma}}H_{\sigma}\,. \end{align} The non-vanishing components of $\tensor{C}{_i_j^{\sigma}}$ are \begin{align} \tensor{C}{_2_2^{0}}&=\tensor{C}{_2_3^{1}}=\tensor{C}{_3_2^{1}}=\tensor{C}{_1_1^{2}} \nonumber\\ &=\tensor{C}{_2_1^{3}}=\tensor{C}{_1_2^{3}}=\tensor{C}{_3_1^{4}}=\tensor{C}{_1_3^{4}}=1\,,\nonumber \\ \tensor{C}{_3_3^{0}}&=\tensor{C}{_3_3^{2}}=-1\,. \end{align} Note that $\tensor{C}{_i_j^{\sigma}}$ and $\tensor{C}{^i^j_{\sigma}}$ are symmetric with respect to $i\,,j$. They are related to each other by \begin{align} \tensor{C}{_i_j^{\sigma}}\tensor{C}{^i^j_{\alpha}}=\tensor {\delta}{^{\sigma}_{\alpha}}\,. \end{align} The {\it test} function $\Psi$ for a vector field can be constructed in the form \begin{align} \Psi = \phi \otimes \epsilon^{\alpha}\,, \end{align} where $\phi$ is chosen in such a way that it vanishes on the subset of boundaries with Dirichlet boundary conditions ${\partial \Omega_D}$ \begin{equation} \mathbb{V}:=\{\phi:\phi\in \mathbb{H}^{1}(\Omega),\phi|_{\partial \Omega_D}=0\}\,.\label{Vspace} \end{equation} $\mathbb{H}^{1}(\Omega)=\mathbb{W}^{1,2}(\Omega)$ is the first-order {\it Sobolev space}, meaning that $\phi$ and its first-order weak derivatives $\partial_x \phi$ are square integrable \begin{equation} \left\|\phi\right\|_{\mathbb{H}^1(\Omega)}=\left[\int_{\Omega}\sum_{|\alpha|\leq 1} |\partial^{\alpha}_x\phi(x)|^2dx\right]^{\frac{1}{2}}<\infty\,.\nonumber \end{equation} If Eq.~(\ref{weakwaveequation}) holds for any test function $\Psi$, $h_{ij}$ is called the {\it weak solution} and Eq.~(\ref{weakwaveequation}) is called the {\it weak formulation}. \subsection{spatial discretization} In the finite element method, the domain $\Omega$ is decomposed into subdomains $\Omega_i$, which consist of rectangles or triangles. This is called {\it decomposition} or {\it triangulation}.
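Returning briefly to the basis tensors introduced above: the contraction identity $\tensor{C}{_i_j^{\sigma}}\tensor{C}{^i^j_{\alpha}}=\tensor{\delta}{^{\sigma}_{\alpha}}$ can be verified mechanically. The sketch below (a toy Python check, not part of the paper's code; tensor indices are shifted to $0$-based) builds both arrays from the component map $H_0=h_{22}$, $H_1=h_{23}$, $H_2=h_{11}$, $H_3=h_{12}$, $H_4=h_{13}$, $h_{33}=-H_0-H_2$:

```python
import numpy as np

# Verify C_{ij}^sigma C^{ij}_alpha = delta^sigma_alpha for the 5-component
# basis of a symmetric trace-less 3x3 tensor. Here i, j run 0..2 (the
# paper's 1..3) and sigma, alpha run 0..4.
Cup = np.zeros((3, 3, 5))   # C^{ij}_alpha (symmetric off-diagonals carry 1/2)
Cdn = np.zeros((3, 3, 5))   # C_{ij}^sigma

Cup[1, 1, 0] = Cup[0, 0, 2] = 1.0
for i, j, a in [(1, 2, 1), (2, 1, 1), (0, 1, 3), (1, 0, 3), (0, 2, 4), (2, 0, 4)]:
    Cup[i, j, a] = 0.5

# From H_0=h_22, H_1=h_23, H_2=h_11, H_3=h_12, H_4=h_13, h_33=-H_0-H_2:
Cdn[1, 1, 0] = Cdn[0, 0, 2] = 1.0
for i, j, s in [(1, 2, 1), (2, 1, 1), (0, 1, 3), (1, 0, 3), (0, 2, 4), (2, 0, 4)]:
    Cdn[i, j, s] = 1.0
Cdn[2, 2, 0] = Cdn[2, 2, 2] = -1.0           # h_33 = -H_0 - H_2

contraction = np.einsum('ijs,ija->sa', Cdn, Cup)
assert np.allclose(contraction, np.eye(5))
```

The factors of $1/2$ in $\tensor{C}{^i^j_{\alpha}}$ are what make the contraction come out to exactly the identity despite each off-diagonal component appearing twice.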
The vertices of rectangles and triangles in the domain $\Omega$ are called mesh points or nodes. Let $\Omega_h$ denote the set of all nodes of the decomposition. On each node, we construct a scalar test function $\phi_i\in \mathbb{V} \,,i=1,..,N$, where $\mathbb{V}$ is the space defined in Eq.~(\ref{Vspace}) and $N$ is the total number of nodes in the domain. The scalar test function $\phi_i$ is required to have the property \begin{equation} \phi_i(p^k)=\delta_{ik}, \quad i,k=1,..,N, \quad p^k\in \Omega_h \,. \nonumber \end{equation} As such, $\phi_i$ is non-zero only at the node $p^i$ and on its adjacent subdomains, which are called its influence zone; it vanishes on the rest of the domain $\Omega$. The test function $\phi_i$ constructed this way is called the {\it scalar shape function}. Clearly, the functions $\phi_i \in \mathbb{V}$ on different nodes are linearly independent. We denote the space spanned by $\phi_i$ as $\mathbb{V}_h:=\mathrm{span}\{\phi_i\}_{i=1}^N$, which is a subspace of $\mathbb{V}$. The vector shape functions can be constructed from the scalar shape functions, with $\phi_i$ for each component of the vector field \begin{align} \Psi_{l,\tau} = \phi_l \otimes \epsilon^{\tau}\,, \end{align} where $\epsilon^{\tau}$ is the basis of a vector. On the other hand, the tensor field $h_{ij}e^i\otimes e^j$ can be expanded using {\it ansatz} functions. If the {\it ansatz} functions are the same as the shape functions, the scheme is called the Galerkin scheme and the FEM in this case is called the Galerkin FEM. Moreover, if the shape functions are continuous, the method is called the continuous Galerkin FEM.
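The defining property $\phi_i(p^k)=\delta_{ik}$ is easiest to see in one dimension, where the continuous piecewise-linear shape functions are the familiar "hat" functions. A minimal 1D Python sketch (a toy analogue, not deal.ii):

```python
import numpy as np

# 1D analogue of the scalar shape functions: piecewise-linear "hat"
# functions on a mesh, each equal to 1 at its own node, 0 at all others,
# and supported only on the two adjacent cells.
nodes = np.linspace(0.0, 1.0, 6)

def hat(i, x):
    """Hat function attached to nodes[i]; linear interpolation of a one-hot."""
    return np.interp(x, nodes, np.eye(len(nodes))[i])

for i in range(len(nodes)):
    assert np.allclose(hat(i, nodes), np.eye(len(nodes))[i])  # phi_i(p_k) = delta_ik
assert hat(2, np.array([0.9]))[0] == 0.0     # vanishes outside its influence zone
```

On 2D/3D rectangles and triangles the construction is the same in spirit: each node carries one function that interpolates a one-hot nodal vector over the adjacent elements.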
In this case, the tensor field $h_{ij}e^i\otimes e^j$ can be presented as \begin{align} h_{ij}e^i\otimes e^j = H_{\sigma}\epsilon^{\sigma}=\tensor{H}{^k_{\sigma}}\phi_k\otimes\epsilon^{\sigma}\,, \end{align} where, since each component $H_{\sigma}$ itself is a scalar field, it can be further expanded in the scalar test functions, $H_{\alpha}=\tensor{H}{^k_{\alpha}}\phi_k$. As Eq.~(\ref{weakwaveequation}) holds for any test function $\Psi$, we can choose $\Psi$ as shape functions over all the different nodes in the domain. The left-hand side of Eq.~(\ref{weakwaveequation}) then gives \begin{align} \langle \phi_l \otimes \epsilon^{\tau},\frac{\partial^2}{\partial t^2}h_{ij} e^i\otimes e^j\rangle&=\langle \phi_l \otimes \epsilon^{\tau},\phi_k\otimes\epsilon^{\sigma}\rangle \frac{\partial^2}{\partial t^2}\tensor{H}{^k_{\sigma}}\nonumber\\ &=\langle \phi_l,\phi_k \rangle \otimes \tensor{\delta}{_{\tau}^{\sigma}}\frac{\partial^2}{\partial t^2}\tensor{H}{^k_{\sigma}}\,, \end{align} where the index $l$ runs over $1,...,N$ and $\tau$ runs over $0,...,4$. Similarly, for the first term on the right-hand side of Eq.~(\ref{weakwaveequation}), we obtain \begin{align} &\langle\phi_l \otimes \epsilon^{\tau}, c^2\nabla^2 h_{ij} e^i\otimes e^j \rangle=\langle\phi_l\otimes\epsilon^{\tau},c^2\nabla^2\phi_k\otimes\epsilon^{\sigma}\rangle\tensor{H}{^k_{\sigma}}\nonumber\\ =&-\langle(\nabla c^2)\phi_l,\nabla\phi_k\rangle \otimes\tensor{\delta}{_\tau^\sigma}\tensor{H}{^k_{\sigma}}-\langle c^2\nabla \phi_l,\nabla\phi_k\rangle \otimes\tensor{\delta}{_\tau^\sigma}\tensor{H}{^k_{\sigma}}\nonumber\\ &+\langle \phi_l, c^2\hat{n}\cdot \nabla \tensor{H}{_{\sigma}}\rangle_{\partial \Omega}\otimes\tensor{\delta}{_\tau^\sigma}\label{boundaryterms} \end{align} where, for the last term, we have used integration by parts.
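The scalar building block $\langle\phi_l,\phi_k\rangle$ appearing above is the Galerkin mass matrix. A 1D toy sketch in Python (not deal.ii; uniform mesh, linear hat functions) assembles it by quadrature and compares with the well-known analytic entries $2h/3$ (interior diagonal) and $h/6$ (neighbouring nodes):

```python
import numpy as np

# 1D Galerkin mass matrix M_lk = <phi_l, phi_k> for linear hat functions
# on a uniform mesh of spacing h. Overlapping hats couple; disjoint
# supports give exact zeros, which is the origin of FEM sparsity.
nodes = np.linspace(0.0, 1.0, 6)
h = nodes[1] - nodes[0]
x = np.linspace(0.0, 1.0, 10001)             # fine quadrature grid containing the nodes
phi = np.array([np.interp(x, nodes, row) for row in np.eye(len(nodes))])

def integrate(f, x):
    """Trapezoidal quadrature of sampled values f over grid x."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

M = np.array([[integrate(pl * pk, x) for pk in phi] for pl in phi])

assert np.allclose(M[2, 2], 2 * h / 3)       # interior diagonal entry
assert np.allclose(M[2, 3], h / 6)           # adjacent hats overlap on one cell
assert np.isclose(M[1, 3], 0.0)              # disjoint supports do not couple
```

The stiffness-type matrices $\langle c^2\nabla\phi_l,\nabla\phi_k\rangle$ are assembled the same way, with gradients of the hats in place of the hats themselves.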
$\langle \phi_l, c^2\hat{n}\cdot \nabla \tensor{H}{_{\sigma}}\rangle_{\partial \Omega}\otimes\tensor{\delta}{_\tau^\sigma}$ represents the integration over boundaries of the simulation domain $\partial \Omega$. This term is related to the boundary conditions in FEM, which we shall discuss in detail in the next few sections. Similarly, for other terms, we have \begin{align} &\langle \phi_l\otimes\epsilon^{\tau},N^2\partial_i \Gamma_j e^i\otimes e^j\rangle\nonumber\\ =&\langle \phi_l\otimes\epsilon^{\tau},\left(f_{\kappa} \frac{\rho^m \rho^k}{\rho^2} \delta_{ki} h_{mj} +f_{\Gamma} \frac{\rho^m}{\rho} \partial_i h_{mj}\right) e^i\otimes e^j\rangle\nonumber\\ =&[\langle \phi_l,f_{\kappa}\frac{\rho^m \rho^k}{\rho^2} \delta_{ki} \tensor{C}{_m_j^\sigma}\tensor{C}{^i^j_\alpha}\phi_k\rangle\nonumber\\ &+\langle \phi_l,f_{\Gamma} \frac{\rho^m}{\rho}\tensor{C}{_m_j^\sigma}\tensor{C}{^i^j_\alpha}\partial_i \phi_k\rangle] \otimes\tensor{\delta}{_\tau^\alpha}\tensor{H}{^k_{\sigma}} \quad,\\ \nonumber\\ &\langle \phi_l\otimes\epsilon^{\tau},-2N^2\tensor{\Gamma}{^m_i_j}\Gamma_m e^i\otimes e^j\rangle\nonumber\\ =&-2\langle \phi_l,f_{\Gamma}\frac{\rho^l}{\rho}\tensor{C}{_l_m^\sigma} \tensor{\Gamma}{^m_i_j}\tensor{C}{^i^j_\alpha} \phi_k\rangle \otimes \tensor{\delta}{_\tau^\alpha} \tensor{H}{^k_{\sigma}}\quad,\\ \nonumber\\ &\langle \phi_l\otimes\epsilon^{\tau}, f_{\rho}\frac{\rho^m}{\rho}\partial_{m} h_{ij}e^i\otimes e^j\rangle\nonumber\\ =&\langle\phi_l\otimes\epsilon^{\tau},f_{\rho}\frac{\rho^m}{\rho}\partial_{m}\phi_k\otimes\epsilon^{\sigma} \rangle\tensor{H}{^k_{\sigma}}\nonumber\\ =&\langle\phi_l,f_{\rho}\partial_{\rho}\phi_k\rangle\otimes\tensor{\delta}{_\tau^\sigma}\tensor{H}{^k_{\sigma}}\quad,\\ \nonumber\\ &\langle \phi_l\otimes\epsilon^{\tau}, f_R\delta^{mp}\tilde{R}_{ip}h_{mj} e^i\otimes e^j\rangle\nonumber\\ =& \langle \phi_l\otimes\epsilon^{\tau},f_R 
\delta^{mp}\tilde{R}_{ip}\tensor{C}{_m_j^{\sigma}}\tensor{C}{^i^j_{\alpha}}\phi_k\otimes\epsilon^{\alpha}\rangle\tensor{H}{^k_{\sigma}}\nonumber\\ =&\langle \phi_l,f_R \delta^{mp}\tilde{R}_{ip}\tensor{C}{_m_j^{\sigma}}\tensor{C}{^i^j_{\alpha}}\phi_k\rangle\otimes\tensor{\delta}{_\tau^\alpha}\tensor{H}{^k_{\sigma}}\quad,\\ \nonumber\\ &\langle\phi_l\otimes\epsilon^{\tau},f_R\delta^{mp}\delta^{nq}\tensor{\tilde{R}}{_p_i_j_q}h_{mn}e^i\otimes e^j\rangle\nonumber\\ =&\langle\phi_l\otimes\epsilon^{\tau},f_R\delta^{mp}\delta^{nq}\tensor{\tilde{R}}{_p_i_j_q}\tensor{C}{_m_n^{\sigma}}\tensor{C}{^i^j_{\alpha}}\phi_k\otimes\epsilon^{\alpha}\rangle\tensor{H}{^k_{\sigma}}\nonumber\\ =&\langle\phi_l,f_R\delta^{mp}\delta^{nq}\tensor{\tilde{R}}{_p_i_j_q}\tensor{C}{_m_n^{\sigma}}\tensor{C}{^i^j_{\alpha}}\phi_k\rangle\otimes\tensor{\delta}{_\tau^\alpha}\tensor{H}{^k_{\sigma}}\quad,\\ \nonumber\\ &\langle\phi_l\otimes\epsilon^{\tau},f_{\Gamma}\frac{\rho^p}{\rho}\partial_i h_{pj} e^i\otimes e^j\rangle\nonumber\\ =&\langle\phi_l\otimes\epsilon^{\tau},f_{\Gamma}\frac{\rho^p}{\rho}\tensor{C}{_p_j^{\sigma}}\tensor{C}{^i^j_{\alpha}}\partial_{i}\phi_k\otimes\epsilon^{\alpha}\rangle\tensor{H}{^k_{\sigma}}\nonumber\\ =& \langle\phi_l,f_{\Gamma}\frac{\rho^p}{\rho}\partial_{i}\phi_k\tensor{C}{_p_j^{\sigma}}\tensor{C}{^i^j_{\alpha}}\rangle\otimes\tensor{\delta}{_\tau^\alpha}\tensor{H}{^k_{\sigma}}\quad,\\ \nonumber\\ &\langle \phi_l\otimes\epsilon^{\tau},f_{\Gamma}\frac{\rho^p}{\rho}\tensor{\Gamma}{^m_i_j} h_{pm} e^i\otimes e^j\rangle\nonumber\\ =&\langle\phi_l\otimes\epsilon^{\tau},f_{\Gamma}\frac{\rho^p}{\rho}\tensor{\Gamma}{^m_i_j}\tensor{C}{_p_m^{\sigma}}\tensor{C}{^i^j_{\alpha}}\phi_k\otimes\epsilon^{\alpha}\rangle\tensor{H}{^k_{\sigma}}\nonumber\\ =& \langle\phi_l,f_{\Gamma}\frac{\rho^p}{\rho}\tensor{\Gamma}{^m_i_j}\tensor{C}{_p_m^{\sigma}}\tensor{C}{^i^j_{\alpha}}\phi_k \rangle\otimes\tensor{\delta}{_\tau^\alpha}\tensor{H}{^k_{\sigma}}\quad. 
\end{align} Inserting the above expressions back into Eq.~(\ref{weakwaveequation}), we obtain $5\times N$ different equations, which form a linear system. In this case, it is more convenient to present Eq.~(\ref{weakwaveequation}) in a matrix format \begin{align} \left\{ \begin{aligned} \frac{\partial}{\partial t}\mathcal{H}&=\mathcal{V}\,\\ \mathcal{M}\frac{\partial}{\partial t}\mathcal{V} &=-\mathcal{M}^B\frac{\partial}{\partial t}\mathcal{H}-\mathcal{F}\mathcal{H} \end{aligned}\,, \right. \end{align} where $\mathcal{F}$ is defined by \begin{align} \mathcal{F}&=\mathcal{A}+\mathcal{D}^c-\mathcal{D}^{\rho}+2\mathcal{D}^{\rm Ricci}+2\mathcal{D}^{\rm Riemann}+2\mathcal{D}^{\kappa}\nonumber\\ &+4\mathcal{D}^{\Gamma_1}-4\mathcal{D}^{\rm \Gamma_2}\,. \end{align} The elements of matrices are defined by \begin{equation} \left\{ \begin{aligned} \mathcal{M}_{(lk)\otimes(\tau\sigma)}&=\langle\phi_l,\phi_k\rangle\otimes \tensor{\delta}{_{\tau}^{\sigma}}\\ \mathcal{M}_{(lk)\otimes(\tau\sigma)}^B&=\langle c\phi_l,\phi_k \rangle_{\partial \Omega}\otimes \tensor{\delta}{_{\tau}^{\sigma}}\\ \mathcal{D}_{(lk)\otimes(\tau\sigma)}^c&=\langle\nabla(c^2)\phi_l,\nabla\phi_k\rangle\otimes \tensor{\delta}{_{\tau}^{\sigma}}\\ \mathcal{A}_{(lk)\otimes(\tau\sigma)}&=\langle c^2\nabla\phi_l,\nabla\phi_k\rangle\otimes \tensor{\delta}{_{\tau}^{\sigma}}\\ \mathcal{D}_{(lk)\otimes(\tau\sigma)}^{\rho}&=\langle\phi_l,f_{\rho}\frac{\rho^m}{\rho}\partial_{m}\phi_k\rangle\otimes \tensor{\delta}{_{\tau}^{\sigma}}\\ \mathcal{D}_{(lk)\otimes(\tau\sigma)}^{\rm Ricci}&=\langle\phi_l,f_R \delta^{mp}\tilde{R}_{ip}\tensor{C}{_m_j^{\sigma}}\tensor{C}{^i^j_{\alpha}}\phi_k\rangle\otimes \tensor{\delta}{_\tau^\alpha}\\ \mathcal{D}_{(lk)\otimes(\tau\sigma)}^{\rm Riemann}&=\langle\phi_l,f_R\delta^{mp}\delta^{nq}\tensor{\tilde{R}}{_p_i_j_q}\tensor{C}{_m_n^{\sigma}}\tensor{C}{^i^j_{\alpha}}\phi_k\rangle\otimes \tensor{\delta}{_\tau^\alpha}\nonumber\\ 
\mathcal{D}_{(lk)\otimes(\tau\sigma)}^{\Gamma_1}&=\langle\phi_l,f_{\Gamma}\frac{\rho^p}{\rho}\tensor{C}{_p_j^{\sigma}}\tensor{C}{^i^j_{\alpha}}\partial_{i}\phi_k\rangle\otimes \tensor{\delta}{_\tau^\alpha}\\ \mathcal{D}_{(lk)\otimes(\tau\sigma)}^{\Gamma_2}&=\langle\phi_l,f_{\Gamma}\frac{\rho^p}{\rho}\tensor{\Gamma}{^m_i_j}\tensor{C}{_p_m^{\sigma}}\tensor{C}{^i^j_{\alpha}}\phi_k\rangle\otimes\tensor{\delta}{_\tau^\alpha}\\ \mathcal{D}_{(lk)\otimes(\tau\sigma)}^{\rm \kappa}&=\langle \phi_l,f_{\kappa}\frac{\rho^m \rho^k}{\rho^2} \delta_{ki} \tensor{C}{_m_j^\sigma}\tensor{C}{^i^j_\alpha}\phi_k\rangle\otimes\tensor{\delta}{_\tau^\alpha}\nonumber \end{aligned}\,. \right. \end{equation} \subsection{time discretization} For the time discretization, we use the following scheme \begin{align} \mathcal{M}\frac{\mathcal{V}^n-\mathcal{V}^{n-1}}{k} =&-\mathcal{M}^B\frac{\mathcal{H}^n-\mathcal{H}^{n-1}}{k}\nonumber\\ &-\mathcal{F}[\theta \mathcal{H}^n+(1-\theta)\mathcal{H}^{n-1}]\,,\label{V0}\\ \frac{\mathcal{H}^n-\mathcal{H}^{n-1}}{k} =& \theta \mathcal{V}^n+(1-\theta)\mathcal{V}^{n-1} \,.\label{H0} \end{align} The superscript $n$ labels the time step and $k=t_n-t_{n-1}$ is the length of the current time step. Using Eq.~(\ref{V0}) to eliminate $\mathcal{V}^n$ in Eq.~(\ref{H0}) and Eq.~(\ref{H0}) to eliminate $\mathcal{H}^n$ in Eq.~(\ref{V0}), we can express $\mathcal{H}^n$ and $\mathcal{V}^n$ in terms of $\mathcal{V}^{n-1}$ and $\mathcal{H}^{n-1}$ \begin{align} [&\mathcal{M}+k\theta \mathcal{M}^B+k^2\theta^2\mathcal{F}]\mathcal{H}^{n}\nonumber\\ =&[\mathcal{M}+k\theta \mathcal{M}^{B}-k^2\theta(1-\theta)\mathcal{F}]\mathcal{H}^{n-1}+k\mathcal{M}\mathcal{V}^{n-1}\label{eqH} \end{align} \begin{align} &[\mathcal{M}+k\theta \mathcal{M}^B+k^2\theta^2\mathcal{F}]\mathcal{V}^{n}\nonumber\\ =&[\mathcal{M}-k(1-\theta) \mathcal{M}^{B}-k^2\theta(1-\theta)\mathcal{F}]\mathcal{V}^{n-1}-k\mathcal{F}\mathcal{H}^{n-1} \label{eqV}\,.
\end{align} Given the knowledge of $\mathcal{V}^{n-1}$ and $\mathcal{H}^{n-1}$ at the previous time step, we can solve for $\mathcal{H}^{n}$ and $\mathcal{V}^{n}$ from the above two linear equations. When $\theta = 0$, the scheme is called the forward or explicit Euler method. If $\theta = 1$, it reduces to the backward or implicit Euler method. The scheme adopted in this work is the {\it Crank-Nicolson scheme}, namely $\theta = 1/2$, which uses the midpoint between two time steps. This scheme is {\it implicit} and of second-order accuracy. An advantage of the {\it implicit} method is that it can be stable for arbitrary step sizes if the scheme is {\it upwind} (see Chapter 2 in Ref.~\cite{grossmann2007numerical}). However, a stable scheme does not guarantee a correct solution to the wave equation; we also need to resolve the waveforms. For this purpose, in our simulations we set $k=\lambda/10$ and the size of the mesh $\sigma<\lambda/15$, where $\lambda$ is the wavelength. \subsection{linear solvers} The number of independent equations in Eqs.~(\ref{eqH}) and~(\ref{eqV}) is called the number of degrees of freedom (DOF) of the system. In FEM, the DOF is usually very large and can easily reach $10^9$. Therefore, direct methods such as the LU decomposition are inefficient in this case, and one has to use iterative methods. However, it is important to note that the matrices $\mathcal{D}^{c}\,$, $\mathcal{D}^{\rho}\,$, $\mathcal{D}^{\Gamma_1}\,$, $\mathcal{D}^{\Gamma_2}\,$, $\mathcal{D}^{\kappa}\,$, $\mathcal{D}^{\rm Ricci}\,$ and $\mathcal{D}^{\rm Riemann}$ are not symmetric in our case, so conventional methods such as the Conjugate Gradient (CG) method cannot be used here. Instead, we use the GMRES method (a generalized minimal residual algorithm for solving non-symmetric linear systems)~\cite{osti_409863}, which does not require any specific properties of the matrices. In practice, the efficiency of the GMRES method also depends strongly on the preconditioner used.
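As a sanity check on the $\theta$-scheme of Eqs.~(\ref{eqH}) and~(\ref{eqV}), one can reduce it to a scalar test problem by setting $\mathcal{M}=1$, $\mathcal{M}^B=0$ and $\mathcal{F}=\omega^2$, i.e.\ $\dot H=V$, $\dot V=-\omega^2 H$, whose exact solution is $H(t)=\cos\omega t$. A short Python sketch (a toy analogue, not the paper's code):

```python
import numpy as np

# Scalar reduction of Eqs. (eqH)/(eqV): with M = 1, M^B = 0, F = omega^2
# the update factors are A = 1 + k^2 th^2 F and B = 1 - k^2 th (1-th) F.
# theta = 1/2 is Crank-Nicolson: implicit, second-order, amplitude-preserving.
omega = 2 * np.pi
F, theta, k = omega**2, 0.5, 0.001
A = 1.0 + k**2 * theta**2 * F                # left-hand-side factor
B = 1.0 - k**2 * theta * (1 - theta) * F     # right-hand-side factor

H, V = 1.0, 0.0                              # H(0) = 1, V(0) = 0
for n in range(1000):                        # integrate to t = 1 (one full period)
    H, V = (B * H + k * V) / A, (B * V - k * F * H) / A

assert abs(H - 1.0) < 1e-6                   # returns to cos(2*pi) = 1
assert abs(V) < 1e-2                         # and V is small again
```

For $\theta=1/2$ the one-step amplification factor has unit modulus, so the only numerical error is a small $O(k^2)$ phase drift, consistent with the second-order accuracy claimed above.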
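To illustrate why a non-symmetric Krylov solver is needed, the toy Python sketch below uses SciPy's GMRES (not the paper's PETSc/deal.ii setup) on a small non-symmetric system, of the kind for which CG is not applicable; the matrix is a hypothetical advection-diffusion-like operator, not one of the paper's matrices:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import gmres

# A strictly diagonally dominant, NON-symmetric tridiagonal operator
# (diffusion plus a one-sided advection bias). CG requires a symmetric
# positive-definite matrix; GMRES does not.
n = 200
A = sp.diags([-1.5 * np.ones(n - 1), 4.0 * np.ones(n), -0.5 * np.ones(n - 1)],
             [-1, 0, 1], format='csr')
assert (A != A.T).nnz > 0                    # genuinely non-symmetric

b = np.ones(n)
x, info = gmres(A, b)                        # default restarted GMRES
assert info == 0                             # converged
assert np.linalg.norm(A @ x - b) < 1e-3      # small residual
```

At the scale of the paper's runs ($\sim 10^8$ DOFs), the solver is of course used through PETSc with a preconditioner rather than through SciPy; the sketch only demonstrates the symmetry point.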
A good preconditioner can significantly reduce the number of iterations needed in the GMRES method. Moreover, for massively parallel linear solvers, the primary bottleneck, indeed, comes from the difficulty of producing preconditioners that can scale up to a large number of processors, rather than from the communication between processors. Fortunately, it has been shown in the past decade that the algebraic multigrid (AMG) method~\cite{brandt1982algebraic}, which can be used to construct preconditioners based only on the matrix itself, is suitable for massive parallelization and is extremely efficient in this case. Therefore, in this work, we adopt the GMRES linear solver together with the AMG preconditioner. As a result, in our method, it takes fewer than $25$ iterations for the linear system to achieve a rather stringent convergence criterion \begin{align} \left\|\rm Residual \right\|_{\infty,h}:= \max_{x_h\in \overline{\Omega}_h}| {\rm Residual}(x_h)|<10^{-14}\,, \end{align} where $\overline{\Omega}_h$ denotes all the grid points inside the domain and on its boundary, $\overline{\Omega}_h=\Omega_h+\partial \Omega_h$. \subsection{Boundary conditions} For test functions $\phi \in \mathbb{V}$, the Dirichlet boundary conditions do not appear explicitly in Eqs.~(\ref{eqH}) and~(\ref{eqV}); they are called {\it essential boundary conditions}. The Neumann conditions, however, have to appear explicitly in the formulation and are called {\it natural boundary conditions}.
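One of the natural boundary conditions imposed below is an absorbing (non-reflecting) condition, $\hat n\cdot\nabla H_\sigma=-(1/c)\,\partial_t H_\sigma$. Its effect is easy to demonstrate in a 1D toy model (plain finite differences, not the paper's FEM code): with $c=1$ and $\Delta t=\Delta x$, the outflow update $u^{n+1}_N=u^n_{N-1}$ realizes this condition exactly, and a pulse leaves the domain without reflection:

```python
import numpy as np

# 1D wave equation u_tt = u_xx with absorbing boundaries. With c = 1 and
# dt = dx the leapfrog scheme transports the solution exactly, and the
# boundary update u[end] <- u_old[end-1] implements du/dn = -(1/c) du/dt.
N = 400
dx = dt = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)
u = np.exp(-((x - 0.3) / 0.05)**2)           # right-moving Gaussian pulse...
u_prev = np.exp(-((x + dt - 0.3) / 0.05)**2) # ...one step in the past

for n in range(2 * N):                       # evolve to t = 2: pulse has left
    u_next = np.empty_like(u)
    u_next[1:-1] = u[2:] + u[:-2] - u_prev[1:-1]   # leapfrog with CFL = 1
    u_next[0] = u[1]                         # absorbing boundary at x = 0
    u_next[-1] = u[-2]                       # absorbing boundary at x = 1
    u_prev, u = u, u_next

assert np.max(np.abs(u)) < 1e-6              # no reflection left behind
```

With a reflecting (Dirichlet) boundary instead, the final field would still contain the bounced pulse at full amplitude; the absorbing update removes it to numerical precision.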
The boundary conditions only appear in the terms associated with the Laplacian operator $\nabla^2$, namely, the last term in Eq.~(\ref{boundaryterms}) \begin{align} &\langle \phi_l, c^2\hat{n}\cdot \nabla \tensor{H}{_{\sigma}}\rangle_{\partial \Omega}\otimes\tensor{\delta}{_\tau^\sigma}\nonumber\\ =&\langle \phi_l, c^2\hat{n}\cdot \nabla \tensor{H}{_{\sigma}}\rangle_{\partial \Omega_1}\otimes\tensor{\delta}{_\tau^\sigma}+\langle \phi_l, c^2\hat{n}\cdot \nabla \tensor{H}{_{\sigma}}\rangle_{\partial \Omega_2}\otimes\tensor{\delta}{_\tau^\sigma}\,, \end{align} where $\partial\Omega_1$ represents the boundaries of the simulation domain and $\partial\Omega_2$ represents the horizon of the black hole. On $\partial\Omega_1$ we impose the absorbing boundary condition \begin{align} \langle \phi_l, c^2\hat{n}\cdot \nabla \tensor{H}{_{\sigma}}\rangle_{\partial \Omega_1}\otimes\tensor{\delta}{_\tau^\sigma}=-\langle c\phi_l,\phi_k\rangle_{\partial \Omega_1}\otimes\tensor{\delta}{_\tau^\sigma}\frac{\partial}{\partial t}\tensor{H}{^k_{\sigma}}\,, \end{align} where we have used \begin{equation} \hat{n}\cdot\nabla \tensor{H}{_{\sigma}}=-\frac{1}{c}\frac{\partial{\tensor{H}{_{\sigma}}}}{\partial{t}}\quad {\rm on} \quad \partial\Omega_1\times (0,T] \,.\label{boundary2} \end{equation} The absorbing boundary condition is also called the {\it non-reflecting} or {\it radiation} boundary condition. These boundary conditions eliminate the reflections of waves at the boundaries, which enables us to simulate the propagation of waves in free space using a finite simulation domain. At the horizon $\partial \Omega_2$, we have \begin{equation} \lim_{\rho \rightarrow M/2 }c^2=0\,. \end{equation} A notable feature of our formulation is that the boundary term on $\partial \Omega_2$ vanishes \begin{equation} \langle \phi_l, c^2\hat{n}\cdot \nabla \tensor{H}{_{\sigma}}\rangle_{\partial \Omega_2}\otimes\tensor{\delta}{_\tau^\sigma} = 0\,.
\end{equation} As such, our formalism naturally avoids artificial boundary conditions at the horizon. \subsection{Numerical implementation} The numerical implementation of this work is based on our code {\bf GWSIM}~\cite{He:2021hhl}, which in turn is based on the publicly available code {\bf deal.ii}~\cite{dealII91,BangerthHartmannKanschat2007,dealII90}. {\bf deal.ii} is a C++ program library designed to solve partial differential equations with modern finite element methods. Coupled to efficient stand-alone linear algebra libraries, such as PETSc~\cite{abhyankar2018petsc,petsc-web-page,petsc-user-ref,petsc-efficient}, {\bf deal.ii} supports massively parallel computing of large sparse linear systems of equations. {\bf deal.ii} also provides convenient tools for the {\it triangulation} of various geometries of the simulation domain. \section{Numerical results \label{sec::numres}} \subsection{Null geodesics and wavefronts} The basic setups follow our previous work~\cite{He:2021hhl}. We assume that the source of GWs is far away from the simulation domain and that the incident waves travel along the axis of the cylinder (the $x$-axis). However, due to the long-range nature of the gravity produced by the black hole, the incident waves suffer the Shapiro time delay on their way from the source to the black hole, which adds a shift to the arrival time of the wavefronts relative to the case without such a black hole and may also distort the shape of the wavefront that arrives at the black hole. As pointed out in our previous work~\cite{He:2021hhl}, these effects can be accurately and robustly determined using null geodesics as tracers for the wavefronts of GWs. This is because, from Eq.~(\ref{wavespeed_light_rays}), GWs travel at the same speed as light rays in Schwarzschild spacetime. The wave vector of GWs, therefore, is a null vector, $k^ak_a=0$. As a result, by definition the hypersurface of the wavefront is a null hypersurface.
A key property of a null hypersurface is that the integral curves of its normal vector $k^a$ are null geodesics. This holds even in curved spacetime (see equation 4.2.37 in~\cite{wald2010general}). Therefore, the wavefront of GWs in Schwarzschild spacetime can be accurately traced by null geodesics. The equations for null geodesics in Schwarzschild spacetime in isotropic coordinates are provided in Appendix~\ref{Nullgeodesic}. After obtaining the spatial trajectories of null geodesics, the Killing time of each wavefront can be obtained by integrating \begin{align} dt=\frac{dl}{c}\,, \end{align} along the geodesics, where $dl$ is the spatial length and $c$ is the isotropic wave speed. Figure~\ref{wavefronttheory} shows the evolution of the wavefronts of GWs in Schwarzschild spacetime. The dashed lines are null geodesics, which are also the integral curves of the normal vectors of the wavefront. The solid lines show wavefronts at different times. In our code, the mass of the black hole is $M=3\times 10^5 M_{\odot}$. Red and black colors show the null geodesics obtained with initial conditions at $x_i=-100\,[{\rm Sec}]$ and $x_i=-25\,[{\rm Sec}]$, respectively. If the distance to the center of the black hole is above $25\,[{\rm Sec}]$, the distortion effect of the black hole on the initial wavefront is negligible in the regimes of our simulations. Moreover, above $25\,[{\rm Sec}]$ there is also no appreciable impact of setting the initial conditions at different places on the wavefronts. Given the above tests, we choose the simulation domain as a cylinder with a length of $L_{\rm cylinder}=50\,[{\rm Sec}]$. In this case, the distance from the boundary at $x=-L_{\rm cylinder}/2$ to the black hole is long enough that the wavefront of the incident waves is not distorted by the black hole at the boundary of the simulation domain. As such, the incident GWs at $x=-L_{\rm cylinder}/2$ can be considered plane waves with a constant wave vector $\vec{k}$ along the $x$-axis.
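The integral $dt=dl/c$ can be sketched in a few lines of Python (a toy estimate, not the paper's geodesic integrator). The closed form assumed here for the isotropic-coordinate wave speed, $c(\rho)=(1-M/2\rho)/(1+M/2\rho)^3$, is an assumption of this sketch, chosen to be consistent with $c\rightarrow 0$ at $\rho=M/2$; the paper integrates along the exact null geodesics of Appendix~\ref{Nullgeodesic} rather than along a straight line:

```python
import numpy as np

# Killing time along a straight line of impact parameter b, using the
# assumed isotropic wave speed c(rho) = (1 - M/2rho)/(1 + M/2rho)^3.
# Everywhere c < 1, so the crossing takes longer than in flat spacetime:
# the Shapiro time delay.
M = 1.4776                                   # ~3e5 solar masses in seconds
b = 15.0                                     # impact parameter [Sec]
x = np.linspace(-25.0, 25.0, 200001)
rho = np.sqrt(x**2 + b**2)
c = (1 - M / (2 * rho)) / (1 + M / (2 * rho))**3

# trapezoidal integral of dl / c along the line
t_curved = float(np.sum(0.5 * (1 / c[1:] + 1 / c[:-1]) * np.diff(x)))
t_flat = float(x[-1] - x[0])                 # flat-spacetime crossing time
assert t_curved > t_flat                     # Shapiro time delay
```

The straight-line path is only a leading-order stand-in; for impact parameters approaching the horizon the bending of the geodesics matters and the full integration is required.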
Figure~\ref{cylinderical} shows the triangulation of our simulation domain. For illustrative purposes, we only show a refinement of $2^5$ (along one dimension). Unlike the cubic domains usually used in numerical simulations, a cylindrical domain can minimize the impact of the boundaries on the waveforms, as both the simulation domain and the wavefront have azimuthal symmetry. Further note that, due to the linearity of the wave equation and the use of geometric units, the parameters used in our code and numerical simulations can be rescaled to other values that are of astrophysical interest. \subsection{Wave zones} As pointed out in the previous section, the leading wavefront can be traced by null geodesics, which provides a robust test of our simulation results. In this test, we choose the radius of the cylinder as $R_{\rm cylinder}=15\,[{\rm Sec}]$. The incident waves are set at $x=-L_{\rm cylinder}/2$ as \begin{equation} \left\{ \begin{aligned} &H_0=h_{22}=h(t)\nonumber\\ &H_1=h_{\times}=0\nonumber\\ &H_2=0\nonumber\\ &H_3=0\nonumber\\ &H_4=0\nonumber \end{aligned} \right.\,, \end{equation} propagating along the $x$ direction. In this test, we assume that the incident wave has only one non-vanishing polarization pattern. Given that $H_2=0$ initially, we have $h_{zz}=-H_2-h_{yy}=-h_{yy}$ and $h_{+}=h_{yy}=-h_{zz}$ for the incident wave. Moreover, since the wave vector of the incident wave is along the $x$-axis, $\vec{k}\parallel \vec{x}$, the incident wave is transverse relative to the $x$-axis. We first choose the wave train to be a wavelet. The waveform of the input wave train is a sinusoidal wave lasting for only one period \begin{equation} h(t)=\left\{ \begin{aligned} A\sin(\omega t)\,, t\le\lambda\\ 0\,, t>\lambda \end{aligned} \right.\,, \label{sinusoidal} \end{equation} where $\omega = 2\pi/\lambda$ is the angular frequency and $\lambda = 5\,[{\rm Sec}]$ is the wavelength. The amplitude $A$ is normalized to unity.
The black hole is placed at the center of our simulation domain with a mass of $M=3\times 10^5 M_{\odot}$, the same as before. The radius of the horizon of the black hole is $\rho_s=M/2=0.738803\,[{\rm Sec}]$. We perform a high-resolution simulation with a refinement of $2^8$. The total number of DOFs is $8.055\times 10^8$. The simulation uses $1920$ CPU cores and the total cost is $236.6{\rm K}$ CPU-hours. The parameters of our simulation are summarized in Table~\ref{Table_Re}. \begin{table*} \centering \caption{The parameters used in our simulation \label{Table_Re}} \begin{tabular}{c|c|c|c|c} \hline Refinement & Total DOF & Mass of black hole & Maximum diameter of elements & Time step \\ \hline $2^8$ & $8.055\times 10^8$ & $3\times 10^5 M_{\odot}$ & $0.109055\,[{\rm Sec}]$ & $0.00839504\,[{\rm Sec}]$ \\ \hline \end{tabular} \end{table*} Figure~\ref{sim1} shows the wave train in the Schwarzschild spacetime. The snapshot is taken at $t=12.00\,[{\rm Sec}]$ along the $x-y$ plane at $z=0$. The solid black circle indicates the position of the black hole. The radius of the horizon of the black hole is $\rho_s=0.738803\,[{\rm Sec}]$. The dotted lines are the congruence of null geodesics. The red and blue solid lines show the leading (red) and trailing (blue) wavefronts predicted by null geodesics, respectively. The color bar to the right shows the amplitude of the wave train. The wave zone in our simulations agrees with the theoretical predictions very well. Figure~\ref{sim2} shows the wave train at $t=40.5273\,[{\rm Sec}]$. At this time, GWs have already arrived at the black hole. Since GWs travel faster in the outer regions than close to the horizon, the wave zone is bent toward the black hole. Figure~\ref{sim3} shows the wave train at $t=54.0273\,[{\rm Sec}]$. At this time, GWs have already passed through the black hole. The wave zone is wildly twisted.
Unlike in geometric optics, GWs do not form a caustic behind the black hole at scales comparable to their wavelength. Instead, an interference pattern appears in the overlaps of the twisted wave zones. Moreover, most GWs, indeed, simply pass by and are deflected by the black hole. Only in a small region along the $x$-axis (radius) can GWs be reflected back when they hit the horizon. Moreover, ahead of the leading wavefront, wavefront shocks appear along the $x$-axis. These wavefront shocks are numerical artifacts. The significance of such shocks depends on the relative resolution between the wavelength of the input waves and the mesh size of the simulations. As shown in our previous work~\cite{He:2021hhl}, they can be significantly suppressed with a higher resolution in the simulations. Figure~\ref{sim4} shows the wave train at $t=63.0273\,[{\rm Sec}]$. Unlike in the case of flat spacetime, the strong Huygens’ principle does not hold in curved spacetime due to the effect of back-scattering. A tail behind the trailing wavefront (red shaded regions) emerges. However, the significance of this effect depends on the direction of propagation of the trailing wavefront. If the trailing wavefront travels perpendicularly to the radius of the black hole, there is no such back-scattering effect and the trailing wavefront remains sharp. However, if the trailing wavefront travels along the radius, a clear tail emerges. Physically speaking, the effect of back-scattering is caused by the interaction between GWs and the background curvature. If GWs travel perpendicular to the radius of the black hole, the curvature of the background spacetime relative to the GWs does not change and, as a result, there is no such interaction and no back-scattering. However, if GWs travel along the radius, the curvature changes and back-scattering emerges.
\subsection{Power-law tail} Besides the back-scattering, another intrinsic response of a black hole to a perturbation is given by the quasinormal modes (QNMs) (see~\cite{2009CQGra..26p3001B} for a review). However, QNMs arise only under particular boundary conditions, for which the mode functions blow up both at the horizon and at spatial infinity. As a result, QNMs do not form a complete set of wavefunctions and it is in general impossible to represent a regular wavelet like ours as a sum of QNMs. Indeed, despite QNMs having been known for over three decades, how these modes are actually excited by physically relevant perturbations is less well studied~\cite{2009CQGra..26p3001B}. Our results suggest that QNMs, if any, are sub-dominant in our case, as the amplitude of the incident wavelet on the $x$-axis is amplified behind the black hole, which is the dominant signal. However, as shown in Fig.~\ref{powerlawtail}, a power-law tail emerges behind the trailing wavefront, which is significant in our case. It is worth noting that such a tail does not appear in front of the black hole, since there the trailing wavefront travels faster than the leading wavefront. \subsection{Sinusoidal Wave} Next, we choose the input GWs to be a continuous sinusoidal wave, with the same functional form as Eq.~(\ref{sinusoidal}) but lasting for a much longer time. The parameters of the simulation are the same as those in Table~\ref{Table_Re}. Figure~\ref{sim5} shows a snapshot at $t=80.0218\,[{\rm Sec}]$ and $z=0$ along the $x-y$ plane. The waves are in a steady state. Unlike in geometric optics, GW signals cannot be shielded by the black hole due to the wave nature of GWs. GWs do not form a caustic behind the black hole at scales that are comparable to their wavelength. Instead, a strong beam and an interference pattern appear in the overlaps of the wave zones along the optical axis behind the black hole.
\subsection{Realistic input GW waveform} In this subsection, we choose the input waveform to be a realistic template generated by binary black holes. The waveform is generated by the NRHybSur3dq8 model~\cite{Varma:2018mmi}, which is a surrogate model for numerical relativity simulations. The initial total mass of the binary black holes is $40 M_{\odot}$, and the binary has equal masses. We choose the lens to be an isolated black hole with a mass of $133.33 M_{\odot}$. The binary GW source is $100.0\,{\rm Mpc}$ away from the lens. The inclination of the angular momentum plane of the source binary is $\pi/2$, so that only the $h_+$ polarization can be observed at the lens. We choose the starting time of the waveform at a point $0.094666\,[{\rm Sec}]$ before the merger of the binary black holes. In practice, we simulate such a system with a scaling factor of $750$. As such, the lens is $10^5 M_{\odot}$ in our new simulations. We choose the radius of the cylinder as $R_{\rm cylinder}= 48.73 \rho_s$ and the length as $L_{\rm cylinder}= 146.18 \rho_s$, where $\rho_s$ is the horizon of the lens black hole. The simulation has a refinement of $2^8$ with a total DOF of $8.055\times 10^8$, the same as before. Figure~\ref{Inputwaveform} shows the input waveform (the solid black line) and the lensed waveform in the time domain (the orange line), taken at $x=0.023333\,[{\rm Sec}]\approx 73\rho_s$ along the $x$-axis. This position is far away from the horizon, where the impact of the black hole on wave propagation is small. Indeed, as pointed out in~\cite{PhysRevD.103.064047}, since the amplitude of the lensed waveform completely degenerates with the luminosity distance, the detectability of the lensed waveform comes from the changes of its shape relative to the unlensed one. We therefore normalize the amplitude of the lensed waveform by its maximum value. The initial amplitude of the input waveform is then re-scaled to match the lensed waveform. Note that its time axis is also shifted for the purpose of comparison. 
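The normalization and alignment procedure described above can be sketched as follows; the arrays here are hypothetical stand-ins for the simulated waveforms, not our actual simulation output:

```python
import numpy as np

# Hypothetical stand-ins for the two waveforms (NOT simulation output):
t = np.linspace(0.0, 0.1, 4096)                        # time samples [Sec]
h_lensed = 0.3 * np.sin(2 * np.pi * 300.0 * t) \
    * np.exp(-((t - 0.050) / 0.010) ** 2)              # "lensed" waveform
h_input = 0.7 * np.sin(2 * np.pi * 300.0 * (t - 0.002)) \
    * np.exp(-((t - 0.052) / 0.010) ** 2)              # "input" waveform

# Normalize the lensed waveform by its maximum value, as in the text.
h_lensed_n = h_lensed / np.abs(h_lensed).max()

# Re-scale the input amplitude to unit maximum and shift its time axis
# so the peaks line up, for the purpose of comparison.
shift = t[np.argmax(np.abs(h_lensed_n))] - t[np.argmax(np.abs(h_input))]
h_input_n = np.interp(t - shift, t, h_input / np.abs(h_input).max())
```

After this step, any remaining difference between the two curves reflects a genuine change of shape rather than an overall amplitude or time offset.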
Compared with the unlensed waveform, the lensed waveform exhibits two important features after passing through the black hole. First, unlike in flat spacetime, the shape of the lensed waveform can be permanently changed due to the effect of back-scattering in strong gravity. One obvious example of this effect is the wavelet with a sharp trailing wavefront already shown in Fig.~\ref{powerlawtail}: after passing through the black hole, the wavelet has a clear tail. Second, the relative strength of the amplitude after the lens is frequency dependent. This feature is significant even in the weak field limit~\cite{Takahashi_2003}. As a result of these two effects, the lensed waveform is much longer than the input waveform in the merger and ringdown phases, as shown in Fig.~\ref{Inputwaveform}. The relative strength of the lensed waveform in the merger and ringdown also exhibits significant differences from the input waveform. Figure~\ref{Convergency_test} shows the numerical convergence test of the lensed waveforms. The solid orange line is the waveform from the simulation with a refinement of $2^7$ and the blue dashed line is the waveform obtained from a higher resolution with a refinement of $2^8$. The left panel shows the lensed waveform and the right panel shows an enlargement of the waveform in the merger and ringdown phases. The numerical results from the $2^7$ and $2^8$ refinements show good agreement. Moreover, since the initial wavelength is much larger than the one used in our previous wavelet test, there are almost no wavefront shocks in the lensed waveform. \subsection{Hamiltonian constraint} The evolution equation for the Hamiltonian constraint is given by \begin{align} \frac{\partial \mathcal{H}}{\partial t} =-D_i(NM^i)+2N K\mathcal{H}-M^iD_iN\,. \end{align} In the background spacetime, the above equation is trivial, since both $\mathcal{H}$ and $M^i$ vanish in the background: $\mathcal{H}=0\,,M^i=0$. 
Taking the linear perturbation of the above equation, we obtain \begin{align} \frac{\partial \delta \mathcal{H}}{\partial t} &=-2\delta M^i D_i N -2M^i D_i \delta N -\delta N D_i M^i - N \delta (D_i M^i) \nonumber \\ &=-2\delta M^i D_i N\nonumber\,. \end{align} If the perturbed constraints are satisfied at $t=0$, \begin{equation} \left\{ \begin{aligned} \delta \mathcal{H}|_{t=0}&=0\nonumber \\ \delta M^i|_{t=0}&=0 \end{aligned} \right. \,, \end{equation} then $\delta \mathcal{H}=0$ and $\delta M^i=0$ are preserved by the wave equation Eq.~(\ref{perturbedwaveequation}). In fact, by straightforward calculation, we obtain \begin{align} \delta \mathcal{H}=\frac{1}{2}\delta R = \frac{1}{2}\left(\delta \gamma^{ij} R_{ij} + \gamma^{ij} \delta R_{ij}\right)\,. \end{align} From Eq.~(\ref{wavetrace}), we find \begin{align} \gamma^{ij}\delta R_{ij}=-\frac{1}{2N^2}\frac{\partial^2h}{\partial t^2}+\gamma^{im}\gamma^{jn}h_{mn}\frac{D_iD_j N}{N}\,. \end{align} Therefore, we obtain \begin{align} \delta \mathcal{H}&=\frac{1}{2}\delta R \nonumber \\ &=-\frac{1}{4N^2}\frac{\partial^2h}{\partial t^2}-\frac{\gamma^{im}\gamma^{jn}h_{mn}}{2}\left(R_{ij} -\frac{D_iD_j N}{N}\right)\nonumber\\ &=-\frac{1}{4N^2}\frac{\partial^2h}{\partial t^2}\,. \end{align} From the above equation, the perturbed Hamiltonian constraint $\delta \mathcal{H} =0$ is satisfied as long as $h_{ij}$ is trace-free. Since $h_{ij}$ is taken to be exactly trace-free in our scheme, the perturbed Hamiltonian constraint holds exactly in our approach. \section{conclusions\label{sec::conclusions}} In this paper, we have substantially extended our previous work~\cite{He:2021hhl}, which simulated the propagation of GWs through a potential well in the weak field limit, to the regime of strong gravity. We have developed a code that is capable of studying the propagation of GWs in the spacetime of a Schwarzschild black hole. 
Based on the 3+1 form of Einstein's equations, we calculate the perturbation equations up to linear order in a self-consistent manner. We have shown explicitly that these equations are covariant under arbitrary infinitesimal coordinate transformations. Moreover, unlike the conventional perturbation equations for a black hole~\cite{PhysRev.108.1063,PhysRevLett.24.737,PhysRevD.2.2141,Teukolsky}, our formalism is less coordinate dependent. There are no coordinate singularities in our approach. As such, no regularity conditions are needed in our analyses. To numerically solve these equations, we first derived their {\it weak formulation}. Then we adopted the cGFEM, based on the publicly available code DEAL.II. We evolve the perturbed equations in the maximal slicing gauge, $\delta N =0$. Compared with the schemes presented in~\cite{PhysRevD.70.104007,PhysRevD.77.084007} in numerical relativity, a notable feature of our work is that the wave speed is no longer a constant but varies in space, and equals the speed of light observed by an asymptotic observer. Since the wave speed vanishes at the horizon, a particular advantage of our approach is that it naturally avoids boundary conditions at the horizon. Based on our code, we first simulated a finite wave train of GWs passing through the spacetime of a Schwarzschild black hole. Since the leading wavefront represents the transfer of energy from one place to another, it is subject to the constraint of causality. Moreover, since the speed of the leading wavefront equals the speed of light rays, the leading wavefront is a null hypersurface and its integral curves are null geodesics. As a result, the leading wavefront can be traced by null geodesics, which provides a robust way to test our numerical results. We find that our numerical simulations agree with the theoretical predictions very well. 
Besides the leading wavefront, we have also studied the evolution of the wave zones of GWs in the spacetime of the Schwarzschild black hole. Behind the black hole, the wave zone is wildly twisted and has a complicated geometry. Moreover, we find that the back-scattering due to the interaction between GWs and the background curvature is strongly dependent on the direction of propagation of the trailing wavefront relative to the black hole. For waves that are far away from the horizon, the trailing wavefront travels nearly perpendicular to the radius; there is no back-scattering effect and the trailing wavefront remains clear. However, in regions around the $x$-axis, where the trailing wavefront travels along the radius of the black hole, we find that a clear tail behind the trailing wavefront emerges. Such a tail does not appear in front of the black hole, as in that case the trailing wavefront travels faster than the leading wavefront. Moreover, we have also simulated a continuous wave train passing through the black hole. We find that, unlike in geometric optics, GW signals cannot be sheltered by the black hole due to the wave nature of GWs. GWs do not form a caustic behind the black hole at scales comparable to their wavelength. Instead, a strong beam and an interference pattern appear in the overlaps of the wave zones behind the black hole along the optical axis. The wave functions in our simulations are well defined on the optical axis, in contrast to those in scattering theory, which diverge along the optical axis behind the black hole~\cite{PhysRevD.52.1808,PhysRevD.18.1798}. Such differences are due to the boundary conditions imposed in scattering theory, which implicitly assume that the scattered waves are spherical. This, however, is not the case in our simulations: the wavefront of the outgoing GWs in our simulations has a complicated geometric shape. 
For a realistic input waveform generated by binary black holes, we find that after passing through the lens black hole, the lensed waveform in the merger and ringdown phases is much longer than the input waveform due to the effect of back-scattering in strong gravity. The relative strength of the lensed waveform in the merger and ringdown phases also exhibits significant differences from that of the input waveform. Further, it is worth noting that due to the linearity of the wave equation and the use of geometric units, the parameters used in our code and numerical simulations can be rescaled to other values that are of astrophysical interest. For instance, to obtain the results for a black hole with a mass of $M=10 M_{\odot}$, we only need to rescale the temporal and spatial axes of our numerical results by a factor of $1/3 \times 10^{-4}$. Finally, this paper mainly focuses on the numerical technique. In a companion paper, we will present a comprehensive analysis of the detectability of GWs passing through an isolated Schwarzschild black hole against the sensitivities of current and future GW detectors using Bayesian inference~\cite{DelPozzo:2014cla}. Moreover, since our formalism does not involve artificial boundary conditions, our work can be extended to study the physical origin of the excitation of QNMs in the spacetime of a black hole. Our formalism can also provide a potential way to study GWs produced by generic extreme mass ratio inspirals (EMRIs) in the time domain~\cite{PhysRevD.61.084004,PhysRevD.73.024027}. It is also of great interest to extend our work to the spacetime of a Kerr black hole~\cite{Baraldo:1999ny}. Detailed analyses of these issues will be presented in a series of follow-up papers. \vspace{5mm} \noindent {\bf Acknowledgments} J.H.H. acknowledges support of Nanjing University. This work is supported by the National Key R$\&$D Program of China (Grant No. 
2021YFC2203002), the National Natural Science Foundation of China (Grants No. 12075116, No.12150011), the science research grants from the China Manned Space Project (Grant No.CMS-CSST-2021-A03), and the Natural Science Foundation of the Jiangsu Higher Education Institutions of China (Grant No. 22KJB630006). The numerical calculations in this paper have been done on the computing facilities in the High Performance Computing Center (HPCC) of Nanjing University. \bibliography{myref} \section*{Appendix} \subsection{general covariance of the perturbation equations\label{covariantproof}} In this appendix, we show the general covariance of the perturbation equations. We start with the wave equation \begin{align} \frac{\partial^2}{\partial t^2}h_{ij}&= 2N\delta(D_iD_jN)-2N^2\delta R_{ij}-2N\delta N R_{ij}\nonumber\\ &=2N[D_iD_j\delta N-\frac{1}{2}\gamma^{kl}\partial_kN(D_i h_{lj}+D_j h_{il}-D_l h_{ij})\nonumber\\ &-N\delta R_{ij}-\delta N R_{ij}]\,.\label{secondwaveper} \end{align} Inserting the following infinitesimal coordinate transformation into the RHS of the above equation \begin{equation} \left\{ \begin{aligned} \delta \tilde{N}&\rightarrow \delta N - \eta^kD_k N\\ \tilde{h}_{ij}&\rightarrow h_{ij} - D_j\eta_i-D_i\eta_j \\ \delta \tilde{R}_{ij}&\rightarrow \delta R_{ij} - R_{ik}D_j\eta^k-R_{kj}D_i\eta^k - \eta^kD_k R_{ij} \label{infinitrans2} \end{aligned} \right.\,, \end{equation} we obtain \begin{align} \frac{\widetilde{{\rm RHS}}}{2N}&\rightarrow \frac{{\rm RHS}}{2N}-D_iD_j(\eta^kD_kN)\nonumber\\ &+\frac{1}{2}D^lN(D_iD_l\eta_j+D_iD_j\eta_l+D_jD_i\eta_l\nonumber \\ &+D_jD_l\eta_i-D_lD_j\eta_i-D_lD_i\eta_j)\nonumber\\ &+N(R_{ik}D_j\eta^k+R_{kj}D_i\eta^k+\eta^kD_kR_{ij})\nonumber\\ &+R_{ij}\eta^kD_kN\,.\label{wavetrans} \end{align} From the definition of the Riemann tensor, we have \begin{align} D_iD_l\eta_j&=D_lD_i\eta_j-R_{ilmj}\eta^m\nonumber\\ D_jD_l\eta_i&=D_lD_j\eta_i-R_{jlmi}\eta^m\nonumber\\ D_jD_i\eta_l&=D_iD_j\eta_l-R_{jiml}\eta^m\,. 
\end{align} We expand the first term as \begin{align} D_iD_j(\eta^kD_kN)&=D_kND_iD_j\eta^k+D_j\eta^kD_iD_kN\nonumber\\ &+D_i\eta^kD_jD_kN+\eta^kD_iD_jD_kN\,. \end{align} Further noting that $D_iD_jN=NR_{ij}$, from the above expressions, Eq.~(\ref{wavetrans}) reduces to \begin{align} \frac{\widetilde{{\rm RHS}}}{2N}&\rightarrow \frac{{\rm RHS}}{2N}- \eta^kD_iD_jD_kN+\eta^kD_k(NR_{ij})\nonumber\\ &-\frac{1}{2}D^lN(R_{jiml}+R_{ilmj}+R_{jlmi})\eta^m \,. \end{align} The terms with the Riemann tensor can be further simplified using the cyclic identity of the Riemann tensor $$R_{jiml}+R_{mjil}+R_{imjl}=0\,,$$ which gives \begin{align} \frac{1}{2}D^lN(R_{jiml}+R_{ilmj}+R_{jlmi})\eta^m=D^lNR_{jlki}\eta^k\,. \end{align} Further noting that \begin{align} \eta^kD_iD_jD_kN&=\eta^kD_iD_kD_jN\nonumber\\ &=\eta^k(D_kD_iD_jN-R_{iklj}D^lN)\,, \end{align} we find \begin{align} \frac{\widetilde{{\rm RHS}}}{2N}&\rightarrow \frac{{\rm RHS}}{2N}- \eta^kD_iD_jD_kN+\eta^kD_k(NR_{ij})\nonumber\\ &-\frac{1}{2}D^lN(R_{jiml}+R_{ilmj}+R_{jlmi})\eta^m\nonumber\\ &=\frac{{\rm RHS}}{2N}-\eta^kD_kD_iD_jN+\eta^kD_k(NR_{ij})\nonumber\\ &=\frac{{\rm RHS}}{2N} \,. \end{align} The above result shows that the RHS of Eq.~(\ref{secondwaveper}) does not change its form under arbitrary infinitesimal coordinate transformations. Since $- D_j\eta_i-D_i\eta_j$ is time independent, the LHS of Eq.~(\ref{secondwaveper}), $\frac{\partial^2}{\partial t^2}h_{ij}$, does not change its form either. Equation~(\ref{secondwaveper}), therefore, is covariant. Not only is Eq.~(\ref{secondwaveper}) covariant, we can also show that \begin{align} \delta R_{ij}&=\frac{1}{2}(D^lD_i h_{lj}+D^lD_jh_{il}-D^lD_lh_{ij})\nonumber\\ &-\frac{1}{2}D_jD_i(\gamma^{kl}h_{lk})\,, \end{align} is covariant. 
To do this, we insert the infinitesimal coordinate transformation of $h_{ij}$, \begin{align} \tilde{h}_{ij}&\rightarrow h_{ij} - \gamma_{ik}D_j\eta^k-\gamma_{kj}D_i\eta^k\,, \end{align} into the RHS of the above equation and obtain \begin{align} \widetilde{{\rm RHS}}\rightarrow &{\rm RHS}-\frac{1}{2}D^lD_iD_l\eta_j-\frac{1}{2}D^lD_iD_j\eta_l-\frac{1}{2}D^lD_jD_l\eta_i\nonumber\\ &-\frac{1}{2}D^lD_jD_i\eta_l+\frac{1}{2}D^lD_lD_i\eta_j+\frac{1}{2}D^lD_lD_j\eta_i\nonumber\\ &+D_iD_jD^k\eta_k\,. \end{align} From the definition of the Riemann tensor, we have \begin{align} D^lD_lD_i\eta_j&=D^lD_iD_l\eta_j-\eta^kD^lR_{likj}-D^l\eta^k R_{likj}\nonumber\\ D^lD_lD_j\eta_i&=D^lD_jD_l\eta_i-\eta^kD^lR_{ljki}-D^l\eta^k R_{ljki}\,. \end{align} Further noting that \begin{align} D_iD_jD^k\eta_k&=D_iD_kD_j\eta^k-D_i(R_{jm}\eta^m)\nonumber\\ &=D_kD_iD_j\eta^k-R_{ikmj}D^m\eta^k-R_{im}D_j\eta^m\nonumber\\ &-\eta^mD_iR_{jm}-R_{jm}D_i\eta^m\, \end{align} and \begin{align} D^lD_jD_i\eta_l&=D^lD_iD_j\eta_l-D^l(R_{jiml}\eta^m)\,, \end{align} we obtain \begin{align} \widetilde{{\rm RHS}}\rightarrow &{\rm RHS}+\frac{1}{2}D^l\eta^k(R_{jikl}+R_{ikjl}+R_{kjil})\nonumber\\ &-R_{ik}D_j\eta^k-R_{jk}D_i\eta^k-\eta^kD_iR_{jk}\nonumber\\ &+\frac{1}{2}\eta^k(D^lR_{jikl}-D^lR_{likj}-D^lR_{ljki})\,. \end{align} Using the cyclic identity of the Riemann tensor $R_{jikl}+R_{ikjl}+R_{kjil}=0$ and the Bianchi identity \begin{align} D^lR_{jikl}&=D_iR_{jk}-D_jR_{ik}\nonumber\\ D^lR_{likj}&=-D_jR_{ki}+D_kR_{ji}\nonumber\\ D^lR_{ljki}&=-D_iR_{kj}+D_kR_{ij}\,, \end{align} we find that \begin{align} \widetilde{{\rm RHS}}\rightarrow &{\rm RHS}-R_{ik}D_j\eta^k-R_{jk}D_i\eta^k-\eta^kD_kR_{ij}\,, \end{align} which is consistent with performing the infinitesimal coordinate transformation of $\delta R_{ij}$ directly. The expression of $\delta R_{ij}$ in terms of $h_{ij}$, therefore, is covariant. 
Next, we show that the perturbed conservation law \begin{align} D^l \Gamma_l-D^lD_l h=\gamma^{im}\gamma^{jn} h_{mn}R_{ij}\,,\label{Dgammaapp} \end{align} is covariant as well. Under the infinitesimal coordinate transformation Eq.~(\ref{infinitrans2}), $\Gamma_l$ and $h$ in Eq.~(\ref{Dgammaapp}) transform as \begin{align} &D^l\tilde{\Gamma}_l\rightarrow D^l\Gamma_l - D_lD_mD^l\eta^m - D_lD_mD^m\eta^l\nonumber \\ &D^lD_l \tilde{h} \rightarrow D^lD_l h - 2 D^lD_l D_m\eta^m \,. \end{align} From the definition of the Ricci tensor, we obtain \begin{align} (D_lD_m-D_mD_l)\eta^m=-R_{lk}\eta^k\,. \end{align} Taking the derivative $D^l$ of the above equation, we find \begin{align} D^lD_l D_m\eta^m=D^lD_m D_l\eta^m-\eta^mD^lR_{lm}-R_{lm}D^l\eta^m\,. \end{align} Since the background scalar curvature vanishes, $R=0$, from the Bianchi identity we have $2D^lR_{lm}=D_m R =0$. It then follows that \begin{align} D^lD_l D_m\eta^m=D^lD_m D_l\eta^m-R_{lm}D^l\eta^m\,. \end{align} The infinitesimal coordinate transformation of the left hand side of Eq.~(\ref{Dgammaapp}) then gives \begin{align} \widetilde{{\rm LHS}} \rightarrow & {\rm LHS} - D_lD_mD^l\eta^m - D_lD_mD^m\eta^l + 2 D^lD_l D_m\eta^m\nonumber\\ =&{\rm LHS}+D_lD_mD^l\eta^m - D_lD_mD^m\eta^l-2 R_{lm}D^l\eta^m\,. \end{align} Further noting that \begin{align} D_lD_mD^l\eta^m &= D_mD_lD^l\eta^m - \tensor{R}{_l_m_k^l}D^k\eta^m- \tensor{R}{_l_m_k^m}D^l\eta^k\nonumber\\ &=D_mD_lD^l\eta^m + R_{mk}D^k\eta^m-R_{lk}D^l\eta^k\nonumber\\ &=D_mD_lD^l\eta^m\nonumber\\ &=D_lD_mD^m\eta^l\,, \end{align} we obtain \begin{align} D^l \tilde{\Gamma}_l-D^lD_l \tilde{h}&=D^l \Gamma_l-D^lD_l h-2 R_{lm}D^l\eta^m\nonumber\\ &=\gamma^{im}\gamma^{jn} h_{mn}R_{ij}-2 R_{ij}D^i\eta^j\nonumber\\ &=\gamma^{im}\gamma^{jn} \tilde{h}_{mn}R_{ij}\,. \end{align} The above equation demonstrates that Eq.~(\ref{Dgamma}) does not change its form under arbitrary infinitesimal coordinate transformations, which means that Eq.~(\ref{Dgamma}) is covariant. 
\subsection{Null geodesics\label{Nullgeodesic}} The trajectories of null geodesics in the spacetime of a Schwarzschild black hole are given by \begin{align} \frac{d^2\mu}{d\phi^2}+\mu=3M\mu^2\,, \end{align} where $r$ and $\phi$ are the radius and azimuthal angle in the Schwarzschild coordinates, and $\mu=1/r$ is the inverse of the radius. We then make the coordinate transformations \begin{equation} \left \{ \begin{aligned} r&=\frac{(M+2\rho)^2}{4\rho}\\ \rho&=\frac{1}{2}\left(r-M+\sqrt{r^2-2Mr}\right) \end{aligned} \right. \,,\label{rho2r} \end{equation} where $\rho$ is the radius in isotropic coordinates. With the notation $\mu'=1/\rho$, the geodesic equations in isotropic coordinates are given by \begin{equation} \left \{ \begin{aligned} \frac{d^2\mu'}{d\phi^2}&=\frac{3M-r}{\rho^2}\frac{d\rho}{dr}\\ \frac{d\rho}{dr}&=\frac{1}{2}\left(1+\frac{r-M}{\sqrt{r^2-2Mr}}\right) \end{aligned} \right. \,. \end{equation} In the above equations, $r$ can be replaced by $\rho=1/\mu'$ via Eq.~(\ref{rho2r}). After obtaining the geodesics, the length of the trajectories can be obtained from \begin{align} dl=\frac{1}{\mu'}\sqrt{\frac{1}{{\mu'}^2}\left(\frac{d\mu'}{d\phi}\right)^2+1}d\phi\,. \end{align} Then the total asymptotic time along the null geodesics can be computed as \begin{align} t=\int \frac{dl}{c}\,, \end{align} where $c$ is the isotropic speed of the wave defined in Eq.~(\ref{definationspeed}).
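The Schwarzschild form of the geodesic equation above is straightforward to integrate numerically; the following sketch (with a hypothetical impact parameter, and working directly in the Schwarzschild variable $\mu=1/r$ rather than the isotropic one) recovers the familiar light-bending angle, which approaches $4M/b$ in the weak-field limit:

```python
import numpy as np
from scipy.integrate import solve_ivp

M = 1.0        # black-hole mass in geometric units (G = c = 1)
b = 10.0 * M   # impact parameter (hypothetical value)

# d^2(mu)/dphi^2 + mu = 3 M mu^2, with mu = 1/r.
def rhs(phi, y):
    mu, dmu = y
    return [dmu, 3.0 * M * mu**2 - mu]

# Start far away (mu = 0); dmu/dphi = 1/b reproduces the straight line
# mu = sin(phi)/b when M = 0.
def escaped(phi, y):
    return y[0] - 1e-8
escaped.terminal = True
escaped.direction = -1  # trigger only when mu decreases back toward zero

sol = solve_ivp(rhs, (0.0, 2.0 * np.pi), [0.0, 1.0 / b],
                events=escaped, rtol=1e-10, atol=1e-12)

# The total swept angle minus pi gives the deflection angle
# (about 0.49 rad here, versus the leading-order value 4M/b = 0.4).
deflection = sol.t_events[0][0] - np.pi
```

Tracing a bundle of such rays with different impact parameters is what we use to cross-check the position of the leading wavefront in the simulations.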
Title: Utilizing a global network of telescopes to update the ephemeris for the highly eccentric planet HD 80606 b and to ensure the efficient scheduling of JWST
Abstract: The transiting planet HD80606b undergoes a 1000-fold increase in insolation during its 111-day orbit due to it being highly eccentric (e=0.93). The planet's effective temperature increases from 400K to over 1400K in a few hours as it makes a rapid passage to within 0.03AU of its host star during periapsis. Spectroscopic observations during the eclipse (which is conveniently oriented a few hours before periapsis) of HD80606b with the James Webb Space Telescope (JWST) are poised to exploit this highly variable environment to study a wide variety of atmospheric properties, including composition, chemical and dynamical timescales, and large scale atmospheric motions. Critical to planning and interpreting these observations is an accurate knowledge of the planet's orbit. We report on observations of two full-transit events: 7 February 2020 as observed by the TESS spacecraft and 7--8 December 2021 as observed with a worldwide network of small telescopes. We also report new radial velocity observations which when analyzed with a coupled model to the transits greatly improve the planet's orbital ephemeris. Our new orbit solution reduces the uncertainty in the transit and eclipse timing of the JWST era from tens of minutes to a few minutes. When combined with the planned JWST observations, this new precision may be adequate to look for non-Keplerian effects in the orbit of HD80606b.
https://export.arxiv.org/pdf/2208.14520
\submitjournal{AAS Journals} \shorttitle{$\copyright$ 2022. All rights reserved.} % \shortauthors{$\copyright$ 2022. All rights reserved.} \begin{CJK*}{UTF8}{gbsn} \title{Utilizing a global network of telescopes to update the ephemeris for the highly eccentric planet HD 80606 b and to ensure the efficient scheduling of JWST} \correspondingauthor{Kyle A. Pearson, kyle.a.pearson@jpl.nasa.gov} \author[0000-0002-5785-9073]{Kyle A. Pearson} \affiliation{Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91125 USA} \affiliation{Exoplanet Watch, \url{https://exoplanets.nasa.gov/exoplanet-watch/}} \author[0000-0002-5627-5471]{Charles Beichman} \affiliation{NASA Exoplanet Science Institute, IPAC, California Institute of Technology, Pasadena, CA 91125 USA} \affiliation{Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91125 USA} \author[0000-0003-3504-5316]{B.J. Fulton} \affiliation{NASA Exoplanet Science Institute, IPAC, California Institute of Technology, Pasadena, CA 91125 USA} \author[0000-0002-0792-3719]{Thomas M. Esposito} \affiliation{SETI Institute, Carl Sagan Center, 339 Bernardo Ave, Ste 200, Mountain View, CA 94043 USA} \affiliation{Unistellar SAS, 19 Rue Vacon, 13001 Marseille, France} \affiliation{Department of Astronomy, University of California Berkeley, Berkeley, CA 94720 USA} \author[0000-0001-7547-0398]{Robert T. Zellem} \affiliation{Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91125 USA} \affiliation{Exoplanet Watch, \url{https://exoplanets.nasa.gov/exoplanet-watch/}} \author[0000-0002-5741-3047]{David R. 
Ciardi} \affiliation{NASA Exoplanet Science Institute, IPAC, California Institute of Technology, Pasadena, CA 91125 USA} \author{Jonah Rolfness} \affiliation{California Institute of Technology, Pasadena, CA 91125 USA} \affiliation{Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91125 USA} \affiliation{Exoplanet Watch, \url{https://exoplanets.nasa.gov/exoplanet-watch/}} \author[0000-0002-5977-5607]{John Engelke} \affiliation{Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91125 USA} \affiliation{Raytheon Intelligence, Information, and Services, 300 N Lake Ave, Suite 1120, Pasadena, CA 91101, USA} \affiliation{Exoplanet Watch, \url{https://exoplanets.nasa.gov/exoplanet-watch/}} \author[0000-0002-0665-5759]{Tamim Fatahi} \affiliation{Department of Computer Science, California Polytechnic University, San Luis Obispo USA} \affiliation{Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91125 USA} \affiliation{Exoplanet Watch, \url{https://exoplanets.nasa.gov/exoplanet-watch/}} \author{Rachel Zimmerman-Brachman} \affiliation{Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91125 USA} \affiliation{Exoplanet Watch, \url{https://exoplanets.nasa.gov/exoplanet-watch/}} \author[0000-0001-7801-7425]{Arin Avsar} \affiliation{Unistellar SAS, 19 Rue Vacon, 13001 Marseille, France} \affiliation{Department of Astronomy, University of California Berkeley, Berkeley, CA 94720 USA} \author[0000-0002-6112-7609]{Varun Bhalerao} \affiliation{Department of Physics, Indian Institute of Technology Bombay, Powai, 400076, India} \author{Pat Boyce} \affiliation{Boyce Research Initiatives and Education Foundation} \affiliation{Exoplanet Watch, \url{https://exoplanets.nasa.gov/exoplanet-watch/}} \author{Marc Bretton} \affiliation{Observatoire des Baronnies Proven{\c c}ales, Route de Nyons, F-05150 Moydans, France} \author[0000-0001-5248-1705]{Alexandra D. 
Burnett} \affiliation{Unistellar Network Citizen Scientist, \url{https://unistellaroptics.com/citizen-science/}} \affiliation{School of Natural Resources and the Environment, University of Arizona, Tucson, AZ 85721 USA} \author[0000-0002-0040-6815]{Jennifer Burt} \affiliation{Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91125 USA} \author{Martin Fowler} \affiliation{ExoClock Project, \url{https://www.exoclock.space/}} \affiliation{Exoplanet Watch, \url{https://exoplanets.nasa.gov/exoplanet-watch/}} \author{Daniel Gallego} \affiliation{Exoplanet Watch, \url{https://exoplanets.nasa.gov/exoplanet-watch/}} \author{Edward Gomez} \affiliation{Las Cumbres Observatory, 6740 Cortona Drive, Suite 102, Goleta, CA 93117 USA} \author[0000-0003-4091-0247]{Bruno Guillet} \affiliation{Unistellar Network Citizen Scientist, \url{https://unistellaroptics.com/citizen-science/}} \author{Jerry Hilburn} % \affiliation{Exoplanet Watch, \url{https://exoplanets.nasa.gov/exoplanet-watch/}} \author{Yves Jongen} \affiliation{Observatoire de Vaison-La-Romaine, D{\'e}partementale 51, pr{\`e}s du Centre Equestre au Palis—F-84110 Vaison-La-Romaine, France} \affiliation{ExoClock Project, \url{https://www.exoclock.space/}} \author[0000-0003-3759-9080]{Tiffany Kataria} \affiliation{Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91125 USA} \author[0000-0002-3205-0147]{Anastasia Kokori} \affiliation{University College London, Gower Street, London, WC1E 6BT, UK} \affiliation{ExoClock Project, \url{https://www.exoclock.space/}} \author[0000-0003-0871-4641]{Harsh Kumar} \affiliation{Department of Physics, Indian Institute of Technology Bombay, Powai, 400076, India} \author{Petri Kuossari} \affiliation{Unistellar Network Citizen Scientist, \url{https://unistellaroptics.com/citizen-science/}} \author[0000-0003-3559-0840]{Georgios Lekkas} % \affiliation{Department of Physics, University of Ioannina, Ioannina, 45110, Greece} \affiliation{Exoplanet 
Watch, \url{https://exoplanets.nasa.gov/exoplanet-watch/}} \author[0000-0003-3779-6762]{Alessandro Marchini} \affiliation{University of Siena, Department of Physical Sciences, Earth and Environment, Astronomical Observatory, Via Roma 56, 53100 Siena, Italy} \affiliation{ExoClock Project, \url{https://www.exoclock.space/}} \author[0000-0002-5105-635X]{Nicola Meneghelli} \affiliation{Unistellar Network Citizen Scientist, \url{https://unistellaroptics.com/citizen-science/}} \author[0000-0001-8771-7554]{Chow-Choong Ngeow} \affiliation{Graduate Institute of Astronomy, National Central University, 300 Jhongda Road, 32001 Jhongli, Taiwan} \author{Michael Primm} \affiliation{Unistellar Network Citizen Scientist, \url{https://unistellaroptics.com/citizen-science/}} \author[0000-0003-2167-9764]{Subham Samantaray} \affiliation{Department of Physics, Indian Institute of Technology Bombay, Powai, 400076, India} \author{Masao Shimizu (清水正雄)} \affiliation{Unistellar Network Citizen Scientist, \url{https://unistellaroptics.com/citizen-science/}} \author{George Silvis} \affiliation{American Association of Variable Star Observers, 49 Bay State Rd, Cambridge, MA 02138, USA} \affiliation{Exoplanet Watch, \url{https://exoplanets.nasa.gov/exoplanet-watch/}} \author{Frank Sienkiewicz} \affiliation{The Center for Astrophysics, Harvard $\&$ Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA} \affiliation{Exoplanet Watch, \url{https://exoplanets.nasa.gov/exoplanet-watch/}} \author[0000-0002-7942-8477]{Vishwajeet Swain} \affiliation{Department of Physics, Indian Institute of Technology Bombay, Powai, 400076, India} \author{Joshua Tan} \affiliation{Exoplanet Watch, \url{https://exoplanets.nasa.gov/exoplanet-watch/}} \author{Kalee Tock} \affiliation{Stanford Online High School, Academy Hall Floor 2 8853, 415 Broadway Redwood City, CA 94063, USA} \affiliation{Exoplanet Watch, \url{https://exoplanets.nasa.gov/exoplanet-watch/}} \author[0000-0002-4309-6343]{Kevin Wagner} \altaffiliation{NASA 
Hubble Fellowship Program - Sagan Fellow} \affiliation{Unistellar Network Citizen Scientist, \url{https://unistellaroptics.com/citizen-science/}} \affiliation{Department of Astronomy and Steward Observatory, University of Arizona, Tucson, AZ 85721 USA} \author{Ana{\"e}l W{\"u}nsche} \affiliation{Observatoire des Baronnies Proven{\c c}ales, Route de Nyons, F-05150 Moydans, France} \affiliation{ExoClock Project, \url{https://www.exoclock.space/}} \date{June 2022} \section{Abstract} The transiting planet HD~80606~b undergoes a 1000-fold increase in insolation during its 111-day orbit due to it being highly eccentric ($e$=0.93). The planet's effective temperature increases from 400~K to over 1400~K in a few hours as it makes a rapid passage to within 0.03~AU of its host star during periapsis. Spectroscopic observations during the eclipse (which is conveniently oriented a few hours before periapsis) of HD~80606~b with the James Webb Space Telescope (JWST) are poised to exploit this highly variable environment to study a wide variety of atmospheric properties, including composition, chemical and dynamical timescales, and large scale atmospheric motions. Critical to planning and interpreting these observations is an accurate knowledge of the planet's orbit. We report on observations of two full-transit events: 7 February 2020 as observed by the TESS spacecraft and 7--8 December 2021 as observed with a worldwide network of small telescopes. We also report new radial velocity observations which, when analyzed with a model coupled to the transits, greatly improve the planet's orbital ephemeris. Our new orbit solution reduces the uncertainty in the transit and eclipse timing of the JWST era from tens of minutes to a few minutes. When combined with the planned JWST observations, this new precision may be adequate to look for non-Keplerian effects in the orbit of HD~80606~b. \section{Introduction} For many years HD~80606~b held the record for the most highly eccentric planet. 
Discovered by the radial velocity (RV) technique in 2001 \citep{Naef2001}, HD~80606~b has a mass of 4.1~\mj, an orbital period of 111.4~days, and an eccentricity of $e$=0.93. Its eccentricity is currently exceeded only by HD~20782~b, with an eccentricity of $e$=0.95 \citep{Jones2006}. HD~80606~b continues to be compelling for further study: its eclipse was detected by Spitzer in early 2009 \citep{Laughlin2009}. The transit was then discovered and announced near-simultaneously in late February 2009 by \cite{Fossey2009}, \cite{Garcia2009}, and \cite{Moutou2009}. HD~80606~b passes within 0.03~AU of its host G5V star; during its rapid periastron passage of a few tens of hours, the insolation and temperature of the planet increase dramatically, from 1$\times$ to almost 1000$\times$ Earth-Equivalent and from 400~K to over 1400~K. These rapid changes, coupled with the fact that HD~80606~b transits and also eclipses (passes behind the star), provide a unique opportunity to explore the dynamical response of an atmosphere under an extreme external forcing function. Spitzer's photometric observations of eclipses in 2009 and 2010 at 8.0 and 4.5~\mum, respectively, were used to infer timescales for radiative, dynamical, and chemical processes \citep{dewit2016, Lewis2017}. As noted by \citet{Lewis2017}, ``The time-variable forcing experienced by exoplanets on eccentric orbits provides a unique and important window on radiative, dynamical, and chemical processes in planetary atmospheres and an important link between exoplanet observations and theory." The James Webb Space Telescope (JWST) will expand these studies dramatically using spectroscopy. Kataria et al.\footnote{Approved Cycle 1 program \#2008. 
``A Blast From the Past: A Spectroscopic look at the Flash Heating of HD~80606~b" https://www.stsci.edu/jwst/science-execution/program-information.html?id=2008} will use the MIRI Low Resolution Spectrometer (MIRI/LRS) to observe an eclipse of HD~80606~b from 5--14~\mum\ at a spectral resolution of $\sim$100. Sikora et al.\footnote{Approved Cycle 1 program \#2488. ``Real Time Exoplanet Meteorology: Direct Measurement of Cloud Dynamics on the High-Eccentricity Hot Jupiter HD~80606 b" https://www.stsci.edu/jwst/science-execution/program-information.html?id=2488} will explore the formation and evolution of atmospheric clouds at shorter wavelengths using NIRSpec at 2.87--5.18~\mum\ with a resolution of $\sim$2700 to observe the eclipse and periastron passage. These spectral regions contain a wealth of molecular features whose variation will reveal new insights into the chemistry and dynamics of the atmospheres of giant planets. A challenge to transit and eclipse observations is the gradual erosion of our knowledge of a planet's orbital properties. Uncertainties in the timing of transits and eclipses lead to observing inefficiencies, as longer durations must be scheduled to avoid missing some or all of an event \citep[e.g.,][]{Dragomir2020, Zellem2020}. This problem is exacerbated in the case of HD~80606~b, where the relevant observations are over a decade old and uncertainties on the eclipse prediction grow with each orbit ($\sim$3 orbits per year). Of particular importance is the knowledge of the time of periastron passage relative to the eclipse, as this is needed to link the spectral observations to the insolation profile. It was to remedy this growing uncertainty in our knowledge of the ephemerides of HD~80606~b that we undertook to analyze the TESS data and to obtain observations of the transit occurring on 7/8-Dec-2021 (Table~\ref{tab:nominal} and Figure~\ref{fig:Map}) from the ground.
We also obtained new RV measurements around the time of periastron to continue to refine the RV solution. Section~\ref{sec:trans} describes the observations of the transit and $\S$\ref{sec:PRV} the RV observations. Section~\ref{sec:analysis} describes the analysis of the various datasets, while $\S$\ref{sec:params} uses the combined transit and RV measurements to refine the ephemeris of HD~80606~b and to predict the times of occurrence of future transits and eclipses. \begin{deluxetable}{lcl}[t!] \tablecaption{Orbital Prior for HD~80606~b\label{tab:nominal}} \tablehead{ \colhead{Parameter} & \colhead{Value} & \colhead{Reference}} \startdata T$_{mid}$ (BJD)&2455210.6428$\pm$0.001 &\citet{Bonomo2017}\\ & 14-Jan-2010 0326 UTC&\\ E$_{mid}$ (BJD)&2454424.736 $\pm$0.003 &\citet{Laughlin2009}\\ Period (d)&111.43670$\pm$0.0004 &\citet{Bonomo2017}\\ Eccentricity ($e$)&0.93226$\pm$0.00066&\citet{Bonomo2017}\\ Arg. Periapsis ($\omega_{peri}$)&58.97$\pm$0.2 (deg)&\citet{Bonomo2017}\\ &-1.0292$\pm$0.0035 (rad)&\\ Transit Duration (hr)&11.64$\pm$0.25&\citet{Winn2009}\\ \multicolumn{3}{l}{\textit{Prediction for Dec. 2021}}\\ Accum. Unc. (hr)$^1$& 0.4 for Observed transit\\ T$_{mid}$ (BJD)&2459556.674$\pm$0.016 d &\\ Observed event&08-12-2021 0411 UTC &\\ \enddata \tablecomments{$^1$Accumulated uncertainty in the timing of the transit occurring $N_{per}=39$ periods after the reference time, $T_c$: $\sigma T=\sqrt{\sigma(T_c)^2+N_{per}^2\sigma(Period)^2}$ (Eqn.~3 in \citet{Zellem2020}).} \end{deluxetable} \section{Observations \label{sec:obs}} The most recent full transit observations of HD~80606~b date from almost a decade ago, when it was targeted by the Spitzer Space Telescope. Since then there has been no full transit observation, although the star has been monitored by radial velocity surveys.
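The accumulated timing uncertainty quoted in Table~\ref{tab:nominal} follows from propagating the reference-epoch and period uncertainties forward in time; a minimal sketch with the prior values from the table (function name ours):

```python
import math

def transit_time_uncertainty(sigma_tc, sigma_period, n_periods):
    """Timing uncertainty N periods past the reference epoch
    (Eqn. 3 of Zellem et al. 2020): sqrt(sigma_Tc^2 + N^2 * sigma_P^2)."""
    return math.sqrt(sigma_tc**2 + (n_periods * sigma_period)**2)

# Prior ephemeris (Bonomo et al. 2017): sigma(Tc) = 0.001 d, sigma(P) = 4e-4 d;
# the 2021 December transit occurs N = 39 periods after the reference time.
sigma_hr = 24 * transit_time_uncertainty(0.001, 4e-4, 39)
print(f"accumulated uncertainty: {sigma_hr:.1f} hr")  # ~0.4 hr, as in the table
```

The period term dominates: after 39 orbits the epoch uncertainty of 0.001 d contributes almost nothing compared to $39\times4\times10^{-4}$ d.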
In preparation for JWST observations we have combined observations of the 2020 transit taken by TESS with 2021 observations taken from the ground by the Exoplanet Watch program. Finally, the light curve measurements are combined with new and archival radial velocity measurements in order to constrain the orbit parameters and to improve our knowledge of transit and eclipse events over the next decade. \subsection{2020 Transit With TESS} The photometric data from TESS were processed using a custom pipeline leveraging optimal aperture selection, systematic detrending with a weighted spline, and outlier rejection in order to minimize the scatter in the light curve \citep{Pearson2019b}. The custom pipeline uses multiple aperture sizes during the photometric extraction in order to minimize the scatter in the residuals after fitting a light curve model. Detrending the time series and minimizing scatter in the residuals has been shown to improve light curve quality compared to the default light curve produced by the Science Processing Operations Center (SPOC) pipeline \citep{Jenkins2016}, which is based on the Kepler mission pipeline \citep{Jenkins2010}. TESS is capable of high precision measurements for this system due to the host star being bright (V=9.0 mag). However, TESS's large pixel size (21\arcsec) is less than ideal for HD~80606 due to the presence of HD~80607, a nearby companion of similar spectral type and brightness (V=9.07 mag) separated by 20.5\arcsec. Stellar blends dilute the transit signal, causing the planet to mistakenly appear smaller than it is \citep[e.g.,][]{Ciardi15, Zellem2020}. In the reduction of the TESS data, a wide aperture was used that includes light from both stars; our directly measured transit depth is therefore underestimated. The estimated contamination is $\sim$48$\%$ and translates to a corrected transit depth $\sim2\times$ greater than what we directly measure.
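The quoted contamination can be checked from the catalog magnitudes alone; a minimal sketch assuming both stars fall fully within the aperture and using $V$ as a proxy for the TESS band (reasonable here since the two stars have near-identical spectral types; the pipeline's actual aperture weighting differs):

```python
def contamination_fraction(m_target, m_neighbor):
    """Fraction of the total aperture flux contributed by a blended neighbor,
    assuming both stars fall fully inside the photometric aperture."""
    f_ratio = 10 ** (-0.4 * (m_neighbor - m_target))  # F_neighbor / F_target
    return f_ratio / (1.0 + f_ratio)

# HD 80606 (V = 9.00) blended with HD 80607 (V = 9.07) in the TESS aperture.
c = contamination_fraction(9.00, 9.07)
depth_correction = 1.0 / (1.0 - c)  # multiply the diluted depth by this factor
print(f"contamination ~{c:.0%}, depth correction ~{depth_correction:.2f}x")
```

The near-equal brightness of the pair gives a contamination just under 50% and a depth correction of almost exactly 2, consistent with the values quoted above.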
Despite the contamination decreasing the transit depth, we still detect the transit at over 40$\sigma$, which allows for a strong constraint on the time of mid-transit to within a few minutes (see Table~\ref{tab:newmid} and Figure \ref{fig:joint_transit}). \begin{deluxetable*}{lllllll} \centering \tablecaption{Transit Observing Facilities\label{tab:facilities}} \tablehead{ \colhead{Facility} & \colhead{Location (N,E)} & \colhead{Size (m)}& \colhead{UTC Start (Phase)} & \colhead{UTC Stop (Phase)}& \colhead{Precision \% $^{1}$ }& \colhead{N. Images} } \startdata Transiting Exoplanet & Space & 0.1 & 2020-02-07 20:32:00 (-0.0054) & 2020-02-09 07:06:00 (0.0054) & 0.06 & 1520 \\ % Survey Satellite (TESS) & &&&&\\ \hline Exoplanet Watch [HJEB] & (30.7, -104.2) & 0.4 & 2021-12-06 08:21:36 (-0.0166) &2021-12-06 09:40:50 (-0.0161) & 1.31 & 225 \\ Las Cumbres (LCO) & (30.7, -104.2) & 0.4 & 2021-12-07 06:48:56 (-0.0082) & 2021-12-07 07:39:54 (-0.0079) & 1.26 & 218 \\ Las Cumbres (LCO) & (30.7, -104.2) & 0.4 & 2021-12-07 09:46:56 (-0.0071) & 2021-12-07 10:38:05 (-0.0068) & 0.77 & 225 \\ Las Cumbres (LCO) & (30.7, -104.2) & 0.4 & 2021-12-07 11:35:45 (-0.0064) & 2021-12-07 12:26:18 (-0.0061) & 1.21 & 221 \\ Exoplanet Watch [NCC] & (23.5, 120.9) & 0.4 & 2021-12-07 17:34:11 (-0.0042) & 2021-12-07 20:13:20 (-0.0032) & 1.01 & 481 \\ % GROWTH-India & (32.8, 79.0) & 0.7 & 2021-12-07 19:52:49 (-0.0033) & 2021-12-08 00:40:41 (-0.0015) & 0.53 & 609 \\ % Unistellar eVscope 2 (2rz) & (49.2, -0.4) & 0.11 & 2021-12-07 20:49:47 (-0.0030) & 2021-12-08 01:38:22 (-0.0012) & 1.09 & 126 \\ Unistellar eVscope (etx) & (49.2, -0.4) & 0.11 & 2021-12-07 20:48:29 (-0.0030) & 2021-12-08 01:37:27 (-0.0012) & 0.63 & 131 \\ Unistellar eVscope (257) & (60.8, 24.4) & 0.11 & 2021-12-07 21:41:31 (-0.0027) & 2021-12-08 00:17:56 (-0.0017) & 0.36 & 79 \\ Unistellar eVscope (3mh) & (45.3, 11.1) & 0.11 & 2021-12-07 22:24:41 (-0.0024) & 2021-12-08 01:41:27 (-0.0012) & 0.67 & 55 \\ Exoplanet Watch [GDAI] & (39.0,
-108.2) & 0.4 & 2021-12-08 03:37:37 (-0.0004) & 2021-12-08 11:46:49 (0.0026) & 3.11 & 503 \\% Daniel Gallego, Dr. Joshua Tan Unistellar eVscope (rev) & (30.4, 97.8) & 0.11 & 2021-12-08 04:26:52 (-0.0001) & 2021-12-08 08:09:55 (0.0013) & 0.50 & 101 \\ Unistellar eVscope (sdp) & (32.2, -111) & 0.11 & 2021-12-08 05:17:14 (0.0002) & 2021-12-08 12:18:15 (0.0028) & 0.78 & 155 \\ Exoplanet Watch [RJBA] & (34.1, -118.1) & 0.15 & 2021-12-08 06:09:47 (0.0005) & 2021-12-08 12:08:50 (0.0027) & 1.47 & 569 \\ % Las Cumbres (LCO) & (30.7, -104.2) & 1 & 2021-12-08 06:41:20 (0.0007) & 2021-12-08 12:17:36 (0.0028) & 0.33 & 391 \\ Exoplanet Watch [HJEB] & (30.7, -104.2) & 0.4 & 2021-12-08 06:46:01 (0.0007) & 2021-12-08 07:36:43 (0.0010) & 1.29 & 225 \\ Las Cumbres (LCO) & (30.7, -104.2) & 0.4 & 2021-12-08 11:35:50 (0.0025) & 2021-12-08 12:26:33 (0.0029) & 0.80 & 225 \\ Unistellar eVscope (8cm) & (35.1, 134.4) & 0.11 & 2021-12-08 13:19:08 (0.0032) & 2021-12-08 14:14:42 (0.0035) & 1.54 & 26 \\ Exoplanet Watch [NCC] & (23.5, 120.9) & 0.4 & 2021-12-08 16:04:28 (0.0042) & 2021-12-08 20:08:09 (0.0057) & 0.80 & 516 \\ % Unistellar eVscope 2 (2rzB) & (49.2, -0.4) & 0.11 & 2021-12-08 21:47:08 (0.0063) & 2021-12-08 23:47:48 (0.0071) & 1.08 & 88 \\ Unistellar eVscope (etxB) & (49.2, -0.4) & 0.11 & 2021-12-08 21:48:00 (0.0064) & 2021-12-08 23:39:20 (0.0070) & 1.25 & 152\\ Exoplanet Watch [BARO] & (32.6, -116.3) & 0.43 & 2021-12-09 01:26:11 (0.0077) & 2021-12-09 01:55:10 (0.0079) & 0.97 & 98 \\ % Exoplanet Watch [LGEC] & (28.3, -16.6) & 0.4 & 2021-12-09 02:06:25 (0.008) & 2021-12-09 02:15:10 (0.008) & 0.80 & 29 \\ % Exoplanet Watch [FMAA] & (31.7, -111.1) & 0.15 & 2021-12-09 04:41:25 (0.009) & 2021-12-09 12:06:02 (0.012) & 1.79 & 130 \\ % \enddata \tablecomments{$^1$Standard deviation of the residuals \\The observations are split between the archival measurements (top) and those taken for the same transit (bottom).\\For the Exoplanet Watch observations the letters in
brackets represent the AAVSO Observer code so the datasets can be easily referenced in the future and searched on the AAVSO archive. } \end{deluxetable*} \subsection{2021 Transit from the Ground}\label{sec:trans} HD~80606~b's long transit duration, over 11.5~hr \citep{Pont2009,Winn2009}, and the accumulated uncertainty in its time of occurrence, make a world-wide program of coordinated observations essential. Fortunately, networks of small and modest-sized telescopes (e.g., Exoplanet Watch\footnote{https://exoplanets.nasa.gov/exoplanet-watch/}, ExoClock\footnote{http://exoclock.space}, Unistellar\footnote{https://unistellaroptics.com/}) are now in place to support programs of this type. The global observational campaign to measure the 2021 December 7--8 transit of HD~80606~b presented here was coordinated by Exoplanet Watch. The various observatories that contributed a transit measurement in December are shown in Figure \ref{fig:Map}. \subsubsection{Exoplanet Watch} Exoplanet Watch is a citizen science project funded by NASA's Universe of Learning\footnote{https://www.universe-of-learning.org} for observing exoplanets with small, ground-based telescopes in order to maintain ephemerides, ensure the efficient use of large telescopes, discover new exoplanets via transit timing variations, resolve blended pairs, monitor for stellar variability, and confirm exoplanet candidates \citep{Zellem2019, Zellem2020}. Anyone can contribute observations to a public data archive\footnote{https://app.aavso.org/exosite/}, hosted by the American Association of Variable Star Observers\footnote{http://aavso.org}, where they are analyzed on a regular basis and used to refine exoplanet ephemerides\footnote{https://exoplanets.nasa.gov/exoplanet-watch/results/}. The observations listed under Exoplanet Watch in Table~\ref{tab:facilities} are currently available online and are linked to their AAVSO observer code.
A majority of the users contributed at least one hour of observations using telescopes smaller than 0.5~meters. A few notable contributors to the network include the Boyce-Astro Research Observatory (BARO), located at an observing site near Tierra Del Sol and Campo, California. BARO comprises a 17-inch telescope and a ZWO ASI 1600 CMOS camera. The observing configuration provides an 8.3'$\times$6.3' field of view with a plate scale of 0.107'' per pixel. Additionally, an individual user was able to capture part of the transit egress from the top of the Cahill building on the campus of the California Institute of Technology using a 6~inch telescope and the ASI 224MC camera. Another contributor is the MicroObservatory, which hosts a network of automated remote reflecting telescopes, each with a 6-inch mirror, 560-mm focal length, and KAF1402ME CCD with 6.8-micron pixels. With 2$\times$2 pixel binning, the image size is 650$\times$500 pixels at a pixel scale of approximately 5''/px. MicroObservatory takes images of exoplanet systems daily and makes the images publicly available for educational use via their DIY Planet Search program\footnote{https://mo-www.cfa.harvard.edu/MicroObservatory/}. \subsubsection{LCO Network} Las Cumbres Observatory (LCO) is a global telescope network consisting of multiple meter and sub-meter sized telescopes at various locations around the Earth. HD~80606 was observed over the course of 3 days from multiple locations in the LCO network. Unfortunately, weather clouded-out most of the Northern Hemisphere so that only a few sites acquired data. A majority of the usable observations come from LCO's telescopes at McDonald Observatory in Texas and Teide Observatory in Tenerife. LCO's 0.4-meter telescopes contain SBIG CCD cameras with a field of view of $\sim$29' $\times$ 29', corresponding to a plate scale of 0.571''/pixel. The LCO 1-meter telescope contains a Sinistro imager with a 26' $\times$ 26' field of view and a plate scale of 0.39''/px.
All of the LCO observations were acquired with the R filter, and some observatory-specific details are highlighted in Table~\ref{tab:facilities}. \subsubsection{Unistellar Network} The Unistellar Network is a global community of citizen scientist observers with Unistellar telescopes who have open access to observing campaigns organized by SETI Institute astronomers, including exoplanet transit observations. Seven different eVscopes (``Enhanced Vision Telescopes'') acquired nine observations of HD~80606~b from six different observing locations in North America, Europe, and Japan (Table~\ref{tab:facilities}). Of those observations, seven were collected using the Unistellar eVscope~1, which is a 4.5-inch reflecting telescope with a Sony IMX224LQR CMOS sensor at its prime focus. The camera's field of view is 37.0$\arcmin$ $\times$ 27.7$\arcmin$ with a plate scale of 1.7$\arcsec$/pixel. Individual images had an exposure time of 3.970~s and a sensor gain of 2~dB. The two remaining observations were collected using the Unistellar eVscope~2, which shares the design of the eVscope~1 but has a Sony IMX347LQR CMOS sensor. The camera's field of view is 45.3$\arcmin$ $\times$ 34.0$\arcmin$ with a plate scale of 1.3$\arcsec$/pixel. Individual images had an exposure time of 3.970~s and a sensor gain of 0~dB (no digital gain). \subsubsection{ExoClock Project} In addition to the TESS and 2021 December transits of HD~80606~b, we also report three archival transit measurements from the ExoClock project \citep{Kokori2021}. The ExoClock project is an open-access citizen science project aimed at conducting transit measurements of exoplanets targeted by the Ariel Mission \citep{Tinetti16}. The three measurements were taken from ground-based observatories in Europe, with mid-transit times reported in Table~\ref{tab:oldmid}.
\subsubsection{GROWTH} The Global Relay of Observatories Watching Transients Happen (GROWTH) network involves over a dozen institutions dedicated to the follow-up of transient events \citep{Kasliwal2019}. Among these, a number of Asian observatories within the GROWTH collaboration participated in the 2021 Dec 7/8 campaign, providing critical data during transit ingress. The GROWTH-India Telescope (GIT) is a 0.7m fully robotic telescope located at the Indian Astronomical Observatory (IAO), Hanle-Ladakh. The telescope is equipped with an Andor Ikon230XL CCD camera which provides a field of view of $\sim0.5~\rm{deg}^2$. GIT observed HD~80606~b for $\sim 5$~hrs on the night of Dec~7, 2021, obtaining a total of 609 images. The details of the observations are provided in Table~\ref{tab:facilities}. Data were reduced following standard procedures, and photometry was performed with EXOTIC as described in Section~3.2. \subsection{Transit Data Reduction} Data reduction and calibration of the individual science images were performed by each observer or their group. We encouraged all groups to acquire at least a bias and a flat-field frame in order to reduce noise and normalize pixel-to-pixel changes in sensitivity, respectively. We provided an open-source package for aperture photometry and light curve fitting in order to make extraction of the time series straightforward and optimal with respect to minimizing sources of noise. The EXOplanet Transit Interpretation Code\footnote{https://github.com/rzellem/EXOTIC} (EXOTIC; \citealt{Zellem2020}; Fatahi et al. \textit{in prep.}) can calibrate images (i.e., bias, flat, and dark correction), plate-solve images for better centroiding, and conduct an optimization over comparison-star selection and aperture size when extracting the photometric time series. After conducting aperture photometry, all of the time series files were combined in order to produce the global light curve shown in Figure \ref{fig:BestFit}.
A mosaic of the individual observations is shown in the appendix (see Figure \ref{fig:mosaic}). \begin{deluxetable}{lll} \centering \tablecaption{Archival Ephemeris Times\label{tab:oldmid}} \tablehead{ \colhead{BJD$_{TDB}$} & \colhead{Reference} & \colhead{Status}} \startdata 2454424.736 $\pm$ 0.003 & \cite{Laughlin2009} & Full Eclipse \\ 2454876.316 $\pm$0.023 & \cite{Pont2009} & Partial Transit \\%https://arxiv.org/pdf/0906.5605.pdf 2454876.338 $\pm$ 0.017 & \cite{Kokori2021} & Partial Transit\\ % 2454987.7842 $\pm$0.0049 & \cite{Winn2009} & Full Transit\\%https://arxiv.org/pdf/0907.5205.pdf 2455099.196 $\pm$ 0.026 & \cite{Shporer2010} & Partial Transit\\%https://arxiv.org/pdf/1008.4129.pdf 2455210.6420 $\pm$0.001 & \cite{Hebrard2010} & Full Transit \\%https://www.aanda.org/articles/aa/pdf/2010/08/aa14327-10.pdf 2455210.6502 $\pm$ 0.0064 & \cite{Shporer2010} & Full Transit\\%https://arxiv.org/pdf/1008.4129.pdf 2457439.401 $\pm$ 0.012 & \cite{Kokori2021} & Partial Transit\\ % 2459222.401 $\pm$ 0.016 & \cite{Kokori2021} & Partial Transit\\ % \enddata \end{deluxetable} \begin{deluxetable}{ll} \centering \tablecaption{New Mid-transit Times\label{tab:newmid}} \tablehead{ \colhead{Facility} & \colhead{BJD$_{TDB}$}} \startdata TESS & 2458888.07466 $\pm$ 0.00204 \\ Multiple (7--8 Dec.
2021) & 2459556.7007 $\pm$ 0.0035 \\ \enddata \end{deluxetable} \begin{deluxetable}{lll} \centering \tablecaption{New Radial Velocity Observations\label{tab:NEW_RV}} \tablehead{ \colhead{Instrument} & \colhead{BJD$_{TDB}$}& \colhead{Relative RV}} \startdata HIRES &2459514.0886&-133.668$\pm$1.168\\ APF & 2459533.0674 & 37.779$\pm$2.332\\ APF & 2459535.9405 & 15.584$\pm$8.951\\ APF & 2459541.0692 & -13.924$\pm$2.248\\ APF & 2459541.8002 & -9.460$\pm$2.288\\ APF & 2459544.0027 & -28.552$\pm$2.413\\ \enddata \end{deluxetable} \begin{deluxetable}{lll} \centering \tablecaption{Archival Radial Velocity Observations\label{tab:OLD_RV}} \tablehead{ \colhead{Instrument} & \colhead{BJD$_{TDB}$}& \colhead{Relative RV}} \startdata ELODIE & 2452075.359 & -134.46$\pm$13\\ ... \\ HIRES$_K$ &2452219.162 & -85.11$\pm$1.6\\ ... \\ HRS &2453433.606&119.8$\pm$8.6\\ ... \\ HIRES$_J$ &2453398.854&-171.57$\pm$0.89\\ ... \\ SOPHIE &2454876.729&222.1$\pm$5\\ ... \\ \enddata \tablecomments{These measurements are available online in a machine readable format \footnote{\url{https://exofop.ipac.caltech.edu/tess/view_tag.php?tag=418623}}. } \end{deluxetable} \subsection{Radial Velocity Observations\label{sec:PRV}} New radial velocity observations were obtained around periapsis in December 2021 using the Levy spectrometer on the 2.4m Automated Planet Finder telescope (APF) \citep{Vogt2014} and the High Resolution Echelle Spectrometer (HIRES) on the 10m Keck I telescope. The new RV measurements are processed using standard data reduction techniques described in \citet{Butler1996}. The APF and HIRES RV values are measured using an iodine cell in order to wavelength-calibrate the stellar spectrum. The spectral region from 5000--6200~\AA\ is used for measuring the radial velocities. The new observations are listed in Table~\ref{tab:NEW_RV}.
We used a total of 593 RV measurements spanning 22 years for the data analysis (see Figure \ref{fig:joint_rv}); they are available in a machine readable format online (see Table \ref{tab:OLD_RV}). \section{Analysis\label{sec:analysis}} The newly acquired data for HD~80606~b, along with the historical measurements for RV, transit, and eclipse, are analyzed in a self-consistent manner in order to place constraints on the system parameters. The radial velocity observations help constrain the orbit and alignment of HD~80606~b, which is particularly important considering the high eccentricity of the planet can drastically change the transit duration depending on the argument of periastron \citep{Hebrard2010}. The transit observations help constrain the size of the planet once the orbit is reliably known and disentangled from degeneracies involving the stellar radius, inclination, and contamination by HD~80607. Additionally, using the measured times of mid-transit and mid-eclipse we can search for deviations from a Keplerian orbit, which are potentially indicative of a companion in the system (\citealt{Holman2005}; \citealt{Nesvorney2008}). \subsection{Global Light Curve Analysis} Observations of the transit of HD~80606~b on the night of 2021 December 7--8 are combined and fitted simultaneously in order to derive the time of mid-transit and the radius ratio between the planet and star. Since each observation was acquired at a different location, each requires individual treatment of extinction from Earth's atmosphere. We adopt a parameterization \citep[e.g.,][]{Pearson2019a} which scales exponentially with airmass and resembles the solution of the radiative transfer equation, $I(\tau) = I(0)e^{-\tau}$. The following equation is used to maximize the likelihood of the transit model and airmass signal simultaneously: \begin{equation} \label{expam} F_{obs} = a_{0} e^{a_{1} \beta } F_{transit}.
\end{equation} \noindent Here $F_{obs}$ is the flux recorded on the detector, $F_{transit}$ is the actual astrophysical signal (i.e., the transit light curve, given by pyLightcurve; \citealt{Tsiaras2016}), $a_{i}$ are airmass correction coefficients, and $\beta$ is the airmass value. Since the underlying astrophysical signal is shared between all the observations, we leave $R_{p}/R_{s}$ and $T_{mid}$ as free parameters during the retrieval and share the values between each dataset. The free parameters are optimized using the multimodal nested sampling algorithm UltraNest (\citealt{Feroz2008}; \citealt{Buchner2014}; \citealt{Buchner2017}). UltraNest is a Bayesian inference tool that uses the Monte Carlo strategy of nested sampling to calculate the Bayesian evidence, allowing simultaneous parameter estimation and model selection. A nested sampling algorithm is efficient at probing parameter spaces which could potentially contain multiple modes and pronounced degeneracies in high dimensions; a regime in which the convergence of traditional Markov Chain Monte Carlo (MCMC; e.g., \citealt{ford05}) techniques becomes comparatively slow (\citealt{Skilling2004}; \citealt{Feroz2008}). Convergence for such a large retrieval can take a long time if the priors are broad, and for a dataset of this size the solution may not converge at all within a fixed budget of likelihood evaluations. Therefore, to aid with convergence, each observation was fit individually before being fit simultaneously, with priors set to reflect $\pm5\sigma$ around the individual fits. The nested sampling algorithm runs for 500,000 likelihood evaluations before terminating, with the resulting posterior distribution shown in Figure \ref{fig:posterior}. An open source version of the global retrieval is available through the EXOTIC repository on GitHub\footnote{https://github.com/rzellem/EXOTIC}.
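The parameterization of Eqn.~\ref{expam} can be illustrated with a synthetic light curve; a minimal sketch, not the actual pipeline (the real fit uses pyLightcurve and UltraNest; the box-shaped transit and coefficient values here are ours):

```python
import numpy as np

def airmass_model(airmass, transit_flux, a0, a1):
    """Observed flux = a0 * exp(a1 * airmass) * transit signal (Eqn. above)."""
    return a0 * np.exp(a1 * airmass) * transit_flux

# Synthetic example: a 1% box-shaped transit under a rising airmass trend.
t = np.linspace(-0.2, 0.2, 200)                 # days from mid-transit
f_transit = np.where(np.abs(t) < 0.05, 0.99, 1.0)
beta = 1.2 + 2.0 * (t + 0.2)                    # airmass rising through the night
f_obs = airmass_model(beta, f_transit, a0=1.0, a1=-0.02)

# Dividing out the (here, known) airmass term recovers the astrophysical signal;
# in the retrieval a0 and a1 are instead fit jointly with the transit parameters.
f_detrended = f_obs / (1.0 * np.exp(-0.02 * beta))
```

In the real fit each observatory gets its own $(a_0, a_1)$ pair while $R_p/R_s$ and $T_{mid}$ are shared, which is what ties the heterogeneous datasets together.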
A non-linear four-parameter limb darkening model is used for both the ground-based measurements and TESS, computed for their respective filters \citep{Morello2020}. \subsection{Radial Velocity Analysis} The archival and new RV measurements (Table~\ref{tab:NEW_RV} and Table~\ref{tab:OLD_RV}) are analyzed using a joint simultaneous fit with the TESS light curve and historical measurements of mid-transit and mid-eclipse in order to constrain a consistent orbital solution across 10 years of heterogeneous data. The radial velocity model uses the same orbit equation and Keplerian solver as the transit light curve model (PyLightcurve; \citealt{Tsiaras2016}). The orbit equation used in the transit model is \begin{equation} r_t = \frac{a}{R_s}\frac{1-e^2}{1+e\cos(\nu_t)} \end{equation} where $a$ is the semi-major axis, $R_s$ is the stellar radius, $e$ is the eccentricity, and $\nu_t$ is the true anomaly at time $t$. The true anomaly can be solved for using equations 1 and 2 of \citet{Fulton2018} by finding the root of Kepler's equation to get the eccentric anomaly, which is then used to compute the true anomaly. The orbit equation is projected onto a Cartesian grid, which is necessary for the transit model and useful for taking the dot product along our line of sight, ensuring it matches the transit geometry (see Figure~\ref{fig:orbit}). The projection along the $x$-axis, i.e., our line of sight, is \begin{equation} x_t = r_t \sin(\nu_t + \omega) \sin(i) \end{equation} where $i$ is the inclination of the orbit and $\omega$ is the argument of periastron. The star's velocity is estimated by applying a scaling relation to the planet's orbit, assuming a two-body system. Coupling the orbit solutions ensures a self-consistent system where gravity balances the centripetal acceleration of the planet. The velocity vector of the planet is scaled to match that of the star's orbit and then projected along the line of sight in order to produce the RV signal.
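The orbit and projection equations above can be sketched with a standard Newton solve of Kepler's equation; a minimal illustration (function names ours; the published analysis uses PyLightcurve's solver):

```python
import numpy as np

def true_anomaly(M, e, tol=1e-10):
    """Solve Kepler's equation M = E - e*sin(E) by Newton iteration,
    then convert the eccentric anomaly E to the true anomaly nu."""
    E = np.asarray(M, dtype=float).copy()  # E = M is a serviceable first guess
    for _ in range(100):
        dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
        E -= dE
        if np.all(np.abs(dE) < tol):
            break
    return 2.0 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),
                            np.sqrt(1 - e) * np.cos(E / 2))

def line_of_sight_x(nu, a_over_rs, e, omega, inc):
    """Planet-star separation (in stellar radii) projected on the line of
    sight: r = (a/Rs)(1-e^2)/(1+e*cos(nu)), x = r*sin(nu+omega)*sin(i)."""
    r = a_over_rs * (1 - e**2) / (1 + e * np.cos(nu))
    return r * np.sin(nu + omega) * np.sin(inc)
```

At $e \simeq 0.93$ the Newton step can need many iterations near periastron, which is why robust solvers guard the starting guess; the simple version above suffices for illustration.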
A velocity is estimated by evaluating the orbit equation twice in order to compute a numerical derivative using a time step of $\sim$8.5 seconds (0.0001 day): \begin{equation} v_{r,t} = \frac{M_p}{M_s} R_s \frac{x_{t+\Delta t}-x_t}{\Delta t} \end{equation} In addition to scaling the planet's orbit by the mass ratio to mimic the stellar position, it must also be scaled by the stellar radius in order to acquire units of meters. The stellar radius is given a Gaussian prior during the retrieval process in order to reflect uncertainties on that scale factor and because it is correlated with the planet's inclination. For instance, a given transit duration can be produced by a smaller star with a nearly edge-on orbit or by a larger star with a more inclined orbit, and it is difficult to disentangle the two parameters without an additional constraint on the likelihood function (e.g., spectral modelling is needed to constrain the stellar properties). We do not have enough information to uniquely constrain the stellar radius and inclination simultaneously, which leads to a degeneracy in our retrieval if each parameter uses a uniform prior. Therefore, the stellar radius is given a Gaussian prior constructed to be consistent with past derivations in the literature (\citealt{Bonomo2017}; \citealt{Rosenthal2021}). \subsection{Joint simultaneous fit} Fitting three different types of measurements in a joint analysis requires a likelihood function with contributions from each data set. The system parameters are used to generate a coupled physical model for the transit, RV, and ephemeris data in order to enforce consistency between the data sets. The likelihood function includes the sum of the chi-square values when comparing the data sets to their respective models. The TESS light curve is compared to a transit model in a manner similar to the global fit for all the ground-based measurements, except that the airmass correction is left out.
The historic mid-transit and mid-eclipse measurements are compared to a linear ephemeris and then folded into the total chi-squared estimate. The radial velocity measurements are also folded into the total chi-squared; however, the uncertainties are adjusted prior to the joint fit. The radial velocity likelihood ($\mathcal{L}$) adopts a parameterization similar to RADVEL \citep{Fulton2018} in order to account for underestimated uncertainties, \begin{equation} \label{rv_likelihood} \mathcal{L}_{RV} = -\frac{1}{2} \sum_{i} \sum_{t} \left(\frac{d_t - v_{r,t}}{\sigma_{i,t} + \sigma_i}\right)^2 \end{equation} \noindent where $d_t$ is the velocity measurement at time $t$, $v_{r,t}$ is the Keplerian model predicted for each RV measurement, $\sigma_{i,t}$ is the original uncertainty on the radial velocity measurement, and $\sigma_i$ is an RV jitter term for each data set, $i$. The jitter term is set after an individual fit to the radial velocity data and before the joint fit. The jitter term scales the uncertainty such that the average uncertainty is roughly equal to the standard deviation of the residuals from the individual fit. Additionally, the solution to the individual fit $\pm5\sigma$ is used to constrain the priors for the joint fit. Our uncertainty scaling is similar to that of RADVEL; however, we do not include the penalty term that is required when fitting for an error scaling term. We adopt a simpler correction for underestimated uncertainties while still being able to leverage the optimizations behind nested sampling. The errors are scaled after an individual fit to the RV data such that the average uncertainty is roughly equal to the scatter in the residuals for that particular data set. After inflating each uncertainty, we found our error estimate for the orbital period increased by a factor of $\sim$2, with other orbit parameters affected similarly.
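The effect of the jitter term in Eqn.~\ref{rv_likelihood} can be seen in a toy example; a minimal sketch with invented values (note the paper adds the jitter linearly to the formal error, unlike RADVEL's quadrature sum):

```python
import numpy as np

def rv_log_likelihood(resid, sigma, jitter):
    """Chi-square-style log-likelihood with a per-instrument jitter term
    added linearly to the formal uncertainties, as in the equation above."""
    return -0.5 * np.sum((resid / (sigma + jitter)) ** 2)

# Toy data set: residual scatter of 5 m/s against 2 m/s formal errors.
rng = np.random.default_rng(0)
resid = rng.normal(0.0, 5.0, 100)
ll_raw = rv_log_likelihood(resid, 2.0, 0.0)   # underestimated uncertainties
ll_jit = rv_log_likelihood(resid, 2.0, 3.0)   # inflated to ~5 m/s total
```

With the jitter chosen so that the effective uncertainty matches the residual scatter, the reduced chi-square drops to near unity, which is the tuning criterion described above.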
The likelihood function for the joint fit has contributions from transit data, RV measurements, and historic ephemerides: \begin{equation} \label{joint_likelihood} \mathcal{L}_{Joint} = \mathcal{L}_{RV} + \mathcal{L}_{Transit} + \mathcal{L}_{Mid-transit} + \mathcal{L}_{Mid-eclipse}. \end{equation} The likelihood functions for mid-transit and mid-eclipse represent the error of a linear ephemeris estimate compared to existing measurements, whereas the transit likelihood function uses the photometric time series. Nested sampling is used to efficiently explore a large parameter space defining the system and to build a posterior distribution with which to infer uncertainties \citep{Buchner2021}. The free parameters include the orbital period, time of mid-transit, inclination, argument of periastron, eccentricity, planet mass, and the radius ratio between the planet and star. Posteriors for the free parameters in the joint fit are shown in Figure 9. We also include a Gaussian prior on the stellar radius because it is needed to convert our radial velocity model into meters. The stellar radius is degenerate with inclination and difficult to constrain if left with a uniform prior. Another relationship in the posteriors is the near-perfect correlation between eccentricity and argument of periastron. We have seen similar correlations when fitting for $a_{0}$ and $\gamma$, which allowed us to simplify the retrieval and solve for them instead. It is theoretically possible to remove one of these parameters ($e$ or $\omega$) from the sampling process and solve for the other at run-time without having to build it into the posteriors. That solution, however, requires solving a transcendental equation on top of the existing orbit solution and would increase the computation time of the likelihood function. Therefore, we include both $e$ and $\omega$ in the retrieval and let the sampler handle the correlation, which decreases its efficiency slightly.
\section{Results and Conclusions \label{sec:params}} As part of an effort to refine the orbital ephemeris of HD~80606~b, we have obtained new radial velocity and transit measurements. The transit measurements were obtained with TESS in 2020 and a ground-based campaign in 2021; together, the new data, coupled with archival RV and transit observations, provide a valuable constraint on the time of conjunction. We are able to refine the estimate of the orbital period of HD~80606~b by taking advantage of the 10-year baseline between the archival and the new observations. Using only the data from 2009-2010, the uncertainty on the orbital period was $\sigma(P) = 4\times 10^{-4}$~days; combining the old data with the new observations, the new value of the period, 111.436971~days, has an improved uncertainty of $\sigma(P) = 7.4\times 10^{-5}$~days (Figure~\ref{fig:period}). The period estimate is improved by a factor of $\sim$5 compared to \cite{Bonomo2017}, along with significant improvements to the system parameters, as summarized in Table~\ref{tab:final}. The immediate benefit of these new observations is to greatly reduce the uncertainty in the timing of future events (transits or eclipses; e.g., \citealt{Zellem2020}). In the case of an eclipse in November 2022, e.g., in mid Cycle 1 for JWST, the uncertainty resulting from propagating the ephemeris in Table~\ref{tab:nominal} is $\sim$24 minutes, whereas with the new linear ephemeris the uncertainty is $\sim$5 minutes (see Figure~\ref{fig:period}). The linear ephemeris uses the eclipse mid-point from \citet{Laughlin2009} and our new period estimate. We also provide a more conservative error estimate based on the orbit solution, which yields an uncertainty of $\sim$30 minutes. The orbit solution has a larger uncertainty than the linear propagation because of the effect of the uncertainties in $e$ and $\omega$ on the estimated eclipse time. 
For example, the mid-eclipse time predicted from the prior is 2458882.207 $\pm$ 0.10 and from our posterior we get 2458882.214 $\pm$ 0.021, a difference in uncertainty of $\sim$2 hours. The errors on the predicted mid-eclipse are significantly larger because of a degeneracy between $e$ and $\omega$, which is exacerbated at larger orbital periods. Removing the degeneracy may be possible by simultaneously fitting a transit and an eclipse. The uncertainties reported in Figure~\ref{fig:period} are smaller than the ones estimated above because they use a linear propagation of the average orbit solution. It is also important to note that the uncertainty on inclination in the prior does not always yield a transiting planet when conducting a Monte Carlo simulation. Simultaneously fitting a TESS light curve with RV data allowed for a strong constraint on the inclination that helped measure the transit duration to within $\sim$7 minutes, compared to the full event duration of almost 12 hours. For the analysis of the JWST phase curve it is important to know the offset between the eclipse, which will be well determined by the JWST observations, and the time of periapsis, which will not be directly measured. The timing of eclipse relative to periapsis depends on three key variables, the orbital period $P$, eccentricity $e$, and argument of periapsis $\omega$, via Eqn~(\ref{deltaT}) \citep{Huber2017, Alonso2018}: \begin{equation} T_{ecl}-T_{peri}= \frac{P}{2 \pi \sqrt{ 1 - e^2}} \int_{0}^{-\frac{\pi}{2} -\omega} \left( \frac{1 - e^2}{1 + e \cos x} \right)^2 \,dx \label{deltaT} \end{equation} A Monte Carlo simulation for the parameters with their associated uncertainties (Table~\ref{tab:final}) yields an offset in time between the eclipse and periapsis of $\Delta T =-3.104\pm0.011$~hr, i.e.\ with the eclipse occurring before periapsis. 
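Eqn~(\ref{deltaT}) is straightforward to evaluate numerically; the following sketch (our own code, taking the median parameters from Table~\ref{tab:final}) reproduces the $\sim -3.1$~hr offset:

```python
import numpy as np
from scipy.integrate import quad

def eclipse_minus_periapsis(P, e, w):
    """Time from periapsis to eclipse, Eqn (deltaT):
    T_ecl - T_peri = P/(2*pi*sqrt(1-e^2)) * integral from 0 to -pi/2 - w
    of ((1 - e^2)/(1 + e*cos(x)))^2 dx."""
    integrand = lambda x: ((1 - e**2) / (1 + e * np.cos(x)))**2
    integral, _ = quad(integrand, 0.0, -np.pi / 2 - w)
    return P / (2 * np.pi * np.sqrt(1 - e**2)) * integral

# median orbit parameters from Table (tab:final)
P = 111.436765                 # orbital period [days]
e = 0.93183                    # eccentricity
w = np.radians(-58.887)        # argument of periapsis [rad]
dT_hr = eclipse_minus_periapsis(P, e, w) * 24.0
# dT_hr is about -3.1 hr: the eclipse precedes periapsis
```

The full Monte Carlo estimate quoted in the text simply repeats this evaluation while drawing $P$, $e$ and $\omega$ from their posterior uncertainties.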
This is to be compared with $-3.069\pm0.049$~hr derived using the \citet{Bonomo2017} parameters in Table~\ref{tab:nominal}, a difference of $\sim$2 minutes. Table~\ref{tab:ephemeris} takes the times of periapsis, eclipse and conjunction from our solution (Table~\ref{tab:final}) and propagates these forward in time from 2020 to 2031. The uncertainties include a constant term from the initial Monte Carlo estimates plus the growth in uncertainty occurring $N$ periods after the reference time. Finally, we note that the increased precision of the ephemeris, when combined with new JWST observations, may allow an exploration of non-Keplerian effects such as tidal dissipation \citep{Fabrycky2010} or General Relativistic effects similar to those seen in the precession of the periapsis of Mercury's orbit in our solar system, but greatly enhanced by the high eccentricity of HD~80606~b. \citet{Blanchet2019} calculate that offsets between transit and eclipse midpoints should grow as the number of orbits increases. While the precision and temporal baseline of the 2009--2010 measurements are inadequate to measure the predicted effects of 3--4 minutes, the high precision expected from JWST's great sensitivity makes such measurements possible over the next few years. Additionally, our measurements reported in this paper will be archived on ExoFOP, enabling future studies to search for long-term perturbations that may affect the ephemeris estimates. 
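The forward propagation in Table~\ref{tab:ephemeris} can be sketched as follows (our own code, assuming the reference-epoch and period uncertainties add in quadrature and using the refined period quoted in the text):

```python
import numpy as np

def propagate(t0, sig_t0, period, sig_p, n):
    """Linear ephemeris T0 + N*P; the timing uncertainty grows in
    quadrature, sigma(N) = sqrt(sigma_T0**2 + (N * sigma_P)**2)."""
    return t0 + n * period, np.sqrt(sig_t0**2 + (n * sig_p)**2)

# transit mid-point 10 epochs after the 2020 reference time
t10, sig10 = propagate(2458888.0746, 0.0020, 111.436971, 7.4e-5, 10)
# t10 ~ 2460002.444 BJD with sig10 ~ 0.002 d, matching the tabulated row
```

The constant term is the Monte Carlo uncertainty at the reference epoch; the $N\sigma_P$ term dominates after a few tens of orbits.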
\begin{deluxetable*}{llccc} \centering \tablecaption{System Parameters for HD~80606\label{tab:final}} \tablehead{ \colhead{Parameter} & \colhead{Explanation} & \colhead{Our Study} & \cite{Rosenthal2021} & \cite{Bonomo2017} } \startdata M$_*$ [M$_\odot$] & Stellar Mass & 1.05 & 1.047$\pm$0.047 & 1.018$\pm$0.035 \\ R$_*$ [R$_\odot$] & Stellar Radius & 1.050 $\pm$ 0.01$^a$ & 1.066$\pm$0.024 & 1.037$\pm$0.032 \\ T$_*$ [K] & Stellar Temperature & 5565 & 5565 $\pm$ 92 & 5574$\pm$72 \\ Fe/H & Stellar Metallicity & 0.35 & 0.348$\pm$0.057 & 0.340$\pm$0.050 \\ $(R_{p}/R_*)_{contaminated}$ & Planet-Star Radius Ratio & 0.07268 $\pm$ 0.00085 \\ $(R_p/R_*)^2_{contaminated}$ & Radius Ratio Squared & 0.00528 $\pm$ 0.00012 \\ $(R_p/R_*)^2_{corrected}$ & Radius Ratio Squared & 0.01019 $\pm$ 0.00023$^b$ & & 0.00991$\pm$0.00076 \\ $R_p$ [R$_{Jupiter}$] & Planet Radius & 1.032$\pm$0.015 & & 1.003$\pm$0.023 \\ $M_{p}$ [M$_{Jupiter}$] & Planet Mass & 4.1641 $\pm$ 0.0047 & 4.16 $\pm$0.13$^c$ & 4.1$\pm$0.1 \\ K [m/s] & RV Semi-Amplitude & 469.22 $\pm$ 0.61 & 465.5$\pm$2.8 & 474.9$\pm$2.6 \\ Period [day] & Orbital period & 111.436765 $\pm$ 0.000074 & 111.43639$\pm$0.00032 & 111.4367$\pm$0.0004 \\ E$_{mid}$ [BJD] & Eclipse Midpoint & 2458882.214 $\pm$ 0.0021$^d$ & & \\ E$_{14}$ [day] & Eclipse Duration & 0.07169$\pm$0.00073 & & \\ T$_{peri}$ [BJD] & Epoch of periastron & 2458882.344 $\pm$ 0.0021 & & \\ T$_{mid}$ [BJD] & Transit Midpoint & 2458888.07466 $\pm$ 0.00204 & 2455099.39$\pm$0.13 & 2455210.6428$\pm$0.001 \\ T$_{14}$ [day] & Transit Duration & 0.4990 $\pm$ 0.0048 & & \\ $i$ [deg] & Inclination & 89.24 $\pm$ 0.01 & & 89.23$\pm$0.3 \\ a/R$_*$ & Scaled Semi-major axis & 94.452 $\pm$ 0.014 & 92.8$\pm$2.5 & 94.6$\pm$3.1 \\ a [au] & Semi-major axis & 0.4603$\pm$0.0021 & 0.4602$\pm$0.0071 & 0.4565$\pm$0.0053 \\ $e$ & Eccentricity & 0.93183 $\pm$ 0.00014 & 0.93043$\pm$0.00068 & 0.93226$\pm$0.00064 \\ $\omega$ [deg] & Arg. 
of periastron & -58.887 $\pm$ 0.043 & -58.95$\pm$0.25 & -58.97$\pm$0.2 \\ \enddata \tablecomments{The values in parentheses are calculated using the respective column's orbit solution and a Monte Carlo simulation with 10,000 forward model evaluations. $^a$Gaussian Prior; $^b$Corrected for stellar contamination using brightness values for HD~80606: V-mag=9.00 and HD80607: V-mag=9.07; $^c$ M$_{p}$sin($i$); $^d$Uncertainty estimated with fixed $\omega$;} \end{deluxetable*} \begin{deluxetable*}{lllll} \tabletypesize{\scriptsize} \tablecaption{Predicted Transit, Eclipse and Periapsis Times \label{tab:ephemeris}} \tablehead{ \colhead{Period} & \colhead{Periapsis Date}& \colhead{T$_{Peri}$ (BJD$_{TDB}$)}& \colhead{E$_{mid}$ (BJD$_{TDB}$)} & \colhead{T$_{mid}$ (BJD$_{TDB}$)}} \startdata 0 & 2020-02-02 20:15:10 & 2458882.344 $\pm$ 0.0021 & 2458882.214 $\pm$ 0.0021 & 2458888.0746 $\pm$ 0.0020 \\ 1 & 2020-05-24 06:44:36 & 2458993.781 $\pm$ 0.0021 & 2458993.651 $\pm$ 0.0021 & 2458999.5116 $\pm$ 0.0020 \\ 2 & 2020-09-12 17:14:03 & 2459105.218 $\pm$ 0.0021 & 2459105.089 $\pm$ 0.0021 & 2459110.9487 $\pm$ 0.0020 \\ 3 & 2021-01-02 03:45:18 & 2459216.656 $\pm$ 0.0021 & 2459216.527 $\pm$ 0.0021 & 2459222.3855 $\pm$ 0.0021 \\ 4 & 2021-04-23 14:12:08 & 2459328.092 $\pm$ 0.0021 & 2459327.962 $\pm$ 0.0021 & 2459333.8225 $\pm$ 0.0021 \\ 5 & 2021-08-13 00:42:14 & 2459439.529 $\pm$ 0.0022 & 2459439.400 $\pm$ 0.0022 & 2459445.2595 $\pm$ 0.0022 \\ 6 & 2021-12-02 11:10:00 & 2459550.965 $\pm$ 0.0022 & 2459550.836 $\pm$ 0.0022 & 2459556.6963 $\pm$ 0.0021 \\ 7 & 2022-03-23 21:40:27 & 2459662.403 $\pm$ 0.0021 & 2459662.274 $\pm$ 0.0022 & 2459668.1333 $\pm$ 0.0021 \\ 8 & 2022-07-13 08:09:58 & 2459773.840 $\pm$ 0.0022 & 2459773.711 $\pm$ 0.0021 & 2459779.5704 $\pm$ 0.0021 \\ 9 & 2022-11-01 18:39:52 & 2459885.278 $\pm$ 0.0023 & 2459885.148 $\pm$ 0.0022 & 2459891.0073 $\pm$ 0.0022 \\ 10 & 2023-02-21 05:08:03 & 2459996.714 $\pm$ 0.0022 & 2459996.584 $\pm$ 0.0023 & 2460002.4443 $\pm$ 0.0022 \\ 11 & 
2023-06-12 15:37:31 & 2460108.151 $\pm$ 0.0023 & 2460108.021 $\pm$ 0.0021 & 2460113.8814 $\pm$ 0.0022 \\ 12 & 2023-10-02 02:07:17 & 2460219.588 $\pm$ 0.0023 & 2460219.459 $\pm$ 0.0022 & 2460225.3183 $\pm$ 0.0022 \\ 13 & 2024-01-21 12:36:17 & 2460331.025 $\pm$ 0.0022 & 2460330.896 $\pm$ 0.0022 & 2460336.7554 $\pm$ 0.0023 \\ 14 & 2024-05-11 23:06:49 & 2460442.463 $\pm$ 0.0024 & 2460442.334 $\pm$ 0.0023 & 2460448.1923 $\pm$ 0.0023 \\ 15 & 2024-08-31 09:35:05 & 2460553.899 $\pm$ 0.0023 & 2460553.770 $\pm$ 0.0023 & 2460559.6291 $\pm$ 0.0023 \\ 16 & 2024-12-20 20:01:54 & 2460665.335 $\pm$ 0.0023 & 2460665.205 $\pm$ 0.0024 & 2460671.0663 $\pm$ 0.0023 \\ 17 & 2025-04-11 06:33:17 & 2460776.773 $\pm$ 0.0024 & 2460776.644 $\pm$ 0.0024 & 2460782.5030 $\pm$ 0.0023 \\ 18 & 2025-07-31 17:03:20 & 2460888.211 $\pm$ 0.0024 & 2460888.081 $\pm$ 0.0025 & 2460893.9400 $\pm$ 0.0025 \\ 19 & 2025-11-20 03:30:27 & 2460999.646 $\pm$ 0.0025 & 2460999.517 $\pm$ 0.0024 & 2461005.3771 $\pm$ 0.0024 \\ 20 & 2026-03-11 14:00:39 & 2461111.084 $\pm$ 0.0024 & 2461110.954 $\pm$ 0.0024 & 2461116.8140 $\pm$ 0.0025 \\ 21 & 2026-07-01 00:29:17 & 2461222.520 $\pm$ 0.0024 & 2461222.391 $\pm$ 0.0025 & 2461228.2509 $\pm$ 0.0025 \\ 22 & 2026-10-20 10:59:11 & 2461333.958 $\pm$ 0.0026 & 2461333.828 $\pm$ 0.0025 & 2461339.6880 $\pm$ 0.0025 \\ 23 & 2027-02-08 21:26:36 & 2461445.393 $\pm$ 0.0025 & 2461445.264 $\pm$ 0.0026 & 2461451.1249 $\pm$ 0.0026 \\ 24 & 2027-05-31 07:56:26 & 2461556.831 $\pm$ 0.0025 & 2461556.701 $\pm$ 0.0026 & 2461562.5616 $\pm$ 0.0027 \\ 25 & 2027-09-19 18:27:19 & 2461668.269 $\pm$ 0.0026 & 2461668.140 $\pm$ 0.0026 & 2461673.9988 $\pm$ 0.0026 \\ 26 & 2028-01-09 04:54:59 & 2461779.705 $\pm$ 0.0027 & 2461779.575 $\pm$ 0.0027 & 2461785.4358 $\pm$ 0.0027 \\ 27 & 2028-04-29 15:27:05 & 2461891.144 $\pm$ 0.0028 & 2461891.014 $\pm$ 0.0027 & 2461896.8728 $\pm$ 0.0028 \\ 28 & 2028-08-19 01:54:49 & 2462002.580 $\pm$ 0.0028 & 2462002.450 $\pm$ 0.0028 & 2462008.3097 $\pm$ 0.0029 \\ 29 & 2028-12-08 12:23:09 
& 2462114.016 $\pm$ 0.0029 & 2462113.887 $\pm$ 0.0029 & 2462119.7467 $\pm$ 0.0030 \\ 30 & 2029-03-29 22:52:37 & 2462225.453 $\pm$ 0.0030 & 2462225.324 $\pm$ 0.0029 & 2462231.1838 $\pm$ 0.0031 \\ 31 & 2029-07-19 09:22:28 & 2462336.891 $\pm$ 0.0030 & 2462336.761 $\pm$ 0.0031 & 2462342.6207 $\pm$ 0.0031 \\ 32 & 2029-11-07 19:51:46 & 2462448.328 $\pm$ 0.0032 & 2462448.198 $\pm$ 0.0030 & 2462454.0576 $\pm$ 0.0030 \\ 33 & 2030-02-27 06:22:46 & 2462559.766 $\pm$ 0.0031 & 2462559.636 $\pm$ 0.0032 & 2462565.4946 $\pm$ 0.0031 \\ 34 & 2030-06-18 16:50:58 & 2462671.202 $\pm$ 0.0032 & 2462671.072 $\pm$ 0.0032 & 2462676.9315 $\pm$ 0.0031 \\ 35 & 2030-10-08 03:19:59 & 2462782.639 $\pm$ 0.0033 & 2462782.509 $\pm$ 0.0033 & 2462788.3685 $\pm$ 0.0032 \\ \enddata \end{deluxetable*} \section{Acknowledgements} Some of the research described in this publication was carried out in part at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. This research has made use of the NASA Exoplanet Archive and ExoFOP, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program. This publication makes use of data products from Exoplanet Watch, a citizen science project managed by NASA's Jet Propulsion Laboratory on behalf of NASA's Universe of Learning. This work is supported by NASA under award number NNX16AC65A to the Space Telescope Science Institute, in partnership with Caltech/IPAC, Center for Astrophysics|Harvard $\&$ Smithsonian, and NASA Jet Propulsion Laboratory. We acknowledge with thanks the use of the AAVSO Exoplanet Database contributed by observers worldwide and used in this research. This work makes use of observations from the Las Cumbres Observatory global telescope network. The authors thank Dr. 
Lisa Storie-Lombardi for the grant of Director's Discretionary Time with the Las Cumbres Observatory (LCO) which was critical to the execution of this program. Dr. Rachel Street helped to identify the appropriate telescopes and observing modes for LCO. Some of the data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. Some of the scientific data presented herein were obtained using the eVscope Network, which is managed jointly by Unistellar and the SETI Institute. The Unistellar Network and work by T.M.E. and A.A. are supported by grants from the Gordon and Betty Moore Foundation. The authors wish to thank Prof. S. Kulkarni for an introduction to members of the GROWTH consortium. The results reported herein benefited from collaborations and/or information exchange within NASA's Nexus for Exoplanet System Science (NExSS) research coordination network sponsored by NASA's Science Mission Directorate. K.W. acknowledges support from NASA through the NASA Hubble Fellowship grant HST-HF2-51472.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555. This research has made use of the NASA Exoplanet Archive, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program. 
The ExoClock project has received funding from the UKSA and STFC grants: ST/W00254X/1 and ST/W006960/1. This work made use of data from the GROWTH-India Telescope (GIT) set up by the Indian Institute of Astrophysics (IIA) and the Indian Institute of Technology Bombay (IITB). It is located at the Indian Astronomical Observatory (Hanle), operated by IIA. We acknowledge funding by the IITB alumni batch of 1994, which partially supports operations of the telescope. Telescope technical details are available at \url{https://sites.google.com/view/growthindia/}. This work uses funding from the Ministry of Science and Technology (Taiwan) under the contract 109-2112-M-008-014-MY3 and we are thankful for their support. The queue observations were done using the 0.4m SLT telescope located at the Lulin Observatory, with assistance from observatory staff C.-S. Lin, H.-Y. Hsiao, and W.-J. Hou. \facility{Keck:I (HIRES), Lick:APF, LCO, TESS, Spitzer Space Telescope, Keck Observatory Archive (KOA)} \end{CJK*}
Title: Long-term photometric monitoring and spectroscopy of the white dwarf pulsar AR Scorpii
Abstract: AR Scorpii (AR Sco) is the only radio-pulsing white dwarf known to date. It shows a broad-band spectrum extending from radio to X-rays whose luminosity cannot be explained by thermal emission from the system components alone, and is instead explained through synchrotron emission powered by the spin-down of the white dwarf. We analysed NTT/ULTRACAM, TNT/ULTRASPEC, and GTC/HiPERCAM high-speed photometric data for AR Sco spanning almost seven years and obtained a precise estimate of the spin frequency derivative, now confirmed with 50-sigma significance. Using archival photometry, we show that the spin down rate of P/Pdot = 5.6e6 years has remained constant since 2005. As well as employing the method of pulse-arrival time fitting used for previous estimates, we also found a consistent value via traditional Fourier analysis for the first time. In addition, we obtained optical time-resolved spectra with WHT/ISIS and VLT/X-shooter. We performed modulated Doppler tomography for the first time for the system, finding evidence of emission modulated on the orbital period. We have also estimated the projected rotational velocity of the M-dwarf as a function of orbital period and found that it must be close to Roche lobe filling. Our findings provide further constraints for modelling this unique system.
https://export.arxiv.org/pdf/2208.08450
\label{firstpage} \pagerange{\pageref{firstpage}--\pageref{lastpage}} \begin{keywords} binaries: general -- cataclysmic variables -- binaries: close -- stars: individual: AR Scorpii \end{keywords} \section{Introduction} The binary system AR~Scorpii (henceforth \arsco) is composed of a white dwarf with an M-dwarf companion in a 3.56-hour orbit \citep{Marsh2016}. In contrast to cataclysmic variable systems (CVs), which show this same binary configuration, there is no evidence that the white dwarf in \arsco\ accretes mass from its companion. It is further distinguished from other similar binaries by strongly pulsed emission with a period of 1.97~minutes (see Fig.~\ref{fig:hcam}) over a broad range of wavelengths, from radio \citep{Stanway2018} to X-rays \citep{Takata2018}, which led to it being dubbed a white dwarf pulsar \citep{Geng2016, Buckley2017}. The physical process behind \arsco's pulses is, nonetheless, not the same as for traditional neutron-star radio pulsars \citep{Katz2017, Lyutikov2020}. This is inferred from the fact that no phase coherent radio emission has been reported for \arsco, and from the companion being within the radius of the white dwarf's light cylinder. The pulsed variability of \arsco\ was first reported by \citet{Marsh2016}, who detected two components of very similar frequency in \arsco's amplitude spectra, at $\approx 731$ and $\approx 738$ cycles per day (c/d). The difference between the two values corresponds to the orbital frequency, suggesting that the higher frequency component represents the spin of the white dwarf, whereas the lower frequency is a re-processed beat frequency. \citet{Marsh2016} also found the maximum luminosity of \arsco\ to be well in excess of the combined luminosity of the stellar components. Given the lack of accretion signatures (such as strong X-ray luminosity, broad emission lines, or flickering), they proposed this excess luminosity to be powered by the spin-down of the rapidly-rotating white dwarf. 
They estimated a spin frequency derivative of $\dot{\nu}_\mathrm{spin} = (-2.86\pm0.36) \times 10^{-17}$~\hzs, corresponding to a rate of loss of rotational energy of $L_{\dot{\nu}} = 1.5 \times 10^{26}$~W. \citet{PotterBuckley2018Fourier} carried out a Fourier analysis of data spanning three years (2015 to 2017) but could not confirm the spin-down found by \citet{Marsh2016}. Instead they found a spin frequency derivative consistent with zero, but were only able to constrain it to the broad range $-20 \times 10^{-17}\,$\hzs to $+10\times 10^{-17}$~\hzs. On the other hand, \citet{Stiller2018} confirmed the occurrence of spin-down, again from just three years' coverage (2016 to 2018), but found that it had a significantly larger magnitude, $\dot{\nu}_\mathrm{spin} = (-5.14\pm0.32) \times 10^{-17}$~\hzs, than determined by \citet{Marsh2016}. \citet{Stiller2018} argued that the apparent inconsistency between the results of \citet{Marsh2016} and \citet{PotterBuckley2018Fourier} arose from an underestimate of $\dot{\nu}_\mathrm{spin}$ in \citet{Marsh2016}, likely due to sparse sampling and low time resolution of their data. They also pointed out that their value of $\dot{\nu}_\mathrm{spin}$ could explain the difference in the spin periods found by \citet{Marsh2016} and \citet{PotterBuckley2018Fourier}, being consistent with the spin period found in both works. More recently, the spin-down rate was further refined to $\dot{\nu}_\mathrm{spin} = (-4.82\pm0.18) \times 10^{-17}$~\hzs by \citet{Gaibor2020}. The spin-down rate measurements have thus converged, with the most recent measurements consistent with each other, but questions remain over the earlier measurements. \citet{Marsh2016} based their measurement upon data from the CRTS survey that preceded (by several years) the more recent intensive follow-up data used by \citet{PotterBuckley2018Fourier}, \citet{Stiller2018} and \citet{Gaibor2020}. 
The CRTS data were very sparse, and used relatively long exposures (30~secs), which rightly raises a question mark over the reliability of \citet{Marsh2016}'s measurement of spin-down \citep{PotterBuckley2018Fourier}. However, could \citet{Marsh2016}'s result, together with those of \cite{Stiller2018} and \cite{Gaibor2020}, point towards a genuine change in the rate of spin down in AR~Sco? Such behaviour is often seen amongst close relatives of AR~Sco, the intermediate polars \citep[IPs, see e.g. fig. 2 of][]{Patterson2020}. If the same applied to AR~Sco, it could suggest that (fluctuating) mass accretion has some role to play in the spin-down, despite a relative lack of any other evidence for accretion in the system \citep{Marsh2016}. A second puzzle is why \cite{PotterBuckley2018} obtained so loose a constraint upon the spin derivative, some 30 times worse than \cite{Stiller2018} from data covering the same time span. Is the Fourier-based method somehow flawed in the case of AR~Sco? The orbital period of the system is also expected to change, as a result of angular momentum losses caused by the interplay between magnetic braking and gravitational radiation. \citet{Stiller2018} constrained the orbital frequency derivative to $\dot{\nu}_\mathrm{orbit} \lesssim 2 \times 10^{-18}$~\hzs, which at their precision implied $\dot{\nu}_\mathrm{spin} = \dot{\nu}_\mathrm{beat}$. An even tighter constraint of $\dot{\nu}_\mathrm{orbit} \lesssim 3.8 \times 10^{-20}$~\hzs was found by \citet{Peterson2019} using archival photometric plate data dating back to 1902. They pointed out that their constraint is consistent with the semi-empirical model of \citet{Knigge2011} describing angular momentum loss in CVs, which predicts an orbital frequency derivative of $\dot{\nu}_\mathrm{orbit} \lesssim 2 \times 10^{-20}$~\hzs for \arsco's orbital period. 
Additional studies of the orbital behaviour were performed by \citet{Littlefield2017}, who found that the light curve shows a stable orbital waveform on timescales of days, but alters slowly over timescales on the order of years. As well as time-series photometry, time-resolved spectra have also been obtained for \arsco. The spectra in the visible show a blue continuum added to the red spectrum of an M5-type dwarf, with strong Balmer and neutral helium lines in emission \citep{Marsh2016}. The radial velocity variation of the emission lines suggests that they originate near the surface of the M-dwarf facing the white dwarf. \citet{Garnavich2019} found the Balmer lines to show significant structure as they vary over the orbital period. Similar to what is observed for the orbital modulation of the light curve, the Balmer lines show peak flux between phases 0.4 and 0.5, and minima around phase 0.1\footnote{The convention is that zero orbital phase is at superior conjunction of the white dwarf, when the M-dwarf is closest to us.}. The continuum shows a similar behaviour, but is further affected by fast variations on the beat period. Doppler tomography shows that most of the H$\alpha$ emission originates on the face of the secondary star, but satellite emission features are seen, which in other systems have been attributed to long-lived prominences on the secondary star \citep[e.g.][]{Parsons2016}. \citet{Garnavich2019} argued that the stability of these prominences requires the M-dwarf to have a magnetic field of the order of 100--500~G \citep{Ferreira2000}, which is comparable to the white dwarf magnetic field strength at the M-dwarf \citep[$\sim 200$~G,][]{Takata2018}, implying that interaction between the two stars' magnetic fields is likely responsible for the energy production in \arsco. 
A pulsar model for \arsco\ is supported by polarimetric observations, which detected strong linear polarisation (as high as 40 per cent) modulated on both the spin and beat periods \citep{Buckley2017}. The observed broad-band spectrum is associated with synchrotron radiation from relativistic electrons accelerated as the white dwarf magnetosphere sweeps past the M-dwarf, resembling the emission from neutron star pulsars, but with a different mechanism whose details remain a mystery. Most likely it relates to direct interaction between the two stars, resulting in emission from the surface or coronal loops of the M-dwarf, from the magnetosphere of the white dwarf, or possibly through an associated bow shock \citep{Geng2016, Katz2017, Takata2017, PotterBuckley2018, Plessis2022}. A collimated relativistic outflow, i.e. a jet, has also been considered, but seems to be ruled out by radio observations \citep{Marcote2017}. \arsco's nature, and the rapid spin-down of its white dwarf, suggest that it could be a missing link between the two major classes of accreting magnetic white dwarfs, polars and IPs, which are distinguished by whether or not the white dwarf's spin is synchronous with the orbit of the binary \citep{Katz2017}, but how AR~Sco attained its current configuration is unknown. A prior stage of accretion-driven spin-up is required to explain the current spin period. However, the high magnetic field estimated for the white dwarf in \arsco\ given its measured spin-down \citep[100--500~MG,][]{Katz2017, Buckley2017} presents a significant challenge, as an extremely high mass transfer rate would have been required in the past to crush the magnetosphere down to a co-rotation radius matching the 2~minute spin-period. In an alternative model, \citet{Lyutikov2020} argued that the magnetic field of \arsco\ must be only $\sim 10$~MG to allow its spin-up, and that this is consistent with the observed spin-down rate if the material leaving the M-dwarf is originally neutral. 
In that case, the material can approach the white dwarf without interacting, until it is ionised by the white dwarf itself. This model requires both that the white dwarf temperature be such that the material is ionised only near the white dwarf ($T_\mathrm{eff} \approx 12\,000$~K), and that the heated face of the M-dwarf is cool enough not to ionise the gas before it heads towards the white dwarf. Although the first condition could be fulfilled \citep{Garnavich2021}, the heated face of the M-dwarf seems to be hot enough that the material would have a high ionisation rate \citep{Garnavich2019, Garnavich2021}. Another potential solution that reconciles the need for accretion-driven spin-up and the apparently high magnetic field was put forward by \citet{Schreiber2021}. They proposed that \arsco\ only became magnetic as a result of a crystallisation- and rotation-driven dynamo. In this model, \arsco\ would originally be non-magnetic, allowing for straightforward accretion-driven spin-up. When crystallisation starts to occur in the core of the cooling white dwarf, strong density stratification combined with convection creates the conditions for a dynamo, generating the magnetic field \citep{Isern2017}. If the field is strong enough, the rapid transfer of spin angular momentum into the orbit may cause the binary to detach and mass transfer to cease, leading to a system such as \arsco. The many open issues surrounding the modelling of \arsco, as well as its potential central role for constraining the evolution of CVs, make precise observational constraints a valuable asset. 
In this work, we analyse high-speed photometry of \arsco\ obtained with ULTRACAM \citep{ultracam}, ULTRASPEC \citep{ultraspec}, and HiPERCAM \citep{Dhillon2021} spanning seven years (2015--2022), and time-resolved spectra obtained with the Intermediate-dispersion Spectrograph and Imaging System\footnote{https://www.ing.iac.es/Astronomy/instruments/isis/} (ISIS) at the William Herschel Telescope (WHT) and X-shooter at the Very Large Telescope (VLT). We further constrain the spin-down of the white dwarf using two different techniques, pulse-time arrival modelling and Fourier analysis. We probe for evidence of orbital period change by modelling spin and beat frequencies independently in the Fourier analysis. Based on our photometry, we discuss the possibility of using \arsco\ as a cosmic clock for timing observations. Using our spectroscopic data, we perform modulation Doppler tomography \citep{Steeghs2003}, extending the work of \citet{Garnavich2019}. We also measure the rotational velocity of the M-dwarf as function of orbital phase to probe for variability that could confirm that the M-dwarf is Roche lobe filling. \section{Observations} \subsection{Photometry} \label{sec:phot} We started a monitoring campaign of \arsco\ shortly after its discovery in 2015. We used three different high-speed photometers: ULTRACAM at the 3.5-m ESO New Technology Telescope (NTT), ULTRASPEC mounted at the 2.4-m Thai National Telescope (TNT), and HiPERCAM at the 10.4-m Gran Telescopio Canarias (GTC) \citep{ultracam, ultraspec, Dhillon2021}. We observed it on a total of 68 different nights, over a time span of almost seven years (2015--2022). Our cadence varied between 0.1 and 9.1 seconds, with an average of 3.3~s. This includes the readout time, which varied with the configuration and readout mode, but was never longer than 15~ms. 
With ULTRASPEC, we typically used a $g$ filter, with the exception of three nights when a red-blocking KG5 filter (effectively $u+g+r$) or an $r$ filter was installed. For ULTRACAM and HiPERCAM, dichroic beam splitters allow simultaneous observations in three or five different filters, respectively. Filters $u,g,r$ or $u,g,i$ were used for ULTRACAM until 14 May 2017, after which they were replaced with similar filters whose cut-on/off wavelengths match those of the typical Sloan Digital Sky Survey (SDSS) filters, but with a higher throughput, being sometimes referred to as $u_s, g_s, r_s, i_s, z_s$ or "super" SDSS. These "super" filters were also used for the HiPERCAM observations. For simplicity, we omit the $s$ subscript in the text, but details of each of our runs and filters used can be found in Table~\ref{tab:observations}. All datasets were reduced with the dedicated HiPERCAM data reduction pipeline\footnote{https://github.com/HiPERCAM/hipercam}. We performed bias subtraction, and flat field correction using skyflats taken during twilight. We also used the pipeline to carry out differential aperture photometry with a variable aperture size, set to scale with the seeing estimated from a point-spread function (PSF) fit. We used the same comparison star for all observations, with the exception of our two HiPERCAM runs, for which the chosen comparison star was not in the field of view. Our default comparison star was {\it Gaia} EDR3 6050299715251225344 ($G = 13.3$, $G_{BP} - G_{RP} = 2.3$), whereas for HiPERCAM we used {\it Gaia} EDR3 6050298302203861888 ($G = 15.2$, $G_{BP} - G_{RP} = 1.9$). For reference, \arsco\ itself has reported values of $G =15.0$, and $G_{BP} - G_{RP} = 1.4$. To extend our baseline to earlier years, we retrieved data for \arsco\ from the Catalina Real-Time Transient Survey \citep[CRTS,][]{Drake2009}. These data span from 2005 August 01 to 2013 July 08. 
The typical CRTS exposure time of 30~s is short enough that the spin and beat periods are not completely smeared out. The observing times of all datasets were corrected to Barycentric Julian Date (BJD) in the Barycentric Dynamical Time (TDB) reference system. \subsection{Spectroscopy} We obtained spectroscopic observations of \arsco\ in 2016 both with X-shooter at the 8.2~m VLT \citep{Vernet2011}, and with ISIS at the 4.2~m WHT. The X-shooter observations were carried out using a 1~arcsec slit for the UVB arm (300--559.5~nm, $R = 5400$), and 0.9~arcsec slits for the VIS (559.5--1024~nm, $R = 8900$) and NIR (1024--2480~nm, $R = 5600$) arms. In the UVB arm, we limited our exposure time to 16.1~s to allow for sampling of the beat/spin variability and obtained 689 exposures (the readout time was 14~s). A fraction of the spectra that were taken continuously (the observations were briefly interrupted by clouds) is shown in Fig.~\ref{fig:uvb_trail}. For the VIS arm, we obtained 42 exposures of 354~s ($\simeq 3$ times the beat period; the readout time was 58~s). The same exposure time was used for the NIR arm, but the difference in the way readout is carried out allowed 47 frames to be taken (with a readout time of 8.2~s). The spectra were reduced using the standard procedures within the ESO {\sc reflex} reduction pipeline\footnote{http://www.eso.org/sci/software/reflex/}, including telluric line removal using {\sc molecfit} \citep{molecfit1,molecfit2}. ISIS observations were obtained using the R600B and R600R gratings in the blue and red arms, respectively. The central wavelength was set to 4500~\AA\ for the blue arm, and to 6200~\AA\ for the red arm. We used a 1.0~arcsec slit and set the exposure time to 7.3~s (blue) and 8.0~s (red), whereas the readout time was $\approx 70$~s. A total of 1078 and 1081 consecutive exposures were obtained for the blue and red arms, respectively. 
All spectra were de-biased and flat-fielded using the {\sc starlink}\footnote{https://starlink.eao.hawaii.edu/starlink} packages {\sc kappa}, {\sc figaro} and {\sc convert}. Optimal spectral extraction was carried out using {\sc pamela} and wavelength calibration was performed with {\sc molly} \citep{Marsh1989}. Spectra taken with the red arm are shown as a trail in Fig.~\ref{fig:red_trail}. \section{Data analysis} \subsection{Pulse arrival time estimates} We have estimated individual pulse arrival times for each of our observing runs with ULTRASPEC, ULTRACAM, or HiPERCAM. As the pulse timing depends on the filter used \citep[see][and Section~\ref{sec:clock}]{Gaibor2020}, we have only used data taken with the $g$ filter for this purpose. Our method is similar to that previously employed by \citet{Stiller2018} and \citet{Gaibor2020}, who used a Gaussian function to model the beat pulse peak. We first estimated the location of each peak using a previously determined ephemeris for the beat period. Next, we cross-correlated the data around each peak with a Gaussian function with a fixed width set to minimise the residuals when fitting the ephemeris. We experimented with standard deviation ($\sigma$) values for the Gaussian ranging from 5 to 20~s, finding an optimal value of 10.1~s. To determine the maximum of the cross-correlation function, which is the estimate for the location of the peak, we determined the value at which the derivative of the cross-correlation changes sign from positive to negative using Newton-Raphson iteration. The initial ephemeris was also used to estimate the cycle number of each peak measurement. We repeated the procedure, recalculating the cycle numbers using the derived ephemeris in order to obtain self-consistent values; this was repeated until the assumed and fitted ephemerides showed no significant change. Data points affected by clouds were manually flagged in each run, and for each peak we computed the resulting number of bad points. 
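The peak-location step can be sketched in Python on a synthetic pulse (not the real \arsco\ photometry). The template width of 10.1~s follows the text; the time grid, noise level, and the parabolic refinement of the cross-correlation maximum (a simple stand-in for the Newton--Raphson iteration on the derivative) are illustrative assumptions:

```python
import numpy as np

# Illustrative sketch of the peak-location step (synthetic pulse, not the real
# AR Sco photometry). The template width of 10.1 s follows the text; all other
# numbers are assumptions for the example.
rng = np.random.default_rng(0)
t = np.arange(0.0, 120.0, 0.5)                    # time stamps (s)
true_peak = 61.3                                  # injected pulse time (s)
flux = np.exp(-0.5 * ((t - true_peak) / 10.1)**2) \
       + 0.02 * rng.standard_normal(t.size)       # Gaussian pulse + noise

sigma = 10.1                                      # template width (s)
shifts = np.arange(40.0, 80.0, 0.1)               # trial pulse times (s)
ccf = np.array([np.sum(flux * np.exp(-0.5 * ((t - s) / sigma)**2))
                for s in shifts])

# Refine the grid maximum of the CCF with a parabola through the three points
# around it (a simple stand-in for the Newton-Raphson iteration on the CCF
# derivative described in the text).
i = int(np.argmax(ccf))
delta = 0.5 * (ccf[i-1] - ccf[i+1]) / (ccf[i-1] - 2.0*ccf[i] + ccf[i+1])
peak = shifts[i] + delta * 0.1                    # refined pulse time (s)
```

The sub-grid refinement matters here because the trial-shift spacing (0.1~s in this sketch) is coarser than the timing precision one hopes to reach.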
Peak measurements were only accepted if the number of good points exceeded the number of bad points by at least a factor of three. Peaks were also rejected if the difference between the measurement and the predicted value based on the approximate ephemeris was more than 10 per cent of the beat period. This method resulted in 3195 good pulse measurements (table included as Supplementary Material). However, the beat pulses are known to show an orbital phase dependence \citep{Stiller2018}, which needs to be corrected for. To account for this orbital modulation, we performed an initial fit to the derived pulse times, and calculated the observed minus calculated values (O-C) as a function of orbital phase $\phi_\mathrm{orb}$. We then modelled the O-C behaviour with a Fourier series, and subtracted the modelled O-C orbital behaviour from each obtained pulse arrival time\footnote{We have used a Fourier series with four terms in the form $S_i\sin(2\pi i \phi_\mathrm{orb}) + C_i\cos(2\pi i \phi_\mathrm{orb})$ with ($S_i, C_i$) coefficients in seconds given by (-1.752, 2.165), (4.723, -2.769), (-0.929, 0.734), (0.541, -1.382).} (see Fig.~\ref{fig:orbitdiffs}). Note that the values supplied as Supplementary Material do not include this correction, as combining them with more data can change the required number of Fourier components and their coefficients. This same procedure cannot be applied to the CRTS data, since the pulses are not individually resolved at the CRTS cadence. We have instead determined the maxima by fitting a cosine to the data. We first phase-folded the data to the orbital ephemeris of \citet{Marsh2016}, and modelled the orbital variability with a Fourier series. The orbital contribution was then subtracted from each measurement. Given the long time span of the CRTS observations, the spin-down can have a measurable effect on the data. 
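The O-C orbital correction can be sketched directly from the four $(S_i, C_i)$ coefficient pairs quoted in the footnote; the helper names below are ours, for illustration only:

```python
import math

# Sketch of the orbital O-C correction described above, using the four
# (S_i, C_i) Fourier coefficient pairs quoted in the footnote (in seconds).
S = [-1.752, 4.723, -0.929, 0.541]
C = [2.165, -2.769, 0.734, -1.382]

def oc_correction(phi_orb):
    """O-C orbital correction (seconds) at orbital phase phi_orb."""
    return sum(S[i-1] * math.sin(2.0 * math.pi * i * phi_orb)
               + C[i-1] * math.cos(2.0 * math.pi * i * phi_orb)
               for i in range(1, 5))

# Each pulse arrival time (in days) then has the modelled orbital term removed:
def corrected_arrival(t_pulse_bjd, phi_orb):
    return t_pulse_bjd - oc_correction(phi_orb) / 86400.0
```

By construction the correction is periodic in $\phi_\mathrm{orb}$ with period one, so any integer part of the phase is irrelevant.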
To account for this, we subtracted from the measured times a quadratic term obtained from fitting the ULTRASPEC/ULTRACAM/HiPERCAM data. We then fitted cosines with a period fixed to the beat period to the CRTS data, divided into two datasets at the midpoint of observations (2008 September 17). Data taken at phases showing excess scatter (see next paragraph) were excluded. This resulted in two measurements for the maximum of the beat pulse extending our baseline back to 2005, with each representing the mean pulse time from three years of CRTS coverage. Finally, we carried out a least-squares fit of a quadratic function to the measured times and estimated cycle numbers. We excluded from our fit the measurements taken around orbital phases 0.05 and 0.55 (more specifically, in the ranges 0.987--0.140 and 0.455--0.635), which show excess scatter because the maxima are less well defined around these phases \citep[see Fig.~\ref{fig:orbitdiffs}, as well as ][fig. 2]{Gaibor2020}. These orbital phase limits were set to exclude points with a root-mean-square deviation (RMS) larger than 2.5~s after correcting for orbital effects. For the remaining 2135 measurements, the uncertainty was recalculated as the RMS at the given orbital phase. Fitting these measurements, we obtained the ephemeris: \begin{eqnarray} T_\mathrm{beat~max} (\mathrm{BJD}) &=& 2457941.6688819(20) \nonumber\\ && +\, 0.001368045813(8)\, E \nonumber\\ && +\, 4.53(8) \times 10^{-16}\, E^2, \end{eqnarray} where $E$ is the integer cycle number. The quoted one-sigma uncertainties were determined via bootstrapping. The quadratic coefficient is equivalent to $-0.5 \dot{\nu}_\mathrm{beat} P_\mathrm{beat}^3$, thus implying a beat frequency derivative of $(-4.74\pm0.08) \times 10^{-17}$~\hzs, detected at 50-$\sigma$ significance and consistent within $1~\sigma$ with the estimates of both \citet{Stiller2018} ($(-5.14\pm0.32) \times 10^{-17}$~\hzs) and \citet{Gaibor2020} ($(-4.82\pm0.18) \times 10^{-17}$~\hzs). 
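The relation between the quadratic coefficient of the ephemeris and the beat frequency derivative can be checked with a two-line unit conversion (both quantities converted from days to seconds):

```python
# Consistency check of the quoted relation between the quadratic coefficient
# of the ephemeris and the beat frequency derivative,
# c = -0.5 * nudot_beat * P_beat**3 (all quantities in seconds here).
P_beat = 0.001368045813 * 86400.0    # beat period (s)
c = 4.53e-16 * 86400.0               # quadratic coefficient (s)
nudot_beat = -2.0 * c / P_beat**3    # Hz/s; recovers approximately -4.74e-17
```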
Figure~\ref{fig:ominusc} shows the fit of the quadratic component of the ephemeris. \subsection{Fourier Analysis} In addition to using our photometric data to perform pulse arrival time estimates, we also carried out a Fourier analysis. As in the previous section, we focused on observations taken with the $g$ filter. To minimise systematic effects, we excluded any data points affected by clouds, and only used data reduced with our default comparison star. To identify the main harmonics and combinations contributing to the observed waveform, we calculated the Fourier transform of the data, simultaneously modelled the identified dominant contributions with a Fourier series, subtracted the model from the light curve, and then re-calculated the Fourier transform of the residuals. The modelling was done by performing a least-squares fit to the data with orbital contributions modelled as sine waves, whereas the spin and beat contributions were modelled as cosines (we differentiate between sines and cosines to make it clear what phase 0 implies). Timing parameters were common to all data, whereas amplitudes and phases were fitted individually to each run. We started by subtracting the fundamental frequency and the first harmonic for the orbital ($\Omega$), spin ($\omega$), and beat ($\omega-\Omega$) frequencies. This revealed the next main contribution to be the second harmonic of the beat frequency $2(\omega-\Omega)$, followed by the combinations $2\omega - \Omega$, $4\omega - 2\Omega$, $\omega - 2\Omega$, the third harmonic of the beat frequency $4(\omega-\Omega)$, $\omega + \Omega$, and $2\omega - 3\Omega$. At this point, the dominant peak again became the beat frequency, which is not perfectly modelled using this procedure due to the varying amplitude of the pulses \citep[see][and Fig.~\ref{fig:some_pulses}]{Stiller2018}. Figure~\ref{fig:ft_prew} illustrates this pre-whitening process. 
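The pre-whitening loop can be sketched on synthetic, evenly sampled data (not the \arsco\ light curve); the frequencies, amplitudes, and noise level below are assumptions chosen only to make the two components easy to separate:

```python
import numpy as np

# Minimal pre-whitening sketch on synthetic, evenly sampled data: find the
# dominant Fourier peak, fit and subtract a sinusoid at that frequency, then
# transform the residuals to reveal the next component. All frequencies and
# amplitudes here are assumptions for the example.
rng = np.random.default_rng(1)
t = np.arange(0.0, 50.0, 0.01)
y = (2.0 * np.cos(2.0 * np.pi * 1.30 * t)       # dominant component
     + 0.5 * np.cos(2.0 * np.pi * 2.70 * t)     # weaker component
     + 0.1 * rng.standard_normal(t.size))       # noise

def dominant_frequency(t, y):
    freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
    power = np.abs(np.fft.rfft(y - y.mean()))
    power[0] = 0.0                              # ignore the zero-frequency bin
    return freqs[int(np.argmax(power))]

nu1 = dominant_frequency(t, y)
# Least-squares fit of a sinusoid at nu1 (cosine and sine terms), as in the
# simultaneous modelling step, then subtract it from the light curve.
A = np.column_stack([np.cos(2.0 * np.pi * nu1 * t),
                     np.sin(2.0 * np.pi * nu1 * t)])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
residuals = y - A @ coeffs
nu2 = dominant_frequency(t, residuals)          # next strongest component
```

In practice the real analysis iterates this subtract-and-retransform step over many harmonics and spin/orbit combinations, as described above.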
Once we identified the main contributions to the observed light curve, we carried out a fit of the light curve including all of these contributions using a Markov-Chain Monte-Carlo (MCMC) method implemented with {\sc emcee} \citep{emcee}. We initially performed a fit with the amplitudes and phases of all contributions allowed to vary freely, but each set to be constant throughout the whole span of our data. This led to a poor fit of the light curve, as the amplitudes of the peaks are not consistent over time (as shown in Fig.~\ref{fig:some_pulses}). We then opted for initially fitting each of our runs individually, with the timing parameters fixed to the values obtained from a least-squares fit to all of the data, but amplitudes and phases free. The obtained amplitudes and phases were then fixed, and we carried out the MCMC fit with only the timing parameters, $T_{0}^{\mathrm{orb}}, T_{0}^{\mathrm{beat}}, \nu_\mathrm{beat}, \nu_\mathrm{spin}, \dot{\nu}_\mathrm{beat}, \dot{\nu}_\mathrm{spin}$, left free. Since our runs typically do not cover a full orbit, the orbital period itself is not well constrained from sinusoidal fits to our data, but it can be indirectly obtained by modelling the beat and spin frequencies separately. We therefore allowed the spin and beat periods to vary freely, and calculated the orbital period from their difference. The zero-point $T_0$ was allowed to vary freely for the orbital and beat periods, but constrained to be within one period of our initial guess. For the spin period, as well as spin/orbit combinations, $T_0$ was set to be equal to the beat $T_0$, but a phase shift was allowed. A phase shift was also allowed for all harmonics. In addition to the periods themselves, we also included the period derivative as a free parameter for both spin and beat in our model. We have not constrained them to be equal, in order to probe for a significant orbital period derivative. 
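Because the beat frequency is the difference between the spin and orbital frequencies, the orbital period follows directly from the two fitted periods. As a quick illustration, using the period values from the unconstrained solution of Table~\ref{tab:fourier_fit}:

```python
# The orbital frequency follows from nu_orb = nu_spin - nu_beat. Using the
# fitted periods (in days) from the unconstrained Fourier solution:
P_spin = 0.00135556081486
P_beat = 0.00136804583230
nu_orb = 1.0 / P_spin - 1.0 / P_beat       # cycles per day
P_orb_hours = 24.0 / nu_orb                # ~3.56 h orbital period
```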
Data points whose residuals were more than three standard deviations away from an initial least-squares fit were excluded from the MCMC fit to avoid occasional strong peaks, which typically have low uncertainties and could bias our fit. Uncertainties were inflated by a factor of 8 to achieve a reduced $\chi^2 \approx 1$ (though we report the $\chi^2$ prior to this renormalisation below). Additionally, we excluded data taken towards the end of our last run, 2022 April 26 (MJD > 59695.2125), because we noticed that the inclusion of these data led to an increase in $\chi^2$ of at least a factor of two, leading to a poor solution. The reason behind this seems to be an inversion in the strength of consecutive peaks (see Fig.~\ref{fig:bad_pulses}), possibly caused by a change in which pole generates the strongest synchrotron emission power. This is also occasionally observed in other runs, but has a particularly damaging effect on the fit when it occurs in the last portion of data, which plays an important role in constraining the frequency derivatives. With the current data we see no evidence for periodic behaviour of such inversions, which seem to be stochastic. Future observations will reveal whether this behaviour persists after our last run. When running the MCMC fit, we at first applied a prior forcing the difference between the spin and beat derivatives to be consistent with the upper limit determined by \citet{Peterson2019}, that is $\dot{\nu}_\mathrm{spin} - \dot{\nu}_\mathrm{beat} \lesssim 3.8 \times 10^{-20}$~\hzs. We obtained a solution with $\chi^2 = 18415210$ (for 153527 data points), whose parameters are given in Table~\ref{tab:fourier_fit}. Next we removed the prior and let $\dot{\nu}_\mathrm{spin}$ and $\dot{\nu}_\mathrm{beat}$ vary freely. Without the prior, we obtain a solution with a lower $\chi^2$ of 18381704 for the same number of points. 
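Since $\nu_\mathrm{orb} = \nu_\mathrm{spin} - \nu_\mathrm{beat}$, the implied orbital frequency derivative is simply the difference of the two fitted derivatives; with the (rounded) values from the unconstrained solution of Table~\ref{tab:fourier_fit}:

```python
# Since nu_orb = nu_spin - nu_beat, the orbital frequency derivative is the
# difference of the two fitted derivatives. With the rounded values from the
# unconstrained solution:
nudot_spin = -4.878e-17    # Hz/s
nudot_beat = -4.963e-17    # Hz/s
nudot_orb = nudot_spin - nudot_beat   # Hz/s; cf. the quoted (8.43 +/- 0.37)e-19
```

The small difference from the quoted value reflects only the rounding of the tabulated derivatives.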
Figure~\ref{fig:model_lc} shows a section of the data with this best-fit model, and Figure~\ref{fig:corner} shows a corner plot of the resulting timing parameters. The obtained beat period and the spin and beat frequency derivatives are consistent at the 99 per cent confidence level for the two solutions, though the values of the spin period and of $T_{0}^{\mathrm{beat}}$ shift significantly. Notably, in both cases we obtain $\dot{\nu}_\mathrm{beat}$ consistent with our pulse time arrival analysis. The main distinction between the two fits is that, without a prior, the difference between our spin and beat period derivatives implies a significant $\dot{\nu}_\mathrm{orbit} = (8.43\pm0.37) \times 10^{-19}$~\hzs, higher than the upper limit suggested by \citet{Peterson2019}, whereas the value is naturally lower, $\dot{\nu}_\mathrm{orbit} = (3.68\pm0.17) \times 10^{-20}$~\hzs, when the upper-limit prior is applied. We further discuss this result in Section~\ref{sec:nudots}. \begin{table*} \centering \caption{The two solutions obtained from the Fourier fit to the light curve. 
Uncertainties are given by the standard deviation obtained in the MCMC run.} \label{tab:fourier_fit} \begin{tabular}{ccc} \hline Parameter & $\dot{\nu}_\mathrm{spin} - \dot{\nu}_\mathrm{beat} \lesssim 3.8 \times 10^{-20}$~\hzs & $\dot{\nu}_\mathrm{spin}$, $\dot{\nu}_\mathrm{beat}$ free\\ \hline $T_{0}^{\mathrm{orb}}$ (BJD) & 2457264.624698(14) & 2457264.624503(17) \\ $T_{0}^{\mathrm{beat}}$ (BJD) & 2457941.6689500836(15) & 2457941.6689500569(15) \\ $P_\mathrm{beat}$ (days) & 0.00136804583419(44) & 0.00136804583230(45) \\ $P_\mathrm{spin}$ (days) & 0.00135556080771(47) & 0.00135556081486(57) \\ $\dot{\nu}_\mathrm{beat}$ ($10^{-17}$~Hz/s) & $-4.945\pm0.007$ & $-4.963\pm0.006$ \\ $\dot{\nu}_\mathrm{spin}$ ($10^{-17}$~Hz/s) & $-4.941\pm0.006$ & $-4.878\pm0.007$ \\ Reduced $\chi^2$ & 119.95 & 119.73 \\ \hline \end{tabular} \end{table*} \subsection{Doppler tomography} \label{sec:dopper} Similarly to \citet{Garnavich2019}, we have applied the Doppler tomography technique to our obtained spectra. This technique consists of mapping the observed line profiles at different orbital phases into velocity space, thereby allowing the line emission distribution in the binary to be mapped \citep[see][for a review]{Marsh2001}. We started by analysing the H$\alpha$ line using our WHT spectra. \citet{Garnavich2019} previously reported the existence of long-lived prominences (see their fig. 7), which, they argued, require the M-dwarf to have a strong magnetic field ($100--500$~G). Owing to the short integration time of our exposures, we were able to search for variations of these prominences with beat phase, which could indicate that they are modulated by the interaction between the magnetic field of the white dwarf and the companion, since the beat phase tracks the orientation of the white dwarf with respect to the binary orbit. As can be seen in Figure~\ref{fig:dop_red}, these features seem to persist regardless of beat phase, suggesting that they are indeed inherent to the M-dwarf. 
In addition, they seem to appear only for the H$\alpha$ line, whereas the emission for other lines is mostly concentrated on the face of the M-dwarf facing the white dwarf, though there are some substructures that vary from line to line (see Figure~\ref{fig:dop_caii_heii}). The detection of He~{\sc ii} is particularly important, as it suggests a high temperature on the M-dwarf's heated face that would ionise material before it leaves the star, which invalidates the model of \citet{Lyutikov2020} \citep[as previously pointed out by][]{Garnavich2021}. The location of this stream also suggests interaction close to the M-star, in agreement with the model suggested by \citet{Katz2017}, rather than close to the white dwarf, as proposed by \citet{Lyutikov2020}. It is important to note that persistent H$\alpha$ prominences have been detected in other non-accreting systems, such as QS~Vir \citep{Parsons2016} or SDSS J1021+1744, in which they have been attributed to long-lived material near the L5 point \citep{Irawati2016}. Prominences are also shown by some cataclysmic variables (CVs) during low states, as reported for instance for AM~Her \citep{Kafka2008} and BB~Dor \citep{Schmidtobreick2012}. Similar to what we see here, these CVs also show strong prominences mainly in H$\alpha$, with emission in other lines appearing more concentrated near the heated face of the white dwarf's companion. \citet{Kafka2008} and \citet{Schmidtobreick2012} noted that the prominences do not coincide with the location of the outer Lagrangian points, L4 and L5, but \citet{Schmidtobreick2012} hypothesised that the magnetic field could alter the Roche geometry and result in equilibrium points where the prominences are observed. A magnetic field has also been suggested by \citet{Parsons2016} to explain the prominences seen in QS~Vir, and could also be responsible for the features seen in \arsco. One of the assumptions of traditional Doppler tomography is that the flux of any point in the binary system is constant over time. 
However, emission that is modulated with time is often observed in interacting systems \citep[e.g.][]{Papadaki2008, Calvelo2009, Somero2012}. \citet{Steeghs2003} extended the Doppler tomography method to allow for emission modulated on the orbital period. We applied this modulated Doppler tomography to the H$\alpha$ line in the WHT spectra (Figure~\ref{fig:moddop_red}), and to the H$\beta$ line (Figure~\ref{fig:moddop_uvb}) in the X-shooter spectra, which have a better spectral resolution than our WHT spectra. Both lines show components with a significant orbital period modulation. Unlike the average amplitudes, which are somewhat dissimilar for H$\alpha$ and H$\beta$ given that only the former shows strong prominences, the modulated amplitude shows a similar behaviour for both lines. The sine component, which peaks at orbital phase 0.5, shows two main components, one of which has a similar origin to one of the observed H$\alpha$ prominences, whereas the other is located near the trailing face of the M-dwarf, much as seen for the He~{\sc ii} 4686~\AA\ line. The cosine component (peaking at phase 0.0) is mainly along the line connecting the two stars, with an extension towards the same H$\alpha$ prominence seen in the sine map. This phase-dependent behaviour is further illustrated in Figure~\ref{fig:vmap_red}. The extension seems to be located between what would be the Keplerian and stream trajectories, if there were any mass transfer between the two stars. This is similar to what is seen in, for example, U~Geminorum \citep{Marsh1990}. In \arsco's case, there is no evidence for accretion; yet, the existence of persistent prominences suggests the concentration of matter in the region where the stream would be. The modulated emission could thus result from a shock in this region caused by the interaction between the two stars' magnetic fields as the system rotates. 
\subsection{Rotational velocity of the M-dwarf} Evidence of ellipsoidal variation in \arsco's light curve suggests that the M-dwarf is filling its Roche lobe, or very nearly so \citep{Marsh2016}. For any star with a high Roche lobe filling factor, the radius along the line of sight can change significantly as it rotates, which translates into a variable projected rotational velocity, $v_\mathrm{rot} \sin i$, given that $v_\mathrm{rot} = 2\pi R_{\star}/P_{\star}$ (where $R_{\star}$ and $P_{\star}$ are the radius and rotation period of the star). Our X-shooter spectra include several lines originating in the atmosphere of the M-dwarf that allow us to probe for this $v_\mathrm{rot} \sin i$ variability. Following \citet{Parsons2018}, we selected three sets of lines for the estimates: the K~{\sc i} line at 7699\,\AA, the Na~{\sc i} lines at 8183/8193\,\AA, and the K~{\sc i} line at 1.252\,$\mu$m. We have used the M4.5 template from \citet{Parsons2018}, which we found to yield a better fit to these lines than the M5 template. Our approach was to find the value of $v_\mathrm{rot} \sin i$ that minimised the $\chi^2$ between the observed line and the template. We normalised and continuum-subtracted both the observed spectra and the template prior to fitting. We also corrected the observed spectra to the rest frame of the M-dwarf. In order to obtain model values for comparison, we utilised the {\sc lprofile} code, which is part of the {\sc lcurve} package \citep{Copperwheat2010}. We calculated phase-resolved synthetic line profiles assuming the system parameters derived by \citet{Marsh2016} and an orbital inclination of 60 degrees \citep{PotterBuckley2018, Plessis2019}. The exposure length and the spectral resolution were fixed at values matching our X-shooter spectra. We estimated the full-width at half-maximum (FWHM) of each line by fitting a Gaussian profile, and did the same for the 1.252~$\mu$m line (which is the most isolated of the analysed lines). 
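The FWHM measurement at the core of this procedure can be sketched on a synthetic profile; the velocity grid and line width below are assumptions, and while the paper fits a Gaussian, this simplified stand-in just locates the outermost half-depth crossings:

```python
import numpy as np

# Illustrative FWHM measurement for a continuum-subtracted absorption line.
# The velocity grid and line width are synthetic assumptions; the paper fits
# a Gaussian profile, whereas this sketch simply locates the outermost points
# below half of the line depth.
v = np.linspace(-300.0, 300.0, 601)          # velocity grid (km/s)
sigma_v = 80.0                               # assumed Gaussian width (km/s)
profile = -np.exp(-0.5 * (v / sigma_v)**2)   # absorption line, continuum at 0

def fwhm(x, y):
    half = y.min() / 2.0                     # half of the (negative) depth
    below = np.where(y <= half)[0]
    return x[below[-1]] - x[below[0]]

width = fwhm(v, profile)                     # ~2.355 * sigma_v for a Gaussian
```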
We then interpolated the observed (close to linear) relationship between FWHM and $v_\mathrm{rot} \sin i$ to estimate the $v_\mathrm{rot} \sin i$ value of each synthetic profile. We carried out this estimate both for a completely Roche-lobe-filling M-dwarf, and for radii of 90, 80, 70 and 60 per cent of the Roche-lobe radius (the latter corresponds approximately to the radius of an isolated M-dwarf of mass 0.3~M$_{\sun}$). These radii are measured linearly, along the direction from the centre of mass of the M-dwarf towards L1; that is, they are a linear rather than a volume-based measure of the radius relative to the Roche lobe. Our results are shown in Fig.~\ref{fig:vsini}. Although there is considerable scatter in our measurements, in particular around orbital phase 0.5 when the analysed M-dwarf lines become much shallower, it is clear that a low degree of Roche-lobe filling is incompatible with the estimated values. Whereas the reduced $\chi^2$ for the 1.252~$\mu$m line is similar for models with full Roche-lobe filling and for a radius of 90 per cent of the Roche-lobe radius (4.5 and 6.0, respectively), the value more than doubles for a radius of 80 per cent (13.6), and increases by at least a factor of 5 for 70 and 60 per cent (34 and 62, respectively). Therefore, a radius of less than 80 per cent of the Roche-lobe radius is very unlikely for the M-dwarf. \section{Discussion} \subsection{The rates of period change} \label{sec:nudots} Our pulse timing analysis builds on previous work \citep{Marsh2016, Stiller2018, Gaibor2020} and further confirms the occurrence of spin-down, now detected at the 50-$\sigma$ level through this method. Moreover, we have shown that the spin-down rate has been constant to within the observational uncertainties over the 17-year period for which there are data. 
Applying a quadratic-term correction derived from the other measurements to the CRTS times, as well as excluding data taken at phases showing excess scatter, were crucial for obtaining a good fit to the CRTS data. \citet{Marsh2016} did not take into account the large scatter at some orbital phases, which could explain the discrepancy with more recent measurements. For the first time, we have also succeeded in obtaining an estimate of the beat frequency derivative via Fourier analysis, obtaining a value in good agreement with the results from pulse arrival timing. In order to obtain a good fit to the data, amplitudes had to be fitted individually for each run, given that the pulse strength is clearly variable with time. In addition, data from our most recent observation, for which the strength of consecutive pulses was not consistent with the typical behaviour, had to be excluded. These measures may explain why we obtain a Fourier fit consistent with the pulse timing analysis where previous attempts have failed. One puzzling result of our Fourier analysis was a significant difference between our derived beat and spin frequency derivatives when they were left to vary freely, which implies a measurable orbital frequency derivative. Our value of $\dot{\nu}_\mathrm{orbit} = (8.43\pm0.37) \times 10^{-19}$~\hzs\ is in agreement with the upper limit set by \citet{Stiller2018}, but is more than 20$\sigma$ above the upper limit derived by \citet{Peterson2019}. It is worth noting that \citet{Peterson2019} remarked that they too obtained a non-zero $\dot{\nu}_\mathrm{orbit}$ if the timing uncertainties in their data were not accounted for (which would be unrealistic). As their data included photographic plate measurements, timing is indeed a significant source of uncertainty, which they point out can reach 11~minutes. 
Since the inclusion of uncertainties changes the value obtained for $\dot{\nu}_\mathrm{orbit}$, the significance of $\dot{\nu}_\mathrm{orbit}$ clearly depends on the assumed uncertainties. The conclusion of \citet{Peterson2019} could, for example, be altered if their uncertainties were overestimated, that is, if their timing measurements were more precise than they assumed. The timing uncertainties in our measurements were taken into account, though they are negligible. \citet{Peterson2019} support their own empirical estimate by pointing out that the models of \citet{Knigge2011} predict $\dot{\nu}_\mathrm{orbit} \lesssim 2 \times 10^{-20}$~\hzs\ for \arsco's orbital period. However, this prediction does not take into account that \arsco\ is highly magnetic, which affects angular momentum loss \citep[e.g.][]{Kolb1995,Cohen2012,Belloni2020}. The direction in which angular momentum loss is affected, however, is a subject of much discussion. According to \citet{Liebert1985} and \citet{King1985}, the white dwarf magnetic field would lead to enhanced magnetic braking. They argued that the wind from the donor can be temporarily trapped by the white dwarf's magnetic field lines, thus gaining angular momentum and subsequently carrying more angular momentum out of the system. \citet{Li1994}, on the other hand, proposed the opposite: that the donor's wind would remain trapped in the system due to the white dwarf's magnetic field, leading to reduced angular momentum loss. This model was supported by the modelling of the polar population by \citet{Belloni2020}. Given that we find a higher $\dot{\nu}_\mathrm{orbit}$ than the upper limit suggested by the non-magnetic semi-empirical model of \citet{Knigge2011}, this would require \arsco\ to be losing more angular momentum than a non-magnetic system, that is, enhanced magnetic braking. 
We estimate $\dot{J}_\mathrm{magnetic} \gtrsim 40 \dot{J}_\mathrm{non-magnetic}$ given the ratio between our obtained value and the estimate using the model of \citet{Knigge2011}. Considering the uncertainties surrounding angular momentum loss in magnetic cataclysmic variables, particularly in the case of a unique system such as \arsco, it is hard to establish whether this is feasible or not. The orbital period change could instead happen at fixed angular momentum if it is triggered by structural changes in the secondary, which can be explained by magnetic activity. Activity can affect the distribution of angular momentum within the star, changing its rotational oblateness (i.e. the gravitational quadrupole moment), which translates into a change in the gravitational field. That in turn causes the orbit and speed of the stars to change, altering the orbital period of the system without affecting its angular momentum \citep[the so-called Applegate mechanism,][]{Applegate1992, Lanza1998, Lanza1999}. This mechanism was proposed to explain the orbital period changes often observed for eclipsing systems \citep[e.g.][]{Parsons2010,Bours2016}, which in many cases are well above the rate expected from magnetic braking. In addition, the rate of period change caused by the Applegate mechanism is not constant, and can even change sign (i.e. the period can go from decreasing to increasing and vice-versa). This could potentially explain the discrepancy between our derived value and the limit obtained by \citet{Peterson2019}. This explanation of course requires the M-dwarf to be magnetically active, which seems to be the case at least for \arsco. \subsection{AR~Sco as a cosmic clock} \label{sec:clock} The regularity and strength of the beat pulsations observed for \arsco, as well as the precise ephemeris that can be obtained for the beat period, raise the question of whether \arsco\ could be used for calibrating the timing accuracy of high-speed observations. 
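As a concrete illustration of such a use, the quadratic beat ephemeris quoted earlier can be turned into a simple predictor of pulse-maximum times; the function names in this Python sketch are ours, not from any published code:

```python
# Sketch of using the beat ephemeris as a timing fiducial: predict the BJD
# (TDB) of beat maximum number E from the quadratic ephemeris quoted in the
# text. The function names are illustrative, not from any published code.
T0 = 2457941.6688819     # BJD of cycle 0
P = 0.001368045813       # beat period (days)
c = 4.53e-16             # quadratic (spin-down) coefficient (days)

def beat_max_bjd(E):
    """Predicted BJD of the beat-pulse maximum for integer cycle number E."""
    return T0 + P * E + c * E * E

def nearest_cycle(bjd):
    """Cycle number of the beat maximum closest to a given BJD; the quadratic
    term is small enough to be ignored for this rounding."""
    return round((bjd - T0) / P)
```

Comparing an observed pulse time against `beat_max_bjd(nearest_cycle(bjd))` would then expose gross timing errors such as mid- versus start-of-exposure time stamps.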
This is potentially useful given that it is fairly bright, varies on a time-scale accessible to many instruments, and there is no need to target particular eclipses. Whereas the ephemerides are precise to less than a second, one important issue is their colour dependency. As already pointed out by \citet{Gaibor2020}, the times of the primary pulses depend on the choice of filter, and their difference varies with orbital phase. This is illustrated in Figure~\ref{fig:pulses} for a section of one of our HiPERCAM runs. It is clear, therefore, that the use of \arsco\ as a timing source depends on the desired accuracy. If the orbital phases around 0.05 and 0.5 are avoided, an accuracy of the order of one second could be obtained fairly easily (see top panel of Figure~\ref{fig:chrom}), although a few pulse times may need to be combined if the signal-to-noise ratio is low. Around phases 0.35 and 0.85, the arrival times for different filters converge, so a better accuracy can be obtained, and one could perhaps hope to calibrate to a few tenths of a second. However, for large telescopes and high signal-to-noise ratios, AR~Sco is not fully reliable as a timing fiducial, and systematic errors from the source itself will limit the precision. An important overall result is that, if the timing calibration is being done with a specific filter, an ephemeris obtained with the same filter should preferably be used. Nevertheless, AR~Sco could prove useful in some instances, for example as a quick check against the large errors that can sometimes plague timing work, such as whether the start, middle or end of the exposure has been used, and whether barycentric corrections have been implemented correctly. \subsection{The behaviour of the companion} Our Doppler tomography confirms the existence of prominences around the M-dwarf previously identified by \citet{Garnavich2019}. 
We have further shown that these persist throughout the beat cycle, which suggests that they are not triggered by interaction between the M-dwarf and the white dwarf. Instead, they are likely inherent to the M-dwarf, and remain stable due to its magnetic field. Further evidence for this is the fact that similar behaviour is observed for other non-accreting systems \citep[e.g. QS~Vir,][]{Parsons2016}, and even for CVs during low states \citep[e.g. BB~Dor,][]{Schmidtobreick2012}, and is thus not unique to AR~Sco. The prominences are likely explained by the magnetic field altering the Roche geometry and displacing the location of the L4 and L5 Lagrange points. By performing modulated Doppler tomography, we have identified the occurrence of emission modulated on the orbital period, in particular along the line connecting the two stars, but also, remarkably, in the region where an accretion stream would be located if there were mass transfer. A possible cause for this is a shock wave in the long-lived prominence that is located in this region, triggered by interaction between the two stars' magnetic fields. In addition, by measuring $v_\mathrm{rot} \sin i$ at different orbital phases, we provide evidence that the companion is likely filling at least 80 per cent of its Roche lobe. \section{Summary \& Conclusions} We performed an extensive study of the white dwarf pulsar \arsco\ using precise photometric data spanning almost seven years, complemented by CRTS data that extend our baseline back to 2005. We refined the beat period ephemeris using pulse time arrival analysis, detecting a non-zero frequency derivative at the 50-$\sigma$ level. We also confirm this value via Fourier analysis. 
We performed two different fits, first constraining the difference between $\dot{\nu}_\mathrm{spin}$ and $\dot{\nu}_\mathrm{beat}$ to be no more than $3.8 \times 10^{-20}$~\hzs, the $\dot{\nu}_\mathrm{orbit}$ upper limit found by \citet{Peterson2019}, and next leaving both to vary unconstrained. Both cases lead to significant values of $\dot{\nu}_\mathrm{orbit}$: in the first case $(3.68\pm0.17) \times 10^{-20}$~\hzs, and $(8.43\pm0.37) \times 10^{-19}$~\hzs when no prior is applied. This could suggest that \arsco\ is undergoing orbital period changes in excess of what is predicted by magnetic braking and gravitational wave losses, similar to what is observed for many eclipsing systems and often attributed to changes in the quadrupole moment of the M-dwarf due to magnetic activity. However, since the $\chi^2$ values obtained for our two fits are comparable and they lead to very different $\dot{\nu}_\mathrm{orbit}$ estimates, we believe that there is no strong evidence for a measurable orbital period change at the moment, though continued monitoring might change this picture. There is also no reason to believe $\dot{\nu}_\mathrm{orbit}$ to be constant, as its value can depend on the magnetic activity of the M-dwarf. Large changes have been observed for post-common envelope systems \citep{Parsons2010, Bours2016}. We additionally carried out an analysis of time-resolved spectra obtained with both ISIS/WHT and X-shooter/VLT. Modulated Doppler tomography, performed here for the first time, shows the occurrence of emission modulated on the orbital period in the region where an accretion stream would be. This region also shows long-lived prominences that appear inherent to the M-dwarf, given their lack of dependence on beat phase. The modulated emission could thus be the result of shock interaction between the two stars' magnetic fields in this region as the system rotates.
Our spectra also reveal evidence of tidal deformation, detected via the variation of the observed projected rotational velocity, which indicates that the M-dwarf must be filling at least 80 per cent of its Roche lobe. \arsco\ remains a unique and puzzling system. If our findings of a possibly significant $\dot{\nu}_\mathrm{orbit}$ are confirmed, they could require the system to be losing angular momentum at a higher rate than non-magnetic systems, which could provide clues to explain the origin of \arsco. Alternatively, this could provide a probe for the mechanism proposed by \citet{Applegate1992}, which allows for orbital period changes at fixed angular momentum. The detection of modulated emission can shed light on the interaction between the M-dwarf and the white dwarf, which is still not fully understood. \section*{Acknowledgements} We thank D. Steeghs for advice on how to best visualise modulated Doppler maps. IP and TRM acknowledge support from the UK's Science and Technology Facilities Council (STFC), grant ST/T000406/1. AA acknowledges funding support from the NSRF via the Program Management Unit for Human Resources \& Institutional Development, Research and Innovation, grant B05F640046. SGP acknowledges the support of a STFC Ernest Rutherford Fellowship. This work has made use of data obtained at the Thai National Observatory on Doi Inthanon, operated by NARIT, and of observations made with the Gran Telescopio Canarias (GTC), installed at the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofísica de Canarias, on the island of La Palma. The design and construction of HiPERCAM was funded by the European Research Council under the European Union’s Seventh Framework Programme (FP/2007-2013) under ERC-2013-ADG Grant Agreement no. 340040 (HiPERCAM). VSD and ULTRACAM/ULTRASPEC/HiPERCAM operations are supported by STFC grant ST/V000853/1.
This work is also partially based on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere under ESO programme 097.D-0685(B), and on observations made with the WHT, operated on the island of La Palma by the Isaac Newton Group of Telescopes in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrof\'{i}sica de Canarias. The ISIS spectroscopy was obtained as part of W/2016A/26. \section*{Data Availability} All data analysed in this work can be made available upon reasonable request to the authors. \bibliographystyle{mnras} \bibliography{arsco} % \appendix \section{Log of photometric observations} \begin{table*} \centering \caption{Journal of observations.} \label{tab:observations} \begin{tabular}{ccccc} \hline Telescope & Start & Duration & Cadence & Filter \\ & (TDB) & (min) & (s) & \\ \hline ULTRACAM & 2015-06-23 21:12:56.852 & 37.7 & 2.9 & $u,g,r$ \\ ULTRACAM & 2015-06-24 21:12:19.723 & 165.6 & 1.3 & $u,g,i$ \\ ULTRASPEC & 2016-01-19 22:03:32.622 & 70.2 & 4.0 & $g$ \\ ULTRASPEC & 2016-03-07 22:39:59.884 & 23.8 & 4.0 & $g$ \\ ULTRASPEC & 2016-03-08 22:19:40.231 & 42.6 & 4.0 & $g$ \\ ULTRASPEC & 2016-03-13 21:32:59.534 & 87.4 & 4.7 & $g$ \\ ULTRASPEC & 2016-03-16 21:24:04.987 & 91.6 & 4.7 & $g$ \\ ULTRASPEC & 2016-03-17 20:19:08.410 & 45.8 & 4.7 & $g$ \\ ULTRASPEC & 2016-03-19 19:11:59.534 & 26.6 & 4.7 & $g$ \\ ULTRASPEC & 2016-03-21 22:09:02.976 & 48.4 & 4.0 & $g$ \\ ULTRASPEC & 2016-04-08 21:45:16.514 & 60.2 & 4.0 & $g$ \\ ULTRASPEC & 2016-04-09 21:44:43.571 & 53.3 & 4.0 & $g$ \\ ULTRASPEC & 2016-04-10 21:54:49.403 & 46.5 & 4.0 & $g$ \\ ULTRACAM & 2016-04-21 06:23:46.060 & 104.7 & 2.0 & $u,g,r$ \\ ULTRACAM & 2016-04-21 08:40:39.584 & 117.5 & 2.0 & $u,g,r$ \\ ULTRASPEC & 2016-05-02 19:44:55.657 & 158.2 & 4.0 & $g$ \\ ULTRASPEC & 2016-05-07 18:39:43.968 & 3.7 & 3.4 & $g$ \\ ULTRASPEC & 2016-05-07 18:43:36.310 & 14.6 & 3.4 & $g$ \\ ULTRASPEC & 2016-05-07 18:58:39.648 & 201.6 & 3.4 & $g$ \\ 
ULTRASPEC & 2016-05-08 17:11:27.304 & 311.0 & 5.4 & $g$ \\ ULTRACAM & 2016-07-04 01:15:02.419 & 13.0 & 2.3 & $u,g,r$ \\ ULTRACAM & 2016-07-04 01:28:09.289 & 14.9 & 3.0 & $u,g,r$ \\ ULTRACAM & 2016-07-04 01:43:10.239 & 9.8 & 2.0 & $u,g,r$ \\ ULTRACAM & 2016-07-04 01:53:06.634 & 59.6 & 2.0 & $u,g,r$ \\ ULTRACAM & 2016-07-04 02:52:45.285 & 195.3 & 1.0 & $u,g,r$ \\ ULTRACAM & 2016-07-04 06:08:10.125 & 48.7 & 1.6 & $u,g,r$ \\ ULTRACAM & 2016-08-19 23:53:04.573 & 62.5 & 4.0 & $u,g,i$ \\ ULTRASPEC & 2017-01-23 22:13:02.715 & 53.7 & 4.9 & $g$ \\ ULTRASPEC & 2017-02-10 21:43:24.910 & 39.5 & 6.2 & $g$ \\ ULTRASPEC & 2017-02-11 22:15:57.551 & 53.0 & 4.2 & $g$ \\ ULTRASPEC & 2017-02-12 21:21:12.738 & 111.3 & 4.4 & $g$ \\ ULTRASPEC & 2017-02-14 22:05:13.713 & 65.1 & 4.2 & $g$ \\ ULTRASPEC & 2017-02-18 21:30:20.061 & 68.5 & 4.5 & $g$ \\ ULTRASPEC & 2017-02-24 19:59:18.793 & 55.2 & 4.5 & $g$ \\ ULTRACAM & 2017-03-17 06:39:39.180 & 74.1 & 3.0 & $u,g,r$ \\ ULTRACAM & 2017-03-21 08:37:59.580 & 61.1 & 3.0 & $u,g,r$ \\ ULTRASPEC & 2017-03-30 20:47:27.827 & 116.2 & 4.2 & $g$ \\ ULTRASPEC & 2017-03-31 20:44:02.680 & 111.9 & 4.2 & $g$ \\ ULTRASPEC & 2017-04-01 22:10:08.743 & 11.2 & 4.2 & $g$ \\ ULTRASPEC & 2017-04-06 18:07:22.546 & 74.2 & 4.2 & $g$ \\ ULTRASPEC & 2017-04-07 22:14:25.609 & 25.9 & 4.2 & $g$ \\ ULTRASPEC & 2017-04-08 21:20:50.174 & 77.0 & 4.2 & $g$ \\ ULTRACAM & 2017-05-03 03:00:53.854 & 1.3 & 3.0 & $u,g,i$ \\ ULTRACAM & 2017-05-03 03:02:22.843 & 127.2 & 3.0 & $u,g,i$ \\ ULTRACAM & 2017-05-04 01:43:27.069 & 219.5 & 3.0 & $u,g,r$ \\ ULTRACAM & 2017-05-04 05:23:03.800 & 312.2 & 1.3 & $u,g,r$ \\ ULTRACAM & 2017-05-05 01:48:11.118 & 529.9 & 1.3 & $u,g,r$ \\ ULTRACAM & 2017-05-06 06:01:02.773 & 174.2 & 1.3 & $u,g,r$ \\ ULTRACAM & 2017-05-14 06:15:24.989 & 268.6 & 3.0 & $u_s,g_s,r_s$ \\ ULTRACAM & 2017-06-10 02:26:04.269 & 223.9 & 1.3 & $u_s,g_s,r_s$ \\ ULTRACAM & 2018-01-19 07:58:18.422 & 77.4 & 2.0 & $u_s,g_s,r_s$ \\ ULTRACAM & 2018-01-20 08:05:59.798 & 60.2 & 1.3 & 
$u_s,g_s,r_s$ \\ ULTRACAM & 2018-01-22 07:57:58.542 & 72.3 & 1.3 & $u_s,g_s,r_s$ \\ ULTRASPEC & 2018-03-03 20:06:08.847 & 154.4 & 3.4 & $g$ \\ ULTRASPEC & 2018-03-03 22:40:41.706 & 17.1 & 3.4 & $g$ \\ ULTRASPEC & 2018-03-04 19:57:44.470 & 160.1 & 4.2 & $g$ \\ ULTRASPEC & 2018-03-04 22:38:00.830 & 14.3 & 4.2 & $g$ \\ ULTRASPEC & 2018-03-05 19:54:03.936 & 184.4 & 4.2 & $g$ \\ ULTRASPEC & 2018-03-18 22:15:48.145 & 34.2 & 4.2 & $g$ \\ ULTRASPEC & 2018-03-19 21:01:17.871 & 36.8 & 4.2 & $g$ \\ ULTRASPEC & 2018-03-19 21:38:14.630 & 76.6 & 4.2 & $g$ \\ ULTRASPEC & 2018-03-20 21:54:01.845 & 50.4 & 4.2 & $g$ \\ \hline \end{tabular} \end{table*} \begin{table*} \centering \contcaption{} \begin{tabular}{ccccc} \hline Telescope & Start & Duration & Cadence & Filter \\ & (TDB) & (min) & (s) & \\ \hline ULTRACAM & 2018-04-14 09:44:25.367 & 46.8 & 2.0 & $u_s,g_s,r_s$ \\ ULTRACAM & 2018-04-15 08:07:05.188 & 45.4 & 1.3 & $u_s,g_s,r_s$ \\ HiPERCAM & 2018-04-16 02:25:42.704 & 131.0 & 0.1 & $u_s,g_s,r_s,i_s,z_s$ \\ HiPERCAM & 2018-04-18 04:03:15.289 & 114.8 & 0.1 & $u_s,g_s,r_s,i_s,z_s$ \\ ULTRACAM & 2018-06-06 05:15:00.776 & 22.7 & 2.0 & $u_s,g_s,r_s$ \\ ULTRACAM & 2018-06-06 05:38:21.907 & 62.3 & 2.0 & $u_s,g_s,r_s$ \\ ULTRACAM & 2018-06-19 01:12:53.833 & 177.2 & 2.0 & $u_s,g_s,r_s$ \\ ULTRASPEC & 2019-02-07 21:08:04.697 & 111.7 & 4.5 & $g$ \\ ULTRACAM & 2019-03-23 06:53:51.419 & 207.1 & 1.3 & $u_s,g_s,r_s$ \\ ULTRASPEC & 2020-02-24 20:22:41.377 & 22.5 & 9.1 & $g$ \\ ULTRASPEC & 2020-03-13 19:31:13.103 & 90.2 & 4.5 & $g$ \\ ULTRASPEC & 2020-03-23 20:56:00.406 & 27.1 & 4.5 & KG5 \\ ULTRASPEC & 2020-03-23 21:27:07.128 & 52.8 & 4.5 & KG5 \\ ULTRASPEC & 2020-03-28 19:56:59.783 & 133.3 & 4.2 & $g$ \\ ULTRASPEC & 2020-04-03 18:43:20.081 & 234.0 & 4.2 & $r$ \\ ULTRASPEC & 2020-04-04 19:56:42.781 & 156.0 & 4.2 & $g$ \\ ULTRASPEC & 2021-02-28 21:00:48.318 & 46.0 & 4.5 & $g$ \\ ULTRASPEC & 2021-03-01 20:38:24.340 & 74.1 & 4.4 & $g$ \\ ULTRACAM & 2021-04-08 08:35:44.966 & 40.2 & 2.0 & 
$u_s,g_s,r_s$ \\ ULTRASPEC & 2022-02-20 21:17:17.315 & 38.1 & 4.5 & $g$ \\ ULTRACAM & 2022-04-26 03:25:28.705 & 229.6 & 2.5 & $u_s,g_s,i_s$ \\ \hline \end{tabular} \end{table*} \section{Corner plot for the MCMC fit} \bsp % \label{lastpage}
Title: Identifying diffuse spatial structures in high-energy photon lists
Abstract: Data from high-energy observations are usually obtained as lists of photon events. A common analysis task for such data is to identify whether diffuse emission exists, and to estimate its surface brightness, even in the presence of point sources that may be superposed. We have developed a novel non-parametric event list segmentation algorithm to divide up the field of view into distinct emission components. We use photon location data directly, without binning them into an image. We first construct a graph from the Voronoi tessellation of the observed photon locations and then grow segments using a new adaptation of seeded region growing that we call Seeded Region Growing on Graph, after which the overall method is named SRGonG. Starting with a set of seed locations, this results in an over-segmented dataset, which SRGonG then coalesces using a greedy algorithm where adjacent segments are merged to minimize a model comparison statistic; we use the Bayesian Information Criterion. Using SRGonG we are able to identify point-like and diffuse extended sources in the data with equal facility. We validate SRGonG using simulations, demonstrating that it is capable of discerning irregularly shaped low surface-brightness emission structures as well as point-like sources with strengths comparable to those seen in typical X-ray data. We demonstrate SRGonG's use on the Chandra data of the Antennae galaxies, and show that it segments the complex structures appropriately.
https://export.arxiv.org/pdf/2208.07427
command. \newcommand{\vdag}{(v)^\dagger} \newcommand\aastex{AAS\TeX} \newcommand\latex{La\TeX} \newcommand{\gsrg}{{\tt SRGonG}} % \newcommand{\chandra}{{\sl Chandra}} \newcommand{\xmm}{XMM-{\sl Newton}} \newcommand{\todo}[1]{\textcolor{red}{#1}} \newcommand{\vlk}[1]{{\color{blue} [VLK: #1]}} \newcommand{\az}[1]{{\color{purple} [AZ: #1]}} \newcommand{\jw}[1]{{\color{teal} [JW: #1]}} \newcommand{\tl}[1]{{\color{blue} [TL: #1]}} \newcommand{\dvd}[1]{{\color{red} [DvD: #1]}} \newcommand{\dvds}[1]{{\color{teal} [DvD subcomment: #1]}} \newcommand{\vorcell}{\mathcal{V}} \newcommand{\fov}{\mathcal{F}} \newcommand{\Ksrc}{K} \newcommand{\llik}{\mathcal{L}} \newcommand{\proflik}{\mathcal{L}_{\rm profile}} \newcommand{\mpar}{m_{\rm seg}} \newcommand{\Nrand}{N} \newcommand{\obsph}{n} \newcommand{\ppar}{m_{\rm par}} \newcommand{\Sk}{\mathcal{S}} \newcommand{\Sroi}{\mathbf{S}} \newcommand{\Data}{\mathbf{X}} \newcommand{\xph}{\mathbf{x}} \newcommand{\intx}[1]{\lambda(\xph_{#1})} \newcommand{\intbg}{\lambda_0} \newcommand{\intk}[1]{{\lambda_{#1}}} \newcommand{\alllam}{\mathbf{\lambda}} \newcommand{\fulllam}{\mathbf{\Lambda}} \newcommand{\parasz}{\beta} \newcommand{\parasnr}{\sigma} \newcommand{\Nroi}[1]{{{\rm Num}(\Sk_{#1})}} \newcommand{\Aroi}[1]{{{\rm Area}(\Sk_{#1})}} \newcommand{\Avor}[1]{{{\rm Area}(\vorcell_{#1})}} \newcommand{\rmprb}{{\sl f}} \newcommand{\iid}{\buildrel{\rm iid}\over{\sim}} \newcommand{\numgrid}{m_{\rm grid}} \newcommand{\numsubgraph}{m_{\rm graph}} \newcommand{\numnn}{m_{\rm nn}} \newcommand{\vorthr}{m_{\rm vorthr}} \newcommand{\strata}{m_{\rm R}} \newcommand{\numij}{\rm Num_{i,j}} \newcommand{\name}{{\gsrg}} \newcommand{\updatebf}[1]{{#1}} \usepackage{bm} \usepackage{bbm} \usepackage{amsmath} \usepackage{subfigure} \usepackage[ruled,vlined]{algorithm2e} \usepackage{dsfont} \usepackage{mathtools} \submitjournal{AJ} \shorttitle{Identifying diffuse structures} \shortauthors{Fan et al.} \begin{document} \title{Identifying diffuse spatial structures in 
high-energy photon lists} \correspondingauthor{Thomas C. M. Lee} \email{tcmlee@ucdavis.edu} \author{Minjie Fan} \affiliation{Department of Statistics, University of California, Davis, \\ One Shields Avenue, \\ Davis, CA 95616, USA} \author{Jue Wang} \affiliation{Department of Statistics, University of California, Davis, \\ One Shields Avenue, \\ Davis, CA 95616, USA} \author[0000-0002-3869-7996]{Vinay L.\ Kashyap} \affiliation{Center for Astrophysics $|$ Harvard \& Smithsonian, \\ 60 Garden Street,\\ Cambridge, MA 02138, USA} \author[0000-0001-7067-405X]{Thomas C.\ M.\ Lee} \affiliation{Department of Statistics, University of California, Davis, \\ One Shields Avenue, \\ Davis, CA 95616, USA} \author[0000-0002-0816-331X]{David A.\ van Dyk} \affiliation{Statistics Section, Department of Mathematics, Imperial College London, \\ 180 Queen's Gate, \\ London, SW7 2AZ, UK} \author{Andreas Zezas} \affiliation{Physics Department, University of Crete, \\ P. O. Box 2208, GR-710 03, \\ Heraklion, Crete, Greece} \keywords{Astrostatistics --- Astrostatistics techniques --- Astrostatistics strategies --- X-ray astronomy --- Galaxy structure} \section{Introduction} \label{sec:intro} A challenge often encountered in high-energy astronomical analysis is that the images are photon-starved and sparse, and contain many `empty' pixels. Unlike photon-rich images encountered at longer wavelengths, complex features in X-ray and $\gamma$-ray data are difficult to recognize, characterize, and analyze. Working directly with Poisson-distributed photon counts, while simultaneously separating out the contribution of the background, is a difficult process, especially when trying to detect faint non-uniform emission, or when separating faint point sources from larger scale diffuse emission. Finding the boundaries of extended structures is thus a challenging problem. 
Such complex structures are common in high-energy astronomical images and include, for example, shock fronts, knots in supernova remnants, regions of diffuse emission in galaxies, point sources embedded in diffuse emission or conglomerates of point sources, entire galaxies or groups/clusters of galaxies, jets, or star-forming regions, which appear to be extended in the X-ray band even with intermediate resolution ($\lesssim0.5\arcmin$) X-ray telescopes. The analysis of extended X-ray sources is critical for several areas of astrophysics. The spatial scales of extended emission contain information regarding the physical processes that lead to their formation, while their boundaries are often determined by their physical environments. Therefore, identifying the boundary of these regions in a data-driven, rather than a model-driven, fashion is necessary for scientifically valuable results. In such cases, a primary goal of the researcher is to segment the image into regions with similar properties and to analyze each segment individually. Multiscale methods like wavelets \citep{starck:02} for point source detection have been efficiently implemented for X-ray images \citep{Freeman-et-al02}, and matched-filter techniques have been successfully used to detect galaxy clusters in ROSAT data \citep{Vikhlinin+1998}, but extended structures remain difficult to find and characterize in these low-count Poisson data. Other techniques are generally optimized for high S/N images: they apply adaptive binning, or set S/N thresholds to smoothed images with point sources removed \cite[e.g.,][]{Sanders2001,Sanders2006}; adapt methods developed for the analysis of cosmic microwave background images \citep[\updatebf{e.g.,}][]{Bobin+2016}; limit themselves to restrictive assumptions like modeling a combination of point sources \citep[(E)BASCS;][]{Jones-15,Meyer-21}; or require spectral model similarity across the field of view \citep{Picquenot+2019,2021A&A...646A..82P}. 
Currently, most astronomical images with complex structures that are processed for public display use some form of flux-non-conserving adaptive smoothing \citep{Ebeling-et-al06}. This approach is inadequate for scientific analysis. Previous efforts at extended source detection using Voronoi tessellation techniques have been limited by computational cost and the imposition of global thresholding schemes \citep[e.g., {\tt vtpdetect},][]{Ebeling-Wiedenmann93}. Methods akin to seeded region growing (cf.\ {\tt SrcExtractor}, \citealt{BertinArnouts96}; {\tt NoiseChisel}, \citealt{2015ApJS..220....1A}) and machine learning (e.g., {\tt Morpheus}, \citealt{2020ApJS..248...20H}; {\tt Mask R-CNN}, \citealt{FARIAS2020100420}; {\tt galmask}, \citealt{2022RNAAS...6..128G}) have been used for the identification of features in optical images of galaxies. However, the Poisson nature and the sparsity of X-ray data require statistically better-targeted methods. Here we develop a new method that combines aspects of Voronoi tessellation with region growing by using neighbor similarity clustering. The method can be applied to X-ray data and provides both separation between different structures in a complex image and well-defined apertures to perform photometry. We describe the statistical model that underlies the method in Section~\ref{sec:method}, and specific implementation details including computational methods in Section~\ref{sec:implement}. We carry out several simulations to test the limits of applicability of the algorithm in Section~\ref{sec:simulation}, and apply it to {\sl Chandra} data of the Antennae galaxies in Section~\ref{sec:application}. We discuss how and when the algorithm may be best used in Section~\ref{sec:discuss}, and summarize our work in Section~\ref{sec:summary}. 
\section{Statistical Methodology} \label{sec:method} We have developed a method that iteratively aggregates contiguous sets of photons into distinct regions based on similarity of their surface brightness. We employ a likelihood based method to obtain a piece-wise constant estimate of the surface brightness across the image; the likelihood function is derived in Section~\ref{sec:statmodel}. The method starts with the high-resolution segmentation of the spatial distribution of the events based on the Voronoi cells described in Section~\ref{sec:voronoi} and combines segments by optimizing the Bayesian Information Criterion (BIC) given in Section~\ref{sec:BIC}. Table~\ref{tab:notation} provides a glossary of our notation. \begin{table*}[ht] \centering \caption{Glossary of variables and notation.} \begin{tabular}{l l} \hline\hline {\bf Notation} & {\bf Description} \\ \hline $\iid$ & Independent and identically distributed \\ $\hat{\zeta}$ & Estimate of a generic parameter $\zeta$ \\ $\fov$ & Field of view, a bounded domain in $\mathbb{R}^2$ that contains the observed photons \\ $\obsph$ & Number of observed photons \\ $\Data = \{\xph_1, \ldots, \xph_\obsph \}$ & Observed location of the $\obsph$ photons, denoting their (sky) coordinates \\ $\Ksrc$ & Number of segments within $\fov$ \\%, including the background region, $\Ksrc{\ge}0$ \\ $\Sroi = \{\Sk_1, \ldots, {\Sk_\Ksrc}\}$ & Partition of $\fov$, where $\Sk_{k}$ is the domain for segmented region $k$ \\ $\Nroi{k}$ & Number of photons observed in segment $\Sk_k$ \\ $\Aroi{k}$ & Area of segment $\Sk_k$ \\ $\mpar$ & Number of free parameters per segment \\ $\intx{}$ & Poisson intensity at location $\xph$ \\ $\bm{\alllam}=\{\intk{1},\ldots,\intk{\Ksrc}\}$ & Collection of the Poisson intensities over each of the segments \\ $\fulllam$ & Total integrated intensity of $\intx{}$ over $\fov$ \\ $\vorcell_i$ & Voronoi cell defined by photon $i$ \\ $\Avor{i}$ & Area of Voronoi cell $\vorcell_i$ \\ $\widehat\alllam_i^{{\rm 
\vorcell}_i}$ & The Voronoi estimator of the Poisson intensity across Voronoi cell $i$, with $\widehat\alllam_i^{{\rm \vorcell}_i} =1/\Avor{i}$ \\ $\rmprb(\Data, \obsph)$ & Joint probability mass/density function of the observed number of photons and their locations \\ $\llik(\Ksrc,\Sroi,\bm{\alllam}\ | \ \Data)$ & Log-likelihood of the model$^a$ \\ $\proflik(\Ksrc,\Sroi\ | \ \Data)$ & Profile log-likelihood$^a$ obtained by replacing $\alllam$ in $\llik(\Ksrc,\Sroi,\alllam\ | \ \xph)$ with its estimate $\hat{\alllam}$ \\ ${\rm BIC}(\Ksrc)$ & Bayesian Information Criterion as a function of the number of segments, $\Ksrc$ \\ $\numgrid$ & {In seed specification, the } number of grid points used in a regularly spaced grid \\ $\numsubgraph$ & Number of photons included in initial subgraph for each seed \\ $\numnn$ & Number of nearest neighbors over which local maxima are searched for to specify seeds \\ $\strata$ & Number of strata used during Voronoi-area stratified sampling to specify seeds \\ $\vorthr$ & Minimum number of photons required {for} a Voronoi-area derived seed {to be} accepted \\ \hline \multicolumn{2}{l}{$a:$ Notation is reversed, by convention, for $\llik(b|a)$ compared to conditional probabilities; e.g., } \\ [-5pt] \multicolumn{2}{l}{\quad \ ${\rm p}(a|b)$ represents the probability of $a$ given $b$.} \end{tabular} \label{tab:notation} \end{table*} \subsection{Statistical Model} \label{sec:statmodel} We consider an event list composed of $\obsph$ photons observed in a bounded domain that defines the field of view, $\fov \subset \mathbb{R}^2$. Ignoring instrumental pixelization, we model the set of sky coordinates for the $\obsph$ photons, \begin{equation} \Data=\{\xph_1,\ldots,\xph_\obsph\} \,, \end{equation} via an inhomogeneous Poisson process with intensity function $\intx{} \ge 0$. The intensity function must be integrable over $\fov$, i.e., $\fulllam = \int_\fov\intx{} d\xph$ must be finite. 
For simplicity, we assume that the intensity function is piece-wise constant. Specifically, we assume we can partition $\fov$ into $K$ segments, denoted $\Sroi=(\Sk_1, \ldots, \Sk_{\Ksrc})$, where the Poisson intensity is constant on each $\Sk_k$. Since $\Sroi$ partitions $\fov$, the $\Sk_k$ together cover $\fov$ and each pair is disjoint. For a given set of non-negative intensities, $\bm{\alllam}=\{\intk{1},\ldots,\intk{\Ksrc}\}$, we can then express the intensity function as \begin{equation} \intx{} =\sum_{k=1}^{\Ksrc} \intk{k} \mathbbm{1}_{\Sk_k}(\xph) \,, \label{eq:pw-constant} \end{equation} where $\mathbbm{1}_{\Sk_k}(\xph)$ is an indicator function that takes value 1 if $\xph \in \Sk_k$ and is otherwise 0. A property of the inhomogeneous Poisson process is that the number of photons, $\Nroi{k}$, recorded in segment $\Sk_k$ with area $\Aroi{k}$ follows a Poisson distribution with mean \begin{equation} \Aroi{k} \cdot \intk{k} = \int_{\Sk_k}\intx{} d\xph, ~\hbox{ for }~ k=1,\ldots, \Ksrc \,, \end{equation} with $\intk{k} \ge 0$. Likewise, the total photon count is distributed $\obsph \sim {\rm Pois}(\fulllam)$. Another property is that, given $\obsph$, the sky coordinates, $\xph_i$ are independent and identically distributed (iid) with (normalized) probability density function $\intx{}/\fulllam$ \citep[e.g.,][]{chiu2013stochastic}. This means that the $\xph_i$ are distinct -- no two photons can have the same recorded coordinates. (The discrete nature of detectors means that occasionally, two photons are recorded with identical coordinates. In this case, we add a very small random scatter, $\sim$10$^{-6}$.) Our goal is to estimate the number of segments, $K$, the segments, $\Sroi$, and their respective intensities, $\bm{\alllam}$. Thus far, we have not discussed sources or background. 
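As a minimal sketch of this generative model (assuming numpy; the partition and intensities below are invented for illustration and are not from the paper), one can simulate an event list by drawing a Poisson count for each segment and then scattering that many photons uniformly within it:

```python
import numpy as np

# Piece-wise constant inhomogeneous Poisson process: F is partitioned into
# K disjoint segments; the count in segment k is N_k ~ Pois(Area_k * lambda_k),
# and given N_k the photon positions are iid uniform over the segment.
rng = np.random.default_rng(0)

# Hypothetical partition of the unit-square field of view into two rectangles
# (xmin, xmax, ymin, ymax), each with a constant intensity (photons per unit area).
segments = [((0.0, 0.8, 0.0, 1.0), 50.0),    # faint "background" strip
            ((0.8, 1.0, 0.0, 1.0), 600.0)]   # bright "source" strip

photons = []
for (x0, x1, y0, y1), lam in segments:
    area = (x1 - x0) * (y1 - y0)
    n_k = rng.poisson(lam * area)            # N_k ~ Pois(Area_k * lambda_k)
    photons.append(np.column_stack([rng.uniform(x0, x1, n_k),
                                    rng.uniform(y0, y1, n_k)]))
photons = np.vstack(photons)                 # the simulated event list X
```

Segmentation then amounts to inverting this process: recovering the partition and the per-segment intensities from the pooled photon list alone.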
If the field of view includes multiple separated sources, we expect the piece-wise constant intensity function to capture the intensity peaks associated with point and extended sources. Between these sources (or around a single source) is the background region. If the background intensity is constant across $\fov$ and the source region(s) is/are isolated within the field of view, we expect a single large segment representing the background to encircle the source regions and to extend to the boundary of $\fov$. If the background intensity varies slowly, we might find several large segments that together comprise the background region. In any case, there are segments associated with background and with sources. We do not attempt to classify the segments in this regard. Of course, if there is a single large low-intensity segment encircling smaller higher-intensity segments, it is easy enough to identify the background with the large low-intensity segment. We are particularly interested in the case where small-scale point-like sources lie within a larger extended source, as this is a challenging task for existing methods. We do not distinguish between extended or point sources, and in fact ignore the effect of the telescope's point spread function, assuming that it is small compared to the size of the $\fov$. Our method is agnostic by design to the sizes of individual structures, and is thus capable of isolating sources at all scales. To derive the likelihood function, recall $\obsph \sim {\rm Pois}(\fulllam)$ and given $\obsph$ the $\xph_i \iid \intx{}/\fulllam$. 
Thus, their joint probability mass/density function under the inhomogeneous Poisson process is \begin{eqnarray} \rmprb(\xph_1, \ldots, \xph_n, n) & = & \rmprb(\Data \mid n) \cdot \rmprb(n) \nonumber \\ & = & \frac{1}{\fulllam^n} \prod_{i=1}^n \intx{i} \cdot \frac{\exp\left({-\fulllam} \right) \fulllam^n}{n!} \nonumber \\ & = & \frac{\exp\left({-\fulllam} \right)}{n!} \prod_{i=1}^n \intx{i} \end{eqnarray} and their log-likelihood function is given by \begin{multline} \llik(\Ksrc, \Sroi, \bm{\alllam}\mid \Data, n) = \log{\rmprb(\Data, n)} \\ =\sum \limits_{i=1}^n \log{\intx{i}}-\int_\fov\intx{} d\xph-\log{n!} ~~, \end{multline} where we write out $\fulllam = \int_\fov\intx{} d\xph$. Replacing $\intx{}$ by the piece-wise constant expression given in (\ref{eq:pw-constant}), we have \begin{multline} \llik(\Ksrc, \Sroi, \bm{\alllam} \mid \Data, n) =\\ \sum \limits_{\substack{k=1 \\ \Nroi{k}\neq 0}}^{\Ksrc} \kern -13pt \Nroi{k} \log {\intk{k}} -\sum_{k=1}^{\Ksrc} \Aroi{k}\intk{k}-\log{n!} ~~. \label{eq:loglike} \end{multline} Recall that $\Nroi{k}$ and $\Aroi{k}$ denote the number of photons in $\Sk_k$ and the area of $\Sk_k$, respectively. (When $\Nroi{k}=0$, the summand $\Nroi{k} \log {\intk{k}}$ is excluded from the first sum in (\ref{eq:loglike}).) We aim to maximize $\llik$ as a function of $K$, $\Sroi$, and $\bm{\alllam}$ to obtain their maximum likelihood estimates. For fixed $K$ and $\Sroi$, $\llik$ is maximized as a function of $\bm{\alllam}$ by \begin{equation} \widehat{\lambda}_k =\Nroi{k}/\Aroi{k} ~\hbox{ for }~ k=1,\ldots, \Ksrc \,. \end{equation} Plugging the $\widehat{\lambda}_k$ into $\llik$, we obtain the profile log-likelihood of $K$ and $\Sroi$, i.e., \begin{multline}\label{eqn:prof_log_lik} \proflik(\Ksrc, \Sroi \mid \Data, n)=\\ \sum \limits_{\substack{k=1 \\ \Nroi{k}\neq 0}}^{\Ksrc} \kern -10pt \Nroi{k} \log \left(\frac{\Nroi{k}}{\Aroi{k}}\right)-n-\log{n!}. 
\end{multline} The same profile log-likelihood can be derived by modeling the data as a mixture of uniform distributions. \citet{Allard-97} considered a special case with a uniform background with a contiguous extended source superposed. To estimate $\Sroi$, we first deploy a (greedy) algorithm that finds an optimal segmentation, $\widehat\Sroi(\Ksrc) = {\operatorname{arg\, max}} \proflik(\Ksrc, \Sroi)$, for fixed $\Ksrc$, as described in Section~\ref{sec:voronoi} and refined in Section~\ref{sec:implement}. In Section~\ref{sec:BIC}, we introduce a penalized version of $\proflik$ that we maximize over $\Ksrc$ to obtain final estimates of the number of segments, $\widehat\Ksrc$, and thereby of the segments themselves, $\widehat\Sroi(\widehat\Ksrc)$. \subsection{Estimating $\Sroi$ via Voronoi Tessellation} \label{sec:voronoi} Obtaining estimates of the segments, $\Sroi$, requires us to constrain the set of possible partitions. For any fixed $\Ksrc$, for example, we can make $\proflik$ arbitrarily large by including a segment that is small enough to contain exactly one photon and shrinking the segment's area toward zero (since $\Aroi{k}$ appears in the denominator of (\ref{eqn:prof_log_lik})). Similarly, any $\Sk_k$ with $\Nroi{k}=0$ can have arbitrary shape since it does not contribute to the profile likelihood. Since we cannot estimate the intensity function at a higher resolution than the data, we only consider candidate segments that include at least one photon. One way to constrain $\Sroi$ is to only consider candidate segments that consist of the Voronoi cells derived from the Voronoi tessellation of the data, or the union of several Voronoi cells. The Voronoi tessellation of the observed photons uniquely partitions $\fov$ into $n$ convex cells, denoted $\vorcell_i, ~i=1, \ldots, n$, such that cell $\vorcell_i$ contains exactly one photon, say $\xph_i$, and consists of all locations in $\fov$ closer to photon $\xph_i$ than to any other photon. 
These cells are called Voronoi cells, and $\xph_i$ is called the nucleus of $\vorcell_i$. Figure~\ref{fig:voronoi_illustration} gives an example of the Voronoi tessellation of 50 photons drawn from a normal distribution truncated to the unit square. The photon locations are plotted in the left panel and the Voronoi cells in the middle panel. (We discuss the graph in the right panel in Section~\ref{sec:gsrg}.) To avoid unclosed Voronoi cells near the border of the field of view, we restrict the tessellation to Voronoi cells whose vertices are all in $\fov$. Based on the Voronoi tessellation, \citet{Barr-10} introduced the Voronoi estimator $\widehat\alllam_i^{{\rm \vorcell}_i}(\xph)=1/\Avor{i}$ for any location $\xph \in \vorcell_i$. They show that under certain conditions, the Voronoi estimator is approximately unbiased for the Poisson intensity $\intx{}$, and its sampling distribution is approximately the inverse Gamma distribution\footnote{The probability density function of an inverse Gamma distribution is $\frac{b^{a}}{\Gamma(a)}x^{-a-1}\exp \left( -\frac{b}{x} \right)$, where $x>0$, $a$ and $b$ are the shape and scale parameters, respectively, and $\Gamma(\cdot)$ denotes the Gamma function. }. The algorithm that we propose to combine the Voronoi cells to form the segments (by approximately maximizing $\proflik$ for each fixed $\Ksrc$) is detailed in Section~\ref{sec:implement}. When $\Ksrc$ is fixed in advance, this algorithm can be used to estimate $\Sroi$. To fit $\Ksrc$ we use the method in Section~\ref{sec:BIC}. 
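A minimal sketch of the Voronoi estimator (assuming numpy and scipy are available; this is not the authors' implementation) computes each photon's cell area from the tessellation and inverts it, discarding unbounded border cells in the spirit of the restriction described above:

```python
import numpy as np
from scipy.spatial import Voronoi

# Voronoi intensity estimator: the local intensity at photon i is estimated
# as 1 / Area(V_i), the inverse area of its Voronoi cell.
rng = np.random.default_rng(1)
pts = rng.uniform(0.0, 1.0, size=(200, 2))   # toy photon sky coordinates

vor = Voronoi(pts)

def cell_area(i):
    """Shoelace area of photon i's Voronoi cell; NaN if the cell is unbounded."""
    region = vor.regions[vor.point_region[i]]
    if len(region) == 0 or -1 in region:
        return np.nan                        # open cell at the field border
    poly = vor.vertices[region]
    x, y = poly[:, 0], poly[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

areas = np.array([cell_area(i) for i in range(len(pts))])
lam_hat = 1.0 / areas                        # per-photon intensity estimates
```

For a roughly uniform field of $n$ photons on the unit square, the bounded-cell estimates cluster around $n$ photons per unit area, as expected for an approximately unbiased estimator.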
\subsection{Estimating $\Ksrc$ via the Bayesian Information Criterion} \label{sec:BIC} Unfortunately, the number of sources, $\Ksrc$, cannot be reasonably estimated by maximizing the profile likelihood, because (\ref{eqn:prof_log_lik}) increases with $\Ksrc$ and is thus maximized by $\Ksrc=n$, i.e., with the full set of Voronoi cells\footnote{Since $n$ is fixed, maximizing $\proflik$ is equivalent to maximizing the sum in (\ref{eqn:prof_log_lik}), which can be written as $\sum_{i=1}^n \log \hat\lambda(\xph_i)$ where $\hat\lambda(\xph_i)$ is the local optimizer, $\Nroi{} / \Aroi{}$, for the segment containing $\xph_i$. Increasing the number of segments allows for better fitting of local fluctuations and thus increases $\proflik$. Of course, with too many segments, better fitting of local fluctuations amounts to fitting noise, i.e., over-fitting the data.}. We avoid such over-fitting by adding a term to (\ref{eqn:prof_log_lik}) that suitably penalizes model complexity. Specifically, we use the so-called Bayesian Information Criterion (BIC), which has been shown to produce statistically consistent results for many model selection problems. For the current problem, the BIC is defined as \begin{equation} {\rm BIC}(\Ksrc)=-2\proflik(\Ksrc, \widehat\Sroi(\Ksrc) \mid \Data, \obsph)+\Ksrc\mpar\log\obsph \,, \label{eq:BICdefn} \end{equation} where $\mpar$ is the number of free/independent parameters per segment, thus $\mpar \Ksrc$ is the total number of free parameters in the model\footnote{Since the shape of the final segment is determined by the first $\Ksrc-1$ segments, a more precise formulation of the total number of parameters in the model is $ \mpar(\Ksrc-1)+1$, where the intensity parameter of the final segment is accounted for by the ``+1''. The difference between the penalty under this more precise formulation and the one used in (\ref{eq:BICdefn}) is $(1-\mpar)\log\obsph$, which does not depend on any of the unknown parameters and thus does not affect estimation.}.
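Given a segmentation, both the profile log-likelihood and the BIC of Equation~(\ref{eq:BICdefn}) reduce to simple functions of the per-segment counts $N_k$ and areas $A_k$. A minimal sketch (function names are ours; additive constants independent of the segmentation are dropped):

```python
import math

def profile_loglik(counts, areas):
    """Profile log-likelihood (up to additive constants) of a piecewise-
    constant Poisson intensity: sum_k N_k * log(N_k / A_k).
    Segments with zero counts contribute nothing (0 * log 0 := 0)."""
    return sum(nk * math.log(nk / ak)
               for nk, ak in zip(counts, areas) if nk > 0)

def bic(counts, areas, m_par):
    """BIC(K) = -2 * profile log-likelihood + K * m * log(n),
    with n the total number of photons and m free parameters per segment."""
    n = sum(counts)
    return -2.0 * profile_loglik(counts, areas) + len(counts) * m_par * math.log(n)
```

Refining a homogeneous field never decreases the profile log-likelihood, which is exactly why the penalty term is needed: BIC prefers the merged model unless the split captures a genuine contrast.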
The BIC estimate for $\Ksrc$ is given by \begin{equation} \widehat{\Ksrc} = {\operatorname{arg\, min}}~{\rm BIC}(\Ksrc) \,. \label{eq:BICval} \end{equation} For fixed $\Ksrc$, optimizing BIC is equivalent to optimizing $\proflik$; thus the resulting estimate of $\Sroi$ is equivalent to that described in Section~\ref{sec:voronoi}. Unfortunately, because we are using a non-parametric model, $\mpar$ is not well-defined. Following \citet{Aue-11}, we set $\mpar$ by approximating the model by a parametric one using an assumed specific shape for the segments. When the segments are close to ellipse-shaped, for example, $\mpar=6$ to account for the coordinates of the center, lengths of the two axes, orientation, and intensity. When the segments are close to circular, $\mpar$ is reduced to $4$. Another possibility is to simply set $\mpar = 1$ and the number of model parameters to the number of segments, which to some extent reflects the overall model complexity \citep{Magnussen-06}, but ignores the shapes of the segments. While these parametric approximations allow us to assign a reasonable value to $\mpar$, the model itself remains non-parametric. BIC is closely related to the ``fitness function'' used in the Bayesian block method \citep{Scargle-13}, where model complexity is penalized via a geometric prior on the number of sources, i.e., $p(\Ksrc) = P_0\gamma ^{\Ksrc}$, with $\gamma$ being a tuning parameter and $P_0$ a normalization constant. Setting $\gamma= {\obsph}^{-\mpar/2}$ makes the penalty implied by this prior equivalent to the penalty term in BIC.
We accomplish this via the dual graph of the Voronoi tessellation, known as the Delaunay triangulation. This graph's vertices are the nuclei of the Voronoi cells, i.e., the photons, and its edges connect pairs of adjacent Voronoi cells. The right panel of Figure~\ref{fig:voronoi_illustration} depicts the graph derived from the Voronoi cells in the middle panel. We assign vertex $\xph_i$ the value of the Voronoi estimator, denoted by $\widehat\alllam_i^{{\rm \vorcell}_i}$, i.e., an estimate of the intensity in Voronoi cell $\vorcell_i$. Using the graph constructed by the Delaunay triangulation, the problem of estimating the $\Sk_k$ can be naturally translated to the problem of graph segmentation, i.e., partitioning the graph into subgraphs such that the Voronoi cells therein form a single segment, $\Sk_k$. Thus, each subgraph/$\Sk_k$ is formed by a set of vertices/photons connected by a collection of edges in the full graph. Since we assume that the intensity function is piece-wise constant, all the vertices in each subgraph should share similar values of $\widehat\alllam_i^{{\rm \vorcell}_i}$. Unfortunately, finding $\Sroi$ to maximize (\ref{eqn:prof_log_lik}) for fixed $\Ksrc$ remains an intractable combinatorial optimization problem even when confined to combinations of Voronoi cells. A distinct advantage of representing the problem as graph segmentation is that we implicitly impose an additional constraint that each $\Sk_k$ is a connected subgraph. In this way, traditional image segmentation\footnote{Image segmentation is the process of separating an image into a number of regions such that each region is composed of connected pixels with similar characteristics, such as similar pixel values.} methods can be adapted and used to segment the graph.
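The Delaunay adjacency structure described above can be built directly from the photon coordinates; a sketch using \texttt{scipy.spatial.Delaunay} (the function name is ours):

```python
import numpy as np
from scipy.spatial import Delaunay

def delaunay_neighbors(points):
    """Adjacency lists of the Delaunay triangulation: vertex i is photon i,
    and an edge links photons whose Voronoi cells share a boundary."""
    tri = Delaunay(points)
    neighbors = {i: set() for i in range(len(points))}
    # every pair of vertices in a Delaunay simplex (triangle) is adjacent
    for simplex in tri.simplices:
        for a in simplex:
            for b in simplex:
                if a != b:
                    neighbors[a].add(b)
    return neighbors
```

The resulting dictionary is the full graph on which the subgraphs are grown; by construction the adjacency relation is symmetric.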
In particular, we propose the Seeded Region Growing on Graph (\gsrg) method, which is similar to the original Seeded Region Growing (SRG) method used for images except that the concept of ``neighbors'' is determined by the edges of the graph instead of neighboring pixels. The original SRG was proposed by \citet{Adams-94} and has been extended to several variants to deal with more complicated cases in \citet{Fan-14}. \gsrg\ starts by identifying, either manually or automatically, a set of initial seeds from the graph. Each seed can be a single vertex/photon or a seeding subgraph, i.e., a set of connected vertices/photons. For the moment, {we present a simplified version of \gsrg\ that requires} a perfect set of seeds, i.e., a set with exactly one seed in each $\hat \Sk_k$. Recall that $\Ksrc$ is fixed, thus initially we assume $\Ksrc$ seeds. The details of seed specification in more realistic settings are described in Section~\ref{sec:seed_spec} and {the full version of \gsrg\ (which requires an extra step to merge segments and estimate $\Ksrc$) } is detailed in Section~\ref{sec:subgraph_merge}. \gsrg\ grows the seeds into subgraphs by successively adding neighboring vertices to them. More specifically, at each iteration, the method selects a pair that consists of a growing subgraph, $S$, and one of its unassigned neighboring vertices, $i$, such that \begin{equation}\label{eqn: min_criterion} \delta(i, S)=\left \lvert \log \widehat\alllam_i^{{\rm \vorcell}_i} - \log\{\Nroi{}/\Aroi{}\} \right \rvert \end{equation} is minimized. This criterion compares the logarithm of the estimated intensities of the subgraph and the neighboring vertex because $\proflik$, which we aim to optimize, combines the segment-specific intensity estimates on the log scale. The vertex in the pair with the smallest difference is added to the corresponding subgraph. {This process} finishes when all the vertices of the full graph are assigned to exactly one subgraph.
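The simplified growing loop just described can be sketched as follows. This version rescans all unassigned vertices at each step for clarity (a naive $O(n^2)$ pass rather than the bookkeeping used in practice); the function name and data layout are our assumptions.

```python
import math

def grow_seeds(neighbors, cell_areas, seeds):
    """Simplified gSRG growth. `seeds` is a list of photon-index lists, one
    per seed. At each step, assign the unassigned neighboring vertex whose
    log Voronoi intensity is closest to the log intensity of an adjacent
    growing subgraph (the criterion delta(i, S) in the text)."""
    label = {}
    stats = []  # per-subgraph running (count, area)
    for k, seed in enumerate(seeds):
        stats.append([len(seed), sum(cell_areas[i] for i in seed)])
        for i in seed:
            label[i] = k
    n = len(cell_areas)
    while len(label) < n:
        best = None
        for i in range(n):
            if i in label:
                continue
            for j in neighbors[i]:
                if j not in label:
                    continue
                k = label[j]
                cnt, area = stats[k]
                delta = abs(math.log(1.0 / cell_areas[i]) - math.log(cnt / area))
                if best is None or delta < best[0]:
                    best = (delta, i, k)
        if best is None:  # disconnected leftover vertex: attach to subgraph 0
            best = (0.0, next(i for i in range(n) if i not in label), 0)
        _, i, k = best
        label[i] = k
        stats[k][0] += 1
        stats[k][1] += cell_areas[i]
    return label
```

On a toy chain of six cells, three small (bright) and three large (faint), with one seed at each end, the loop splits the chain at the intensity change, as intended.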
The Voronoi cells contained in the subgraphs give the final segmentation of $\fov$, i.e., $\widehat\Sroi(\Ksrc)$, for pre-specified $\Ksrc$. In practice, we save the index, $i$, of the neighboring Voronoi cell that minimizes $\delta(i, S)$ for each growing subgraph, $\Sk_k$, at each iteration. This reduces the time complexity of the method to linear in the number of photons. \subsection{Seed Specification} \label{sec:seed_spec} Since \gsrg\ begins by building out regions starting from a specified set of seeds, the number and location of the seeds are important considerations. As discussed in Section~\ref{sec:gsrg}, we would ideally have exactly one seed within each $\hat\Sk_k$. Unfortunately, this is not feasible in practice. A brute-force solution is to over-specify the seed set to the extreme, by setting every photon location to be a seed, and devising an algorithm to merge the seeds into segments. \updatebf{But merging such a large seed set would be challenging in terms of both computational speed and statistical accuracy (see discussion in Section~\ref{sec:performance_antennae}). If the field being analyzed is known to have a large number of point sources, or if the scientific question requires focusing on point sources, then running a source detection algorithm first to find all such sources and specifying all of them as seeds will be helpful. Here we describe three generic strategies to specify smaller initial seed sets.} These strategies still over-specify the set in that they use a larger number of seeds than the expected number of segments (but less so than setting each photon to be a seed). Thus, after growing the seeds into subgraphs as described in Section~\ref{sec:gsrg}, we require a method to merge the resulting subgraphs into segments; we describe our merging algorithm in Section~\ref{sec:subgraph_merge}. \paragraph{Regular grid:} This method starts by overlaying a regular grid of $\numgrid$ points onto the field of view, $\fov$.
For each grid point, we specify a seeding subgraph composed of the $\numsubgraph$ photons closest to the grid point (in terms of the Euclidean distance). We typically set $\numgrid$ and $\numsubgraph$ so that their product is much smaller than $n$ to enable the seeded regions to grow. Conflicts in the allocation of photons to seeding subgraphs (e.g., when a single photon is among the $\numsubgraph$ closest to two or more grid points) are resolved by the order of assignment. The number of photons, $\numsubgraph$, assigned to each seeding subgraph can be increased to stabilize the initial estimates of the growing subgraph intensities, especially when the contrast (i.e., the ratio between the intensities of an extended source and the background) is low. In practice, there is no universally best choice for $\numgrid$ and $\numsubgraph$, as their optimal values depend on factors including the number of observed photons $\obsph$ and the complexity and number of true astronomical sources. To ensure a sufficient number of photons for the seeding subgraphs, we require $$\numsubgraph{\leq}\frac{\obsph}{\numgrid}\,.$$ Since it is possible for a seed to fall on the boundary between two distinguishable segments and adversely affect subsequent processing, we propose an additional {\sl seed-rejection} step. Specifically, we compare the range of the Voronoi areas $\Avor{k}, ~k=1,\ldots,{\numsubgraph}$, of the photons in a seeding subgraph of size $\numsubgraph$ with the expected empirical $2\sigma$ confidence interval for a homogeneous distribution of photons \citep[][Chapter 4.2]{Moller-94}, i.e., \begin{equation} \frac{1}{\widehat\alllam_s} \pm 2\times\frac{0.53}{\widehat\alllam_s} \end{equation} where \begin{equation} \frac{1}{\widehat\alllam_s} = \frac{1}{\numsubgraph}\sum_{k=1}^{\numsubgraph} \Avor{k} \end{equation} is the average of Voronoi areas for the photons in the seeding subgraph.
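The regular-grid seeding, together with the $2\sigma$ homogeneity band just defined (a seeding subgraph is kept only if all of its Voronoi areas lie within the band), can be sketched as follows. A unit-square field is assumed and the function name is ours.

```python
def grid_seeds(points, areas, g_side, m_seed):
    """Regular-grid seeding on the unit square. Overlay a g_side x g_side
    grid, take the m_seed unassigned photons nearest each grid point, and
    reject the seed if any Voronoi area falls outside the empirical 2-sigma
    band mean_area * (1 +/- 2*0.53) for a homogeneous photon field."""
    seeds, assigned = [], set()
    for gx in range(g_side):
        for gy in range(g_side):
            cx, cy = (gx + 0.5) / g_side, (gy + 0.5) / g_side
            order = sorted(
                (i for i in range(len(points)) if i not in assigned),
                key=lambda i: (points[i][0] - cx) ** 2 + (points[i][1] - cy) ** 2)
            seed = order[:m_seed]
            if len(seed) < m_seed:
                continue
            mean_a = sum(areas[i] for i in seed) / m_seed
            lo, hi = mean_a * (1 - 2 * 0.53), mean_a * (1 + 2 * 0.53)
            if all(lo <= areas[i] <= hi for i in seed):  # seed-rejection step
                seeds.append(seed)
                assigned.update(seed)
    return seeds
```

Note that with the $\pm 2\times 0.53$ band the lower limit is slightly negative, so in practice the test rejects seeds containing a Voronoi cell much *larger* than the subgraph average, i.e., seeds straddling a bright/faint boundary.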
Thus, if the actual range of Voronoi areas $\Avor{k}$ exceeds the expected empirical confidence interval, the seeding subgraph is rejected. \paragraph{Grid supplemented by local maxima:} If the regular grid used to generate the seeds is too sparse, some image structures may not be captured in the segmentation. For example, if there is not a grid point sufficiently near a point or extended source, the source may be merged into the background or another source. One remedy is to include additional seeds near the likely locations of sources. Sources induce an elevated intensity over small spatial scales. Thus, locations of high photon density are likely associated with sources. We propose to identify vertices that are local maxima of the graph constructed by the Delaunay triangulation, in the sense that the vertex value (i.e., $\widehat{\lambda}_i^{{\rm \vorcell }_i}$) is greater than or equal to that of its closest $k$ vertices (including itself), where closeness is measured by the Euclidean distance. For each local maximum we find in this way, we include a seeding subgraph composed of its closest $\numsubgraph$ vertices (including itself). \paragraph{Voronoi-area stratified sampling:} More complex schemes, designed to locate seeds over a broader range of surface brightness, can also be devised. Methods such as Otsu's thresholding \citep{Otsu-79} can also be used to specify seeds or seeding subgraphs for point-like or localized extended sources. As we discuss in Section~\ref{sec:performance_antennae}, the grid supplemented by local maxima is adequate to identify structures that exist at a large variety of scales in astronomical data. Here, as an example case, we describe a third method, which selects seeds via stratified sampling of the photons, with strata determined by the areas of the Voronoi cells, $\Avor{i}$. Specifically, the photons are divided into $\strata$ strata bounded by equally spaced quantiles of the distribution of $\Avor{i}$.
The number of strata depends on the sample size, but we typically use $\strata\approx{10-20}$. Clumps of near-neighbor photons within each stratum are put together (see Appendix~\ref{sec:percolate}) into a set of labeled groups such that spatially nearby photons within a given stratum are all assigned the same label. If a given label is assigned to fewer than $\vorthr$ photons (typically $\vorthr=5$), then the photons with this label are discarded for the purpose of seed specification; otherwise the central photon\footnote{Consider the set, $L$, of photons with a given label. For each photon $l\in L$, we calculate $d_l = \sqrt{\Avor{l}} + \sum_{k{\in}L}~d_{lk}$ where $d_{lk}$ is the Euclidean distance between photons $l$ and $k$. The photon in $L$ with the smallest $d_l$ is flagged as the central photon among the photons in $L$. This measure of centrality is better than computing a centroid as it ensures that the seed lies inside the labeled region even when the region shape is complex, and that the seed is unambiguously assigned to one of the photons.} amongst those with each label is set as a seed. Subgraphs are then constructed for each retained seed photon in the same manner as described in Section~\ref{sec:gsrg}.
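Two of the seed-supplementing rules above are simple enough to sketch directly: the local-maximum test (vertex value not exceeded among its $k$ nearest photons) and the central-photon criterion $d_l = \sqrt{A_l} + \sum_{k \in L} d_{lk}$ from the footnote. Function names are ours; brute-force nearest-neighbor search is used for brevity.

```python
import math

def local_maxima(points, intensities, k):
    """Indices of photons whose Voronoi intensity is >= that of each of
    their k nearest photons (including themselves), by Euclidean distance."""
    n = len(points)
    maxima = []
    for i in range(n):
        order = sorted(range(n),
                       key=lambda j: (points[j][0] - points[i][0]) ** 2
                                   + (points[j][1] - points[i][1]) ** 2)
        if all(intensities[i] >= intensities[j] for j in order[:k]):
            maxima.append(i)
    return maxima

def central_photon(group, points, areas):
    """Central photon of a labeled clump: minimizes
    d_l = sqrt(A_l) + sum of Euclidean distances to the clump members."""
    def d(l):
        return math.sqrt(areas[l]) + sum(
            math.dist(points[l], points[m]) for m in group)
    return min(group, key=d)
```

With $k=1$ every photon is trivially a local maximum, so $k$ controls the spatial scale over which peaks are isolated, matching the role of the neighborhood size in the text.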
\begin{algorithm*}[t]\label{GSRG_algo} \DontPrintSemicolon \KwData{Coordinates of observed photons $\xph_i=(x_{1i}, x_{2i}), i=1, \ldots, n$ in field of view $\fov$.} \KwResult{Piece-wise constant estimate of intensity function with a segmentation of $\fov$ into regions of constant intensity, $\widehat\Sroi=(\hat\Sk_0, \ldots, \hat\Sk_{\widehat\Ksrc})$.} \Begin{ \nl Use Voronoi tessellation to obtain a graph whose vertices are the observed photons with the Voronoi estimators $\widehat{\lambda}_i^{{\rm \vorcell}_i}$ as their values.\; \nl Using one of the methods in Section~\ref{sec:seed_spec}, specify the initial seeds for subgraph growing.\; \nl Grow seeds into subgraphs that over-segment the entire graph:\; \While{there are unassigned vertices}{ Select a pair of a growing subgraph $S$ and one of its neighboring vertices $i$ such that $ \delta(i, S)=\left \lvert \log \widehat\alllam_i^{{\rm \vorcell}_i} - \log\{\Nroi{}/\Aroi{}\} \right \rvert$ is minimized.\; Add the vertex in the pair with the smallest difference to the corresponding subgraph.\;} \nl Greedily merge subgraphs by minimizing BIC at each merger to obtain a nested sequence of segmentations.\; \nl Finally, set $\widehat\Ksrc$ and $\widehat\Sroi$ to the values of the nesting level with the smallest BIC. $\widehat\Sroi$ is the final segmentation of $\fov$.\; } \caption{Seeded Region Growing on Graph (\gsrg)} \end{algorithm*} \subsection{Subgraph Merging} \label{sec:subgraph_merge} Using one of the seed sets of Section~\ref{sec:seed_spec} to grow subgraphs as described in Section~\ref{sec:gsrg} leads to an over-segmented graph since the number of seeds is invariably more than the predetermined $\Ksrc$ or the $\widehat\Ksrc$ that optimizes BIC. To merge the subgraphs into segments, we propose a subgraph merging method that aims to minimize BIC. Similar ideas were used by \citet{Lee-00} and \citet{Peng-11} in image segmentation.
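One step of this BIC-driven greedy merge can be sketched as follows. Since merging any pair of subgraphs reduces the segment count by one, the BIC penalty drops by the same $\mpar\log n$ for every candidate merger, so it suffices to compare the profile log-likelihood losses; the function name and data layout are our assumptions.

```python
import math

def best_merge(counts, areas, neighbor_pairs):
    """One greedy merge step: among neighboring subgraph pairs (i, j), pick
    the merger giving the lowest BIC, i.e., the smallest loss in profile
    log-likelihood (the penalty change is identical for all pairs)."""
    def pll(nk, ak):
        return nk * math.log(nk / ak) if nk > 0 else 0.0
    best_pair, best_loss = None, None
    for i, j in neighbor_pairs:
        loss = (pll(counts[i], areas[i]) + pll(counts[j], areas[j])
                - pll(counts[i] + counts[j], areas[i] + areas[j]))
        if best_loss is None or loss < best_loss:
            best_pair, best_loss = (i, j), loss
    return best_pair
```

As expected, two subgraphs with nearly equal intensities merge before a bright subgraph merges with a faint one.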
Specifically, the subgraph merging method starts by computing the BIC for the over-segmented graph and then iteratively selects two neighboring subgraphs, merges them, and recomputes BIC. The two merged subgraphs are selected so that their merger gives the largest decrease (or the smallest increase) of BIC among all possible merges (of neighboring subgraphs). In this sense, this is a greedy algorithm. We continue merging subgraphs until they have all been merged into the entire graph, except when $\Ksrc$ is fixed, in which case we stop when $\Ksrc$ segments remain. In this way, we obtain a sequence of nested segmentations, $\{\widehat\Sroi(K), K=1,\ldots, n\}$, each with a BIC value. Finally, we set $\widehat\Ksrc$ and $\widehat\Sroi$ to the values of the nesting level with the smallest BIC. At each iteration, we use an updating formula to speed the computation of BIC. Consider the graph segmentation associated with $\Ksrc$ and the graph segmentation after merging two of the subgraphs of $\Sroi(\Ksrc)$ and label the merged subgraphs $i$ and $j$, with $1\leq i <j \leq \Ksrc$. This merger decreases BIC by \begin{eqnarray} \Delta \mbox{BIC}_{\Ksrc, i, j} &=& \mbox{BIC}(\Ksrc)-\mbox{BIC}({\Ksrc-1}) \nonumber \\ &=& 2 \ \Nroi{i} \log \frac{\Nroi{i{\cup}j}\Aroi{i}}{\Aroi{i{\cup}j}\Nroi{i}} \nonumber\\ & & + 2 \ \Nroi{j}\log \frac{\Nroi{i{\cup}j}\Aroi{j}}{\Aroi{i{\cup}j}\Nroi{j}} \nonumber\\ & & + \mpar\log n \,, \end{eqnarray} where $i{\cup}j$ denotes the union of photons that belong to subgraphs $i$ and $j$. The complete procedure for \gsrg\ is summarized in Algorithm \ref{GSRG_algo}. \section{Simulation studies} \label{sec:simulation} Our simulation study is conducted assuming a hypothetical instrument that produces fields of view, $\fov$, with two-dimensional coordinates on the unit square.
The simulations are designed to assess the performance of \gsrg\ when applied to fields of view of point-like sources embedded in extended sources of different shapes, while varying the exposure time (or equivalently, the overall counts in the field) and the contrast between the different components. In all our simulation settings, ``point-like sources'' are circular sources of \updatebf{radius 0.025 and extended sources are of area $\approx$0.2 relative to the $\fov$.} We consider three ``true images'': \updatebf{ (a) four point-like sources embedded within a circular extended source of radius 0.25 (covering an area $0.196$ of the unit square), (b) three point-like sources embedded within a polygonal zig-zag shaped extended source comprising five squares of size $0.2{\times}0.2$ (total area of $0.2$), and (c) three point-like sources embedded within an arc-shaped extended source (a half-annular shape with inner radius 0.2 and outer radius 0.4, and total area $0.189$).} We consider these three settings because point-like sources embedded within a complex extended source are commonly observed in astrophysical fields of view, as illustrated in Section~\ref{sec:application} \updatebf{and the extended sources mimic typical astronomical shapes. Furthermore, the variety of shapes and the contrasts considered are a stringent test of the algorithm.} Letting $\parasz$ denote the exposure time (in arbitrary units) and $\parasnr$ denote the contrast \updatebf{level between the different components,} for each simulated $\fov$ we generated $\parasz \parasnr$ counts for each point-like source, $10\parasz \parasnr$ counts for the extended source, and $1000\parasz$ counts in expectation for the background, with the photons corresponding to each component distributed uniformly over the area allocated to it.
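The count model just described is easy to reproduce. Below is a sketch for setting (a); the four point-source centers are chosen arbitrarily by us (the text does not give their positions), and \texttt{numpy} is used for the Poisson background draw.

```python
import numpy as np

def simulate_field(s, c, rng):
    """Simulate one FOV for setting (a): four point-like sources (radius
    0.025, centers assumed) inside a circular extended source (radius 0.25,
    centered at (0.5, 0.5)), on a uniform background over the unit square.
    Counts: s*c per point source, 10*s*c extended, Poisson(1000*s) background."""
    def disc(cx, cy, r, n):
        pts = []
        while len(pts) < n:  # rejection-sample uniform points in the disc
            x, y = rng.uniform(cx - r, cx + r), rng.uniform(cy - r, cy + r)
            if (x - cx) ** 2 + (y - cy) ** 2 <= r * r:
                pts.append((x, y))
        return pts
    photons = []
    for cx, cy in [(0.4, 0.4), (0.6, 0.4), (0.4, 0.6), (0.6, 0.6)]:  # assumed
        photons += disc(cx, cy, 0.025, round(s * c))
    photons += disc(0.5, 0.5, 0.25, round(10 * s * c))
    n_bkg = rng.poisson(1000 * s)
    photons += [(rng.uniform(0, 1), rng.uniform(0, 1)) for _ in range(n_bkg)]
    return photons
```

With $\parasz=1$ and $\parasnr=30$ this yields $4\times30 + 300$ source photons plus a Poisson(1000) background, matching the counts quoted for Figure~\ref{fig:illustration_all_scenario}.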
In Figure~\ref{fig:illustration_all_scenario}, we have adopted $\parasz=1$ and $\parasnr=30$, so \updatebf{in all cases, the point-like sources have 30~photons, the extended shapes have 300~photons, and the background has $\sim$Poisson(1000)~photons, all distributed uniformly over their allocated areas, with $\approx$200 background counts under the area of the extended source. The contrast in surface brightness between the extended source and the background is thus $\approx$1.5$\times$, which is sufficiently large on the scale of the extended sources that the presence of the extended sources is clearly recognizable. However, it is clear from inspection of Figure~\ref{fig:illustration_all_scenario} that local fluctuations can be sufficiently large as to make estimating the boundary of the extended sources challenging. The photons randomly generated for each of the settings are shown for one case in the left column of Figure~\ref{fig:illustration_all_scenario}. The middle column shows the shapes of the extended sources overlaid on the corresponding Delaunay triangulation of the photons, as well as the seeds chosen for that case. The right column shows the segmentation, with point-like sources colored blue and the extended source colored red, for the simulation in the left column, and, superposed in grey lines, the results of segmentations from 10 additional simulations. The superposition of the segment boundary lines over the expected outlines of both the point-like and the extended sources shows that while fluctuations are present in individual simulations, on average the boundaries are picked out well. A detailed examination of the locations of the boundaries and their uncertainties requires modeling the boundaries, and we defer discussion to follow-up work (J.\ Wang, et al., in preparation). Here, we demonstrate that the components are well recovered in all cases.
We show the distribution of the segment brightnesses found for all the simulations for all three cases in Figure~\ref{fig:brightness_distrib}: the components are clearly separated, with uncertainties of $\approx$10-15\% on the expected brightness in each component. We find that the brightness of the point-like component suffers from a bias because of the tendency of the segment areas to preferentially encroach on the much larger area of the surrounding extended source, thus causing a downward shift in the estimated brightness.} In our simulation design, in addition to varying the three ``true images'', we also vary the exposure time with $\parasz$ taking values 0.5, 1 and 2, and the contrast with $\parasnr$ taking values 10, 20 and 30. We simulate 500 fields of view under each of the 27 resulting simulation settings\footnote{The number of source counts was held fixed in all simulations, while the number of background counts was generated as a Poisson with mean $1000\parasz$ in order to explore the effect of background fluctuations. Thus, the total counts in a given dataset is $\parasz \parasnr + 10 \parasz \parasnr + {\rm Poisson}(1000\parasz)$.}. Each of the 13,500 simulated fields of view is analyzed with \gsrg, with initial seeds specified following the ``grid supplemented by local maxima'' method of Section~\ref{sec:seed_spec}. The regular grid used for seed specification is 5-by-5, with seed size {$\numsubgraph=5$} and a neighborhood size of $k=50$ for finding local maxima. Since the sources we are considering are simple, we set the BIC parameter to be $\mpar=4$; when more complicated shapes are expected, larger values of $\mpar$ should be used. The second and third columns of Figure~\ref{fig:illustration_all_scenario} show the initial seed specifications and segmentation results for the first of the 500 fields of view generated with $\parasz=1$ and $\parasnr=30$. All the point-like sources are clearly identified. 
The fitted boundaries of the extended sources are generally quite good, except for some mild leakage for the arc-shaped extended source. In general, we expect sources with longer perimeters per unit area\footnote{A standard measure of shape irregularity is the ``perimeter index'' of \citet{angel2010ten} which is defined to be the perimeter of a circle of area equal to that of the shape divided by the actual perimeter of the shape.} to be more challenging. This is because photons nearer the boundary of a segment are more likely to be misclassified than are those nearer the middle. Thus, more irregularly shaped sources, such as the arc-shaped source in this simulation, are more challenging, including for \gsrg. Furthermore, several of the initial seeds placed by the regular grid happen to fall near the boundary of the arc-shaped source, which can also jeopardize the performance of seed-based methods. We use a clustering verification metric, specifically the \emph{adjusted Rand index} \citep[ARI;][]{Hubert-85}, to assess the quality of the \gsrg\ segmentations. The Rand Index \citep[RI;][]{rand:71} quantifies how well a given segmentation matches the ground truth segmentation. Specifically, each pair of photons is classified as either (a) being in the same fitted segment {\it and} in the same ground truth segment, (b) being in different fitted segments {\it and} in different ground truth segments, or (c) not being in class (a) or (b). (For the ground truth, the segments are the background, extended source, and each point-like source.) The Rand Index is defined to be the number of photon pairs in class (a) or (b), relative to the total number of photon pairs. Thus, a perfect match to the ground truth results in RI=$1$. The {\sl Adjusted RI} corrects the RI such that accidental overlaps of segments due to chance are accounted for, yielding values in the range $-1<\hbox{ARI}\leq+1$.
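The pair-counting definitions above lead directly to the standard contingency-table formula for the ARI; a self-contained sketch (function name ours):

```python
from collections import Counter
from math import comb

def adjusted_rand_index(truth, pred):
    """Adjusted Rand Index between two labelings of the same photons,
    via the usual contingency-table form of the Hubert-Arabie correction."""
    n = len(truth)
    joint = Counter(zip(truth, pred))
    sum_ij = sum(comb(v, 2) for v in joint.values())
    sum_a = sum(comb(v, 2) for v in Counter(truth).values())
    sum_b = sum(comb(v, 2) for v in Counter(pred).values())
    expected = sum_a * sum_b / comb(n, 2)
    max_index = (sum_a + sum_b) / 2
    if max_index == expected:  # degenerate case (e.g., both labelings trivial)
        return 1.0
    return (sum_ij - expected) / (max_index - expected)
```

A relabeled-but-identical segmentation scores exactly 1, while label assignments no better than chance score near (or below) 0.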
Figure~\ref{fig:metrics_all_scenario} summarizes the ARI and the fitted value for the number of segments, $\widehat\Ksrc$, for the $500$ replicates under each of the 27 simulation settings. For each of the three true images, as expected, the \gsrg\ segmentation improves as either the exposure time or the contrast between the brightness of the components increases. This is seen in the progression from the top left panel to the bottom right panel of the right column of plots in Figure~\ref{fig:metrics_all_scenario}: the method fails to identify the embedded point sources when there are $\approx$5 counts in each source, but correctly identifies all components in $\geq$70\% of the cases (80\% for the circular and polygonal cases) when there are 60 counts in each point source. Similarly, the ARI increases to close to one (i.e., perfect agreement between ground truth and \gsrg\ segmentation, where the fitted number of segments equals the true number of sources) as $\parasz$ and $\parasnr$ increase. \section{Application to Antennae Galaxies} \label{sec:application} The \chandra\ observations of the Antennae galaxies provide a good test case for application of \gsrg. The X-ray data (see Figure~\ref{fig:antennae}, top left panel) show complex structures. Specifically, the data reveal several point sources and extended regions, with several clumps of diffuse emission of different extent and surface brightness, along with a population of unresolved point-like sources superposed. Some of the point sources lie within the extended sources (e.g., in the extended region at the bottom of the image) and some of the extended sources are entangled with each other. As a conservative scenario, we used the first \chandra\ observation of the Antennae galaxies obtained on December 1st 1999 \citep[OBSID 315;][]{Fabbiano2001}. The observation was performed with the ACIS-S detector for a total exposure of 72\,ksec. We process and screen the data \updatebf{(e.g.
initial calibrations, removal of strong background flares, selection of good grades) as in \citeauthor{Zezas2002_Ant_proc} (2022; {\sl CIAO}~v3.2, CALDB~v2.11).} Again as a conservative scenario, we use the full dataset without any screening for events of very low or high energies, which are dominated by background. The final dataset we use consists of $\approx$50,700 events within a $\sim{3.45}\arcmin\times{3.45}\arcmin$ region around the galaxy (screening for events in the generally used 0.5-8.0~keV band would result in a reduction of $\sim43\%$ in the total number of counts). Figure~\ref{fig:antennae} shows different depictions of these data, with the coordinates scaled linearly to the range $[0,1]$, as is assumed by our implementation of \gsrg, and processed to show the resulting Voronoi tessellation. We apply \gsrg\ to these data in order to obtain statistically meaningful non-parametric segmentations of the different clumps of diffuse emission, as well as to separate diffuse and point-like emission sources. We apply the Voronoi tessellation to the photons and construct the graph of Delaunay triangulation (see top right panel of Figure~\ref{fig:antennae}). We specify the initial seeds for \gsrg\ via a regular grid supplemented by local maxima (see Section~\ref{sec:seed_spec}; shown in bottom left panel of Figure~\ref{fig:antennae}). We start with a regular 9$\times$9 grid (i.e., $\numgrid=81$), the initial estimates of which are stabilized by assigning the $\numsubgraph=20$ nearest photons to each seed; these cover the large scale variations in the data. Local maxima are determined over a neighborhood size of $k=100$. The 419 seeds that result from this process are sufficiently numerous to ensure that there is generally at least one seed in each point-like or extended source and in the background.
Since we expect the segments of the extended sources to be more irregularly shaped in the real data than in the simulation, we choose a larger value of the BIC parameter, $\mpar=6$ (see Equation~\ref{eq:BICdefn}); this corresponds to assuming that each segment has the complexity of an ellipse. The results of \gsrg\ are shown in the bottom right panel of Figure~\ref{fig:antennae} (the regions outlined in black are discussed in more detail in Section~\ref{sec:performance_antennae}), showing the boundaries of the fitted segments as thin red lines around the black dots depicting the photons. \gsrg\ correctly segments areas with similar surface brightness such that photons that correspond to these diffuse components are grouped together. The photons that belong to each of these segments can be trivially collected together for further analysis, depending on the scientific question being explored. For instance, in Figure~\ref{fig:antennae_hr}, we show segment-wise maps of the fractional hardness ratios HR$_{SM}=(S-M)/(S+M)$ and HR$_{MH}=(M-H)/(M+H)$, where $S, M, H$ are counts in PI channels [35:61] (${\approx}$0.5:0.9~keV), [62:82] (${\approx}$0.9:1.2~keV), and [83:135] (${\approx}$1.2:2~keV), respectively. Notice that the maps clearly demonstrate that the diffuse emission in the Antennae generally has softer spectra than the point sources. Maps such as these can be used to identify the extent of dust lanes in the Antennae system; e.g., the segments at $\sim$(0.4,0.22), $\sim$(0.3,0.25), and $\sim$(0.6,0.75), which show harder spectra than the surrounding segments, a signature of increased absorption \citep[cf.,][]{Zezas2006}. Furthermore, notice that the southern region (around Region~3 in the bottom left panel of Figure~\ref{fig:antennae}) is surrounded by a halo of relatively soft X-ray emission, in agreement with the spectral analysis of \cite{Baldi2006} who find emission from soft $\sim$0.6~keV thermal emitting gas.
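Once per-segment counts are collected in the three bands, the fractional hardness ratios above are a one-line computation per segment; a minimal sketch (function name ours):

```python
def hardness_ratios(counts_s, counts_m, counts_h):
    """Segment-wise fractional hardness ratios, as defined in the text:
    HR_SM = (S - M) / (S + M) and HR_MH = (M - H) / (M + H)."""
    hr_sm = [(s - m) / (s + m) for s, m in zip(counts_s, counts_m)]
    hr_mh = [(m - h) / (m + h) for m, h in zip(counts_m, counts_h)]
    return hr_sm, hr_mh
```

Both ratios lie in $[-1, 1]$, with positive HR$_{SM}$ indicating a segment dominated by soft-band counts.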
\section{Discussion}\label{sec:discuss} \subsection{Performance on the Antennae data }\label{sec:performance_antennae} Here we discuss the quality of the \gsrg\ segmentation of the Antennae in greater detail. To begin with, we note that \gsrg\ successfully identifies a number of point-like sources, characterized by the presence of a large number of photons within a small space. Several of these point-like sources are superposed on extended diffuse emission and surrounded by complex structures. Furthermore, unlike what is usually the case with methods that use piece-wise constant models, the point-like sources are generally defined by single segments rather than by several concentric rings that approximate the typical PSF profile, in which the intensity rises from the wings to a peak at the core. Such cases are not entirely absent, however; see {Figure~\ref{fig:segment_result_mag_B}, specifically} the sources at $\sim$(0.23,0.57), $\sim$(0.43,0.54) and $\sim$(0.31,0.46) in Region~1, $\sim$(0.67,0.4) in Region~2, and $\sim$(0.49,0.12) and $\sim$(0.37,0.07) in Region~3. Further note that the segment boundaries are not smooth because each boundary follows the edges of the outermost Voronoi cells. The photons that comprise the boundary are also subject to stochasticity, due both to PSF-induced statistical variations in photon arrival locations and to the greedy merging process. Visual inspection of the results suggests that at the lowest surface brightness levels, fluctuations in the counts could result in oversegmentation of what is usually considered the background (e.g., the two large extended regions along the left side of the bottom edge of the $\fov$). Nevertheless, the expanded views of the inset regions in Figure~\ref{fig:segment_result_mag_B} show that the segmentation correctly separates diffuse emission structures at different spatial scales and surface brightness levels.
In particular, transitions in the spatial density of photons across the boundaries are clearly discernible by eye, such as those between segments B$\leftrightarrow$C, B$\leftrightarrow$D, D$\leftrightarrow$E, D$\leftrightarrow$F, E$\leftrightarrow$F in Region~1; between segment B and segments A, C, D, F, G, I, J in Region~2; and between segment A and segments B, C, D, G, and H, as well as C$\leftrightarrow$F and C$\leftrightarrow$D, in Region~3. Some transitions are too subtle to be visually recognizable (e.g., A$\leftrightarrow$B in Region~1, B$\leftrightarrow$E and B$\leftrightarrow$H in Region~2, and D$\leftrightarrow$E in Region~3) but are nonetheless required by the computed contrasts in counts per unit area. Conversely, the brightness transitions across B$\leftrightarrow$C$\leftrightarrow$D in Region~1, B$\leftrightarrow$C$\leftrightarrow$K and B$\leftrightarrow$I$\leftrightarrow$L in Region~2, and A$\leftrightarrow$B$\leftrightarrow$C, F$\leftrightarrow$C$\leftrightarrow$D, and D$\leftrightarrow$A$\leftrightarrow$G,H are apt demonstrations of the capability of \gsrg\ to perform at the level of human visual acuity. Parametric modeling to capture the spatial variations in such structures would be much more difficult than the segmentations achieved here. An important factor in obtaining a reliable segmentation is the initial seed specification. It is worthwhile establishing that the scheme we {propose} generates a useful segmentation and does not miss features. For this, we compare the \gsrg\ method against a brute-force {segmentation} where every photon in the dataset is taken to be a seed, and the corresponding Voronoi cells are merged {using the BIC criterion as described in Section~\ref{sec:subgraph_merge}}. This brute-force scheme is similar to Scargle's (\citeyear{Scargle-02}) method (but using the BIC criterion instead of Bayes Factors) in that it eschews the Seeded Region Growing {on Graph} step developed and described in Section~\ref{sec:gsrg}.
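The merge decisions in both schemes rest on the change in BIC when two adjacent segments are combined. As a hedged illustration (a paraphrase, not the exact form of Equation~\ref{eq:BICdefn}), assume each piecewise-constant Poisson segment with $n$ photons over area $A$ contributes log-likelihood $n\log(n/A) - n$ and costs $\mpar$ parameters; merging then trades a drop in likelihood against a saving of $\mpar\log N$ in penalty:

```python
import math


def seg_loglike(n, area):
    """Poisson log-likelihood of a constant-intensity segment with n photons
    over area A, up to terms independent of the segmentation: n log(n/A) - n."""
    return n * math.log(n / area) - n if n > 0 else 0.0


def delta_bic(n1, a1, n2, a2, n_total, mpar=6):
    """BIC improvement from merging two adjacent segments (positive favors
    the merge). mpar is the per-segment complexity; 6 ~ an ellipse."""
    merged = seg_loglike(n1 + n2, a1 + a2)
    separate = seg_loglike(n1, a1) + seg_loglike(n2, a2)
    # Merging removes one segment, i.e. mpar parameters, from the model.
    return 2.0 * (merged - separate) + mpar * math.log(n_total)
```

Two equal-density segments always favor merging (the likelihood is unchanged while the penalty shrinks), whereas a strong density contrast keeps them separate; the greedy algorithm repeatedly applies the most favorable such merge.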
In Figure~\ref{fig:segment_result}, we compare the \gsrg-based segmentation (left panel) against the brute-force segmentation (right panel). Although at first glance the two segmentations look similar, a closer inspection reveals crucial differences. While the quality of the identification of point-like sources does not differ significantly, there are significant differences in the diffuse emission regions that strongly favor the \gsrg\ segmentation. Notice that segment C of Region~1 from \gsrg\ (top panel of Figure~\ref{fig:segment_result_mag_B}) is missing in the brute-force segmentation, and is effectively subsumed into segment D, which in turn also subsumes segment B. These changes are prima facie unsupported by the visible variations in the surface density of photons. Similarly, we see that segment E is incorrectly extended, and a different segment extends down into the middle of segment D. Such cases are also seen in Region~2 (middle panel of Figure~\ref{fig:segment_result_mag_B}), where all of the complexity found as segments C, D, and E are lost in the brute-force segmentation; and in Region~3 (bottom panel of Figure~\ref{fig:segment_result_mag_B}) where the clear separation of segments A and B is lost in the brute-force segmentation, as is the point-like source at $\sim$(0.4,0.15). In summary, clear variations in surface brightness are recovered in the \gsrg\ segmentation, unlike in the brute-force method. Using a smaller set of seeds {\sl improves} the robustness of the segmentation by avoiding the chaotic development of early merging steps; errors in early stages accumulate because of the greedy merging process. 
We thus conclude that \gsrg\ is superior because of, and not despite, the much {smaller}, but perceptively selected number of seeds used to carry out the segmentation.\footnote{Just as Markov Chain Monte Carlo techniques rely on running multiple chains and verifying {consistency} to gain confidence in the analysis results \citep{gelm:rubi:92}, we recommend that analyses that use \gsrg\ also consider the sensitivity of the results to the adopted seed set. The schemes that we recommend in Section~\ref{sec:seed_spec} are adequate to handle most scenarios encountered in astronomy, but are nonetheless characterized by several run-time specified parameters ($\numgrid$, $\numsubgraph$, $\numnn$, $\strata$, $\vorthr$). Work to formalize this process via bootstrap analysis is ongoing (Jue Wu, private communication).} Although \gsrg\ is not designed as a point-source detection method, it is instructive to see how it behaves in the case of point-like sources. In Figure~\ref{fig:point_sources} we show how point-like sources can be identified by \gsrg\ (left panel) compared to a wavelet-based method that is optimized to detect point sources \citep[right panel; {\tt wavdetect}; ][]{Freeman-et-al02}. Based on the typical size of the \chandra\ point spread function (PSF), we isolate all \gsrg\ segments that cover an area comparable to or smaller than the PSF\footnote{We choose segments identified by areas $\Aroi{k}\leq{0.0003}$ in normalized coordinates, which corresponds to a circular area of radius $\approx$1.6$''$ on the sky, comparable to the extent of the \chandra\ PSF. } and show them in the left panel.
{(We emphasize that this is not a method to {\sl detect} point-like sources; while \gsrg\ segments with larger areas than the PSF size can be flagged as extended, regions with small areas cannot be definitively flagged as point sources, since such segments can occur due to layered segmentation of extended sources or even due to statistical fluctuations in the surface brightness of diffuse emission.)} In the right panel, we show all the {\tt wavdetect} detected sources, superposed on a counts image of the same field. Since {\tt wavdetect} is optimized to find point sources at a variety of scales, it may also detect more diffuse sources. Sources that are identified as extended based on visual inspection and/or comparison with the PSF profile \citep[e.g., lack of a core, or PSF fitting for sources with more than 100 counts;][]{Zezas2002_Ant_proc} are marked by red circles, while point-like sources are marked by cyan circles. We note that the \gsrg\ segmentation invariably finds as ``point-like'' (based on the segment area criterion) the point sources that are confirmed by the inspection process of \citet{Zezas2002_Ant_proc} and does not find the extended sources identified by \texttt{wavdetect}. The latter are instead components of larger diffuse emission segments. In this respect, although \gsrg\ is not a point-source detection algorithm, screening of the identified segments based on the segment area, $\Aroi{k}$, can be used to distinguish extended regions from point-like sources. \subsection{Advantages and Limitations of \gsrg}\label{sec:gsrgvgsrg} {Our} simulations (Section~\ref{sec:simulation}) and {analysis of} the \chandra\ Antennae dataset (Sections~\ref{sec:application} and \ref{sec:performance_antennae}) illustrate \gsrg's strength in identifying sources at many different scales.
The method allows the identification of extended diffuse structures in X-ray data regardless of their shape, i.e., no assumptions are made about the morphology or the homogeneity of the sources. \updatebf{Note that while the blurring due to the shape of the PSF is not explicitly modeled, this has negligible effect on any source structure at scales larger than the size scale of the PSF. Thus, {we expect that} useful results {can be} obtained even when the PSF varies across the field of view, which can happen for several reasons (the quality of the telescope optics can degrade away from the aimpoint; the observed field can contain a large diversity of soft and hard sources, each with significantly different PSF shapes and sizes; or complex combinations of datasets, such as multiple observations carried out at different angular offsets, can be combined).} At spatial scales larger than the PSF size, {we expect} that the results do not rely on the specific characteristics of the PSF\footnote{Note that PSF size information {\sl may} be incorporated into the analysis, e.g., by requiring that any segment that is found to have a smaller area than that of the PSF be subsumed into a surrounding or adjacent segment. We do not use such a criterion in this work, though such a strategy is demonstrated in Figure~\ref{fig:point_sources}.}. \updatebf{Even when the photons are sparsely distributed, e.g., when the observation is dominated by diffuse structures at low surface brightness, and blurring due to the shape of the PSF is not included, point sources that may exist in the field of view can be identified due to the increased concentration of photons at their locations. However, note that \gsrg\ is designed to identify large scale extended regions, so the focus and trade-offs are different. Thus, weak point sources with low contrast against the surrounding diffuse emission are likely to be subsumed into the diffuse regions.
But because these are by definition weak, they are unlikely to contribute significantly to the brightness (or hardness) of the diffuse component. This situation is effectively similar to the situation where the detection sensitivity of a telescope is insufficient to resolve apparently diffuse emission into its point source population. An additional issue to consider is the bias in the point source brightness demonstrated in Figure~\ref{fig:brightness_distrib}. This bias arises because area fluctuations of small point-source segments are naturally bounded at zero but can extend into the area of the diffuse emission, leading to a skewed error distribution. Point-source intensities found by \gsrg\ should therefore not be used directly, but must be re-estimated using appropriate techniques \citep[e.g.,][]{2014ApJ...796...24P}. However, note that the bias demonstrated in Figure~\ref{fig:brightness_distrib} comes from point sources that do not have PSF wings; in real sources where point sources are sharply peaked due to the PSF, the area distribution bias works to the advantage of \gsrg, incorporating more of the PSF wings into the point source and reducing the resulting contamination of the diffuse emission by strong point sources.} \updatebf{Unlike adaptive smoothing, source detection, or contouring methods, \gsrg\ does not set S/N thresholds or rely on thresholds of source significance to determine the presence or extent of contiguous regions. Thus, even regions that may be characterized by low surface brightness tend not to be over-segmented.
Conversely, since the uncertainty in the estimated brightness is dependent on the number of photons that fall within the segment, small variations in adjacent regions can be more easily distinguished when the areas of the segments are sufficiently large.} Also of note is that \gsrg\ works directly on photon lists, the most basic form of high-energy X-ray and $\gamma$-ray datasets \updatebf{and the Poisson nature of the data is explicitly accounted for during the merging process}. While this can have detrimental effects on running time when the size of the dataset is large\footnote{For illustration, the analysis of the Antennae dataset, with $50,700$ photons and $491$ seeds, takes $\approx$240~s on a 2021 epoch 14$''$ MacBook Pro with an Apple Silicon M1 SOC.}, using the data at the highest available resolution avoids the requirements to define artificial binning sizes. Of greater concern is the dependence of \gsrg\ results on the distribution of the initial seeds, especially for fields with low contrast. This may result in unstable behaviour because of fluctuations in the local minima in the spatial intensity distribution, leading to both false segmentation and false merging. We caution that while the schemes we describe in Section~\ref{sec:seed_spec} are generally adequate and perform well (see, e.g., Section~\ref{sec:performance_antennae}), as is typical with seeded-region-growing methods, the sensitivity of the segmentation to the adopted seed structure must always be checked. Future extensions of this method will include quantification of the uncertainty of the segmentation resulting from the stochastic nature of the data, which would be quantified in terms of uncertainty on {the number of segments,} the outline of the {segments,} and the corresponding source flux {within each segment}. Other avenues to explore include different merging procedures as substitutes for the greedy merge to address the over-segmentation. 
{The goal of} such alternative merging options {would be to} explore a wider range of possible {final segmentations} and to make the algorithm more robust to seed initialization. Yet another potential extension is to perform the analysis in three dimensions, incorporating photon energy information. Currently, spectral information can only be used by running the code on passband-filtered data. \section{Summary}\label{sec:summary} We have developed an algorithm that provides a piece-wise {constant} segmentation of a photon event list that approximates the {spatial} structure present in the data. Point-wise surface brightnesses are initially estimated as the inverse of their Voronoi cell areas and cells with similar brightnesses are grouped together to grow segments. The seeds {needed to grow the segments} can be initialized as regular grids, additionally supplemented with local maxima, or set using more complex processes by stratified sampling of Voronoi cell areas. The process begins with a deliberate over-segmentation, and neighboring segments are sequentially merged by maximizing the BIC change. The resulting (greedy) segmentation generates apertures on the sky plane that can be used to collect photons and carry out further analysis in a way that removes manual intervention in selecting regions of interest. We have explored this method via both simulations and application to a complex \chandra\ dataset, and find that it consistently provides a good description of both point-like and extended diffuse regions of arbitrary shapes. We note that this is not a source detection method, but a robust method for the definition of source regions, especially for extended sources. In this way, it can be used to perform photometry or spectroscopy on arbitrarily shaped extended sources. This method provides several advantages over other commonly used methods for the analysis of extended sources in high-energy photon data.
Namely, it allows the identification of sources at different scales even when they are embedded within each other without imposing any restrictive assumptions on the {spatial} distribution of the source photons or {the source} intensity. \facilities \chandra\ (ACIS) \software {\sl CIAO} \citep[\url{https://cxc.harvard.edu/ciao/};][]{Fruscione-06}; {\sl PINTofALE} \citep{2000BASI...28..475K}; {\sl Matlab} (\url{https://www.mathworks.com/products/matlab.html}); \gsrg\ (\url{https://github.com/jujWang96/Astro_sim}). \section*{Acknowledgements} This work was conducted under the auspices of the CHASC International Astrostatistics Center. CHASC is supported by NSF DMS-18-11308, DMS-18-11083, DMS-18-11661, DMS-21-13615, DMS-21-13397, and DMS-21-13605; by the UK Engineering and Physical Sciences Research Council [EP/W015080/1]; and by NASA 18-APRA18-0019. We thank our CHASC colleagues for many helpful discussions, especially Jilei Yang for his valuable comments on an earlier draft. MF, JW and TCML acknowledge further support from NSF through CCF-19-34568, DMS-18-11405 and DMS-19-16125. VLK further acknowledges support from NASA contract to the Chandra X-ray Center NAS8-03060. DvD and AZ were also supported in part by Marie Sk\l{}odowska-Curie RISE grants (H2020-MSCA-RISE-2015-691164, H2020-MSCA-RISE-2019-873089) provided by the European Commission. \newpage \bibliography{refs} \bibliographystyle{aasjournal} \appendix \section{Nearest Neighbor Labeling} \label{sec:percolate} Here we describe the heuristic by which selected photons are collected into groups characterized by their proximity (used in the Voronoi-area stratified sampling scheme for seed specification; see Section~\ref{sec:seed_spec}). The photons considered in a given stratum are defined by a small range of Voronoi areas, or analogously, are located at similar contour levels if an image were constructed from the photons.
Thus, they are likely to be sparsely distributed, but with clumps of photons surrounding higher intensity regions. The goal here is to group the clumped photons that are near each other, without breaking up rings or other complex shapes. We emphasize that this heuristic is a quick but approximate pre-processing method to pick seeds for the full-fledged \gsrg\ algorithm. We expect this heuristic to be useful in situations where the astronomical dataset is characterized by sparsely distributed structures with a large dynamic range in surface brightness. \newcommand{\stratum}{\Upsilon}We first determine an average characteristic length scale for the ensemble of photons included in stratum $\stratum$, as $$ {{L}}_{\stratum} = 2 \sqrt{\frac{1}{2}\left[\max_{i\in\stratum}\{\Avor{i}\} + \min_{i\in\stratum}\{\Avor{i}\}\right]}. $$ This ensures that the length scale is typical of stratum $\stratum$. We begin with an arbitrary photon from $\stratum$, assigning it a unique group label, and recursively assign this group label to any neighbor, i.e., any photon in stratum $\stratum$ located within a Euclidean distance of $L_\stratum$ from any photon assigned to this group. The recursive labeling ends when no new neighbors are present, and we move to another arbitrary, as-yet-unlabeled photon in $\stratum$, assign it a different label, and repeat the process. We continue this labeling until all photons in $\stratum$ are assigned labels. For a case where the photons are placed uniformly on a regular grid, this results in all the photons being aggregated into one clump with one label. If there are multiple clumps separated by $>L_\stratum$, each clump will be assigned a separate label. We eventually discard all clumps with fewer than $\vorthr$ photons and do not use them to set a seed. The entire process is repeated for each of the $\strata$ strata.
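A direct (unoptimized) transcription of this labeling heuristic is sketched below, with the recursion replaced by a breadth-first flood fill and an $O(N^2)$ neighbor scan; names and the discarded-clump convention (label $-1$) are illustrative, not from the \gsrg\ implementation:

```python
import math
from collections import deque


def label_clumps(points, voronoi_areas, vorthr=3):
    """Group the photons of one stratum into proximity clumps.

    The linking length is set from the extreme Voronoi cell areas of the
    stratum; labels spread to any unlabeled photon within that length, and
    clumps with fewer than vorthr photons are discarded (label -1).
    """
    link = 2.0 * math.sqrt(0.5 * (max(voronoi_areas) + min(voronoi_areas)))
    labels = [None] * len(points)   # None = not yet visited
    current = 0
    for start in range(len(points)):
        if labels[start] is not None:
            continue
        queue, members = deque([start]), [start]
        labels[start] = current
        while queue:                # flood fill replaces the recursion
            i = queue.popleft()
            for j in range(len(points)):
                if labels[j] is None and math.dist(points[i], points[j]) <= link:
                    labels[j] = current
                    queue.append(j)
                    members.append(j)
        if len(members) < vorthr:   # clump too small to seed a segment
            for j in members:
                labels[j] = -1
        else:
            current += 1
    return labels
```

For two clumps separated by more than the linking length, each receives its own label; raising `vorthr` above the clump sizes discards them all, mirroring the seed-rejection step of the text.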
Title: XMM-Newton observations of PSR J0554+3107: pulsing thermal emission from a cooling high-mass neutron star
Abstract: XMM-Newton observations of the middle-aged radio-quiet $\gamma$-ray pulsar J0554+3107 allowed us, for the first time, to firmly identify it in X-rays through the detection of pulsations at the pulsar period. In the 0.2-2 keV band, the pulse profile shows two peaks separated by about half of the rotation phase, with a pulsed fraction of $25 \pm 6$ per cent. The profile and spectrum in this band can be mainly described by thermal emission from a neutron star with a hydrogen atmosphere, a dipole magnetic field of $\sim 10^{13}$ G and a non-uniform surface temperature. Non-thermal emission from the pulsar magnetosphere is marginally detected at higher photon energies. The spectral fit with the atmosphere+power law model implies that J0554+3107 is a rather heavy and cool neutron star with a mass of 1.6-2.1 $M_\odot$, a radius of $\approx 13$ km and a redshifted effective temperature of $\approx 50$ eV. The spectrum shows an absorption line of unknown nature at $\approx 350$ eV. Given the extinction-distance relation, the pulsar is located at $\approx 2$ kpc and has a redshifted bolometric thermal luminosity of $\approx 2 \times 10^{32}$ erg s$^{-1}$. We discuss cooling scenarios for J0554+3107 considering plausible equations of state of super-dense matter inside the star, different compositions of the heat-blanketing envelope and various ages.
https://export.arxiv.org/pdf/2208.06160
\label{firstpage} \pagerange{\pageref{firstpage}--\pageref{lastpage}} \begin{keywords} stars: neutron -- pulsars: general -- pulsars: individual: PSR \jpsr \end{keywords} \section{Introduction} \label{sec:intro} The study of thermal emission from neutron stars (NSs) is one of the ways to investigate the properties of super-dense nuclear matter in their interiors \citep[e.g.][]{yakovlev&pethick2004}. If thermal emission originates from the entire NS surface, the comparison of the measured thermal luminosity (or, equivalently, mean effective temperature) with predictions of NS cooling theories can set constraints on the equation of state (EoS) of such matter. Middle-aged (10$^4$--10$^6$~year-old) stars are the best targets for such analysis since thermal components usually dominate over non-thermal ones in their X-ray spectra. However, effective temperatures and ages are estimated for only a few dozen NSs, which is not enough to make definite conclusions \citep{potekhin2020}. \begin{table} \renewcommand{\arraystretch}{1.2} \caption{\psr\ parameters obtained from \citet{pletsch2013}. Numbers in parentheses denote 1$\sigma$ uncertainties relating to the last significant digit quoted.} \label{tab:pars} \begin{center} \begin{tabular}{lc} \hline R.A. (J2000) & 05\h54\m05\fs01(3) \\ Dec.
(J2000) & +31\degs07\amin41\asec(4) \\ Galactic longitude $l$, deg & 179.058 \\ Galactic latitude $b$, deg & 2.697 \\ Spin period $P$, ms & 465 \\ Spin frequency $f$, Hz & 2.15071817570(7) \\ Frequency derivative $\dot{f}$, Hz s$^{-1}$ & $-$0.659622(5) $\times$ 10$^{-12}$ \\ Frequency second derivative $\ddot{f}$, Hz s$^{-2}$ & 0.18(2) $\times$ 10$^{-23}$ \\ Epoch of frequency determination, MJD & 55214 \\ Data time span, MJD & 54702--56383 \\ Solar system ephemeris model & DE405 \\ \hline Characteristic age \tc, kyr & 51.7 \\ Spin-down luminosity \edot, \ergs\ & 5.6 $\times$ 10$^{34}$ \\ Characteristic magnetic field $B_\mathrm{c}$, G & 8.2 $\times$ 10$^{12}$ \\ \hline \end{tabular} \end{center} \end{table} The middle-aged radio-quiet PSR \jpsr\ (hereafter \psr) was discovered by Einstein@Home\footnote{\url{http://einstein.phys.uwm.edu}} in a blind search of \fermi\ Large Area Telescope (LAT) \gr\ data \citep{pletsch2013}. Its parameters are presented in Table~\ref{tab:pars}. Upper limits on the integral flux density at 111, 150 and 1400 MHz are 0.5, 1.2 and 0.066~mJy, respectively \citep{tyulbashev2021,griesmeier2021,pletsch2013}. The pulsar is projected onto the supernova remnant (SNR) \snr\ and likely associated with it \citep{pletsch2013}. This is a shell-type oxygen-rich remnant \citep{how2018} with the diameter of about 70~arcmin. Its age is unclear: the large diameter and the low surface brightness imply the age of $\sim$~10--100~kyr, but the radial configuration of the magnetic field is typical for young SNRs \citep{fuerst&reich1986,gao2011}. There are different estimates of the distance to the remnant, from about 3 to 6~kpc \citep{case&bhattacharya1998,guseinov2003,pavlovic2014}, obtained through empirical correlations between the radio surface brightness and the diameter of the SNR. However, recent studies show that \snr\ can be much closer, at $\approx 0.9$~kpc \citep{zhao2020}. 
This result is based on an apparent interstellar extinction jump at this distance along the remnant line of sight and on the assumption that it is associated with the dust formed by the SNR. However, the detected extinction jump may correspond to a foreground molecular cloud and not the remnant itself. If so, this value should be considered as the lower limit to the distance. Since \psr\ has not been detected in radio, it is not possible to obtain the distance based on the dispersion measure. The only available estimate is the so-called `pseudo-distance' of $\approx 1.9$~kpc calculated from the empirical relation between the distance and the pulsar \gr\ flux \citep{sazparkinson2010}. Though such an estimate is rather uncertain (within a factor of 2--3), it is consistent with the association of \psr\ with \snr. \citet*{zyuzin2018} found the likely X-ray counterpart of \psr\ in the 1st \sw-XRT Point Source Catalogue \citep[1SXPS;][]{evans2014}. Seventeen counts were detected from the source in the 11~ks exposure, and thirteen of them are in the 0.3--1~keV band, which indicates that the source is soft. Indeed, fitting the X-ray spectrum with a power law model, \citet{zyuzin2018} obtained the photon index $\Gamma>4$, which is too high for pulsars and may indicate the presence of a thermal-like spectral component. Pure thermal emission models~-- the black body and the NS atmosphere~-- resulted in NS surface temperatures of $\approx$~50--100~eV. In any case, the obtained model parameters remained very uncertain because of the low \sw\ count statistics. We therefore performed deeper observations with \xmm\ to confirm the counterpart of \psr\ and clarify its X-ray properties. Here we present results of these observations. The data and the reduction procedure are presented in Sec.~\ref{sec:data}. Imaging is described in Sec.~\ref{sec:imaging}. Timing and spectral analyses are presented in Sec.~\ref{sec:timing} and Sec.~\ref{sec:spec}, respectively.
We discuss the results in Sec.~\ref{sec:discussion} and make conclusions in Sec.~\ref{sec:sum}. \section{Observations and data reduction} \label{sec:data} 45-ks \xmm\ observations of the \psr\ field were carried out on 2021 October 7 (ObsID 0883760101, PI A. Karpova). The European Photon Imaging Camera Metal Oxide Semiconductor (EPIC-MOS) detectors were operated in the full frame mode and the EPIC-pn (PN hereafter) detector~-- in the large window mode. The corresponding imaging areas are 28~arcmin~$\times$ 28~arcmin and 13.5~arcmin~$\times$ 26~arcmin. For all instruments, the thin filter was chosen. We used the \xmm\ Science Analysis Software ({\sc xmm-sas}) v.19.1.0 for analysis. The {\sc emproc} and {\sc epproc} routines were utilised to reprocess the data. To filter out the periods of background flares, we extracted high-energy light curves from the fields of view (FoVs) of all EPIC detectors. Only one short flare is present (see Fig.~\ref{fig:lc}), mostly affecting the PN detector, but also having a slight impact on the MOS1 light curve, so we cleaned only the PN and MOS1 data, leaving the MOS2 data unfiltered. As a result, the effective exposures are 44.3, 44.5 and 38.8~ks for MOS1, MOS2 and PN cameras, respectively. Single, double, triple and quadruple pixel events were selected for MOS ({\sc pattern}~$\leq$ 12) and single and double pixel events~-- for PN ({\sc pattern}~$\leq$ 4). \section{Imaging} \label{sec:imaging} Using the `images' script\footnote{\url{https://www.cosmos.esa.int/web/xmm-newton/images}} \citep{imagesscript}, we created combined MOS+PN exposure-corrected images in different energy bands. The resulting image is shown in Fig.~\ref{fig:img}. The \psr\ candidate counterpart is seen as a bright source in the centre. 
Its position R.A.~= 05\h54\m05\fss067(10) and Dec.~= +31\degs07\amin41\farcs40(13) was derived by the {\sc edetect\_chain} task using the data from all EPIC detectors (numbers in parentheses are 1$\sigma$ pure statistical uncertainties). Taking into account the \xmm\ absolute pointing accuracy of 1.2~arcsec\footnote{\label{CAL-TN-0018}\url{https://xmmweb.esac.esa.int/docs/documents/CAL-TN-0018.pdf}}, the coordinates are in agreement within 1$\sigma$ with the \psr\ ones obtained from the \fermi\ data (Table~\ref{tab:pars}). From Fig.~\ref{fig:img} one can see that the \psr\ candidate counterpart is a soft source. No extended emission like a pulsar wind nebula (PWN) or misaligned outflows is seen in the pulsar vicinity. However, the \xmm\ point spread function (PSF) is rather wide, and a compact PWN, if it exists, could be blended with the pulsar. To treat this possibility more carefully we used the {\sc eradial} tool, which extracts the source radial brightness profile and fits the PSF to it. The result is shown in Fig.~\ref{fig:radprof}, and we can conclude that the radial profile is consistent with the PSF and no compact PWN is resolved. We also do not resolve any extended emission in the \xmm\ FoV that could be identified with the SNR \snr, whose radio shell is outside the FoV. \section{Timing analysis} \label{sec:timing} The PN detector operating in the large window mode provides the time resolution of $\approx 48$~ms, which is sufficient to search for regular X-ray pulsations with the 465~ms spin period of \psr. We used the {\sc barycen} task and DE405 ephemeris to apply the barycentric correction and then extracted events in the 0.2--2~keV band, filtered for background flaring activity, from a 22-arcsec-radius circular aperture centred at the pulsar position. The aperture was selected using the {\sc eregionanalyse} task, which derives the optimum extraction radius based on the signal-to-noise ratio.
This results in 1026 source counts or $\approx 98$ per cent of the total number of source counts detected in the whole PN energy band. To search for the pulsations, we performed the $Z^2_n$-test \citep{ztest} for the number of harmonics $n$ from 1 to 5 using the 0.8~mHz window around the spin frequency of 2.150474376(14)~Hz expected at the epoch of the \xmm\ observation (MJD~59494) according to the \fermi\ timing results (Table~\ref{tab:pars}). We detected pulsations at the frequency of 2.150493(2)~Hz and found statistically significant contributions from two leading harmonics. The resulting periodogram is shown in Fig.~\ref{fig:ztest}. The maximum $Z^2_2=42.7$, which implies the detection confidence level of $\approx 4.7\sigma$\footnote{See Appendix \ref{a:simulations} for details.}. We note that the frequency detected in X-rays is somewhat higher than the predicted one. This can be due to the pulsar timing noise and/or glitches, which might have occurred during the $\sim 8$~yr period between the last $\gamma$-ray observations of 2013, included by \citet{pletsch2013} into the \fermi\ timing solution, and the \xmm\ observations of 2021. For instance, the relative difference $\log(\Delta f/f) \approx -5.1$ is consistent with the distribution of the relative glitch sizes $\log(\Delta f_\mathrm{g}/f)$ observed for pulsars \citep{Lower2021}. 
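The $Z^2_n$ statistic \citep{ztest} evaluated in this search has a standard closed form over the photon phases $\phi_i = f t_i \bmod 1$ at each trial frequency $f$. A minimal sketch (variable names are ours):

```python
import math


def z2n(phases, n=2):
    """Z^2_n statistic for pulse phases given in cycles:

    Z^2_n = (2/N) * sum_{k=1..n} [ (sum_i cos 2*pi*k*phi_i)^2
                                 + (sum_i sin 2*pi*k*phi_i)^2 ]
    """
    N = len(phases)
    total = 0.0
    for k in range(1, n + 1):
        c = sum(math.cos(2 * math.pi * k * p) for p in phases)
        s = sum(math.sin(2 * math.pi * k * p) for p in phases)
        total += c * c + s * s
    return 2.0 * total / N


# Phases from barycentred arrival times t_i at a trial frequency f:
# phases = [(f * t) % 1.0 for t in times]
```

Uniformly distributed phases give $Z^2_n \approx 0$ (it is $\chi^2_{2n}$-distributed under the null hypothesis), while a perfectly coherent signal reaches the maximum $2nN$; the search maximizes $Z^2_n$ over the frequency window.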
\begin{table*} \renewcommand{\arraystretch}{1.3} \caption{Best-fitting parameters for different spectral models$^\dag$.} \label{tab:mcmc:bestfit} \begin{center} \begin{tabular}{lccccc} \hline Model & 2BB + PL & 2BB + PL$^\ddag$ & \bbg\ & \nsl\ & \nsh\ \\ \hline \nh, $10^{21}$ cm$^{-2}$ & $3.1_{-0.2}^{+0.2}$ & $5.2_{-0.8}^{+0.4}$ & $2.3_{-0.4}^{+0.5}$ & $1.62_{-0.06}^{+0.08}$ & $1.66_{-0.07}^{+0.08}$ \\ $D$, kpc & $5.5_{-0.5}^{+0.3}$ & & $4.1_{-1.4}^{+1.4}$ & $2.0_{-0.4}^{+0.2}$ & $2.0_{-0.2}^{+0.4}$ \\ $\Gamma$ & $2.0^\mathrm{fixed}$ & $2.0^\mathrm{fixed}$ & $2.0^\mathrm{fixed}$ & $2.2_{-0.4}^{+0.6}$ & $2.3_{-0.4}^{+0.5}$ \\ $K$, $10^{-6}$ \phs\ & $2.2_{-0.6}^{+0.6}$ & $1.9_{-0.7}^{+0.6}$ & $2.1_{-0.5}^{+0.6}$ & $1.6_{-0.7}^{+1.3}$ & $2.0_{-0.7}^{+1.5}$ \\ log $L_\mathrm{X}$, \ergs & $31.27_{-0.15}^{+0.13}$& $29.62_{-0.12}^{+0.20}+\mathrm{log}D_{0.9}^2$& $31.08_{-0.28}^{+0.20}$ & $30.21_{-0.30}^{+0.15}$ & $30.27_{-0.19}^{+0.19}$ \\ \hline $\alpha$, deg & & & & $70_{-20}^{+20}$ & $60_{-20}^{+20}$ \\ $\zeta$, deg & & & & $60_{-10}^{+20}$ & $80_{-30}^{+10}$ \\ $M$, \msun & & & & $1.9_{-0.2}^{+0.2}$ & $1.8_{-0.2}^{+0.3}$ \\ $R$, km & & & & $13.5_{-1.7}^{+1.2}$ & $13.0_{-1.3}^{+1.5}$ \\ $R^\infty$, km & & & & $16.9_{-1.3}^{+1.5}$ & $17.1_{-1.3}^{+1.3}$ \\ $k_\mathrm{B}T^\infty$, eV & & & & $47_{-2}^{+2}$ & $49_{-2}^{+2}$ \\ $R^\infty_\mathrm{cold}$, km & $19_{-5}^{+1}$ & $14_{-6}^{+5}\ D_{0.9}$ & $19_{-9}^{+1}$ & & \\ $R^\infty_\mathrm{hot}$, km & $1.12_{-0.11}^{+1.05}$ & $0.33_{-0.20}^{+0.23}\ D_{0.9}$ & $1.19_{-0.17}^{+0.87}$ & & \\ $k_\mathrm{B}T^\infty_\mathrm{cold}$, eV & $86_{-4}^{+5}$ & $65_{-3}^{+7}$ & $84_{-10}^{+6}$ & & \\ $k_\mathrm{B}T^\infty_\mathrm{hot}$, eV & $156_{-25}^{+12}$ & $140_{-17}^{+23}$ & $135_{-13}^{+19}$ & & \\ log $L^\infty$, \ergs\ & $33.32_{-0.19}^{+0.10}$& $32.79_{-0.39}^{+0.11}+\mathrm{log}D_{0.9}^2$& $33.23_{-0.61}^{+0.12}$ & $32.25_{-0.11}^{+0.12}$ & $32.33_{-0.10}^{+0.10}$ \\ \hline $E_0$, eV & & & $370_{-70}^{+30}$ & $340_{-40}^{+40}$ 
& $350_{-50}^{+30}$ \\ $\sigma$, eV & & & $24_{-14}^{+18}$ & $25_{-9}^{+13}$ & $24_{-8}^{+14}$ \\ $\tau$, eV & & & $> 40$ & $> 680$ & $> 890$ \\ EW, eV & & & $< 430$ & $150_{-40}^{+120}$ & $180_{-70}^{+70}$ \\ \hline $W$/\dof & $240 / 216$ & $231 / 216$ & $228 / 213$ & $228 / 211$ & $228 / 211$ \\ \chir/\dof & $1.46 / 40$ & $1.16 / 40$ & $1.14 / 37$ & $1.27 / 34$ & $1.27 / 34$ \\ \hline \end{tabular} \end{center} \begin{tablenotes} \item $^\dag$ \nh\ is the absorbing column density, $D$ is the distance, $\Gamma$ is the photon index, $K$ is the PL normalisation, $L_\mathrm{X}$ is the non-thermal luminosity in the 2--10~keV band, $\alpha$ is the angle between the axis of rotation and the magnetic axis, $\zeta$ is the angle between the rotation axis and the line of sight, $M$ is the NS mass, $R$ and $R^\infty=R(1+z_\mathrm{g})$ are the intrinsic and apparent radii of the NS, $T^\infty=T/(1+z_\mathrm{g})$ is the atmosphere redshifted effective temperature, $R^\infty_\mathrm{cold}$ and $R^\infty_\mathrm{hot}$ are the apparent radii of the equivalent emitting spheres, $T^\infty_\mathrm{cold}$ and $T^\infty_\mathrm{hot}$ are the redshifted effective temperatures of the cold and hot BB components, $L^\infty=L/(1+z_\mathrm{g})^2$ is the apparent bolometric thermal luminosity, $E_0$, $\sigma$, $\tau$ and EW are the absorption line centre, width, depth and equivalent width, and $z_\mathrm{g}$ is the gravitational redshift. All errors are given at 1$\sigma$ credible intervals, while the lower and upper limits are set at the 98 per cent level. The last two rows provide the minimum values of the $W$-statistic, which was used as the log-likelihood in the MCMC procedure, and of \chir, calculated for the spectra grouped to ensure at least 25~counts per bin. \item $^\ddag$ In this case, in contrast to all other models, the \nh--$D$ relation was not used in the fitting procedure (see text for details).
$D_{0.9} \equiv D/0.9$~kpc, where 0.9~kpc is the lower limit of the distance to \psr\ obtained from its association with the SNR \snr\ \citep{zhao2020}. \end{tablenotes} \end{table*} The folded X-ray pulse profile in the 0.2--2~keV energy band is presented in the upper panel of Fig.~\ref{fig:prof}. Following the approach suggested by \citet{becker1999}, we calculated the optimal number of phase bins to be 11. However, this value exceeds the maximum of 9 bins set by the time resolution of the PN camera and the pulsar period, so we used the latter value to construct the folded light curve. Two peaks separated by about half of the pulsar rotation phase are clearly resolved. The pulsed fraction (PF) of the emission was calculated from the photon phases using the bootstrap method outlined in \citet*{swanepoel1996}. The result obtained with this technique is not affected by binning effects, which is particularly important in our case since the number of counts is low and the phase bins are wide. The resulting background-corrected PF in the 0.2--2~keV band is $25\pm6$ per cent. In the lower panel of Fig.~\ref{fig:prof} we also show the pulse profile in the 2--4~keV energy band. The pulsar possibly demonstrates single-peaked pulsations, but the count statistics in this band are too low to draw definite conclusions. \section{Spectral analysis} \label{sec:spec} We extracted the pulsar spectra from the 19-arcsec radius aperture using the {\sc evselect} task. This radius was chosen using the {\sc eregionanalyse} task as the optimum value in terms of signal-to-noise ratio for the 0.2--10~keV band. The redistribution matrix and the ancillary response files were generated by the {\sc rmfgen} and {\sc arfgen} tools.
For the background, we chose a circular source-free region of 38.5-arcsec radius, located on the same CCD chip and at approximately the same CCD RAWY pixel position as the source (see Fig.~\ref{fig:img}), in accordance with general recommendations by the EPIC Consortium\footnote{See, e.g., footnote \ref{CAL-TN-0018}, pages 28--29}. As a result, we obtained 221, 246 and 1001 net counts in the 0.2--10~keV band from the MOS1, MOS2 and PN data, respectively. We fitted the spectra simultaneously with the X-Ray Spectral Fitting Package ({\sc xspec}) v.12.11.1 \citep{arnaud1996} using the {\sc tbabs} model with the {\sc wilm} abundances \citep*{wilms2000} to take into account the interstellar medium (ISM) absorption. At first, we grouped the spectra to ensure at least 25 counts per energy bin and used $\chi^2$-statistics to check preliminarily which of the models typically used to describe pulsar X-ray emission can fit the data. The single {\sc powerlaw} (PL) model, corresponding to the NS magnetosphere emission, though statistically acceptable (\chir/\dof~= 1.03/41, \dof~$\equiv$ degrees of freedom), resulted in a photon index $\Gamma \approx 7$, which is too high to be consistent with typical slopes of non-thermal X-ray spectra of pulsars \citep[e.g.][]{kargaltsev2008}. Addition of a blackbody (BB) component (model \textsc{bbodyrad} in \textsc{xspec}) still gives too high a photon index, while the single BB model fits the spectrum worse (\chir/\dof~= 2.02/43). In contrast, the BB~+ BB model, composed of two BB components with different temperatures and radii of equivalent emitting spheres, is statistically acceptable with \chir/\dof~= 1.09/41. This model is usually assumed to describe thermal emission from colder and hotter areas of the NS surface. However, there is some flux excess above the model at energies $\gtrsim 2$~keV. Although its significance is low, this may indicate the presence of non-thermal emission.
Thus, we added the PL component to the BB~+ BB model, which resulted in \chir/\dof~= 1.04/39 and a reasonable photon index $\Gamma \lesssim 3$. We also constructed the hydrogen atmosphere models {\sc nsmdintb} for the NS with a dipole magnetic field to fit the thermal spectral component. In these models, which are described in Appendix~\ref{sec:atm}, the free parameters are the NS mass $M$ and radius $R$, the distance $D$, the angle $\alpha$ between the axis of rotation and the magnetic axis, the angle $\zeta$ between the rotation axis and the line of sight, and the redshifted effective temperature $T^\infty = T/(1+z_\mathrm{g})$, where $z_\mathrm{g}$ is the gravitational redshift at the NS surface. The effective surface temperature $T$ is defined by the total thermal luminosity $L$ according to the relation $L \equiv 4\pi\sigma_\mathrm{SB}R^2 T^4$, where $\sigma_\mathrm{SB}$ is the Stefan-Boltzmann constant. We used two values of the magnetic field at the pole close to the estimates based on the dipole spin-down formula (see Appendix~\ref{sec:atm}), $B_\mathrm{p}=10^{13}$~G and $2\times10^{13}$~G (hereafter NS130 and NS133, respectively). We obtained rather poor fits with \chir/\dof~= 1.60/39 for NS130 and \chir/\dof~= 1.55/39 for NS133. However, we found that the addition of a Gaussian absorption line (model \textsc{gabs} in {\sc xspec}) at $\approx 0.35$~keV significantly improves the fit, resulting in \chir/\dof~= 1.15/36 and 1.17/36, respectively. Addition of the PL component to describe the flux excess at high energies slightly improves the fits, leading to \chir/\dof~= 1.15/34 and 1.07/34. Based on these preliminary tests, we further focused on the models 2BB~+ PL and \nsm\ as the ones providing the best fit statistics. The number of collected source counts is not large, and for a more rigorous analysis we regrouped the spectra to ensure at least 1 count per energy bin. This allows us to obtain the most robust estimates of the \psr\ spectral parameters.
We applied $W$-statistics \citep*{wachter1979} appropriate for Poisson data with Poisson background\footnote{See \url{https://heasarc.gsfc.nasa.gov/xanadu/xspec/manual/XSappendixStatistics.html}.} and performed the fitting using the Markov chain Monte Carlo (MCMC) technique. We utilised the Bayesian parameter estimation procedure using {\sc pyxspec} interface and the {\sc python} package {\sc emcee} \citep{emcee2013}, which implements the affine-invariant MCMC sampler developed by \citet{goodman&weare2010}. The best-fitting model parameters, defined as the ones corresponding to the maximum values of probability density, were derived from the sampled posterior distributions together with their 1$\sigma$ credible intervals. The Bayesian inference also allows us to include some additional information in the fitting procedure. We used the 3D map\footnote{\url{http://argonaut.skymaps.info/}} of the dust distribution in the Galaxy based on \textit{Gaia}, Pan-STARRS 1 and 2MASS data \citep{green2019} to obtain the extinction--distance relation in the \psr\ direction. In Fig.~\ref{fig:nh-d} we show five representative samples of this relation drawn from the posterior distribution of the distance--reddening profiles (see \citealt{green2019} for details). The following procedure was used. At each step of the MCMC fitting, we randomly take one of the five samples and use the relation \nh\ = $a \times10^{21}E(B-V)$ to convert the selective reddening $E(B-V)$ into the equivalent hydrogen column density \nh, which is responsible for the ISM absorption in X-rays. The conversion factor $a$ is drawn from the normal distribution with the mean of 8.9 and the standard deviation of 0.4 according to the empirical relation \nh\ = $(2.87\pm0.12)\times10^{21}A_{V}$~cm$^{-2}$ \citep{foight2016} and implying the standard reddening law $A_{V} = 3.1 E(B-V)$, where $A_{V}$ is the optical extinction. 
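As an illustration, the reddening-to-column-density step just described can be sketched in a few lines of {\sc python} (a minimal sketch: the fixed $E(B-V)=0.3$ and the sample size are arbitrary, while the distribution of the conversion factor follows the numbers quoted above):

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_nh(ebv, rng):
    """One draw of N_H (cm^-2) from a selective reddening E(B-V):
    N_H = a x 10^21 E(B-V), with a ~ Normal(8.9, 0.4)."""
    a = rng.normal(8.9, 0.4)
    return a * 1e21 * ebv

# E(B-V) = 0.3 corresponds on average to N_H ~ 8.9e21 * 0.3 = 2.67e21 cm^-2
nh = np.array([sample_nh(0.3, rng) for _ in range(10000)])
```

In the actual fitting, one such draw is made at every MCMC step, so the uncertainty of the conversion factor propagates into the posterior distributions.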
Then we compute the distance $D$ using linear interpolation between the closest sample values $\underline{N_\mathrm{H}}$ and $\overline{N_\mathrm{H}}$ such that $\underline{N_\mathrm{H}} \leq$ \nh\ $\leq \overline{N_\mathrm{H}}$. In contrast to {\sc nsmdintb} models, where both the NS radius and the distance are free parameters, the BB model has a normalisation $N=(R_\mathrm{BB}^{\infty}/D)^2$ as a free parameter, where $R_\mathrm{BB}^{\infty}$ is the apparent radius of the equivalent emitting sphere. Thus, in the case of the 2BB~+ PL model at each step we independently sampled the radii $R^{\infty}_\mathrm{cold}$ and $R^{\infty}_\mathrm{hot}$ for colder and hotter emitting areas and then calculated normalisations using these values together with the computed distance. We constrained both radii to be $\leq 20$~km as a reasonable value of an NS radius measured by a distant observer. For the \nsm\ models, we used a prior $\alpha+\zeta~\geq 90\degs$, which follows from the shape of the folded light curve and the PF value. The PL photon indices $\Gamma$ were constrained in the range of 0.5--3 typical for pulsars \citep[e.g.][]{kargaltsev2008}. However, in the case of the 2BB~+ PL model, $\Gamma$ tends to the upper bound, while the temperature of the hotter BB component in general takes slightly lower values than that in the case of the pure BB~+ BB model. This behaviour is typical for situations when the PL component competes with the hotter BB component at lower energies, while the number of counts above $\sim 2$~keV is very low and the PL slope there is poorly constrained. This is exactly the case of J0554. Thus, we applied the fixed photon index $\Gamma=2$ for the 2BB~+ PL model. The best-fitting parameters are presented in Table~\ref{tab:mcmc:bestfit}. The pulsar spectrum with the best-fitting model \nsh\ is shown in Fig.~\ref{fig:spec}. In Fig.~\ref{fig:triangle} we present a corner plot of posterior distribution functions for some parameters of this model. 
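The distance interpolation and the blackbody normalisations described above can be sketched as follows; the \nh--$D$ grid below is a hypothetical stand-in for one sampled dust-map profile, and the normalisation assumes the standard {\sc xspec} \textsc{bbodyrad} convention $K=(R_\mathrm{km}/D_\mathrm{10\,kpc})^2$:

```python
import numpy as np

# Hypothetical stand-in for one sampled N_H--D profile from the dust map
nh_grid = np.array([0.5, 1.0, 1.5, 2.0, 2.5]) * 1e21   # cm^-2
d_grid = np.array([0.3, 0.8, 1.4, 2.2, 3.5])           # kpc

def distance_from_nh(nh):
    """Linear interpolation between the closest profile samples."""
    return float(np.interp(nh, nh_grid, d_grid))

def bbodyrad_norm(radius_km, distance_kpc):
    """xspec bbodyrad normalisation K = (R_km / D_10kpc)^2."""
    return (radius_km / (distance_kpc / 10.0)) ** 2

d = distance_from_nh(1.62e21)          # ~1.6 kpc on this toy profile
norm_cold = bbodyrad_norm(19.0, 5.5)   # cold BB radius and distance from Table
```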
To check the fit quality, we calculated values of the $\chi^2$-statistic using the spectra grouped to ensure at least 25 counts per bin. They are provided in the last row of Table~\ref{tab:mcmc:bestfit}. These $\chi^2$ values differ from the preliminary ones partly owing to the inclusion of parameter constraints in the fitting procedure. The Gaussian line model \textsc{gabs} describing the spectral feature is represented as \begin{equation} \mathfrak{G}(E) = \exp\left(-\frac{\tau}{\sqrt{2\pi} \sigma} e^{-\frac{(E-E_0)^2}{2\sigma^2}} \right), \end{equation} where $E_0$, $\sigma$ and $\tau$ are the line centre, width and depth. $\tau$ is poorly constrained by the spectral fits, so we provide only its lower limits. We also calculated the line equivalent width (EW), defined as \begin{equation} \mathrm{EW} = \int (1 - \mathfrak{G}(E))\,\mathrm{d}E, \end{equation} which is better constrained by the fits. From the 2nd column of Table~\ref{tab:mcmc:bestfit}, one can see that the 2BB~+ PL model fails to fit the data when the \nh--$D$ relation is used. We have found that it describes the spectrum poorly at energies $\lesssim 0.4$~keV, where the model flux becomes systematically higher than the observed one. Exclusion of the \nh--$D$ prior removes the discrepancy in the soft band and results in a statistically acceptable fit (the 3rd column of Table~\ref{tab:mcmc:bestfit}). However, for reasonable BB normalisations, which imply apparent radii of $\lesssim 20$~km and distances of $\gtrsim 0.9$~kpc, the obtained column densities are about 1.5 times higher than the maximum value provided by the dust map of \citet[][see Fig.~\ref{fig:nh-d}]{green2019}. Recalling that the addition of the Gaussian absorption line significantly improves the fit in the case of the atmosphere models, we tried to add the line to the 2BB~+ PL model as well.
This leads to a good fit without \nh\ anomalies even when the \nh--$D$ relation is implemented (see the 4th column in Table~\ref{tab:mcmc:bestfit}). \section{Discussion} \label{sec:discussion} \subsection{\psr\ spectral and timing properties} \label{subsec:spec&tmg} The time-integrated X-ray spectrum of \psr\ can be well described by composite emission models containing thermal and non-thermal spectral components. For the former, we tried the BB model, which imitates emission from a solid or liquid NS surface, and the hydrogen atmosphere models with dipole magnetic fields. In the case of the 2BB~+ PL model without the \nh--$D$ prior, we assume that the cold BB component originates from the bulk of the stellar surface while the hot component describes emission from a hot spot (e.g. polar caps). Then the distance to \psr\ should be $\lesssim 2$~kpc, otherwise the radius of the cold component would be implausibly large (see the 3rd column of Table~\ref{tab:mcmc:bestfit}). However, the model requires a much higher column density than expected for such a distance according to the dust map by \citet{green2019}. This means that either the model, being formally statistically acceptable, results in a physically implausible \nh\ and hence wrong NS parameters, or the considered \nh--$D$ relation is incorrect in the \psr\ direction. Since the pulsar is projected onto the SNR \snr, the derived absorption excess could be provided by some filament of the remnant even if it is not visible in X-rays. Unfortunately, there are no extragalactic X-ray objects in the pulsar field bright enough to make an independent estimate of the maximum absorption in the pulsar direction. However, we checked some other extinction maps by \citet*{drimmel2003}, \citet{chen2014}, \citet{chen2019}, \citet{sale2014} and \citet{lallement2014,lallement2018,capitanio2017}.
All of them provide $N_\mathrm{H} \lesssim 3.5\times 10^{21}$~cm$^{-2}$ at 2~kpc, which is significantly smaller than the best-fitting value. Moreover, based on the [O~{\sc iii}]/H$\alpha$ emission ratio measured for the SNR \snr\ associated with \psr, \citet{how2018} argued that $E(B-V)$ should not be much greater than 0.3 (i.e. \nh~$\lesssim2.7\times10^{21}$~cm$^{-2}$). This makes the 2BB~+ PL model hardly acceptable without additional constraints and model components. Implementation of the \nh--$D$ prior, along with the addition of the low-energy absorption feature, solves the 2BB~+ PL problem, resulting in a column density about twice lower and in acceptable fit statistics. If we assume that the cold BB component describes the emission from the bulk of the stellar surface while the hot component corresponds to a hot spot, then the size of the latter is larger than the `standard' polar cap with a radius of a few hundred metres expected in the model of \citet{sturrock1971}. The obtained effective temperatures are in agreement with results for other NSs of similar age \citep{potekhin2020}. The \nsm\ models are also plausible. Parameters obtained for two different magnetic fields are very similar (see the last two columns of Table~\ref{tab:mcmc:bestfit}).\footnote{We also tried analogous models with other field strengths and found that a decrease of $B_\mathrm{p}$ to $\sim 2\times10^{12}$~G worsens the fitting statistics, while an increase to $5\times10^{13}$~G does not noticeably change the results.} These models indicate that \psr\ should be a rather heavy NS, with a mass of 1.6--2.1~\msun. Its redshifted effective temperature is $T^\infty = 48 \pm 3$~eV ($0.56 \pm 0.03$~MK), about twice lower than the temperature of the cold BB component from the bulk surface of the NS in the 2BB~+ PL model.
This is a typical situation when the thermal NS component is equally well described by the BB and atmosphere models, since the latter spectra are harder \citep{potekhin2014}. It is important that all three statistically acceptable models implementing the \nh--$D$ prior require the absorption feature at $\approx 0.35$~keV regardless of the local continuum shape. This supports the presence of the feature in the data. In the 2BB~+ PL model without the \nh--$D$ relation, which we consider hardly acceptable, the absence of the absorption line is compensated by the implausibly high \nh\ value. We note that there are only a few rotation-powered pulsars for which absorption lines have been reported: PSR J1740$+$1000 \citep{kargaltsev2012}, PSR J0659$+$1414 \citep{arumugasamy2018,zharikov2021,schwope2021}, PSR J0726$-$2612 \citep{rigoselli2019}, PSR J1819$-$1458 \citep{mclaughlin2007,miller2013} and Calvera \citep{shevchuk2009,shibanov2016,mereghetti2021}. For none of these objects has the nature of the lines been unambiguously established. The nature of the \psr\ feature is also unclear. One possible explanation is a cyclotron absorption line, whose position measured by a distant observer is given by \begin{equation} E_\mathrm{cyc}^\infty = 11.577\ (1+z_\mathrm{g})^{-1} Z\ \frac{m_\mathrm{e}}{m} \frac{B}{10^{12}\ \mathrm{G}}\ \mathrm{keV}, \end{equation} where $m_\mathrm{e}$ is the electron mass, and $Z$ and $m$ are the charge number and mass of the particle responsible for the cyclotron absorption. Hence, the surface magnetic field is $\approx 4\times10^{10}$~G if the line is produced by electrons, $\approx 7\times10^{13}$~G if it is produced by protons, or $\approx (3-4)\times10^{13}$~G if it is produced by heavier ions. The first value is $\sim 200$ times lower and the second is about an order of magnitude higher than the estimated spin-down magnetic field (see Table~\ref{tab:pars} and Appendix~\ref{sec:atm}).
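These field estimates follow directly from inverting the equation above; a minimal numerical check (taking $1+z_\mathrm{g}=R^\infty/R$ from the best-fitting NS133 values in Table~\ref{tab:mcmc:bestfit}):

```python
ME_OVER_MP = 1.0 / 1836.15267  # electron-to-proton mass ratio

def cyclotron_b(e_obs_kev, z_g, charge=1, me_over_m=1.0):
    """Invert E_cyc = 11.577 (1+z_g)^-1 Z (m_e/m) B_12 keV for B (in G)."""
    return e_obs_kev * (1.0 + z_g) / (11.577 * charge * me_over_m) * 1e12

z_g = 17.1 / 13.0 - 1.0  # from R_inf = R (1 + z_g)
b_electron = cyclotron_b(0.35, z_g)                        # ~4e10 G
b_proton = cyclotron_b(0.35, z_g, me_over_m=ME_OVER_MP)    # ~7e13 G
```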
Thus, the electron cyclotron line can be created at a few stellar radii above the NS surface, e.g. in a radiation belt, where the magnetic field is much weaker \citep[see e.g.][]{luo&melrose2007}. Otherwise, if it is an ion cyclotron line, then the magnetic field at the surface is considerably stronger than the $B$ values estimated in Appendix~\ref{sec:atm} and used in our atmosphere modelling. Such a cyclotron line might indicate the presence of strong multipole field components \citep[cf.][]{bilous2019,lockhart2019} or magnetic loops \citep*{Tiengo_13,MereghettiPM15,Rodriguez_Castillo_16}, which are not considered in our models. We also note that magnetized atmosphere models predict an equivalent width of the proton cyclotron line that is too low in comparison with the observed one (cf.{} the discussion of a similar case by \citealt{Hambaryan_17}). In particular, our dipolar models show that the proton cyclotron line is damped by the smearing due to magnetic field variations over the surface (see Fig.~\ref{fig:spec_theor}). Alternatively, the feature may be formed by atomic transitions in a non-hydrogen atmosphere (e.g., \citealt{mori2007}), the ISM or a cloud near the outer part of the magnetosphere \citep[cf.][]{Hambaryan_09,Pires_19}. Finally, we cannot exclude the possibility that the absorption feature is an instrumental artefact. The addition of the line significantly improves the fit of the PN spectrum but does not influence the statistics for the MOS data. However, this is not surprising since the former spectrum contains many more counts. To check whether it is an artefact, it would be useful to examine whether the spectra of other sources in the PN FoV show similar features. Unfortunately, none of the other sources is bright enough for such an analysis. Phase-resolved spectral analysis of \psr\ could also help to clarify the nature of the feature in its emission. However, this is impossible because of the low count statistics.
The non-thermal X-ray luminosity of \psr\ in the 2--10~keV band $L_\mathrm{X} \approx (1.6-1.9)\times10^{30}$~\ergs\ for the \nsm\ models and $1.2\times10^{31}$~\ergs\ for the \bbg\ model. The corresponding X-ray efficiencies $\eta_\mathrm{X}=L_\mathrm{X}/\dot{E}\approx10^{-4.8}$--$10^{-4.9}$ and $10^{-3.7}$, respectively\footnote{For the \nsm\ models, we recalculated the spin-down luminosity using the best-fitting parameters and the formula for the moment of inertia by \citet{RavenhallPethick94}. The resulting $\dot{E}=(1.2-1.4)\times10^{35}$~\ergs. For the \bbg\ model, we used $\dot{E}$ from Table~\ref{tab:pars}.}. For a 52~kyr old pulsar these values are compatible with the dependencies $L_{X}(t_\mathrm{c})$ and $\eta_\mathrm{X}(t_\mathrm{c})$ based on observations of other X-ray emitting pulsars \citep[see e.g.][]{zharikov&mignani2013}. Using the \nh--$D$ relation, we found the distance to \psr\ to be 1.6--2.4~kpc in the case of the atmosphere models. This is compatible with the `pseudo-distance' of 1.9~kpc based on the $\gamma$-ray data. If this estimate is correct, the SNR \snr\ is located somewhat closer than relations between the radio surface brightness and the diameter of the SNR predict. On the other hand, the \bbg\ model resulted in the larger distance of 2.7--5.5~kpc. This agrees with the upper limit on the distance to \snr\ of about 5~kpc provided by \citet{how2018}. We detected, for the first time, X-ray pulsations with the \psr\ spin period. The pulse profile shows two peaks per period. The pulsed fraction in the 0.2--2~keV band is $25\pm6$ per cent. This is a typical value for the thermal emission originated from the bulk of an NS surface \citep[e.g.][]{pavlov&zavlin2000}. Using the pulse profile, we can set additional constraints on the angles $\alpha$ and $\zeta$ in the case of the atmosphere models. The \psr\ observed and theoretical light curves are shown in Fig.~\ref{fig:lc-fit}. 
The maximum PF provided by the models {\sc nsmdintb} in the 0.2--2~keV band is $\approx 20$ per cent. This is somewhat lower than the measured value but is compatible with it within uncertainties. The corresponding $\alpha$ and $\zeta$ both lie in the range of 50\degs--70\degs, which is compatible with the results of the spectral analysis (Table~\ref{tab:mcmc:bestfit}). Despite the low number of counts, there is a marginal peak in the 2--4~keV pulse profile which coincides with the smaller peak in the 0.2--2~keV band (see Fig.~\ref{fig:prof}). This may indicate pulsations of the non-thermal component. As can be seen from Fig.~\ref{fig:spec}, its flux in the 0.2--2~keV band is significantly lower than that of the thermal component. Nevertheless, it may contribute to the pulsed flux and thus somewhat increase the model-predicted PF. It may also be responsible for some asymmetry of the pulse profile. The inset in Fig.~\ref{fig:lc-fit} shows the 90 per cent confidence contours of the 2D distribution of the angles $\alpha$ and $\zeta$ at fixed values of $M$, $R$, $B$ and $T^\infty$. These contours are obtained assuming that the flux values, derived from the number of counts in the 9 phase bins, are normally distributed and uncorrelated. In contrast to the atmosphere models, the light curves for the 2BB model are almost flat. This is partly because the BB components assume a uniform temperature distribution over the emitting areas and isotropic radiation (unlike the peaked radiation from the magnetized atmospheres), but mainly because the ratio of the hot to the cold BB component flux is small. Taking the most probable values for the model \bbg{} from Table~\ref{tab:mcmc:bestfit}, we obtained the strict upper limit $\mbox{PF} < 3.2$ per cent, which is incompatible with the observed one.
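A crude version of this last argument can be reproduced with bolometric fluxes only ($F\propto R^2T^4$), assuming, optimistically, a fully modulated hot component on top of a constant cold one. Even under this assumption the PF stays at the per cent level, far below the measured $25\pm6$ per cent (the 3.2 per cent limit quoted above comes from the full in-band calculation; this sketch only shows the order of magnitude):

```python
def pf_upper_bound(r_cold, t_cold, r_hot, t_hot):
    """PF = (F_max - F_min) / (F_max + F_min) with the hot BB fully
    modulated (0..F_hot) and the cold BB constant; F scales as R^2 T^4."""
    f_cold = r_cold**2 * t_cold**4
    f_hot = r_hot**2 * t_hot**4
    return f_hot / (2.0 * f_cold + f_hot)

# Most probable 2BB + PL + gabs values from Table: radii in km, temperatures in eV
pf = pf_upper_bound(19.0, 84.0, 1.19, 135.0)   # ~0.013
```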
\subsection{\psr\ and NS cooling theories} \label{subsec:cooling} The comparison of the measured thermal luminosity with predictions of NS cooling theories can provide constraints on the EoS and other properties of the NS matter and tests for theoretical models of such matter. In Fig.~\ref{fig:cool} we demonstrate examples of such a comparison. Here, the cooling curves, which show the time dependence of the bolometric thermal luminosity of the star in the reference frame of a distant observer, $L^\infty(t^\infty)$, have been produced using the computer code presented by \citet{PotekhinChabrier18} with essentially the same microphysics input. The most uncertain microphysics ingredients that can significantly affect the cooling are the EoS and composition of the neutron star core, the composition of the heat-blanketing outer envelopes, and the density dependences of the critical temperatures of the baryon superfluidity in the core (see, e.g., \citealt*{PotekhinPP15}, for review). Here we show the cooling curves for two EoSs and for two models of the heat-blanketing envelope composition. For the EoS and composition of the core we employ two widely used models: the model BSk24 \citep{Pearson_18} and the model A18+$\delta$v+UIX$^*$ \citep*{AkmalPR98}, which is hereafter named APR$^*$ and parametrized according to \citet{PotekhinChabrier18}. Both EoSs describe the $npe\mu$ matter, which consists of the leptons and nucleons without allowance for other baryons or free quarks. For the composition of the non-accreted crust and envelopes, we use the BSk24 model of the ground-state matter. The alternative model is the accreted envelope with helium filling the heat-blanketing layers up to the density $\rho=10^9$~\gcc{} \citep*[cf.][]{BeznogovPY21} and with the ground-state composition at higher densities.
We use the model of a magnetized envelope with the surface distribution of magnetic field and local effective temperature consistent with the atmosphere model described in Appendix~\ref{sec:atm}, with the polar magnetic strength $B_\mathrm{p}=2\times10^{13}$~G. We assume that the field is a relativistic dipole not only in the atmosphere but also in the stellar core. Accordingly, we take into account the effects of Landau quantization on the EoS and thermal conductivities in the outer crust and envelopes and the heat loss due to the synchrotron and fluxoid-scattering mechanisms of neutrino emission, as well as magnetically induced modifications of other neutrino emission mechanisms (see \citealt{PotekhinPP15} and references therein). For comparison, for each EoS and each envelope composition, we also plot a cooling curve for the same NS without magnetic field. The baryon superfluidity in the $npe\mu$ matter of an NS can be of three types, characterized by the neutron singlet or triplet and proton singlet pairing. The theory of the neutron singlet superfluidity, which occurs in the inner crust of an NS, is sufficiently robust \citep[see][]{Ding_16}; specifically, here we use the MSH model \citep{MargueronSH08} in the parametrized form by \citet{Ho_ea15}. In contrast, there are several substantially different theoretical models for the other two types of superfluidity that operate in the NS core and considerably affect the cooling (see, e.g., \citealt{Page_14,SedrakianClark19}, for review). For illustration, we use the parametrizations from \citet{Ho_ea15} for the BS \citep{BaldoSchulze07} and TTav \citep{TakatsukaTamagaki04} models of the proton singlet and neutron triplet superfluidity types, respectively. The error bar in Fig.~\ref{fig:cool} embraces the 1$\sigma$ limits to the measured thermal luminosity $L^\infty$ for the atmosphere models (the last two columns of Table~\ref{tab:mcmc:bestfit}). 
It is placed at $t^\infty=t_\mathrm{c}$, and the leftward arrow indicates that the true age of the pulsar is likely somewhat smaller. This expectation is based on the fact that usually (although not always) true ages of pulsars are smaller than their characteristic ages (see, e.g., \citealt{potekhin2020}). Figure~\ref{fig:cool}a shows the cooling curves for the BSk24 model. These cooling curves pass above the observational error bar for the NS model with $M=1.4\msun$, but below it for the models with $M\geq1.6\msun$. The fast cooling of the massive NSs is mainly due to the powerful direct Urca processes of neutrino emission, which operate at sufficiently high densities in the cores of these stars and overpower the more common modified Urca processes. This cooling enhancement occurs if $M$ exceeds a certain threshold value \citep[e.g.,][]{Haensel95}, which is slightly below $1.6\msun$ for the BSk24 model \citep{Pearson_18}. The observations can be made compatible with the enhanced cooling models if we assume that the true age of J0554 is smaller than \tc. Among the cooling curves shown in Fig.~\ref{fig:cool}a, the smallest discrepancy between the true and characteristic ages is required for the NS with $M=1.6\msun$ and an accreted envelope. In this case $M$ is only slightly larger than the direct Urca threshold mass $M_\mathrm{DU}$, so that the direct Urca processes operate only in a small central part of the NS core. Figure~\ref{fig:cool}b shows the cooling curves for the APR$^*$ model. For this EoS, the direct Urca processes cannot occur for $M\lesssim 2\msun$. However, the non-enhanced (so-called minimal) cooling can be compatible with the observations of J0554 in this case, if the blanketing envelope is non-accreted. It is worthwhile to note that the latter compatibility is achieved due to the enhancement of the modified Urca processes, recently discovered by \citet*{Shternin_18}. This enhancement becomes very strong near the threshold density for opening the direct Urca process.
For the APR$^*$ model, the threshold mass only slightly exceeds $2\msun$; therefore, the effect of \citet{Shternin_18} significantly enhances the total neutrino luminosity of the NS with $M=2\msun$ and thus decreases its temperature. In the absence of such enhancement, the luminosity would be higher for NS models with higher masses in the minimal cooling scenario, as illustrated by the long-dashed curves in the inset to Fig.~\ref{fig:cool}b, because more massive stars have larger heat capacities. The presented analysis is self-consistent in the sense that the cooling scenarios for the BSk24 and APR$^*$ models are considered taking into account the NS parameters (mass, radius, magnetic field) obtained from the spectral fit. In order to give preference to one of the EoSs, better constraints on the NS mass and true age are required. \section{Summary} \label{sec:sum} Using 45-ks \xmm\ observations, we detected a soft point-like X-ray source within the 1$\sigma$ area of the \fermi\ position of the \gr\ pulsar \jpsr, confirming the earlier \sw\ detection at much higher significance. We firmly established the pulsar nature of the source by detecting coherent pulsations with the \psr\ frequency in the 0.2--2~keV band. The pulse profile demonstrates two peaks separated by about half of the rotation phase. The background-corrected pulsed fraction is $25\pm6$ per cent. Marginal single-peaked pulsations are seen in the hard band above 2~keV, but the low number of counts precludes definite conclusions. The spectral analysis shows that the thermal emission from the surface of the NS dominates at energies below $\approx 2$~keV, while a weak non-thermal magnetospheric component may be present in the hard band. To describe the former, we constructed a set of NS hydrogen atmosphere models with dipole magnetic fields. To fit the data, they require the addition of an absorption line at $\approx 0.35$~keV. The nature of the feature is unclear.
Among the possibilities are cyclotron absorption, atomic transitions in the ISM or NS atmosphere, or an instrumental artefact. Implementing the absorption column density--distance relation, we estimated the distance to the pulsar to be $\approx 2$~kpc. We note that the combination of two blackbody components, corresponding to the cold NS surface with a hot spot on it, and the absorption line also provides a statistically acceptable fit, resulting in a distance about twice as large. However, this model is less physically realistic and it fails to reproduce the observed pulse profile with the pulsed fraction $\gtrsim 20$ per cent. Therefore, we consider the atmosphere models as more appropriate. The best fitting parameters obtained for the atmosphere models suggest that \psr\ is a rather heavy NS with a mass in the range of 1.6--2.1~\msun\ and a radius of about 13~km. The redshifted effective temperature of $\approx 50$~eV corresponds to a bolometric luminosity of $\approx 2\times10^{32}$~\ergs\ as seen by a distant observer. Utilising this value, together with the pulsar characteristic age of 50~kyr, we investigated cooling scenarios for \psr\ in the framework of two popular EoSs. For the BSk24 model, the observed bolometric luminosity is consistent with the predictions if the mass of the NS is close to the lower limit of the range obtained from the spectral fit, and if the true age of \psr\ is substantially smaller than the characteristic one. This model also favours the presence of an accreted heat-blanketing envelope. On the other hand, the APR$^*$ model requires a non-accreted envelope and a mass close to $2.0\msun$, which leads to effective cooling of the NS through modified Urca processes and to compatibility of the model cooling curve with observations if the true age of \psr\ is close to the characteristic one. Deeper X-ray observations are necessary to better constrain the \psr\ parameters.
Phase-resolved spectral analysis would allow one to unveil the nature of the absorption feature and to better constrain the pulsar geometry. Measurement of the pulsar proper motion could confirm the association of \psr\ and SNR \snr\ and provide an independent estimate of their age.
\section*{Acknowledgements} We are grateful to V. Suleimanov for providing calculations of specific spectral fluxes from hydrogen atmospheres of NSs with strong magnetic fields. We also thank the anonymous referee for valuable suggestions that improved the quality of the paper. DAZ thanks Pirinem School of Theoretical Physics for hospitality. The work of AST and AYP was supported by the Russian Science Foundation grant 19-12-00133-P. Scientific results reported in this paper are based on observations obtained with \xmm, an ESA science mission with instruments and contributions directly funded by ESA Member States and NASA.
\section*{Data Availability} The \xmm\ data are available through the data archive \url{https://www.cosmos.esa.int/web/xmm-newton/xsa}. \bibliographystyle{mnras} \bibliography{ref}
\appendix
\section{Pulsation detection significance and spin frequency uncertainty} \label{a:simulations} To estimate the confidence level of the detection of the periodic signal from \psr, we generated 1 million synthetic light curves with the length and the mean count rate equal to the ones observed from \psr, but consisting of pure Poisson noise without any periodic component. For each of the light curves, we performed a $Z_2^2$-test using the same frequency window and the same number of trial frequencies as for the $Z_2^2$-test on the observed light curve. The obtained highest $Z_2^2$ values were used to construct the cumulative distribution function (CDF) of $Z_2^2$ in the absence of the periodic signal (see Fig.~\ref{fig:cdf}).
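The Monte Carlo estimate of the false-alarm probability described above can be sketched as follows. This is a minimal illustration with numpy; the count number, frequency window and number of simulations are placeholder values, far smaller than the $10^6$ light curves used in the actual analysis:

```python
import numpy as np

def z2_stat(phases, m=2):
    """Z^2_m statistic for pulse phases given in cycles."""
    z2 = 0.0
    for k in range(1, m + 1):
        c = np.cos(2.0 * np.pi * k * phases).sum()
        s = np.sin(2.0 * np.pi * k * phases).sum()
        z2 += c * c + s * s
    return 2.0 * z2 / len(phases)

def max_z2_noise(n_events, t_span, trial_freqs, rng):
    """Maximum Z^2_2 over trial frequencies for a pure-Poisson event list."""
    # Uniform arrival times are equivalent to a constant-rate Poisson
    # process conditioned on the total number of events.
    times = rng.uniform(0.0, t_span, size=n_events)
    return max(z2_stat((f * times) % 1.0) for f in trial_freqs)

rng = np.random.default_rng(1)
t_span = 45e3                                # 45-ks exposure, as in the observation
n_events, n_sims = 500, 200                  # illustrative values, not the measured ones
trial_freqs = np.linspace(2.0, 2.0005, 21)   # hypothetical narrow frequency window (Hz)

sims = np.sort([max_z2_noise(n_events, t_span, trial_freqs, rng)
                for _ in range(n_sims)])
# Empirical false-alarm probability of exceeding the observed Z^2_2 = 42.7:
p_fa = np.mean(sims >= 42.7)
```

With $10^6$ simulated light curves, as used in the paper, the tail of this empirical CDF resolves false-alarm probabilities down to $\sim10^{-6}$.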
We found that the probability of obtaining $Z_2^2 = 42.7$ from pure noise is about $2\times10^{-6}$, which corresponds to a detection confidence level of $\approx 4.7\sigma$. The frequency uncertainty was estimated in a similar fashion. We fitted the pulse profile with the sum of the first two harmonics. We simulated 1000 event lists of the periodic signal with the measured frequency, amplitudes and relative phases of the two harmonics, keeping the mean count rate fixed and varying the number of photons and their times of arrival according to Poisson statistics. For each event list, we found the frequency of the signal by performing a $Z_2^2$-test identical to the one applied to the real data. The nearly symmetrical 68 per cent confidence interval of the resulting frequency distribution (see Fig.~\ref{fig:freqerr}) was taken as the desired uncertainty.
\section{Atmosphere models} \label{sec:atm} The computation of the spectrum that can be measured by a distant observer is patterned after \citet{Zyuzin_21}. We construct the integral spectrum by assembling local spectra at different patches on the surface. We assume a dipolar magnetic field, modified by the effects of General Relativity \citep{GinzburgOzernoy,PavlovZavlin00}. The temperature distribution associated with this magnetic field is calculated following \citet{Potekhin_03}. For every selected field strength at the magnetic pole $B_\mathrm{p}$ and selected NS mass and radius, the local radiative flux density was computed at three magnetic latitudes, including the pole, for a set of 480 directions of the photon wave vector and for 150--200 photon energies in the X-ray band, using an advanced version of the code described in \citet*{SuleimanovPW09}. The fourth latitude is the equator, which is too cold to allow construction of an atmosphere model with the currently available opacities in strong magnetic fields for the selected range of effective temperatures $\log T^\infty\mbox{\,[K]} = 5.4$--$5.8$.
However, its contribution to the total flux is small, so we replace it by the blackbody model (we have checked that using alternative models does not lead to a noticeable change in the total spectrum). The code of \citet{SuleimanovPW09} has been modified to allow for different angles $\theta_B$ between the magnetic field and the normal to the surface. Hydrogen composition is considered, taking into account incomplete ionization. The effects of the strong magnetic field and the atomic thermal motion across the field on the plasma opacities are treated following \citet{PotekhinChabrier03} with the improvements described in \citet*{PotekhinCH14}. Polarization vectors and opacities of normal electromagnetic modes are calculated as in \citet{Potekhin_04}. Then flux values at arbitrary latitudes, energies and directions are obtained by interpolation (or extrapolation, whenever needed). The monochromatic spectral flux density measured by a distant observer is computed by integrating the emission from different local patches over the stellar surface for any selected angle $\Theta_\mathrm{m}$ between the magnetic dipole axis and the line of sight (see Appendix~A of \citealt{Zyuzin_21} for details). In the axisymmetric model, the pulsar geometry is determined by the angles $\alpha$ and $\zeta$ that the spin axis makes with the magnetic axis and with the line of sight, respectively \citep[e.g.,][]{PavlovZavlin00}. To produce phase-resolved spectra, it is sufficient to calculate $\cos\Theta_\mathrm{m} = \sin\zeta\sin\alpha\cos\phi + \cos\alpha\cos\zeta$ for each rotation phase $\phi$. Then the light curve and the phase-integrated spectrum are given by integration of the phase-resolved spectrum over the energy or over the phase $\phi$, respectively. 
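The rotation-phase dependence of the angle $\Theta_\mathrm{m}$ quoted above follows from spherical trigonometry and can be sketched numerically (the angles below are illustrative, not fitted values):

```python
import numpy as np

def magnetic_colatitude(alpha, zeta, phi):
    """Angle Theta_m between the magnetic axis and the line of sight at
    rotation phase phi, for magnetic inclination alpha and viewing angle
    zeta (all angles in radians), as in the text:
    cos Theta_m = sin(zeta) sin(alpha) cos(phi) + cos(alpha) cos(zeta)."""
    cos_tm = (np.sin(zeta) * np.sin(alpha) * np.cos(phi)
              + np.cos(alpha) * np.cos(zeta))
    return np.arccos(np.clip(cos_tm, -1.0, 1.0))

phi = np.linspace(0.0, 2.0 * np.pi, 361)
# Illustrative geometry: alpha = 60 deg, zeta = 75 deg
theta_m = magnetic_colatitude(np.radians(60.0), np.radians(75.0), phi)
# At phi = 0 the magnetic pole is closest to the line of sight
# (Theta_m = |zeta - alpha|); at phi = pi it is farthest (Theta_m = zeta + alpha).
```

Sampling the precomputed local spectra at these $\Theta_\mathrm{m}$ values per phase bin, and then integrating over energy or phase, yields the light curve and the phase-integrated spectrum, respectively.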
For isolated pulsars, the widely used estimate of the magnetic field strength is based on the expression \begin{equation} B \approx 3.2\times10^{19} \,C\, \sqrt{P\dot{P}}\textrm{~~G}, \label{PPdot} \end{equation} where $P$ is the period in seconds, $\dot{P}$ is the period time derivative, and $C$ is a coefficient, which depends on stellar parameters. For the non-relativistic rotating magnetic dipole in vacuo \citep{Deutsch55}, the magnetic field strength at the equator $B_\mathrm{eq}$ is given by setting \begin{equation} C=R_{10}^{-3}\,(\sin\alpha)^{-1}\,\sqrt{I_{45}}, \label{Cdip} \end{equation} where $R_{10}\equiv R/(10\mbox{~km})$ and $I_{45}$ is the moment of inertia in units of $10^{45}$~g~cm$^2$. The latter depends on the EoS, but in most plausible settings it can be estimated with an accuracy within 10 per cent by the approximation of \citet{RavenhallPethick94}, which can be written in the form \begin{equation} I_{45} \approx 0.42(M/\msun)(R^\infty/10\mbox{~km})^2 \label{I45} \end{equation} (see \citealt{BejgerHaensel02} for a more general fitting formula). The characteristic field $B_\mathrm{c}$ (Table~\ref{tab:pars}) is defined by equation~(\ref{PPdot}) with $C=1$ \citep[e.g.,][]{atnf2005}. For the likely values of $\alpha\gtrsim50^\circ$, $M\approx(1.6$--2.1)~\msun\ and $R\approx 11.7$--14.7~km, implied by the fitting results in Table~\ref{tab:mcmc:bestfit}, equations (\ref{PPdot})--(\ref{I45}) give $B_\mathrm{eq}\sim(4$--$11)\times10^{12}$~G, which implies, for the non-relativistic dipole field, $B_\mathrm{p}\sim(0.8$--$2.2)\times10^{13}$~G. These values are consistent with the values $B_\mathrm{p}=10^{13}$~G and $B_\mathrm{p}=2\times10^{13}$~G that we used to construct the atmosphere models. We have also tried models with lower and higher field strengths, but found that they do not provide a better fit. A real pulsar differs from a rotating magnetic dipole, because its magnetosphere is filled with plasma, carrying electric charges and currents. 
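As an illustration, the vacuum-dipole estimate of equations (\ref{PPdot})--(\ref{I45}) can be evaluated numerically. In this sketch the spin parameters and the stellar mass, radius and inclination are placeholder values of the order quoted in the text, not the measured ones:

```python
import math

G, C_CGS, MSUN = 6.674e-8, 2.998e10, 1.989e33   # cgs units

def b_char(P, Pdot):
    """Characteristic field, eq. (PPdot) with C = 1, in gauss."""
    return 3.2e19 * math.sqrt(P * Pdot)

def r_infinity(M_sun, R_km):
    """Apparent (redshifted) radius R^infty = R / sqrt(1 - 2GM/(R c^2))."""
    rs_km = 2.0 * G * M_sun * MSUN / C_CGS**2 / 1e5   # Schwarzschild radius, km
    return R_km / math.sqrt(1.0 - rs_km / R_km)

def b_eq_dipole(P, Pdot, M_sun, R_km, alpha_deg):
    """Equatorial field of a non-relativistic vacuum dipole, eqs. (PPdot)-(I45)."""
    I45 = 0.42 * M_sun * (r_infinity(M_sun, R_km) / 10.0) ** 2   # eq. (I45)
    C = ((R_km / 10.0) ** -3 * math.sqrt(I45)
         / math.sin(math.radians(alpha_deg)))                    # eq. (Cdip)
    return b_char(P, Pdot) * C

# Placeholder spin parameters of the right order for a ~50-kyr pulsar:
P, Pdot = 0.465, 1.4e-13
b_eq = b_eq_dipole(P, Pdot, M_sun=1.8, R_km=13.0, alpha_deg=60.0)
b_p = 2.0 * b_eq   # polar field of a non-relativistic dipole
```

For these illustrative parameters the result falls within the $(4$--$11)\times10^{12}$~G equatorial range quoted in the text.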
According to the results of numerical simulations of plasma behavior in the pulsar magnetosphere \citep{Spitkovsky06}, the equatorial magnetic field can be approximately described by equation~(\ref{PPdot}) with \begin{equation} C\approx(0.8\pm0.1) R_{10}^{-3}\,(1+\sin^2\alpha)^{-1/2} \,\sqrt{I_{45}}, \label{CSpit} \end{equation} which gives estimates in the range of $B_\mathrm{p}\sim(3$--$9)\times10^{12}$~G. Additional uncertainties arise from the effects of General Relativity, pulsar wind and deviations from the pure dipole geometry (see \citealt{Petri19} for discussion and references). For each of the selected $B_\mathrm{p}$ values, we have considered $M$, $R$, $T^\infty$, $\alpha$ and $\zeta$ as fitting parameters, using interpolation and extrapolation based on the computed spectra for $M=1.4\msun$ and 2.0\msun, $R=10$~km, 12~km and 14~km, $\log T^\infty\mbox{\,[K]}=5.4$, 5.5, 5.6, 5.7 and 5.8, and various $\Theta_\mathrm{m}$. Examples of the computed spectra are shown in the top and bottom panels of Fig.~\ref{fig:spec_theor} for less and more compact NS models, respectively. \bsp % \label{lastpage}
Title: A multifrequency characterization of the extragalactic hard X-ray sky
Abstract: Nowadays we know that the origin of the Cosmic X-ray Background (CXB) is due to the integrated emission of nearby active galactic nuclei. Thus, to obtain a precise estimate of the contribution of different source classes to the CXB it is crucial to fully characterize the hard X-ray sky. We present a multifrequency analysis of all sources listed in the 3rd release of the Palermo Swift-BAT hard X-ray catalog (3PBC) with the goal of (i) identifying and classifying the largest number of sources adopting multifrequency criteria, with particular emphasis on extragalactic populations, and (ii) extracting Seyfert galaxies to present here the 2nd release of the Turin-SyCAT catalog. We outline a classification scheme based on radio, infrared and optical criteria that allows us to distinguish between unidentified and unclassified hard X-ray sources, and to classify the remaining ones discriminating between Galactic and extragalactic classes. Our revised version of the 3PBC lists 1176 classified sources (820 extragalactic and 356 Galactic), 199 unclassified, and 218 unidentified sources. According to our analysis, the hard X-ray sky is mainly populated by Seyfert galaxies and blazars. For the Seyfert galaxies, we present the 2nd release of the Turin-SyCAT including a total of 633 Seyfert galaxies, with 282 new sources, an increase of about 80% over the previous release. We confirm that there is no clear difference between the flux distributions of the infrared-to-hard X-ray flux ratio of Seyfert galaxies of Type 1 and Type 2. However, we confirm a significant trend between the mid-IR flux and the hard X-ray flux.
https://export.arxiv.org/pdf/2208.14181
\title{A multifrequency characterization \\ of the extragalactic hard X-ray sky} \subtitle{Presenting the 2$^{nd}$ release of the Turin-SyCAT} \author{M. Kosiba\inst{1, 2} \and H. A. Pe\~{n}a-Herazo\inst{3} \and F. Massaro\inst{2, 4, 5} \and N. Masetti\inst{6, 7} \and A. Paggi\inst{2, 3, 4} \and \\ V. Chavushyan\inst{8} \and E. Bottacini\inst{9, 10} \and N. Werner\inst{1} } \institute{Department of Theoretical Physics and Astrophysics, Faculty of Science, Masaryk University, Kotl\'a\v rsk\'a 2, Brno, 611 37, Czech \\ Republic \and Dipartimento di Fisica, Universit\`{a} degli Studi di Torino, via Pietro Giuria 1, I-10125 Torino, Italy. \and East Asian Observatory, 660 North A'oh{\=o}k{\=u} Place, Hilo, Hawaii 96720, USA. \and Istituto Nazionale di Fisica Nucleare, Sezione di Torino, I-10125 Torino, Italy \and INAF–Osservatorio Astrofisico di Torino, via Osservatorio 20, I-10025 Pino Torinese, Italy \and INAF - Osservatorio di Astrofisica e Scienza dello Spazio, via Piero Gobetti 101, 40129 Bologna, Italy \and Departamento de Ciencias F\'isicas, Universidad Andr\'es Bello, Fern\'andez Concha 700, Las Condes, Santiago, Chile \and Instituto Nacional de Astrofísica, \'Optica y Electr\'onica, Apartado Postal 51-216, 72000 Puebla, M\'exico. \and Dipartimento di Fisica e Astronomia G. Galilei, Universit\`a di Padova, Padova, Italy. \and Eureka Scientific, 2452 Delmer Street Suite 100, Oakland, CA 94602-3017, USA. } \date{Received Month Day, Year; accepted Month Day, Year} \abstract {Nowadays we know that the origin of the Cosmic X-ray Background (CXB) is due to the integrated emission of nearby active galactic nuclei.
Thus, to obtain a precise estimate of the contribution of different source classes to the CXB it is crucial to have a full characterization of the hard X-ray sky.} {We present a multifrequency analysis of all sources listed in the 3$^{rd}$ release of the Palermo \textit{Swift}-BAT hard X-ray catalog (3PBC) with the goal of (i) identifying and classifying the largest number of sources adopting multifrequency criteria, with particular emphasis on extragalactic populations, and (ii) extracting sources belonging to the class of Seyfert galaxies to present here the 2$^{nd}$ release of the Turin-SyCAT.} {We outline a classification scheme based on radio, infrared and optical criteria that allows us to distinguish between unidentified and unclassified hard X-ray sources, as well as to classify those sources belonging to the Galactic and the extragalactic populations.} {Our revised version of the 3PBC lists 1176 classified sources (820 extragalactic and 356 Galactic), 199 unclassified, and 218 unidentified sources. According to our analysis, the hard X-ray sky is mainly populated by Seyfert galaxies and blazars. For the blazar population, we report trends between the hard X-ray and the gamma-ray emissions, since a large fraction of them also have a counterpart detected by the \textit{Fermi} satellite. These trends are all in agreement with the expectations of the inverse Compton models widely adopted to explain the blazar broadband emission. For the Seyfert galaxies, we present the $2^{nd}$ release of the Turin-SyCAT, including a total of 633 Seyfert galaxies, with 282 new sources, corresponding to an increase of $\sim$80\,\% with respect to the previous release. Comparing the hard X-ray and the infrared emissions of Seyfert galaxies, we confirm that there is no clear difference between the flux distributions of the infrared-to-hard X-ray flux ratio of Seyfert galaxies of Type 1 and Type 2.
However, there is a significant trend between the mid-IR flux and the hard X-ray flux, confirming previous statistical results in the literature.} {We provide two catalog tables. The first is the revised version of the 3PBC catalog based on our multifrequency analyses. The second is the second release of the Turin-SyCAT catalog. Finally, we highlight that the \textit{Swift} archive already has extensive soft X-ray data available to search for potential counterparts of unidentified hard X-ray sources. All these datasets will be reduced and analyzed in a forthcoming analysis to determine the precise position of low energy counterparts in the 0.5 -- 10 keV energy range for 3PBC sources that can be targets of future optical spectroscopic campaigns, necessary to obtain their precise classification.} \keywords{catalogs -- methods: data analyses -- X-rays: general} \section{Introduction} The Cosmic X-ray background (CXB) was discovered when the earliest X-ray astronomical rocket experiments were carried out \citep[see e.g.][]{Giacconi1962}. It appeared as a diffuse component of X-ray radiation distributed in all directions. In the decades after its discovery, several different scenarios were proposed to interpret its origin, ranging from new types of faint discrete X-ray sources whose integrated emission could be responsible for the CXB \citep[e.g.][]{Gilli1999, Gilli2001} to diffuse radiative processes occurring in intergalactic space, as for example exotic emission from dark matter particle decay \citep[see e.g.][]{Abazajian2001}. However, the solution to this puzzle arose thanks to deep images obtained first with the ROentgen SATellite (\textit{ROSAT}) \citep{Hasinger1999}, collected in the early nineties, and more recently with the {\it Chandra} X-ray telescope \citep{Weisskopf2000}, all revealing that about 80\,\% of the CXB between 0.5 and 2\,keV is resolved \citep{Hasinger1998}, as suggested by \citet{Cavaliere1976}.
At hard X-ray energies the fraction of the CXB resolved by {\em Swift} and {\em INTEGRAL} is 2\% \citep{Bottacini2012}, and by {\em NuSTAR} 35\% \citep{Harrison2016}. Thus, the origin of the CXB is nowadays established to be mainly due to the high energy emission of extragalactic discrete sources, a large fraction of which belongs to different classes of active galactic nuclei (AGNs) \citep{Gilli2007}. The first survey in the hard X-ray band was carried out by the \textit{UHURU} satellite \citep[a.k.a. SAS-1;][]{Giacconi1971}. Since the discovery of the CXB, many surveys were performed in the soft and hard X-ray bands, including that of \citet{Forman1978}, who produced a catalog of 339 X-ray sources observed by the \textit{UHURU} satellite in the 2--20~keV energy band. \citet{Levine1984}, using the X-ray and Gamma-ray detector HEAO-A4 on board the \textit{HEAO~1} satellite \citep{Rothschild1979}, presented an all-sky survey in the 13--180~keV range detecting 77 new sources. The hard X-ray component of the CXB radiation, observable from 3\,keV up to 300\,keV, shows a distinct peak at $\sim$\,30 keV \citep{Gruber1999}. It is extremely uniform across the sky, with the only exception of an overdensity along the Galactic plane \citep{Valinia1998, Revnivtsev2006, Krivonos2007a}, and it is again strictly connected with the AGN population emitting in the hard X-rays \citep{Frontera2007}. The currently flying satellites, such as the INTErnational Gamma-Ray Astrophysics Laboratory (\textit{INTEGRAL}) \citep{Winkler2003}, with its Imager on Board the \textit{INTEGRAL} Satellite (\textit{IBIS}) \citep{Ubertini2003}, and the Neil Gehrels \textit{Swift} Observatory \citep{Gehrels2004}, with its Burst Alert Telescope \citep[BAT;][]{Barthelmy2004} on board, carrying out measurements in the hard X-rays, have significantly improved our understanding of the origin of the CXB and refined its measurement.
The observed spectrum of the CXB is currently well described by the standard population synthesis model of AGNs, including the fraction of Compton-thick AGNs and the reflection strengths from the accretion disk and torus based on the luminosity- and redshift-dependent unified scheme \citep{Ajello2008, Ueda2014}. This is also shown by recent results achieved thanks to the NuSTAR observations in the 3--20 keV band \citep{Krivonos2021b}. This has also been possible thanks to improvements achieved in the preparation of hard X-ray source catalogues \citep[see e.g.][]{Markwardt2005, Beckmann2006, Churazov2007, Krivonos2007b, Sazonov2007, Tueller2008, Cusumano2010, Bottacini2012, Bird2016, Mereminskiy2016, Krivonos2017, Oh2018, Krivonos2021, Krivonos2022}, the association of hard X-ray sources with their low energy counterparts \citep[e.g.][]{Malizia2010, Koss2019, Bar2019, Smith2020} and their optical spectroscopy follow-ups \citep[e.g.][]{Masetti2006a, Masetti2006b, Masetti2008, Masetti2012, Masetti2013, Parisi2014, Rojas2017, Marchesini2019}. There are three major catalogues built on observations collected in the last decade with two major space missions still active: (i) the Palermo \textit{Swift}-BAT hard X-ray catalogue \citep{Cusumano2010}, based on 54 months of the Swift-BAT operation, currently updated to its 3$^{rd}$ release and with a 4$^{th}$ release ongoing\footnote{\href{https://www.ssdc.asi.it/bat54/}{https://www.ssdc.asi.it/bat54/}}, (ii) the \textit{Swift}-BAT all-sky hard X-ray survey, which published the 105 month \textit{Swift}-BAT catalogue \citep[see e.g.][]{Oh2018}, and (iii) the \textit{INTEGRAL} IBIS catalogue in the energy range 17--100 keV \citep{Bird2016}, performed using the \textit{INTEGRAL} Soft $\gamma$-ray Imager (ISGRI) \citep{Lebrun2003}, the low energy CdTe $\gamma$-ray detector on the \textit{IBIS} telescope \citep{Ubertini2003}.
Here we focus on the investigation of the 3$^{rd}$ release of the Palermo \textit{Swift}-BAT hard X-ray catalog (hereinafter 3PBC), with particular emphasis on extragalactic sources; since the release of the next version is currently ongoing, the results provided by our analysis could be used therein. The 3PBC is based on the data reduction and detection algorithms of the first Palermo {\it Swift}-BAT hard X-ray catalog \citep{Segreto2010, Cusumano2010}. The 3PBC is available only online\footnote{\href{http://bat.ifc.inaf.it/bat\_catalog\_web/66m\_bat\_catalog.html}{http://bat.ifc.inaf.it/bat\_catalog\_web/66m\_bat\_catalog.html}}, thus we refer to the publication of its 2$^{nd}$ release, the 2PBC \citep{Cusumano2010}. The 2PBC provides data in three energy bands, namely 15 -- 30 keV, 15 -- 70 keV and 15 -- 150 keV, for a total of 1256 sources above a 4.8$\sigma$ significance level, where 1079 hard X-ray sources have an assigned soft X-ray counterpart, while the remaining 177 are still unassociated. The total source number increased to 1593 in the 3PBC when considering a signal-to-noise ratio above 3.8, which is the catalog release we analyzed here. Note that only three 3PBC sources are detected at a signal-to-noise ratio lower than 5. The 3PBC catalogue covers 90\,\% of the sky down to a flux limit of 1.1\,$\times$ 10$^{-11}$\,erg\,cm$^{-2}$\,s$^{-1}$, decreasing to $\sim$50\,\% of the sky at a flux limit of 0.9\,$\times$ 10$^{-11}$\,erg\,cm$^{-2}$\,s$^{-1}$. First, we verified the source classification for all associated counterparts listed in the 3PBC, adopting a multifrequency approach. This analysis was corroborated by checking whether additional studies, available in the literature and carried out after the last 3PBC release, allowed us to obtain a more complete overview of the source populations emitting in the hard X-rays.
Then, our final goal was to explore in detail those extragalactic sources identified as Seyfert galaxies \citep{Antonucci1985} to (i) release the 2$^{nd}$ version of the Turin-SyCAT \citep{Herazo2022} and thus (ii) refine our statistical analysis of the correlation found between the infrared (IR) and the hard X-ray fluxes for this extragalactic population. Additionally, we also aim at investigating possible connections between the hard X-ray and the gamma-ray emission in those blazars detected by Fermi-LAT. It is worth noting that, given our final aim, the classification task performed on the Galactic sources is mainly devoted to excluding them from the final sample of new Seyfert galaxies. The present work will also be relevant for the association of hard X-ray sources with their low energy counterparts, which will be included in the next releases of hard X-ray catalogs. In addition, we highlight that the proposed investigation will also provide a more complete overview of those sources that still lack an assigned low energy counterpart and thus remain unidentified. Finally, we remark that the choice of working with the 3PBC rather than subsequent versions of hard X-ray catalogs is mainly motivated by the opportunity of having more multifrequency information available in the literature. However, comparisons with other recent catalogues, namely the 105 month \textit{Swift}-BAT catalog\footnote{\href{https://heasarc.gsfc.nasa.gov/W3Browse/swift/swbat105m.html}{https://heasarc.gsfc.nasa.gov/W3Browse/swift/swbat105m.html}} \citep{Oh2018} and the \textit{INTEGRAL} hard X-ray catalogue \citep{Bird2016}, are also included in the present analysis.
The structure of the paper is outlined as follows: in \hyperref[sec:section2]{Section~2}, we describe the various catalogs and surveys used to search for multifrequency information related to high and low energy counterparts of hard X-ray sources; in \hyperref[sec:section3]{Section~3}, we present the multifrequency classification scheme adopted to label source counterparts. Then \hyperref[sec:section4]{Section~4} focuses on the main results of the characterization of the extragalactic hard X-ray sources, while \hyperref[sec:section5]{Section~5} is entirely devoted to the second release of the Turin-SyCAT catalog and the statistical analysis of the IR -- hard X-ray connection. Finally, our summary, conclusions, and future perspectives are given in \hyperref[sec:section6]{Section~6}. A comparison between our classification analysis and the previous one of the 3PBC is then reported in Appendix \ref{app:3PBC_reclassification}. We used cgs units unless stated otherwise. We adopted a $\Lambda$CDM cosmology with $\Omega_M = 0.286$ and Hubble constant $H_{0} = 69.6$\,km\,s$^{-1}$\,Mpc$^{-1}$ \citep{Bennett2014} to compute cosmological corrections, the same as used for the 1$^{st}$ release of the Turin-SyCAT \citep{Herazo2022}. WISE magnitudes are in the Vega system and are not corrected for Galactic extinction. As shown in our previous analyses \citep{D'Abrusco2014, Massaro2016, D'Abrusco2019}, such correction mainly affects the magnitude at 3.4 $\mu$m for sources lying at low Galactic latitudes (i.e., $|b| < 20^{\circ}$), and it ranges between 2\,\% and 5\,\% of the magnitude values, thus not significantly affecting our results. We indicate the WISE magnitudes at 3.4, 4.6, 12, and 22 $\mu$m as W1, W2, W3, and W4, respectively. For all WISE magnitudes of sources flagged as extended in the AllWISE catalog (i.e., extended flag ``ext\_flg'' greater than 0) we used values measured in the elliptical aperture.
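The cosmological corrections mentioned above follow from the adopted flat $\Lambda$CDM parameters ($\Omega_M = 0.286$, $H_0 = 69.6$\,km\,s$^{-1}$\,Mpc$^{-1}$); as a cross-check, a luminosity distance can be sketched with a simple trapezoidal integration of the Hubble integral (a pure-Python illustration, not the pipeline actually used):

```python
import math

H0 = 69.6                     # km s^-1 Mpc^-1, as adopted in the text
OMEGA_M = 0.286
OMEGA_L = 1.0 - OMEGA_M       # flat LambdaCDM
C_KMS = 299792.458            # speed of light, km s^-1

def luminosity_distance(z, n=10000):
    """Luminosity distance in Mpc: D_L = (1+z) * (c/H0) * int_0^z dz'/E(z'),
    with E(z) = sqrt(Omega_M (1+z)^3 + Omega_L), via trapezoidal integration."""
    zs = [i * z / n for i in range(n + 1)]
    inv_e = [1.0 / math.sqrt(OMEGA_M * (1.0 + zz) ** 3 + OMEGA_L) for zz in zs]
    integral = sum((inv_e[i] + inv_e[i + 1]) / 2.0 for i in range(n)) * (z / n)
    return (1.0 + z) * (C_KMS / H0) * integral
```

For a nearby Seyfert galaxy at, say, $z = 0.05$ this gives a luminosity distance of roughly 220~Mpc with the adopted parameters.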
Sloan Digital Sky Survey (SDSS) \citep{Blanton2017, Abdurro2021} and Panoramic Survey Telescope \& Rapid Response System (Pan-STARRS) \citep{Chambers2016} magnitudes are in the AB system. Given the large number of acronyms used here, mostly due to the different classifications and telescopes used, we summarized them in Table \ref{table:Table of Acronyms}. \begin{table}[h] \caption{Table of acronyms used in the text.} \label{table:Table of Acronyms} \begin{tabular}{cc} \hline \hline Acronym & Meaning \\ \hline \hline ATNF & Australia Telescope National Facility \\ CXB & cosmic X-ray background \\ AGN & active galactic nucleus \\ BLL & BL Lac object \\ BZG & galaxy-dominated blazar \\ BZU & blazar of uncertain type \\ CV & cataclysmic variable \\ FSRQ & flat spectrum radio quasar \\ HERG & high excitation radio galaxy \\ LERG & low excitation radio galaxy \\ LINER & low-ionization nuclear \\ & emission-line region galaxy \\ NOV & nova \\ PN & planetary nebula \\ PSR & pulsar \\ QSO & quasi-stellar object \\ RDG & radio galaxy \\ SNR & supernova remnant \\ WD & white dwarf \\ XBONG & X-ray bright optically normal galaxy \\ \hline \end{tabular} \end{table} \section{Hunting Counterparts of Hard X-ray Sources: Catalogues and Surveys} \label{sec:section2} This section provides a basic overview of all major catalogs used to carry out the cross-matching analysis across the whole electromagnetic spectrum. Here we considered (i) low energy and multifrequency catalogs, listing sources detected in radio, infrared and optical surveys and/or based on literature analyses, and (ii) high energy catalogs, based on hard X-ray and $\gamma$-ray surveys. It is worth noting that the 3PBC catalog is based on a moderately shallow survey, thus we expect sources that are relatively bright in the hard X-rays to be also bright at lower energies, at least for the extragalactic population of 3PBC sources, which is mainly composed of AGNs.
This limits the number of catalogs used to perform the cross-matching analysis, and we used the same ones adopted in the original 3PBC analysis. Our analysis has also been augmented by using the NED\footnote{\href{https://ned.ipac.caltech.edu/}{https://ned.ipac.caltech.edu/}} and SIMBAD\footnote{\href{http://simbad.cds.unistra.fr/simbad/}{http://simbad.cds.unistra.fr/simbad/}} databases, where we queried all sources having a low energy counterpart listed in the 3PBC before providing a final classification, to verify the presence of updated literature information that is not reported in the catalogs adopted for the cross-matching analysis. All catalogs used in the current analysis are listed in Table \ref{table:catalogs_table}. \subsection{Low energy catalogues for cross-matching analysis} At low frequencies, from radio to X-ray energies below 10\,keV, we mainly considered: \begin{enumerate} \item The Revised Third Cambridge catalog\footnote{\href{https://ned.ipac.caltech.edu/uri/NED::InRefcode/1985PASP...97..932S}{https://ned.ipac.caltech.edu/uri/NED::InRefcode/1985PASP...97..932S}} (3CR; \citealt{Spinrad1985}). This catalog provides radio and optical data for 298 extragalactic sources, among the most powerful at low radio frequencies. It includes their positions, magnitudes, classifications, and redshifts, with only 25 sources being still unidentified \citep{Massaro2013, Maselli2016, Missaglia2021}. More than 90\% of the 3CR population have available multifrequency observations at radio, infrared, optical, and X-ray energies \citep[see e.g.][]{Massaro2015b, Maselli2016, Stuardi2018}. The 3CR catalogue was created with a flux density limit S$_{178}$\,$\geq$\,2\,$\times$\,10$^{-26}$\,W\,m$^{-2}$\,Hz$^{-1}$ at 178 MHz, covering the northern hemisphere at declinations above $-5^{\circ}$.
The 3CR catalog has also been augmented by a vast suite of multifrequency observations carried out in the last decades, which provides all the information necessary to have a complete overview of the source classification \citep{Madrid2006, Privon2008, Massaro2010, Massaro2012c, Kotyla2016, Hilbert2016, Balmaverde2019, Gallardo2021, Balmaverde2021}. \item The Fourth Cambridge Survey catalog (4C)\footnote{\href{http://astro.vaporia.com/start/fourc.html}{http://astro.vaporia.com/start/fourc.html}} is based on the radio survey carried out with the large Cambridge interferometric telescope at the Mullard Radio Astronomy Observatory at 178 MHz, detecting sources with flux density S$_{178}$\,$\geq$\,2\,$\times$\,10$^{-26}$\,W\,m$^{-2}$\,Hz$^{-1}$. It was published in two papers, the first listing 1219 sources at declinations between $+$\,20$^{\circ}$\,and\,$+$\,40$^{\circ}$ \citep{Pilkington1965}, while the second includes 3624 sources in two declination ranges, $-$\,07$^{\circ}$\,to\,$+$\,20$^{\circ}$ and $+$\,40$^{\circ}$\,to\,$+$\,80$^{\circ}$ \citep{Gower1967}. \item The Australia Telescope National Facility (ATNF) Pulsar catalog\footnote{\href{https://heasarc.gsfc.nasa.gov/W3Browse/all/atnfpulsar.html}{https://heasarc.gsfc.nasa.gov/W3Browse/all/atnfpulsar.html}} \citep{Manchester2005} is a complete catalog listing more than 1500 pulsars (PSRs). Accretion-powered X-ray PSRs are not included in this catalog, because their periods differ and are unstable on short timescales. The catalog is based on the PSR database of 558 PSRs \citep{Taylor1993}, which was further supplemented by more recent PSR databases \citep{Manchester2001, Edwards2001} to establish the ATNF PSR catalog.
\item The Catalog of Galactic Supernova Remnants (SNRs)\footnote{\href{https://heasarc.gsfc.nasa.gov/W3Browse/all/snrgreen.html}{https://heasarc.gsfc.nasa.gov/W3Browse/all/snrgreen.html}} \citep{Green2017}, an updated version of the original catalog of Galactic SNRs \citep{Green1984}, currently listing 295 SNRs on the basis of results published in the literature up to 2016. \item The 4$^{th}$ edition of the catalog of high mass X-ray binaries in the Galaxy\footnote{\href{https://heasarc.gsfc.nasa.gov/w3browse/all/hmxbcat.html}{https://heasarc.gsfc.nasa.gov/w3browse/all/hmxbcat.html}} \citep{Liu2006} provides 114 sources, including 35 newly detected ones, most of them being X-ray binaries with a Be-type star or a supergiant star as the optical companion. \item The 7$^{th}$ edition\footnote{\href{https://heasarc.gsfc.nasa.gov/W3Browse/all/ritterlmxb.html}{https://heasarc.gsfc.nasa.gov/W3Browse/all/ritterlmxb.html}} of the catalog of cataclysmic variables (CVs), low mass X-ray binaries and related objects (originally published by \cite{Ritter2003}) lists 1166 cataclysmic variables, 105 low mass X-ray binaries, and 500 related objects, for a total of 1771 sources. The sources are provided with coordinates, apparent magnitudes, orbital parameters, stellar parameters, and other characteristics; the entire catalog is split into three tables provided online. \item The 4$^{th}$ edition of the catalog of low mass X-ray binaries in the Galaxy and Magellanic Clouds\footnote{\href{https://heasarc.gsfc.nasa.gov/w3browse/all/hmxbcat.html}{https://heasarc.gsfc.nasa.gov/W3Browse/all/lmxbcat.html}} \citep{Liu2007} contains 187 sources, including 44 newly discovered ones. The companion star of a low mass X-ray binary is typically a K- or M-type dwarf; small percentages of the companions are G-type stars, red giants, or white dwarfs, and even smaller percentages are A- and F-type stars.
Sources are provided with their optical counterparts, spectra, X-ray luminosities, system parameters, stellar parameters of the components, and other parameters. \item The Catalog and Atlas of Cataclysmic Variables\footnote{\href{https://heasarc.gsfc.nasa.gov/W3Browse/all/cvcat.html}{https://heasarc.gsfc.nasa.gov/W3Browse/all/cvcat.html}} (CVcat, \cite{Downes2005}) presented its final release in January 2006, listing 1600 sources. The catalog provides all types of cataclysmic variables, such as novae, dwarf novae, nova-like variables, sources classified only as CVs, interacting binary WDs, and possible supernovae. It also contains all objects that were classified as CVs at some point in the past and are no longer considered to be CVs; those stars are labeled as NON-CV and provided with the relevant references. \item To cross-match the sources with galaxy clusters we used only the Abell catalog of rich galaxy clusters\footnote{\href{https://heasarc.gsfc.nasa.gov/W3Browse/all/abell.html}{https://heasarc.gsfc.nasa.gov/W3Browse/all/abell.html}} \citep{Abell1989}. This catalog was built from a manual all-sky search for overdensities of galaxies on photographic plates. It contains 4073 rich galaxy clusters, each with at least 30 galaxies in the magnitude range between m$_3$ and m$_3$\,+\,2, where m$_3$ is the magnitude of the third brightest cluster galaxy. \end{enumerate} \subsection{High energy surveys for cross-matching analysis} We also compared our classification of 3PBC sources with those of two hard X-ray catalogs (energies larger than 10\,keV) and with one of the latest releases of the \textit{Fermi} catalog of $\gamma$-ray sources. The former comparison also allows us to obtain more information about the source classifications, in particular for the Galactic population, while the latter allows us to look for any trend between the hard X-ray and the $\gamma$-ray emission for the class of blazars. To carry out this task we used the following catalogs.
\begin{enumerate} \item The 105 month \textit{Swift}-BAT catalog\footnote{\href{https://heasarc.gsfc.nasa.gov/W3Browse/swift/swbat105m.html}{https://heasarc.gsfc.nasa.gov/W3Browse/swift/swbat105m.html}} \citep{Oh2018} is built from the data of a uniform hard X-ray all-sky survey in the 14--195 keV band. It was developed using the same detector as the 3PBC catalog, but implementing different algorithms for X-ray image reconstruction, data reduction, and source detection. Over 90\,\% of the sky is covered down to a flux limit of 8.40\,$\times$\,10$^{-12}$\,erg\,cm$^{-2}$\,s$^{-1}$, and over 50\,\% of the sky down to a flux limit of 7.24\,$\times$\,10$^{-12}$\,erg\,cm$^{-2}$\,s$^{-1}$. The catalog provides 1632 hard X-ray sources detected above the 4.8\,$\sigma$ level, with 422 new detections with respect to the previous 70-month \textit{Swift}-BAT catalog \citep{Baumgartner2013}. The catalog contains 1132 extragalactic sources, out of which 379 are Seyfert 1 and 448 are Seyfert 2 galaxies, together with 361 Galactic sources and 139 unidentified sources. Objects in the 105 month \textit{Swift}-BAT catalog are identified together with their optical counterparts by searching the NED and SIMBAD databases and archival X-ray data (e.g., \textit{Swift}-XRT, \textit{Chandra}, \textit{ASCA}, \textit{ROSAT}, \textit{XMM-Newton}, and \textit{NuSTAR}). \item The \textit{INTEGRAL} IBIS survey hard X-ray catalog\footnote{\href{https://heasarc.gsfc.nasa.gov/W3Browse/all/ibiscat.html}{https://heasarc.gsfc.nasa.gov/W3Browse/all/ibiscat.html}} \citep{Bird2016}\footnote{\href{https://heasarc.gsfc.nasa.gov/W3Browse/integral/intibisass.html}{https://heasarc.gsfc.nasa.gov/W3Browse/integral/intibisass.html}} consists of 939 sources detected above a 4.5\,$\sigma$ significance threshold in the 17\,--\,100 keV energy band, using the IBIS hard X-ray telescope \citep{Winkler2003}. The catalog includes 120 previously undiscovered soft $\gamma$-ray emitters.
We also checked our results by comparing them to the findings of \cite{Krivonos2022}. \item The second release of the fourth \textit{Fermi}-LAT catalog of $\gamma$-ray sources\footnote{\href{https://heasarc.gsfc.nasa.gov/W3Browse/fermi/fermilpsc.html}{https://heasarc.gsfc.nasa.gov/W3Browse/fermi/fermilpsc.html}} (4FGL-DR2, \cite{Ballet2020}), based on data from the Large Area Telescope (LAT) on board the \textit{Fermi} Gamma-ray Space Telescope \citep{Atwood2009}, reports 723 new sources, bringing the total to 5064 $\gamma$-ray sources. The catalog is based on the first 10 years of data, in the energy range from 50 MeV to 1 TeV. The largest class of Galactic sources in the 4FGL-DR2 is that of PSRs, with 292 sources, while the extragalactic sample is dominated by blazars, with 2226 identified and/or associated BL Lac objects and flat spectrum radio quasars, plus 1517 additional blazar candidates of uncertain type. \end{enumerate} \subsection{Multifrequency catalogs for low energy associations} \begin{enumerate} \item The current 5$^{th}$ edition of the Roma-BZCAT catalog of blazars, based on multifrequency surveys and an extensive review of the literature\footnote{\href{http://www.ssdc.asi.it/bzcat}{http://www.ssdc.asi.it/bzcat}} \citep{Massaro2015}, lists coordinates and multifrequency data for 3561 sources which are either confirmed blazars or sources exhibiting blazar-like behavior. All sources included in the Roma-BZCAT are detected at radio frequencies. According to the unified AGN model \citep{Antonucci1993, Urry1995}, blazars are AGNs whose jet happens to be closely aligned with our line of sight, exhibiting strong variability, apparent superluminal motion, and emission extending across the whole electromagnetic spectrum. \item The Turin-SyCAT \citep{Herazo2022} multifrequency catalog of Seyfert galaxies was built using optical, infrared, and radio selection criteria.
Seyfert galaxies are AGNs, distinguished as type 1 and type 2 on the basis of the viewing angle \citep{Antonucci1985}. All objects included in the 1$^{st}$ release of the Turin-SyCAT have an optical spectroscopic classification, allowing us to establish precisely their redshifts and classes. That release lists 351 Seyfert galaxies, out of which 233 are type 1 and 118 are type 2. The 2$^{nd}$ release of the Turin-SyCAT, used in the analysis presented here, increases this number substantially, by 80\%, to 633 Seyfert galaxies; details can be found in \hyperref[sec:section5]{Section~5}. All Turin-SyCAT sources with a 3PBC counterpart are detected in the 3PBC at a signal-to-noise ratio above 6. \end{enumerate} \section{Classification} \label{sec:section3} To classify the sources considered in the present analysis, we adopted the following step-by-step procedure, as shown in Figure~\ref{fig:flowch_scheme} and according to the criteria outlined below. It is worth noting that we are not associating 3PBC sources with their low energy counterparts; we only update the classification of the associated counterparts on the basis of the latest releases of several multifrequency catalogs, such as those previously listed, and/or of follow-up observations performed after the 3PBC release \citep[see e.g.,][and references therein]{Molina2009, Malizia2010, Malizia2016, Landi2017, Ricci2017, Koss2017}. \subsection{Classification scheme} We started by inspecting the 3PBC catalog \citep{Cusumano2010}. If a 3PBC source has an assigned counterpart, we simply adopted the multifrequency criteria reported below in this section to classify it. Then, for sources belonging to the extragalactic population, we also verified their redshift estimates. In particular, for all extragalactic sources, which are the main focus of the current analysis, the presence of an optical spectrum, or of a description of it published in the literature, is mandatory to consider a source \textit{classified}.
For all sources lacking an assigned low energy counterpart in the original 3PBC, and thus unassociated, we performed the cross-matching analysis with all catalogs reported in Section \ref{sec:section2}, and we also checked the NED and SIMBAD databases for updated information, if any. If no reliable counterpart is found within the BAT positional uncertainty region, we flagged the 3PBC source as \textit{unidentified}. On the other hand, if a potential counterpart is found, then, as in the previous step, we adopted the multifrequency criteria to classify it and, where possible, provide a redshift estimate; when successful, the associated source is indicated as \textit{classified}. Moreover, all associated sources whose low energy counterpart lacks an available optical spectrum and/or the information relevant to determine their classification were labeled as \textit{unclassified}. All classified sources were then split into two main samples, distinguishing between Galactic and extragalactic populations. The sky distributions of the Galactic and extragalactic populations are shown in Figure~\ref{fig:flowch_classification} and compared in Figure~\ref{fig:piechart_main}. We identified 9 classes and a few sub-classes for both the extragalactic and the Galactic sources, discussed in detail in the following subsections; see Table \ref{table:extragalpop} and Table \ref{table:galpop}. \subsection{Criteria, classes \& distributions} \subsubsection{Extragalactic sources} The largest fraction of sources identified in the extragalactic hard X-ray sky belongs mainly to the two classes of Seyfert galaxies and blazars \citep{Oh2018, Paliya2019, Ajello2009}, of which the latter account for 10--20\% of the entire survey population \citep{Diana2022}.
Thus, given the availability of both the Roma-BZCAT and the Turin-SyCAT \citep{Herazo2022}, built on the basis of multifrequency criteria, for all 3PBC classified sources that belong to these two catalogs we adopted the same classification reported therein. Moreover, we also used their classification schemes to identify new blazars and Seyfert galaxies, to be included in future releases. Blazars (class symbol: \textit{blz}) are the largest known population of $\gamma$-ray sources \citep{Abdo2010b, Massaro2012, Abdollahi2020}, dominated by non-thermal radiation over the whole electromagnetic spectrum \citep{Urry1995, Massaro2009}. Their observational features also include high and variable polarization, superluminal motions, very high observed luminosities coupled with a flat radio spectrum \citep{Healey2007, Hovatta2012}, peculiar infrared colors \citep{Massaro2011, Abrusco2012, Massaro2013c, Massaro2014}, and rapid variability from the radio to X-ray bands with weak or absent emission lines \citep{Stickel1991}. \cite{Blandford1978} suggested that the radiation of blazars could be interpreted as arising from a relativistic jet closely aligned with the line of sight. Blazars are thus classified into 4 categories. BL Lac objects (subclass symbol: \textit{bll}) have featureless optical spectra or present only relatively weak and narrow emission lines, mainly due to their host galaxies. BL Lacs whose optical-UV spectral energy distribution is dominated by the emission of their host galaxy have the subclass symbol \textit{bzg}. Flat spectrum radio quasars (subclass symbol: \textit{fsrq}) show typical broad emission lines over a blue continuum. Sources exhibiting blazar-like broad-band features, but lacking an optical spectroscopic classification, are classified as blazars of uncertain type (subclass symbol: \textit{bzu}).
According to the nomenclature of the Roma-BZCAT, BL Lacs and FSRQs are labeled as BZBs and BZQs, respectively; here they are instead marked with the classification symbols \textit{bll} and \textit{fsrq}. This choice was adopted because, thanks to the recent optical spectroscopic campaigns devoted to the search for $\gamma$-ray blazars \citep{Landoni2015, Massaro2014, Ricci2015, Herazo2017, Paiano2017, Herazo2019, Paiano2020}, a few more blazars not yet listed in the Roma-BZCAT were found as low energy counterparts of 3PBC sources; thus, to avoid confusion, we did not use the Roma-BZCAT nomenclature. No further BZGs and/or BZUs were discovered in our analysis, so in these cases no classification symbols different from those of the Roma-BZCAT were used. Names for blazar-like counterparts of 3PBC sources were collected from the Roma-BZCAT if the source is listed therein; otherwise the final table reports the name given in one of the major radio surveys, such as NVSS \citep{Condon1998} and/or SUMSS \citep{Bock1999, Mauch2003}, as taken from the NED database. Seyfert galaxies (class symbol: \textit{sey}) were originally defined mainly by their morphology \citep{Seyfert1943}, as galaxies with high surface brightness nuclei. Nowadays, they are identified spectroscopically as (mostly spiral) galaxies with strong, highly ionized emission lines. Seyfert galaxies come in two flavors, distinguished by the presence (or absence) of broad emission lines in their optical spectra \citep{Khachikian1971, Khachikian1974}. Type 1 Seyfert galaxies (subclass symbol: \textit{sy1}) have both narrow and broad emission lines superimposed on their optical continuum. The narrow lines originate from low-density ionized gas, with densities ranging between $\sim$10$^3$ and 10$^6$ cm$^{-3}$ and line widths corresponding to velocities of several hundred kilometers per second (e.g.
\citep{Vaona2012}), while broad lines appear only in permitted transitions, corresponding to electron densities of $\sim$10$^9$\,cm$^{-3}$ and velocities of 10$^4$\,km\,s$^{-1}$ (e.g. \cite{Kollatschny2013}). Type 2 Seyfert galaxies (subclass symbol: \textit{sy2}) show only narrow lines in their optical spectra \citep[e.g.][]{Weedman1977, Miyaji1992, Capetti1999}. To classify Seyfert galaxies we adopted the same criteria reported in the Turin-SyCAT \citep{Herazo2022}, in terms of (i) the presence of an optical spectrum in the literature, (ii) the radio, infrared and optical luminosities, and (iii) the radio morphology. This choice was made because we include the new Seyfert galaxies discovered here in the 2$^{nd}$ release of the Turin-SyCAT, as described in the following sections. Names for Seyfert-like counterparts of 3PBC sources were collected from the 1$^{st}$ edition of the Turin-SyCAT if the source is listed therein; otherwise they are reported in the main table with a NED name taken mainly from one of the following catalogs: 1RXS \citep{Voges1999}, 2MASS \citep{Skrutskie2006}, 2MASX \citep{Jarrett2000} and BAT105 \citep{Oh2018}. All Seyfert galaxies were then renamed according to the Turin-SyCAT nomenclature. Extragalactic sources that do not fall into the blazar and Seyfert classes mainly belong to two other major classes: quasars and radio galaxies. Quasars (QSOs) (class symbol: \textit{qso}) are AGNs with bolometric luminosities above $\sim10^{40}~\mathrm{erg\,s^{-1}}$. They have broad spectral energy distributions, emitting from the radio up to hard X-ray energies, with flux densities variable at almost all frequencies, mid-IR emission due to the dusty torus, and broad emission lines superimposed on a blue optical continuum \citep{Schmidt1969}.
For this extragalactic source class, we also distinguished type 1 and type 2 QSOs on the basis of the presence of broad emission lines in their optical spectra, according to the same criteria adopted for the Seyfert galaxies \citep{Khachikian1974}. Then, to distinguish a Seyfert galaxy from a QSO, we also considered the same thresholds used to create the Turin-SyCAT \citep{Herazo2022}, indicating as QSOs sources with both (i) a radio luminosity above $10^{40}$ erg\,s$^{-1}$ and (ii) a mid-IR luminosity estimate at 3.4\,$\mu$m above $10^{11} L_{\odot}$. Names for the QSO counterparts of 3PBC sources are reported in the final table \ref{tab:main_table} with a NED name taken mainly from the following catalogs: 1RXS \citep{Voges1999}, 2MASS \citep{Skrutskie2006}, 2MASX \citep{Jarrett2000} and 1SXPS \citep{Evans2014}. Radio galaxies (RDGs) (class symbol: \textit{rdg}) are radio-loud AGNs whose radio emission is at least 100 times that of normal elliptical galaxies and extends over scales beyond tens of kpc \citep{Urry1995, Moffet1966, Massaro2011, Velzen2012}, thus being neatly distinct from the Seyfert galaxies. To distinguish between QSOs and RDGs we adopted a radio morphological criterion: the latter clearly present diffuse radio emission on large scales, when radio maps are available to check it. We used the same criteria and classification scheme recently adopted by \citet{Capetti2017a, Capetti2017b}. If the source was not listed with those names, we took the NED name mainly from the 3C \citep{Spinrad1985}, 4C \citep{Pilkington1965, Gower1967} or 7C \citep{Hales2007} catalogs. We first classified RDGs on the basis of their radio morphologies at 1.4 GHz, distinguishing between classical FR\,I and FR\,II sources \citep{Fanaroff1974}.
We also considered the two subclasses of radio galaxies defined on the basis of their optical emission lines, distinguishing between high excitation radio galaxies (HERGs) (subclass symbol: \textit{herg}) and low excitation radio galaxies (LERGs) (subclass symbol: \textit{lerg}) \citep{Hine1979}. HERGs are almost always FR\,IIs, while LERGs can be either FR\,Is or FR\,IIs \citep{Buttiglione2010}. We also considered galaxy clusters (class symbol: \textit{clu}) as extragalactic sources of hard X-rays. Galaxy clusters are the largest gravitationally bound structures in the Universe, composed primarily of dark matter, highly ionized and extremely hot intra-cluster gas of low density, and galaxies \citep{Sarazin1986, Giodini2009}. Their X-ray emission is mainly due to bremsstrahlung radiation of the relatively hot particles in their intra-cluster medium in the soft X-rays \citep[i.e., between 0.5 and 10 keV;][]{Nevalainen2003}, although a tail of this emission is also detectable at higher energies \citep{Ajello2010}. Since it is well known that some galaxy clusters were also detected by the BAT instrument on board \textit{Swift} \citep{Ajello2010}, we reported 3PBC sources as associated with them mainly when the cross-match with the Abell catalog indicated the possible presence of a galaxy cluster within the hard X-ray positional uncertainty region.
Finally, we highlight that a handful of extragalactic sources, not belonging to the five major classes listed above, fall into the following categories: starburst galaxies (class symbol: \textit{sbg}) \citep{Searle1973, Weedman1981}, i.e., galaxies forming stars at unusually high rates (up to 10$^3$ times higher than in an average galaxy); X-ray bright optically normal galaxies (class symbol: \textit{xbong}), which are normal galaxies, not hosting an AGN, but having substantial X-ray luminosity \citep{Elvis1981, Comastri2002, Yuan2004}; low-ionization nuclear emission-line region galaxies (class symbol: \textit{liner}) \citep{Singh2013}; and normal galaxies (class symbol: \textit{gal}), the latter not hosting an AGN but in a few cases interacting with nearby companions. Names of the 3PBC counterparts for those sources were collected mainly from the 2MASX \citep{Jarrett2000} and 2MASS \citep{Skrutskie2006} catalogs. We list a preview of the first 10 sources included in Table~\ref{tab:main_table}, our revised version of the 3PBC catalog, in which we provide the 3PBC catalog name, coordinates, counterpart name, counterpart coordinates, spectroscopic redshift, the classification in our class and subclass system, and the WISE counterpart name. We show examples of the spectra of a few objects in Figure~\ref{fig:spectra_examples} and Figure~\ref{fig:spectra_examples_2}. \begin{table}[h] \caption{Numbers of 3PBC extragalactic sources associated in each class and subclass.} % \label{table:extragalpop} \begin{tabular}{cccc} \hline \hline Class & Class & Subclass & Subclass \\ symbol & number & symbol & number \\ \hline blz & 129 & bll & 30 \\ & & bzg & 7 \\ & & bzu & 24 \\ & & fsrq & 68 \\ gal & 10 & interacting & 3 \\ & & - & 7 \\ clu & 27 & & \\ liner & 1 & & \\ qso & 26 & type 1 & 18 \\ & & type 2 & 1 \\ & & ? & 7 \\ rdg & 25 & herg & 21 \\ & & lerg & 3 \\ & & ?
& 1 \\ sey & 593 & sy1 & 325 \\ & & sy2 & 268 \\ sbg & 5 & & \\ xbong & 4 & & \\ \hline \end{tabular} Note: This is the classification of the 3PBC extragalactic sources according to our classification scheme. \end{table} \subsubsection{Galactic sources} In our Milky Way, most of the sources emitting in the hard X-rays are X-ray binaries \citep{Grimm2002}, while the second dominant class of hard X-ray sources is that of cataclysmic variables \citep{Revnivtsev2008}. X-ray binaries (BINs) (class symbol: \textit{bin}) are double star systems containing a compact stellar remnant, such as a neutron star, pulsar, or black hole, and a normal star which can span a variety of masses (e.g. \cite{Charles2003, Knigge2011}). The compact stellar remnant accretes material from its companion, producing persistent or transient X-ray emission. X-ray binaries are classified based on their companion star, distinguishing between low mass X-ray binaries (subclass symbol: \textit{lmxb}), with a companion star of mass $\lesssim$\,1\,M$_{\odot}$, and high mass X-ray binaries (subclass symbol: \textit{hmxb}), usually accompanied by a star of mass $\gtrsim$\,10\,M$_{\odot}$, where accretion happens directly from the stellar wind of the companion star. Names for the BIN counterparts of 3PBC sources were collected mainly from the following catalogs: IGR \citep{Bird2004}, 1H, SWIFT \citep{Ajello2010} and (RX+XTE+SAX) \citep{Bade1992, Voges1999, Frontera2009}. Cataclysmic variables (CVs) (class symbol: \textit{cv}) are binary systems composed of a main-sequence companion star and a compact stellar remnant which is a white dwarf (WD) \citep{Revnivtsev2008}. Accretion happens almost always via Roche-lobe overflow of the companion star and the subsequent formation of an accretion disk around the WD \citep{Warner1995}. Their X-ray emission can originate from a variety of processes depending on the type of CV.
CVs which do not have strong magnetic fields accrete matter closer to the surface of the WD and produce sporadic eruptions. For 4 sources belonging to the CV class we also indicated whether they are symbiotic stars or novae; however, given their relatively low number with respect to all identified CVs, we did not label these as subclasses and we only report the source class. Names for the CV counterparts of 3PBC sources were collected mainly from the following catalogs: CV \citep{Downes2005}, IGR \citep{Bird2004}, 1RXS \citep{Voges1999} and 2MASS \citep{Skrutskie2006}. The hard X-ray sky is also populated by isolated X-ray pulsars (PSRs) (class symbol: \textit{psr}), not hosted in X-ray binaries. Since they can be hosted in pulsar wind nebulae (subclass symbol: \textit{pwne}) or supernova remnants, we highlight the presence of this extended emission around the PSR in the subclass column. On the other hand, if the hard X-ray emission is due to a supernova remnant not hosting a neutron star, then we adopted a different class (class symbol: \textit{snr}); in these cases the hard X-ray emission is due to the thermal radiation of plasma heated in shocks, coupled with non-thermal synchrotron radiation (see e.g., \cite{Vink2012}). Names for the PSR counterparts of 3PBC sources were collected mainly from the ATNF PSR catalog or other radio surveys. As for the extragalactic hard X-ray population, a handful of sources were also identified as normal stars (class symbol: \textit{str}) (with the subclass \textit{yso} for young stellar objects) and star clusters (class symbol: \textit{scl}). The X-ray emission of main sequence stars of masses $>$\,10\,M$_{\odot}$ can show discrete lines of ionized metals in their spectra. Young stellar objects, protostars, and T Tauri stars also exhibit X-ray radiation, predominantly emerging from magnetic coronae accreting material where shocks occur \citep{Gudel2009}.
Star clusters, on the other hand, can appear as an amalgamation of point-like sources and extended X-ray emission. Their point-like component can be produced by hot stars and/or SNRs, lasting a few thousand years, while their extended component is produced by the star cluster wind, formed by the interaction of the stellar winds of massive O or B type stars, Wolf-Rayet stars, and supernova explosions (e.g. \cite{Cant2000, Law2004, Oskinova2005}). In addition, we also report the classification for one microquasar (class symbol: \textit{mqso}), namely 3PBC J0804.7-2748. Microquasars are similar to quasars but on a much smaller scale: their radiation comes from a stellar mass black hole or a neutron star accreting matter from a normal star \citep{Mirabel2010}. We also report one planetary nebula (class symbol: \textit{pn}): 3PBC J1701.5-4306. Planetary nebulae are the ejected atmospheres of red giants, ionized by the leftover stellar core, forming at the end of the life of stars with initial masses in the range $\sim$\,1 to 8 solar masses. Lastly, we also labeled the Galactic center Sgr A$^*$ with the symbol \textit{galcent}. \begin{table}[h] \caption{Numbers of the Galactic 3PBC sources associated in each class and subclass.} \label{table:galpop} \begin{tabular}{cccc} \hline \hline Class & Class & Subclass & Subclass \\ symbol & number & symbol & number \\ \hline bin & 231 & hmxb & 117 \\ & & lmxb & 108 \\ & & ? & 6 \\ cv & 83 & & \\ str & 12 & - & 6 \\ & & yso & 1 \\ & & ? & 1 \\ psr & 21 & bin & 1 \\ & & - & 5 \\ & & snr & 12 \\ & & pwn & 3 \\ scl & 2 & & \\ snr & 4 & & \\ mqso & 1 & & \\ pn & 1 & & \\ galcent & 1 & & \\ \hline \end{tabular} Note: This is the classification of the 3PBC Galactic sources according to our classification scheme. Classes: galcent, mqso, and pn were omitted due to a small member count (1 each).
\end{table} \subsubsection{Sky distributions} Starting from the total of 1593 sources listed in the 3PBC catalog \citep{Cusumano2010}, we found, according to our analysis, 218 unidentified hard X-ray sources ($\sim$13.7\,\%) and 199 unclassified sources ($\sim$12.5\,\%); see Figure~\ref{fig:piechart_main}. The classified sources are divided into two main groups: Galactic objects, including 356 sources ($\sim$22.2\,\%), and extragalactic objects, with 820 sources ($\sim$51.5\,\%). We show the sky distribution of 3PBC sources in the Hammer-Aitoff projection for the unclassified and unidentified sources in Figure~\ref{fig:hammer_aitoff_main_table_uni_unc}, and for the classified sources, distinguishing between Galactic and extragalactic ones, in Figure~\ref{fig:hammer_aitoff_main_table_gal_extgal}. Since the distributions of both unidentified and unclassified sources appear quite uniform over the whole sky, we expect that a large fraction of them could have an extragalactic origin. This could imply that the lack of classified counterparts is mainly due to missing follow-up spectroscopic observations, thus strengthening the need to complete the optical campaigns carried out to date \citep[see e.g.][]{Masetti2006a, Masetti2006b, Masetti2009, Cowperthwaite2013}. The fractions of the different classes are shown in Figure~\ref{fig:piechart_extragal} for the extragalactic sources and in Figure~\ref{fig:piechart_gal} for the Galactic ones. \begin{sidewaystable} \caption{Our revised version of the 3PBC catalog.
The entire catalogue table is available in the online material.} \label{tab:main_table} \begin{tabular}{cccccccccc} \hline \hline 3PBC & R.A.$^{3PBC}$ & Dec.$^{3PBC}$ & counterpart & R.A.$^{ctp}$ & Dec.$^{ctp}$ & $z$ & class & subclass & WISE \\ name & (deg) & (deg) & name & (deg) & (deg) & & & & name \\ \hline \hline J0000.9-0708 & 0.228 & -7.134 & 2MASS J00004877-0709115 & 0.203216 & -7.153221 & 0.03748 & sey & sy2 & J000048.77-070911.6 \\ J0001.7-7659 & 0.429 & -76.986 & 2MASX J00014596-7657144 & 0.441917 & -76.953972 & 0.05839 & sey & sy1 & J000146.08-765714.2 \\ J0002.5+0322 & 0.636 & 3.367 & SY1 J0002+0322 & 0.610046 & 3.351961 & 0.025518 & sey & sy1 & J000226.42+032106.8 \\ J0002.5+0322 & 0.853 & 27.638 & 2MASX J00032742+2739173 & 0.864283 & 27.654828 & 0.03969 & sey & sy2 & J000327.41+273917.0 \\ J0002.5+0322 & 1.009 & 70.312 & SY2 J0004+7020 & 1.008228 & 70.32175 & 0.096 & sey & sy2 & J000401.97+701918.3 \\ J0006.3+2012 & 1.584 & 20.205 & SY1 J0006+2013 & 1.581389 & 20.202968 & 0.025785 & sey & sy1 & J000619.53+201210.6 \\ J0010.4+1058 & 2.624 & 10.976 & 5BZQ J0010+1058 & 2.629166 & 10.974888 & 0.089100 & blz & fsrq & J001031.00+105829.5 \\ J0016.7-2611 & 4.194 & -26.2 & & 0. & 0. & 0. & uhx & & \\ J0017.4+0519 & 4.37 & 5.326 & HS 0014+0504 & 4.344167 & 5.352778 & 0.11 & sey & sy1 & J001722.71+052111.4 \\ J0017.8+8135 & 4.454 & 81.591 & 5BZQ J0017+8135 & 4.28525 & 81.58561 & 3.387000 & blz & fsrq & J001708.50+813508.1 \\ \end{tabular} Only the first 10 lines are reported here. Col. (1) 3PBC source name; Cols. (2,3) Right Ascension and Declination of the 3PBC source (Equinox J2000); Col. (4) name of the counterpart assigned in our refined analysis; Cols. (5,6) Right Ascension and Declination of the counterpart (Equinox J2000); Col. (7) counterpart redshift, if extragalactic; Cols. (8,9) class and subclass assigned according to our classification scheme; Col. (10) WISE name of the counterpart.
\end{sidewaystable} \section{Characterizing the Extragalactic Hard X-Ray Sky} \label{sec:section4} Our revised analysis of the 3PBC lists 820 extragalactic sources, classified into 9 classes: 129 blazars (\textit{blz}), 10 galaxies (\textit{gal}), 27 galaxy clusters (\textit{clu}), 1 low-ionization nuclear emission-line region galaxy (\textit{liner}), 26 quasars (\textit{qso}), 25 radio galaxies (\textit{rdg}), 593 Seyfert galaxies (\textit{sey}), 5 starburst galaxies (\textit{sbg}) and 1 X-ray bright optically normal galaxy (\textit{xbong}). Table~\ref{table:extragalpop} reports these numbers together with the numbers of sources in the associated subclasses. The most abundant class of extragalactic sources is that of Seyfert galaxies (Figure~\ref{fig:piechart_extragal}), while the second largest population of extragalactic sources emitting in the hard X-rays is constituted by blazars. The K-corrected hard X-ray luminosity is shown in Figure~\ref{fig:luminosity} as a function of redshift, with particular emphasis on the two classes of Seyfert galaxies and blazars; we used the spectral index measured in the 3PBC to compute the K-correction. Once we assigned the coordinates of each counterpart, we also cross-matched the 3PBC catalog with the AllWISE survey\footnote{\href{https://wise2.ipac.caltech.edu/docs/release/allwise/}{https://wise2.ipac.caltech.edu/docs/release/allwise/}} \citep{Cutri2014}: adopting an association radius of 3.3$\arcsec$, as typically used in other analyses \citep{D'Abrusco2019, Massaro2012b, Menezes2020}, we found mid-IR counterparts for 1279 of the 1593 3PBC sources. It is worth noting that associating sources within this angular separation corresponds to a chance probability of a spurious match lower than $\sim$2\,\% \citep{Massaro2013b, Massaro2015c}. We also used the counterpart coordinates to carry out a cross-match between all blazars listed in the 3PBC and those associated in the 4FGL catalog.
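The AllWISE association step above is, in essence, a nearest-neighbour positional match within a fixed radius. A minimal, stand-alone sketch of such a cross-match is shown below, using only the Python standard library; in practice dedicated tools (e.g. astropy) are used, and the function names here are ours, not part of the published analysis:

```python
import math

def ang_sep_arcsec(ra1, dec1, ra2, dec2):
    """Angular separation in arcsec between two sky positions given in degrees,
    computed with the haversine formula (numerically stable at small separations)."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    a = (math.sin((dec2 - dec1) / 2) ** 2
         + math.cos(dec1) * math.cos(dec2) * math.sin((ra2 - ra1) / 2) ** 2)
    return math.degrees(2 * math.asin(math.sqrt(a))) * 3600.0

def crossmatch(sources, catalog, radius_arcsec=3.3):
    """For each (ra, dec) in `sources`, return the index of the nearest
    `catalog` entry within `radius_arcsec`, or None (brute-force search)."""
    matches = []
    for ra, dec in sources:
        best, best_sep = None, radius_arcsec
        for j, (cra, cdec) in enumerate(catalog):
            sep = ang_sep_arcsec(ra, dec, cra, cdec)
            if sep <= best_sep:
                best, best_sep = j, sep
        matches.append(best)
    return matches

# A source 2 arcsec away in declination is matched; one 10 arcsec away is not:
catalog = [(150.0, 2.0), (150.0, 2.5)]
print(crossmatch([(150.0, 2.0 + 2 / 3600), (150.0, 2.5 + 10 / 3600)], catalog))  # [0, None]
```

The 3.3 arcsec radius is the one quoted above; for a catalog of $\sim$1600 sources the brute-force loop is adequate, while all-sky catalogs such as AllWISE require a spatial index.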
There are 92 out of 129 blazars with a {\it Fermi} counterpart and known redshift, 25 of them belonging to the BL Lac class and 52 to that of FSRQs. For all these $\gamma$-ray emitting blazars we found two clear trends between their hard X-ray and $\gamma$-ray emission, as highlighted in Figure~\ref{fig:gamma}. The first trend is between their hard X-ray and $\gamma$-ray fluxes, for which we measure a mild correlation: the correlation coefficient for the whole blazar sample is 0.52. Thanks to the large number of sources used to compute the correlation coefficients, the chance probability of all correlations is below the 10$^{-5}$ significance level. The second trend links the photon indices of blazars measured in the 3PBC and in the 4FGL catalogs. Both trends highlighted for the blazar population emitting in the hard X-rays are expected given the nature of their emission \citep[e.g.][]{Acharyya2021}. For BL Lac objects the steep hard X-ray spectra could be due to emission arising from the tail of their synchrotron component \citep{Maraschi1992}, while the flat $\gamma$-ray spectra are related to the peak of their inverse Compton bump at $\gamma$-ray energies \citep{Maraschi1999, Marscher1985, Dermer1995}. On the other hand, for the FSRQs both the hard X-ray and the $\gamma$-ray emission are due to their inverse Compton component peaking in the $\gamma$-ray band \citep{Acharyya2021}. Finally, we note that even if the broadband spectral energy distributions (SEDs) of BL Lacs are mainly interpreted as synchrotron self-Compton emission while those of FSRQs as external Compton radiation \citep{Abdo2010}, the relativistic particles responsible for both SED bumps are the same, and thus we expect the hard X-ray and $\gamma$-ray fluxes to be, on average, connected \citep[see e.g.][]{Wolter2008}. 
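The flux--flux correlation coefficient quoted above (0.52 for the whole blazar sample) is a Pearson coefficient computed on the logarithmic fluxes; a minimal sketch follows (the function name is ours, and a chance probability would in practice come from a significance test on the coefficient).

```python
import numpy as np

def log_flux_correlation(flux_a, flux_b):
    """Pearson correlation coefficient between log10 of two flux arrays
    (e.g. hard X-ray vs gamma-ray fluxes in erg cm^-2 s^-1)."""
    x = np.log10(np.asarray(flux_a, dtype=float))
    y = np.log10(np.asarray(flux_b, dtype=float))
    return float(np.corrcoef(x, y)[0, 1])
```

For a perfectly linear trend in log--log space the coefficient is exactly $\pm 1$; real flux pairs scatter around the trend and give intermediate values.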
\section{Second Release of the Turin-SyCAT} \label{sec:section5} We found 282 new Seyfert galaxies resulting from our analysis of the extragalactic hard X-ray sky presented in the previous sections. Adding all new Seyfert galaxies to those already included in the $1^{st}$ release of the Turin-SyCAT, its $2^{nd}$ release lists 633 Seyfert galaxies: 351 type 1 and 282 type 2. Thus we added a total of 118 type 1 and 164 type 2 Seyfert galaxies, and we also present here an updated analysis of the infrared -- hard X-ray connection including all new sources. Sources added in the $2^{nd}$ release of the Turin-SyCAT were selected according to the same procedure as in \citet{Herazo2022}. These strict selection criteria allow us to have a negligible fraction of contaminants, since we selected only extragalactic sources with a Seyfert-like optical spectrum and having: \begin{enumerate} \item a published optical spectrum; \item a radio luminosity lower than 10$^{40}$ erg~s$^{-1}$ if a counterpart is listed in the two major radio surveys \cite[i.e., NVSS and SUMSS][respectively]{Condon1998, Mauch2003}; \item a counterpart in the AllWISE Source catalog with a mid-IR luminosity at $3.4\,\mu$m lower than $3 \times 10^{11}$ $L_\sun$. This criterion was mainly adopted to avoid the selection of QSOs. \end{enumerate} In Figure~\ref{fig:z} we present the redshift distribution of the Turin-SyCAT $2^{nd}$ release. The number of sources of both classes drops sharply beyond $z > 0.2$, as occurs for those listed in the first release, and the source with the highest redshift is SY2 J0304-3026 at $z = 0.436$. We compare the redshift distributions of all Seyfert galaxies (Figure~\ref{fig:z_sy1sy2_v1_v2}), only type 1 Seyfert galaxies (Figure~\ref{fig:z_sy1_v1_v2}), and only type 2 Seyfert galaxies (Figure~\ref{fig:z_sy2_v1_v2}) between the $1^{st}$ release of the Turin-SyCAT and the $2^{nd}$ release presented here. 
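The three selection cuts above can be expressed as a simple filter. The sketch below uses our own function name; it treats a missing radio counterpart as an automatic pass of the radio criterion, as implied by the wording of item (ii).

```python
L_SUN_ERG_S = 3.828e33  # IAU nominal solar luminosity, erg/s

def passes_sycat_cuts(has_optical_spectrum, radio_lum_erg_s, mid_ir_lum_lsun):
    """Sketch of the Turin-SyCAT selection cuts described in the text.
    radio_lum_erg_s may be None when no NVSS/SUMSS counterpart exists."""
    if not has_optical_spectrum:
        return False
    # Radio cut: applies only when a radio counterpart is listed.
    if radio_lum_erg_s is not None and radio_lum_erg_s >= 1e40:
        return False
    # Mid-IR cut at 3.4 um, adopted mainly to exclude QSOs.
    if mid_ir_lum_lsun >= 3e11:
        return False
    return True
```

The boolean and luminosity inputs stand in for the spectroscopic inspection and the NVSS/SUMSS and AllWISE photometry behind the real selection.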
With respect to the previous Turin-SyCAT $1^{st}$ release, we renamed SY2 J2328+0330 as SY2 J2329+0331, consistent with its WISE counterpart J232903.90+033159.9, since a new Seyfert type 2 galaxy associated with the mid-IR counterpart J232846.65+033041.1 now takes the name SY2 J2328+0330. We list all sources included in the Turin-SyCAT $2^{nd}$ release in Table~\ref{tab:SyCAT}, where we provide the SyCAT $1^{st}$ release and $2^{nd}$ release IDs, SyCAT name, coordinates, spectroscopic redshift, WISE counterpart and 3PBC counterpart names, flux, flags indicating whether the source is also associated in the 3PBC and BAT105 catalogs, and a flag marking those added in this $2^{nd}$ release. On the basis of the new Seyfert galaxies discovered here, we refined the connection between their hard X-ray and mid-IR emission \citep{Assef2013}. This connection arises because the dust reprocesses the energy absorbed at optical and UV wavelengths from the central engine of Seyfert galaxies \citep[e.g.][]{Elvis2009}: the high-energy emission measures the intrinsic radiated luminosity above $\sim$10 keV, while the WISE 12 $\mu$m and 22 $\mu$m bands trace the reprocessed dust radiation. Mid-IR fluxes show a significant correlation with the hard X-ray fluxes, similar to those highlighted using Seyfert galaxies listed in the Turin-SyCAT $1^{st}$ release, as shown in Figure~\ref{fig:xconn}. Comparing the integrated fluxes $F_{12}$ and $F_{HX}$, we found a linear correlation coefficient of 0.54 (corresponding to a slope of $1.09\pm0.10$ given the measured dispersion) for Seyfert 1 and 0.45 (slope of $1.20\pm0.16$) for Seyfert 2 galaxies, respectively. $F_{12}$ is the integrated flux at 12 $\mu$m derived from the WISE magnitude and $F_{HX}$ is the integrated hard X-ray flux in the 15-150 keV energy range, both in units of erg cm$^{-2}$ s$^{-1}$. 
This is in agreement with the results of the statistical analysis of Seyfert galaxies listed in the Turin-SyCAT $1^{st}$ release, where we measured a correlation coefficient of 0.57, with a slope of 1.02 $\pm$ 0.10, and a coefficient of 0.52 (slope of 0.93 $\pm$ 0.16), for Seyfert 1 and 2 galaxies, respectively. Comparing the mid-IR flux at lower frequencies with the hard X-ray flux (i.e., $F_{22}$ vs $F_{HX}$), where $F_{22}$ is the integrated flux at 22 $\mu$m derived from the WISE magnitude in units of erg cm$^{-2}$ s$^{-1}$, we found a correlation coefficient of 0.55 (with a slope of $1.11\pm0.10$) for type 1 Seyfert galaxies and 0.46 (slope of $1.08\pm0.17$) for type 2 Seyfert galaxies. Considering both classes together, since they show similar mid-IR to hard X-ray ratios, we found a correlation coefficient of 0.51 and a slope of $1.1\pm0.08$ for the correlation of the hard X-ray flux $F_{HX}$ with both $F_{12}$ and $F_{22}$, all in agreement with previous results based on the Turin-SyCAT $1^{st}$ release. We also cross-matched sources listed in the Turin-SyCAT $2^{nd}$ release with the Point Source catalog of the Infrared Astronomical Satellite (\textit{IRAS})\footnote{\url{https://heasarc.gsfc.nasa.gov/W3Browse/iras/iraspsc.html}}, using the positional uncertainties reported therein. We obtained 67 new matches, for a total of 216 Seyfert galaxies with an IRAS counterpart (89 type 1 and 127 type 2), with fluxes at both 60 $\mu$m and 100 $\mu$m. Then, as in our previous analysis \citep{Herazo2022}, we tested possible trends between the infrared fluxes, at 60 $\mu$m and at 100 $\mu$m, and the hard X-ray flux. We found no clear correlation, as evident in Figure~\ref{fig:iras}, and again these results are in agreement with our previous findings based on the Turin-SyCAT $1^{st}$ release. 
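The slopes quoted for the $F_{12}$--$F_{HX}$ and $F_{22}$--$F_{HX}$ trends correspond to linear fits in log--log space, which can be sketched as follows (the function name is ours; the published fits additionally account for the measured dispersion, which a plain unweighted fit ignores).

```python
import numpy as np

def loglog_slope(f_ir, f_hx):
    """Unweighted least-squares slope of log10(F_IR) versus log10(F_HX),
    both fluxes in erg cm^-2 s^-1 (illustrative only)."""
    slope, _intercept = np.polyfit(np.log10(np.asarray(f_hx)),
                                   np.log10(np.asarray(f_ir)), 1)
    return float(slope)
```

A slope near unity, as measured for both Seyfert types, means the mid-IR flux scales roughly proportionally with the hard X-ray flux.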
Moreover, we did not expect any correlation between the infrared and hard X-ray fluxes, since the cold dust mainly responsible for the emission at 60 $\mu$m and 100 $\mu$m is not significantly affected by the behavior of the central AGN but is mainly linked to the star formation occurring in Seyfert galaxies \citep{Espinosa1987}. The strict multi-frequency selection criteria that we used to select Turin-SyCAT sources allowed us to minimize possible contamination by other source classes, thus strengthening our results. We also stress that we visually inspected the optical spectra of all Turin-SyCAT galaxies, allowing us to measure their redshifts and establish their proper optical classification. \begin{center} \begin{sidewaystable}[] \tiny \caption{The $2^{nd}$ version of the Turin-SyCAT catalog. Only the first 10 rows are shown; the full catalog table is available in the online material.} \label{tab:SyCAT} \makebox[\textwidth]{ \begin{tabular}{rlllllllllll} \hline IDv2 & IDv1 & SyCAT & R.A. & Dec. 
& z & WISE & 3PBC & $F_{HX}$ & 3PBC flag & BAT105 flag & SyCAT v2 flag\\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11)& (12) \\ \noalign{\smallskip} \hline \noalign{\smallskip} \hline 1 & & SY2 J0000-0709 & 0.203216 & -7.153221 & 0.03748 & J000048.77-070911.6 & 3PBC J0000.9-0708 & 1.25E-11 $\pm$ 1.4E-12 & \checkmark & \checkmark & \checkmark\\ 2 & & SY1 J0001-7657 & 0.441917 & -76.953972 & 0.05839 & J000146.08-765714.2 & 3PBC J0001.7-7659 & 1.09E-11 $\pm$ 1.5E-12 & \checkmark & \checkmark & \checkmark\\ 3 & 1 & SY1 J0002+0322 & 0.6102917 & 3.352 & 0.025518 & J000226.41+032107.0 & 3PBC J0002.5+0322 & 1.39E-11 $\pm$ 1.9E-12 & \checkmark & \checkmark & --\\ 4 & & SY2 J0003+2739 & 0.864283 & 27.654828 & 0.03969 & J000327.41+273917.0 & 3PBC J0003.4+2738 & 1.82E-11 $\pm$ 2.6E-12 & \checkmark & \checkmark & \checkmark\\ 5 & 2 & SY2 J0004+7020 & 1.00817 & 70.32175 & 0.096 & J000401.97+701918.2 & 3PBC J0004.0+7018 & 1.1E-11 $\pm$ 1.30E-12 & \checkmark & \checkmark & --\\ 6 & 3 & SY1 J0006+2013 & 1.58133 & 20.202917 & 0.025785 & J000619.53+201210.6 & 3PBC J0006.3+2012 & 1.76E-11 $\pm$ 1.40E-12 & \checkmark & \checkmark & --\\ 7 & & SY1 J0017+0521 & 4.344167 & 5.352778 & 0.11 & J001722.71+052111.4 & 3PBC J0017.4+0519 & 8.69E-12 $\pm$ 1.5E-12 & \checkmark & & \checkmark\\ 8 & & SY2 J0021-1910 & 5.281417 & -19.168222 & 0.09558 & J002107.53-191005.4 & 3PBC J0021.1-1909 & 1.76E-11 $\pm$ 1.60E-12 & \checkmark & \checkmark & \checkmark\\ 9 & 4 & SY2 J0025+6821 & 6.38542 & 68.3622 & 0.012 & J002532.37+682144.9 & 3PBC J0025.5+6822 & 1.79E-11 $\pm$ 1.40E-12 & \checkmark & \checkmark & --\\ 10 & 5 & SY1 J0025-1859 & 6.4265 & -19.002917 & 0.24622 & J002542.34-190010.1 & 3PBC J0025.6-1859 & 1.05E-11 $\pm$ 1.60E-12 & \checkmark & -- & --\\ \hline \noalign{\smallskip} \hline \end{tabular} } Column description: (1) Unique catalog identifier (ID) from SyCAT $2^{nd}$ version; (2) Unique catalog identifier (ID) from SyCAT $1^{st}$ version; (3) SyCAT name; (4) Right Ascension 
J2000; (5) Declination J2000; (6) Redshift; (7) name in WISE; (8) name in 3PBC; (9) Flux; (10) flag if the source is in 3PBC; (11) flag if the source is in BAT105; (12) flag if the source was added in SyCAT v2. \end{sidewaystable} \end{center} \section{Summary, Conclusions and Future Perspectives} \label{sec:section6} The CXB is nowadays established to be mainly the integrated emission of discrete sources, primarily AGNs \citep{Gilli2007}. Precise knowledge of the populations and properties of the various types of AGNs is thus crucial to improving our understanding of the CXB. In this work, we analyzed the 3PBC catalog \citep{Cusumano2010}, focusing on the extragalactic source population and aiming also at discovering new Seyfert galaxies to be included in the Turin-SyCAT $2^{nd}$ release presented here. The 3PBC lists 1593 sources above a signal-to-noise ratio of 3.8; approximately 57\,\% of them appear to have a clear extragalactic origin, 19\,\% belong to the Milky Way, and the remaining 24\,\% are as yet unknown. The results of our multifrequency investigation also build on those recently obtained for the 105-month \textit{Swift}-BAT catalogue \citep{Oh2018} and the \textit{INTEGRAL} IBIS hard X-ray survey in the 17-100 keV energy range \citep{Bird2016}. For comparison, the original release of the 3PBC catalog listed 521 Seyfert galaxies, 109 blazars, 362 unclassified sources, and 244 unidentified sources, all counted according to our classification scheme, while the refined version presented here lists 593 Seyfert galaxies, 129 blazars, 199 unclassified sources, and 218 unidentified sources. It is worth highlighting that, on the basis of our classification criteria, we marked 98 sources as unclassified, even though they had an assigned class in the original 3PBC catalog, due to a lack of information. 
All details about how we interpreted the original 3PBC classes according to our classification scheme are reported in Appendix \ref{app:3PBC_reclassification} and Table \ref{table:3PBC_reclassification}. Thanks to our analysis we (i) developed a multifrequency classification scheme for hard X-ray sources that can later be adopted to investigate other high-energy surveys, (ii) investigated the main properties of the sources populating the extragalactic hard X-ray sky and, finally, (iii) extracted additional Seyfert galaxies now included in the $2^{nd}$ release of the Turin-SyCAT catalog presented here. We worked with the 1593 sources of the 3PBC catalog, comparing them with the various other catalogs mentioned in the paper and adopting the following classification criteria. First, we checked whether each 3PBC source has an assigned counterpart; if not, we performed multifrequency crossmatching analyses across the available literature to search for one. Sources without counterparts were assigned to the \textit{unidentified} category. Those with newly found counterparts, together with sources already having a counterpart in the 3PBC catalog, were further inspected with multifrequency analyses. Sources lacking sufficient information to assign a class were placed in the \textit{unclassified} category, the rest in the \textit{classified} category. We further distinguished the classified sources into Galactic and extragalactic, and we focus purely on the extragalactic sources in this work. The results obtained from our analysis can be outlined as follows. \begin{enumerate} \item The final revised 3PBC catalog we present in this study lists 1176 classified sources (820 extragalactic and 356 Galactic), 218 unidentified, and 199 unclassified sources. The original version of the 3PBC catalog listed 244 unidentified and 362 unclassified sources, counted according to our classification scheme (see Appendix for more details). 
We reduced the fraction of unidentified sources from 15.3\% to 13.7\% (from 244 to 218 sources) and the fraction of unclassified sources from 22.7\% to 12.5\% (from 362 to 199 sources). It is important to highlight that 98 sources were classified in the original 3PBC catalog but are listed as unclassified in our refined analysis, since they lacked multifrequency information. \item The hard X-ray sky is mainly populated by nearby AGNs, where the two largest known populations of associated AGNs are Seyfert galaxies ($\sim 79\%$) and blazars ($\sim 17\%$). \item We report the trends between the hard X-ray and the gamma-ray emission of those blazars that are also listed in the 4FGL, as expected from the models widely adopted to explain their broadband SEDs. \item In the $2^{nd}$ release of the Turin-SyCAT presented here, we list 633 Seyfert galaxies, 282 of which are newly added, corresponding to an increase of $\sim$80\,\% with respect to the $1^{st}$ release. \item We updated the statistical analysis comparing the hard X-ray and the IR emission of Seyfert galaxies. All results obtained are in agreement with those previously found, and the analysis is now more robust since it was performed with a sample of Seyfert galaxies increased by $\sim$80\,\% with respect to the 1$^{st}$ release of the Turin-SyCAT. \end{enumerate} Finally, we already checked for the presence of \textit{Swift} observations carried out with the on-board X-ray telescope for the sample of unidentified hard X-ray sources, and we found that more than 95\,\% of them have at least a few ks of exposure time available. The next step of the present analysis will thus be to search for the potential soft X-ray counterparts of these 3PBC unidentified sources, to obtain the precise positions necessary to carry out optical spectroscopic campaigns aimed at identifying the whole sky seen between 15 and 150 keV. 
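The classification flow described above reduces to a short decision tree. The sketch below uses our own function name, with the real multifrequency work hidden behind the boolean inputs; the counts in the comments are those of the revised catalog.

```python
def classify_3pbc(has_counterpart, has_enough_info, is_galactic=None):
    """Top-level decision flow used to sort 3PBC sources (sketch only;
    each boolean stands for an entire multifrequency analysis step)."""
    if not has_counterpart:
        return "unidentified"   # 218 sources in the revised catalog
    if not has_enough_info:
        return "unclassified"   # 199 sources
    # 1176 classified sources: 356 Galactic + 820 extragalactic
    return "galactic" if is_galactic else "extragalactic"
```

The four outcomes partition the catalog: 218 + 199 + 356 + 820 = 1593, the total number of 3PBC sources.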
\begin{table*}[h] \caption[]{\label{table:catalogs_table}Table of catalogs used in the cross-matching analysis.} \begin{tabular}{ccc} \hline \hline Acronym & Catalogue Name & Reference \\ \hline \hline 4FGL-DR2 & The second release of the fourth \textit{Fermi}-LAT catalog \\ & of $\gamma$-ray sources & 1\\ 3PBC & The 3$^{rd}$ Palermo \textit{Swift}-BAT Hard X-ray catalog & 2\\ BAT105 & The 105-month \textit{Swift}-BAT catalog & 3\\ \textit{INTEGRAL} & The IBIS soft gamma-ray sky after 1000 \textit{INTEGRAL} orbits & 4\\ Roma-BZCAT & 5$^{th}$ edition of the Roma-BZCAT catalog of blazars & 5\\ 3CR & The Revised Third Cambridge catalog & 6\\ 4C & The Fourth Cambridge Survey & 7, 8\\ SyCAT & The Turin-SyCAT catalog & 9\\ CVcat & The Catalog and Atlas of Cataclysmic Variables & 10\\ SNRcat & The Catalog of Galactic Supernova Remnants & 11\\ hmxb & The 4$^{th}$ edition of the catalog of High mass X-ray binaries \\ & in the Galaxy & 12\\ lmxb & The 4$^{th}$ edition of the catalog of Low mass X-ray binaries \\ & in the Galaxy and Magellanic Clouds & 13\\ Rlmxb & The 7$^{th}$ edition of the catalog of cataclysmic binaries, \\ & low mass X-ray binaries and related objects & 14\\ ATNF & The Australia Telescope National Facility Pulsar Catalogue & 15\\ Abellcat & Abell catalog of rich galaxy clusters & 16\\ \hline \end{tabular} \tablebib{(1)~\citet{Ballet2020}; (2) \citet{Cusumano2010}; (3) \citet{Oh2018}; (4) \citet{Bird2016}; (5) \citet{Massaro2015}; (6) \citet{Spinrad1985}; (7) \citet{Pilkington1965}; (8) \citet{Gower1967}; (9) \citet{Herazo2022}; (10) \citet{Downes2005}; (11) \citet{Green2017}; (12) \citet{Liu2006}; (13) \citet{Liu2007}; (14) \citet{Ritter2003}; (15) \citet{Manchester2005}; (16) \citet{Abell1989}. } \end{table*} \begin{acknowledgements} We thank the anonymous referee for useful comments that led to improvements in the paper. M. K. and N. W. are supported by the GACR grant 21-13491X. E. B. acknowledges NASA grant 80NSSC21K0653. M. K. 
was supported by the Italian Government Scholarship issued by the Italian MAECI. VC acknowledges support from CONACyT research grants 280789 (Mexico). F. M. wishes to thank Dr. G. Cusumano for introducing him to the Palermo BAT Catalog project. We would like to thank A. Capetti for his work done on the 1$^{st}$ version of the Turin-SyCAT, which was relevant for this work. This investigation is supported by the National Aeronautics and Space Administration (NASA) grants GO0-21110X, GO1-22087X, and GO1-22112A. This research has made use of the NASA/IPAC Infrared Science Archive, which is funded by the National Aeronautics and Space Administration and operated by the California Institute of Technology. Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges support and resources from the Center for High-Performance Computing at the University of Utah. The SDSS website is www.sdss.org. 
SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, Center for Astrophysics | Harvard \& Smithsonian, the Chilean Participation Group, the French Participation Group, Instituto de Astrof\'isica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, the Korean Participation Group, Lawrence Berkeley National Laboratory, Leibniz Institut f\"ur Astrophysik Potsdam (AIP), Max-Planck-Institut f\"ur Astronomie (MPIA Heidelberg), Max-Planck-Institut f\"ur Astrophysik (MPA Garching), Max-Planck-Institut f\"ur Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observat\'ario Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Aut\'onoma de M\'exico, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University. 
The Pan-STARRS1 Surveys (PS1) and the PS1 public science archive have been made possible through contributions by the Institute for Astronomy, the University of Hawaii, the Pan-STARRS Project Office, the Max-Planck Society, and its participating institutes, the Max Planck Institute for Astronomy, Heidelberg and the Max Planck Institute for Extraterrestrial Physics, Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, the Queen's University Belfast, the Harvard-Smithsonian Center for Astrophysics, the Las Cumbres Observatory Global Telescope Network Incorporated, the National Central University of Taiwan, the Space Telescope Science Institute, the National Aeronautics and Space Administration under Grant No. NNX08AR22G issued through the Planetary Science Division of the NASA Science Mission Directorate, the National Science Foundation Grant No. AST–1238877, the University of Maryland, Eotvos Lorand University (ELTE), the Los Alamos National Laboratory, and the Gordon and Betty Moore Foundation. This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. TOPCAT and STILTS astronomical software \citep{Taylor2005} were used for the preparation and manipulation of the tabular data and the images. \end{acknowledgements} \bibliographystyle{aa} % \bibliography{aanda} \begin{appendix} % \section{Re-classification of the original 3PBC based on our classification scheme} \label{app:3PBC_reclassification} We compare our refined classification for the counterparts of 3PBC sources with those previously assigned in the original catalog. To perform this, we first ``translate'' the original 3PBC classes into our classification scheme according to the following criteria. 
For the Galactic objects we consider (i) all sources classified in the original 3PBC as HXB, LXB, XB, XB*, V* as ``bin'' (i.e., binary systems); (ii) those previously labeled as AM*, CV, CV*, DN*, DQ*, EB*, NL*, No as belonging to the ``cv'' class (i.e., cataclysmic variables); while (iii) the few classified as Psr are all ``psr'', SNR is ``snr'' and PN simply ``pn'' (i.e., pulsars, supernova remnants, and planetary nebulae, respectively). On the other hand, for the extragalactic classes: (i) sources labeled as BLA and BZC in the original 3PBC are indicated as ``blz'' (i.e., blazars); (ii) Sy*, SyG, Sy1, Sy2 are all classified as ``sey'' according to our scheme (i.e., Seyfert galaxies); (iii) QSO becomes ``qso'' and rG ``rdg'' (i.e., quasars and radio galaxies, respectively); and (iv) objects classified as LIN and ClG are now indicated as ``liner'' and ``clu'', being LINERs and galaxy clusters, respectively. The remaining handful of sources had the same classification in both lists, for example the Galactic center. For the unknown sources, we considered those having a question mark in the classification label, marking them as unsettled, as well as those indicated as AGN, BRT, EmG, G, GiC, GiP, IG, IR, X, gam and Rad, all as ``unc''. This is because even if the associated counterpart in the original 3PBC is recognized as, for example, an infrared or X-ray source, or a generic AGN, this does not provide precise information about its nature and its hard X-ray emission. Finally, 3PBC sources originally lacking an assigned counterpart were all labeled as ``uhx'', being unidentified hard X-ray sources. 
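The translation rules above amount to a lookup table; the following sketch encodes them with our own structure (Table~\ref{table:3PBC_reclassification} remains authoritative, in particular for the handful of labels shared by both schemes).

```python
# Mapping from original 3PBC class labels to our scheme, as described in the text.
RECLASS = {}
RECLASS.update(dict.fromkeys(["HXB", "LXB", "XB", "XB*", "V*"], "bin"))
RECLASS.update(dict.fromkeys(["AM*", "CV", "CV*", "DN*", "DQ*", "EB*", "NL*", "No"], "cv"))
RECLASS.update({"Psr": "psr", "SNR": "snr", "PN": "pn"})
RECLASS.update(dict.fromkeys(["BLA", "BZC"], "blz"))
RECLASS.update(dict.fromkeys(["Sy*", "SyG", "Sy1", "Sy2"], "sey"))
RECLASS.update({"QSO": "qso", "rG": "rdg", "LIN": "liner", "ClG": "clu"})

# Labels too generic to settle the source's nature.
UNC_LABELS = {"AGN", "BRT", "EmG", "G", "GiC", "GiP", "IG", "IR", "X", "gam", "Rad"}

def reclassify(label):
    """Translate an original 3PBC class label into our scheme (sketch).
    Question-marked or generic labels fall through to 'unc';
    sources with no counterpart at all become 'uhx'."""
    if not label:
        return "uhx"
    if label.endswith("?") or label in UNC_LABELS:
        return "unc"
    return RECLASS.get(label, "unc")
```

Labels kept unchanged between the two lists would simply map to themselves in the full table; the sketch routes unknown labels to ``unc'' for safety.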
In Table \ref{table:3PBC_reclassification} we report (i) the 3PBC name, (ii) the original 3PBC classification, (iii) the new label corresponding to our classification scheme but assigned on the basis of the previous criteria and on the information available before our refined analysis, and (iv) our new classification based on the multifrequency analysis carried out here. This allowed us to compare previous and new associations and classifications, to estimate the improvements achieved. We found that, according to our classification scheme, the 3PBC catalog presented 521 Seyfert galaxies, 109 blazars, 362 unclassified sources, and 244 unidentified sources. In our revised version of the 3PBC, we present 593 Seyfert galaxies, 129 blazars, 199 unclassified sources, and 218 unidentified sources. It is important to highlight that, due to our classification criteria, we not only classified some of the previously unclassified or unidentified sources, but also re-classified as unclassified 98 sources that had an assigned class in the original 3PBC. This was done in cases of a lack of information in the literature, e.g. when we could not find an optical spectrum or luminosities. \begin{table}[h] \caption{The comparison between the original classes assigned in the 3PBC, how they are interpreted according to our new scheme, and the class obtained thanks to our refined analysis, for all 3PBC sources. For each source, we report the following columns: (i) the 3PBC name; (ii) the original class; (iii) the class interpreted according to our scheme; (iv) the class assigned in our refined analysis.} \label{table:3PBC_reclassification} \begin{tabular}{ccccc} \hline \hline name\_3PBC & class3PBC & reclass & class & subclass\\ \hline J0000.9-0708 & X & unc & sey & sy2 \\ J0001.7-7659 & G & unc & sey & sy1 \\ J0002.5+0322 & Sy1 & sey & sey & sy1 \\ J0003.4+2738 & G & unc & sey & sy2 \\ J0004.0+7018 & AG? 
& unc & sey & sy2 \\ \hline \end{tabular} \end{table} \end{appendix}
Title: Transmission spectroscopy of the ultra-hot Jupiter MASCARA-4 b: Disentangling the hydrostatic and exospheric regimes of ultra-hot Jupiters
Abstract: Ultra-hot Jupiters (UHJs), rendering the hottest planetary atmospheres, offer great opportunities of detailed characterisation with high-resolution spectroscopy. MASCARA-4 b is a recently discovered close-in gas giant belonging to this category. In order to refine system and planet parameters, we carried out radial velocity measurements and transit photometry with the CORALIE spectrograph and EulerCam at the Swiss 1.2m Euler telescope. We observed two transits of MASCARA-4 b with the high-resolution spectrograph ESPRESSO at ESO's Very Large Telescope. We searched for atomic, ionic, and molecular species via individual absorption lines and cross-correlation techniques. These results are compared to literature studies on UHJs characterised to date. With CORALIE and EulerCam observations, we updated the mass of MASCARA-4 b (1.675 +/- 0.241 Jupiter masses) as well as other system and planet parameters. In the transmission spectrum derived from ESPRESSO observations, we resolve excess absorption by H$\alpha$, H$\beta$, Na D1 & D2, Ca+ H & K, and a few strong individual lines of Mg, Fe and Fe+. We also present the cross-correlation detection of Mg, Ca, Cr, Fe and Fe+. The absorption strength of Fe+ significantly exceeds the prediction from a hydrostatic atmospheric model, as commonly observed in other UHJs. We attribute this to the presence of Fe+ in the exosphere due to hydrodynamic outflows. This is further supported by the positive correlation of absorption strengths of Fe+ with the H$\alpha$ line. Comparing transmission signatures of various species in the UHJ population allows us to disentangle the hydrostatic regime (as traced via the absorption by Mg and Fe) from the exospheres (as probed by H$\alpha$ and Fe+) of the strongly irradiated atmospheres.
https://export.arxiv.org/pdf/2208.11427
\title{Transmission spectroscopy of the ultra-hot Jupiter \M \thanks{Transit photometry and radial velocity data are only available in electronic form at the CDS via anonymous ftp to \href{ftp://130.79.128.5}{cdsarc.u-strasbg.fr (130.79.128.5)} or via \url{http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/}}} \subtitle{Disentangling the hydrostatic and exospheric regimes of ultra-hot Jupiters} \author{Yapeng Zhang \inst{1}, Ignas A.G. Snellen \inst{1}, Aur\'elien Wyttenbach \inst{2}, Louise D. Nielsen \inst{3,2}, Monika Lendl \inst{2}, N\'uria Casasayas-Barris \inst{1}, Guillaume Chaverot \inst{2}, Aurora Y. Kesseli \inst{5}, Christophe Lovis \inst{2}, Francesco A. Pepe\inst{2}, Angelica Psaridi \inst{2}, Julia V. Seidel \inst{2, 4}, St\'ephane Udry \inst{2}, Sol{\`e}ne Ulmer-Moll \inst{2} } \institute{Leiden Observatory, Leiden University, Postbus 9513, 2300 RA, Leiden, The Netherlands \\ \email{yzhang@strw.leidenuniv.nl} \and Observatoire astronomique de l'Universit{\'e} de Gen{\`e}ve, Chemin Pegasi 51, 1290 Versoix, Switzerland \and European Southern Observatory, Karl-Schwarzschildstr. 2, D-85748 Garching bei M{\"u}nchen, Germany \and European Southern Observatory, Alonso de C\'ordova 3107, Vitacura, Regi\'on Metropolitana, Chile \and IPAC, Mail Code 100-22, Caltech, 1200 E. California Blvd., Pasadena, CA 91125, USA } \authorrunning{Y. Zhang et al.} \titlerunning{MASCARA-4b} \date{Received June 7, 2022; accepted August 16, 2022} \abstract{ Ultra-hot Jupiters (UHJs), rendering the hottest planetary atmospheres, offer great opportunities of detailed characterisation with high-resolution spectroscopy. \M is a recently discovered close-in gas giant belonging to this category. } { We aim to characterise MASCARA-4~b, search for chemical species in its atmosphere, and put these in the context of the growing knowledge on the atmospheric properties of UHJs. 
} { In order to refine system and planet parameters, we carried out radial velocity measurements and transit photometry with the CORALIE spectrograph and EulerCam at the Swiss 1.2\,m Euler telescope. We observed two transits of \M with the high-resolution spectrograph ESPRESSO at ESO’s Very Large Telescope. We searched for atomic, ionic, and molecular species via individual absorption lines and cross-correlation techniques. These results are compared to literature studies on UHJs characterised to date. } { With CORALIE and EulerCam observations, we update the mass of \M ($M_{\rm p}=1.675\pm0.241$ $M_{\rm Jup}$) as well as other system and planet parameters. In the transmission spectrum derived from ESPRESSO observations, we resolve excess absorption by H$\alpha$, H$\beta$, \ion{Na}{i} D1\&D2, \ion{Ca}{ii} H\&K, and a few strong lines of \ion{Mg}{i}, \ion{Fe}{i}, and \ion{Fe}{ii}. We also present the cross-correlation detection of \ion{Mg}{i}, \ion{Ca}{i}, \ion{Cr}{i}, \ion{Fe}{i}, and \ion{Fe}{ii}. The absorption strength of \ion{Fe}{ii} significantly exceeds the prediction from a hydrostatic atmospheric model, as commonly observed in other UHJs. We attribute this to the presence of \ion{Fe}{ii} in the exosphere due to hydrodynamic outflows. This is further supported by the positive correlation of absorption strengths of \ion{Fe}{ii} with the H$\alpha$ line, which is expected to probe the extended upper atmosphere and the mass loss process. Comparing transmission signatures of various species in the UHJ population allows us to disentangle the hydrostatic regime (as traced via the absorption by \ion{Mg}{i} and \ion{Fe}{i}) from the exospheres (as probed by H$\alpha$ and \ion{Fe}{ii}) of the strongly irradiated atmospheres. 
} {} \keywords{ planetary systems – planets and satellites: atmospheres – techniques: spectroscopic – individual: \M } \section{Introduction} Transmission spectroscopy of close-in giant planets provides great opportunities to characterise the composition, structure, and dynamics of exoplanet atmospheres \citep{Charbonneau2002, Redfield2008, Snellen2010, Huitson2012, Madhusudhan2019}. These strongly irradiated planets undergo significant atmospheric escape as traced by absorption signatures from exospheres extending beyond the Roche limit \citep{Vidal-Madjar2003, Vidal-Madjar2004, Spake2018}. The mass loss drives the evolution of close-in planets and shapes the exoplanet population as observed today \citep{Owen2019}. Ultra-hot Jupiters (UHJs) represent a subclass of close-in hot Jupiters that are extremely irradiated, with day-side temperatures above 2200 K. As a result of such high temperatures, their day-side atmospheres are predicted to be cloud-free, with effective thermal dissociation of molecules such as \HTWOO\ to produce OH \citep{Parmentier2018, Nugroho2021, Landman2021}. The transmission spectra are dominated by neutral and ionized atomic species in the optical, similar to the photosphere of dwarf stars \citep{Kitzmann2018, Lothringer2018}, which makes them well suited for atmospheric characterisation. The dissociation of hydrogen, combined with electrons from metal ionisation, forms H$^-$, which adds strong continuum opacity and plays an important role in shaping the spectra \citep{Arcangeli2018}. The extreme irradiation also makes UHJs interesting targets for studying mass loss via hydrodynamic escape \citep{Fossati2018, Sing2019}. 
Transmission spectra under high spectral resolution ($\mathcal{R}=\lambda/\Delta\lambda\sim10^5$) provide unique access to the information contained in resolved absorption lines, such as the \ion{Na}{i} D lines, the \ion{H}{i} Balmer series, and the \ion{He}{i} triplet, allowing better constraints on the atmospheric structure and the escape process \citep[e.g.][]{Wyttenbach2015, Wyttenbach2017, Yan2018, Allart2018, Nortmann2018, Casasayas-Barris2019, Wyttenbach2020}. In addition, high-resolution spectroscopy has been a powerful tool that has led to the detection of a profusion of metal species in UHJs using the cross-correlation method, which co-adds a forest of spectral lines to enhance the signal of a given species \citep{Snellen2010, Brogi2012}. For example, \citet{Hoeijmakers2018, Hoeijmakers2019} detected neutral and ionized metals (such as \ion{Fe}{i}, \ion{Fe}{ii}, \ion{Ti}{ii}, \ion{Cr}{ii}, etc.) in KELT-9b, the hottest known planet \citep[T$_\mathrm{eq}$$\sim$4000 K,][]{Gaudi2017}. Subsequent studies have quickly extended detections to more UHJs, including WASP-121b \citep{Ben-Yami2020, Gibson2020, Hoeijmakers2020a, Merritt2021}, MASCARA-2b \citep{Stangret2020, Nugroho2020, Hoeijmakers2020b}, WASP-76b \citep{Kesseli2022}, TOI-1518b \citep{Cabot2021}, WASP-189b \citep{Prinoth2022}, and HAT-P-70b \citep{Bello-Arufe2022}. Here we present the transmission spectroscopy of \M \citep{Dorval2020}, an ultra-hot Jupiter with an equilibrium temperature of $\sim$2250 K, orbiting at 0.047 au from an A7V star ($m_\mathrm{V}=8.2$) with an effective temperature of $\sim$7800 K. The properties of the system are summarised in Table~\ref{tab:M4}. Two transit observations were taken with the Echelle SPectrograph for Rocky Exoplanets and Stable Spectroscopic Observations \citep[ESPRESSO,][]{Pepe2021} at the VLT. 
This analysis adds \M to the ensemble of UHJs that have been characterised with high-resolution transmission spectroscopy and show absorption features from various atomic species. We describe the observations and data reduction in Section~\ref{sec:observation}. The analyses are presented in Section~\ref{sec:data-analysis}, including fitting the spin-orbit misalignment angle, modeling the Rossiter-McLaughlin (RM) effect, extracting transmission spectra, and carrying out cross-correlation. We then present in Section~\ref{sec:result} the detection of planetary absorption signals in the transmission spectrum from both single-line and cross-correlation analyses. In Section~\ref{sec:discussion}, we put the results of \M in the context of the UHJ population and discuss trends of absorption strengths among UHJs that may shed light on the atmospheric structures. \begin{table}[] \centering \caption{Properties of the MASCARA-4 system.} \resizebox{\columnwidth}{!}{ \begin{tabular}{lc} \hline \hline Parameter & Value \\ \hline \multicolumn{2}{c}{\dotfill\it Stellar parameters\dotfill}\\\noalign{\smallskip} Effective temperature, $T_{\rm eff}$ (K) $^1$ & $7800 \pm 200$ \\ Stellar mass, $M_\ast$ ($M_\odot$) $^1$ & $1.75\pm0.05$ \\ Stellar radius, $R_\ast$ ($R_\odot$) $^2$ & $1.79\pm0.04$ \\ Surface gravity, log($g$) $^1$ & $4.10\pm0.05$ \\ Projected spin, \vsini (\kms) & $43.0\pm0.1$ (Spectral)\\ & $46.2^{+7.7}_{-2.5}$ (RM reloaded) \\ Differential rotation, $\alpha$ & $0.09\pm0.03$ (RM reloaded) \\ Limb-darkening coeff., $u_1$ & 0.333 \\ Limb-darkening coeff., $u_2$ & 0.332 \\ \hline \multicolumn{2}{c}{\dotfill\it Updated system parameters \dotfill}\\\noalign{\smallskip} RV amplitude, $K_{\ast}$ (\ms) & $165.9\pm 23.7$ \\ Mid-transit time, $T_0$ (BJD) & $ 2458909.66419\pm 0.00046 $ \\ Transit duration, $T_{14}$ (day) & $ 0.1654 \pm 0.0013 $ \\ Orbital period, $P$ (day) & $2.8240932 \pm 0.0000046 $ \\ Radius ratio, $R_{\rm p}/R_\ast$ & $ 0.0869 \pm 0.0015 $ \\ Impact parameter, $b$ & 
$ 0.309 \pm 0.044 $ \\ Orbital inclination, $i_{\rm p}$ (deg) & $ 86.89 \pm 0.49 $ \\ Semi-major axis, $a$ (au) & $ 0.0474 \pm 0.0013 $ \\ $a/R_\ast$ & $ 5.704_{-0.096}^{+0.086} $ \\ Spin-orbit angle, $\lambda$ (deg) & $250.34\pm0.14$ \\ \hline \multicolumn{2}{c}{\dotfill\it Updated planet parameters \dotfill}\\\noalign{\smallskip} Planetary radius, $R_{\rm p}$ ($R_{\rm Jup}$) & $ 1.515 \pm 0.044 $ \\ Planetary mass, $M_{\rm p}$ ($M_{\rm Jup}$) & $1.675\pm0.241$ \\ Planetary density, $\rho_{\rm p}$ ($\rho_{\rm Jup}$) & $ 0.481_{-0.079}^{+0.085} $ \\ Surface gravity, $g_{\rm p}$ (m\,s$^{-2}$) & $ 18.1 \pm 2.9 $ \\ Equilibrium temperature, $T_{\rm eq}$ (K) & $2250 \pm 62$ \\ Semi-amplitude velocity, $K_{\rm p}$ (\kms) & $182\pm 5$ \\ \hline \end{tabular} } \tablebib{ $(1)$ \citet{Dorval2020}; $(2)$ \citet{Ahlers2020}.} \label{tab:M4} \end{table} \section{Observations and data reduction} \label{sec:observation} \subsection{Radial velocity measurements with CORALIE and updated planet mass} \label{sec:coralie} To refine the mass of \M, 20 high-resolution spectra were obtained with the CORALIE spectrograph \citep{Baranne1996, Queloz2000} at the Swiss 1.2\,m Euler telescope at La Silla Observatories, Chile. The observations took place between January 4, 2020 and February 10, 2021. The SNR per pixel at 550 nm varied between 55 and 90, according to sky conditions and exposure time (20-30 min depending on target visibility and scheduling requirements). Radial velocity (RV) measurements were extracted by cross-correlating the spectra with a binary A0 mask. The shape of the cross-correlation functions (CCFs) was dominated by rotational broadening (FWHM $\approx 55$ \kms) and a linear slope across the continuum due to imperfect flux correction across the CCF window. Similar to the approach demonstrated for WASP-189 \citep{Anderson2018}, we fitted a rotational profile with a linear slope to the CCFs \citep[for more information, see Sect. 2.3.3 in][]{NielsenThesis}. 
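The CCF fit described above can be illustrated with a short sketch: a Gray (2005) rotational broadening kernel plus a linear continuum slope, fitted to a synthetic CCF with \texttt{scipy}. The parametrisation, limb-darkening coefficient, and noise level below are illustrative assumptions, not the actual fitting code.

```python
import numpy as np
from scipy.optimize import curve_fit

def ccf_model(v, v0, vsini, depth, c0, c1, eps=0.6):
    """Rotational broadening kernel (Gray 2005) as an absorption dip,
    on top of a linear continuum slope (illustrative parametrisation)."""
    x = (v - v0) / vsini
    g = np.zeros_like(x)
    m = np.abs(x) < 1.0
    g[m] = (2.0*(1.0 - eps)*np.sqrt(1.0 - x[m]**2)
            + 0.5*np.pi*eps*(1.0 - x[m]**2)) / (np.pi*(1.0 - eps/3.0))
    return c0 + c1*(v - v0) - depth*g

# synthetic CCF: vsini = 45 km/s, centred at -5 km/s, plus noise
v = np.linspace(-120.0, 120.0, 241)
rng = np.random.default_rng(0)
ccf = ccf_model(v, -5.0, 45.0, 0.3, 1.0, 1e-4) + rng.normal(0.0, 0.003, v.size)

# fit the profile; popt[0] is the RV of the epoch, popt[1] the vsini
popt, _ = curve_fit(ccf_model, v, ccf, p0=[0.0, 40.0, 0.2, 1.0, 0.0])
```

The fitted line centre then serves as the RV measurement of that epoch, while the slope term absorbs the imperfect flux correction.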
The inverse bisector-span \citep[BIS,][]{Queloz2001} was computed on the continuum-corrected CCFs. The RV fitting was carried out with the Data and Analysis Center for Exoplanets web platform (DACE) \footnote{\url{https://dace.unige.ch/}}. The Keplerian model described in \citet{Delisle2016} was fit to the RV data points using a Markov chain Monte Carlo (MCMC) algorithm \citep{Diaz2014, Diaz2016}, while applying Gaussian priors on the stellar mass and planetary orbit ($P$, $T_0$ and $i_p$) from Table \ref{tab:M4}. When allowing for an eccentric orbit, we found an eccentricity consistent with zero and adopted a circular model to avoid overestimating the eccentricity \citep{2019MNRAS.489..738H}. A traditional RV fit yielded a moderate anti-correlation between the RV residuals and BIS (weighted Pearson coefficient $R_w = -0.68$). We therefore applied a linear detrending of the RVs with BIS in the model. The final RV analysis yields a semi-amplitude of the stellar reflex motion of $165.9\pm 23.7$ \ms, corresponding to a planet mass of $1.675\pm0.241$ $M_{\rm Jup}$. We tested the fitting without the detrending step and found no significant change in the resultant semi-amplitude. The phase-folded data, detrended with BIS, are shown in Fig.~\ref{fig:rv} along with the Keplerian model. The planet mass deviates by $2\sigma$ from the previous measurement of $3.1\pm0.9$ $M_{\rm Jup}$ (and RV semi-amplitude $310 \pm 90$ \ms) in \citet{Dorval2020}, which relied on one particular data point with a large uncertainty. Hence we adopt the revised values. \subsection{Photometry with EulerCam} \label{sec:ecam} We observed two transits of \M on February 12 and 29, 2020 using EulerCam, the CCD imager installed at the 1.2~m Euler telescope located at La Silla. The observations were scheduled to be simultaneous with the two nights of observations with ESPRESSO (see Section~\ref{sec:espresso}), delivering updated transit parameters for the analysis of the transmission spectroscopic data. 
For more details on the instrument and associated data reduction procedures the reader is referred to \citet{Lendl2012}. As the star is brighter (V=8.19) than exoplanet hosts usually observed with EulerCam \citep[V $\sim10-14$,][]{Lendl2012, Lendl2013, Lendl2019}, using a broad-band filter would lead to saturation of the detector. We therefore used a narrower band to avoid saturation issues, namely the \emph{Geneva V1} filter \citep{Rufener1988}, which peaks at 539\,nm and has a transmission above 50\% from 509\,nm to 562\,nm (accounting for detector quantum efficiency). The telescope was also defocused slightly to improve PSF sampling and observation efficiency. An exposure time of 20~s was used throughout both sequences. The light curves shown in Fig. \ref{fig:LC} were obtained using relative aperture photometry with two bright reference stars and apertures of 26 pixel (5.6\arcsec) radius. The night of February 12, 2020 was affected by recurrent cloud passages, leading to gaps in the observed data. We used the EulerCam data to compute the physical system parameters and in particular to derive a planetary radius in the \emph{Geneva V1} band, which is close in wavelength to the range covered by ESPRESSO. TESS \citep{Ricker2015} has previously observed \M, revealing a slightly asymmetric transit shape created by gravity darkening on the host star and a misaligned planetary orbit \citep{Ahlers2020}. Our ground-based observations do not possess sufficient precision to reveal this effect. However, to propagate the information encoded in the TESS data into our fit, we place Gaussian priors on the transit duration ($T_{\mathrm{14}}$) and the impact parameter ($b$) corresponding to the values presented by \citet{Ahlers2020}. We used a Markov Chain Monte Carlo approach as implemented in \emph{CONAN} \citep{Lendl2020b} to derive the system parameters, fitting for $R_p / R_\ast$, $b$, $T_{\mathrm{14}}$, $T_0$ and $P$. 
Broad uniform priors were assumed for the remaining fitted parameters. We assumed a quadratic limb-darkening law with parameters derived with LDCU\footnote{\url{https://github.com/delinea/LDCU}} \citep{Deline2022}. Correlated noise was modelled individually for each light curve using approximate Mat\'ern-3/2 kernels implemented through \emph{celerite} \citep{FM2017}. For the light curve of February 12, 2020, we included an evident correlation between the residual flux and the stellar FWHM as a linear trend fitted together with the transit model and Gaussian Process (GP). We allowed for additional white noise by including a jitter term for each light curve. To derive planetary parameters, we used our derived radial velocity amplitude of $165\pm23$~\ms as presented in Section~\ref{sec:coralie} and pulled values from a corresponding normal distribution at each MCMC step. Similarly, normal distributions for $M_\ast$ and $R_\ast$ were assumed corresponding to the values inferred from our spectral analysis. The raw and phase-folded light curves are shown in Fig.~\ref{fig:LC}, and the resulting updated parameters are given in Table \ref{tab:M4}. \subsection{High-resolution transmission spectroscopy with ESPRESSO} \label{sec:espresso} \begin{table*} \caption{Observing log of \M with ESPRESSO.} \label{tab:obs} \centering \begin{tabular}{c c c c c c c c} \hline\hline Night & Date & Exposure time (sec) & N$_\mathrm{spectra}$ & On-target time (hour) & Airmass & Seeing (\arcsec) & S/N@580\,nm \\ % \hline 1 & Feb 12, 2020 & 360 & 85 & 7.5 & 1.34 - 1.99 & 0.34 - 0.87 & $\sim$212\\ 2 & Feb 29, 2020 & 300 & 96 & 7.9 & 1.34 - 2.46 & 0.32 - 0.77 & $\sim$208\\ \hline \end{tabular} \end{table*} We observed two transits of \M with ESPRESSO on February 12 and 29, 2020 under ESO program 0104.C-0605 (PI: Wyttenbach). ESPRESSO is a fiber-fed, ultra-stabilized echelle high-resolution spectrograph installed at the incoherent combined Coud\'e facility of the VLT. 
The observations were taken with the single-UT HR21 mode, providing a spectral resolving power of $\mathcal{R}\sim138\,000$ and covering the optical wavelength range of 380-788 nm. The observations are summarised in Table~\ref{tab:obs}. The exposure time during transit was 360 s and 300 s in nights 1 and 2, respectively. The airmass ranged from 1.34 to 2.46, and the seeing varied from 0.3\arcsec\ to 0.9\arcsec. The total on-target time was 7.5 h (85 exposures) and 7.9 h (96 exposures) in the two nights respectively, delivering an average S/N per pixel of 212 and 208 at 580 nm. We took the sky-subtracted 1D spectra extracted with the Data Reduction Software (DRS) pipeline, and then corrected for telluric absorption features caused by H$_2$O and O$_2$ in the Earth's atmosphere following \citet{Allart2017} using the ESO sky tool \texttt{molecfit} \citep[version 4.2,][]{Smette2015}. The tool uses a line-by-line radiative transfer model (\texttt{LBLRTM}) to derive telluric atmospheric transmission spectra, fitting the molecular abundances, instrument resolution, continuum level, and wavelength solution that best match the observations; other telluric or interstellar contamination, such as the absorption of \ion{Na}{i}, was not removed with this correction. We also used the ESPRESSO DRS to generate stellar cross-correlation functions (CCFs) with an A0 mask as presented in \citet{Wyttenbach2020}. The stellar CCFs outside of the transit were average-combined to build a master out-of-transit CCF, representing the unocculted stellar line shape. We measured the projected spin velocity \vsini and systemic velocity $V_\mathrm{sys}$ of the target by fitting a rotationally broadened model \citep{Gray2005} to the line shape. We found a \vsini of 43.0$\pm$0.1 \kms and a $V_\mathrm{sys}$ of $-5.68\pm0.09$ \kms. We then obtained the residual CCFs by subtracting the CCF at each phase from the master out-of-transit CCF. 
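The residual-CCF construction just described reduces to a couple of array operations; the minimal sketch below illustrates it (the array shapes and the in-transit mask are placeholders, not the DRS products):

```python
import numpy as np

def residual_ccfs(ccfs, in_transit):
    """Residual CCFs: master out-of-transit CCF minus the CCF at each
    phase, isolating the in-transit distortions of the line shape.
    ccfs: array (n_phase, n_velocity); in_transit: boolean mask (n_phase,)."""
    master_out = ccfs[~in_transit].mean(axis=0)
    return master_out[None, :] - ccfs
```

The residuals are zero (up to noise) outside transit and contain the Doppler shadow and pulsation pattern inside transit.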
The residual CCFs were later used to extract Rossiter-McLaughlin information as detailed in Section~\ref{sec:RM-reloaded}. \section{Data analysis} \label{sec:data-analysis} \subsection{Stellar pulsations} \label{sec:pulsation} In the residual CCFs we note a strong rippled pattern caused by stellar pulsations, as shown in Fig.~\ref{fig:M4_pulsation}. The pulsations generate streak features throughout the course of the observations, blending with the Rossiter-McLaughlin and planetary signals during transit. We empirically mitigated the stellar pulsation pattern in the CCFs following \citet{Wyttenbach2020}. To achieve this, we assume that the pulsation features in the two-dimensional diagram (Fig.~\ref{fig:M4_pulsation}) stay static in radial velocity and can be approximated with positive or negative Gaussian profiles. We co-added the out-of-transit residual CCFs before ingress and after egress respectively, where the pulsation pattern appears symmetric before and after transit. We fit a Gaussian profile to the strongest peak in the combined out-of-transit residual CCFs and subtracted the fitted Gaussian component from all the individual out-of-transit CCFs, while the in-transit CCFs remained untouched. The steps above were then repeated to iteratively remove one Gaussian component at a time, until the major pulsation features were cleaned (5 iterations in our case; the results are not sensitive to the number of iterations). The pulsation signal is thereby mitigated, although some structure remains visible in the bottom panel of Fig.~\ref{fig:M4_pulsation}, as also noticeable in Fig.~\ref{fig:ccfs}. \subsection{Rossiter-McLaughlin reloaded} \label{sec:RM-reloaded} The Rossiter-McLaughlin (RM) effect (also known as the Doppler shadow) is the deformation of the stellar lines as a result of the planet blocking part of the stellar disk during transit. 
It encodes information on the stellar rotation \vsini and the projected misalignment angle $\lambda$ between the planet’s orbital axis and the star's rotation axis. We used the reloaded RM method \citep{Cegla2016} to model the Doppler shift of the CCF profiles due to the occultation by the planet during transit. To extract RM information from the data, we combined the residual CCFs from both transits by binning in orbital phase with a step size of 0.002 and then fit the residual CCF at each phase with a Gaussian profile to determine the local RV of the occulted stellar surface. The measured local RVs plotted against orbital phase are shown in Fig.~\ref{fig:RM-reloaded}. We model the local RVs by computing the brightness-weighted average rotational velocity of the stellar surface blocked by the planet at each phase. Here we fixed parameters such as $a/R_\ast$, $R_\mathrm{p}/R_\ast$ and $i_\mathrm{p}$ to the values listed in Table~\ref{tab:M4}, while making the spin-orbit angle $\lambda$, the stellar spin velocity $v$, inclination $i_\ast$, and differential rotation rate $\alpha$ free parameters. Fitting the model to the measured local RVs, we derived $\alpha=0.09\pm0.03$, $\lambda=250.34\pm0.14^{\circ}$, and \vsini= $46.2^{+7.7}_{-2.5}$ \kms. The values are consistent with the previous measurement of $\lambda=244.9_{-3.6}^{+2.7}$ deg and \vsini= $45.66_{-0.9}^{+1.1}$ \kms\ by \citet{Dorval2020}, with the slight difference in the spin-orbit angle likely resulting from the systematic differences in $P$ and $T_0$. Since we updated $P$ and $T_0$ from photometry taken simultaneously with the spectroscopic observations, the updated epoch is more reliable for our analysis of the local RVs. We caution that the uncertainties quoted here are underestimates because they do not account for systematics in the transit epoch and system parameters. 
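For intuition, the local-RV model at the core of the reloaded RM method can be sketched for the simplest case of a uniformly bright, solid-body rotator (no limb darkening or differential rotation; the sign conventions below are illustrative assumptions rather than those of \citet{Cegla2016}):

```python
import numpy as np

def local_rv(phase, a_rs, inc_deg, lam_deg, vsini):
    """Local RV of the stellar surface occulted by the planet, for a
    solid-body rotator with uniform brightness (simplified sketch of the
    reloaded-RM model; conventions are illustrative assumptions)."""
    inc, lam = np.radians(inc_deg), np.radians(lam_deg)
    th = 2.0*np.pi*phase
    # sky-plane planet position in units of R* (circular orbit)
    x = a_rs*np.sin(th)
    y = -a_rs*np.cos(th)*np.cos(inc)
    # rotate into the stellar spin frame by the projected obliquity
    x_perp = x*np.cos(lam) - y*np.sin(lam)
    # for solid-body rotation the local RV scales with the distance
    # from the projected spin axis
    return x_perp*vsini
```

For an aligned orbit ($\lambda=0$, $i_{\rm p}=90^{\circ}$) this reduces to the familiar antisymmetric RM curve; a misaligned orbit such as that of \M makes the curve asymmetric about mid-transit.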
\subsection{Modeling RM and CLV effects} \label{sec:RM+CLV} To disentangle the stellar signal from the planetary signal, we modeled the transit effects on the stellar lines, including the Rossiter-McLaughlin (RM) and center-to-limb variation (CLV) effects, following \citet{Yan2017, Casasayas-Barris2019}. We first computed synthetic stellar spectra at different limb-darkening angles ($\mu$) using the \texttt{Spectroscopy Made Easy} tool \citep{Valenti1996} with \texttt{VALD} line lists \citep{Ryabchikova2015}. The stellar disk was divided into cells of size 0.01 $R_\ast$ $\times$ 0.01 $R_\ast$, each assigned a spectrum obtained by interpolating to its corresponding $\mu$ and applying a radial velocity shift according to its local rotational velocity. We then integrated over the whole stellar disk while excluding the region blocked by the planet during transit. We divided the integrated spectrum at each phase by the out-of-transit stellar spectrum, resulting in the model of the RM+CLV effects as shown in Fig.~\ref{fig:line2d} (panel b). The system and stellar parameters used in the modeling are presented in Table~\ref{tab:M4}, including the best-fit parameters $\alpha=0.09$, $\lambda=250.3^{\circ}$, and \vsini= 48.6 \kms\ obtained via the reloaded RM method in Section~\ref{sec:RM-reloaded}. \subsection{Transmission spectrum} \label{sec:trasmission} Using the telluric-corrected 1D spectra, we extracted the transmission spectrum following a similar procedure to previous studies \citep[such as ][]{Wyttenbach2015, Casasayas-Barris2019}. It is summarised as follows. The spectra were median-normalised and shifted to the stellar rest frame. The out-of-transit spectra were co-added to build the master stellar spectrum, which was then removed from each individual spectrum via division. In the residuals, there remained sinusoidal wiggles as also seen in other ESPRESSO data \citep{Tabernero2021, Borsa2021, Casasayas-Barris2021, Kesseli2022}. 
We applied a Gaussian smoothing filter with a width of 5 \AA\ to each exposure and removed the smoothed component to correct for this low-frequency noise. Moreover, outliers exceeding a 4$\sigma$ threshold in a sliding 25 \AA\ window were corrected through linear interpolation over nearby pixels. Finally, we combined the data of both transits by binning in orbital phase with a step size of 0.002. The in-transit residuals at this stage contain the variation due to the RM+CLV effects and the absorption of the planet. Following \citet{Yan2018}, we fit the data with a model composed of both the stellar effects (as detailed in Section~\ref{sec:RM+CLV}) and the planetary absorption signal (modeled as a Gaussian profile), assuming the expected planetary orbital motion amplitude ($K_{\rm p}$) as listed in Table~\ref{tab:M4}. The free parameters include the Gaussian amplitude ($h$), Gaussian width (FWHM), wind speed ($v_{\rm wind}$), and a scaling factor of the stellar effects ($f$) to account for the fact that the effective absorption radius can be larger than the nominal planet radius used in the RM+CLV model. The fitting process was performed with \texttt{PyMultiNest} \citep{Buchner2014}, a \texttt{Python} interface for the Bayesian inference technique \texttt{MultiNest} \citep{Feroz2009}. After obtaining the best-fit values, we removed the stellar RM+CLV effects from the residuals, which were then average-combined in the planet rest frame to form the 1D transmission spectrum as presented in Fig.~\ref{fig:line2d} (panel e). \subsection{Cross-correlation analysis} In addition to inspecting individual lines, we carried out cross-correlation analyses \citep{Snellen2010, Brogi2012} to co-add hundreds of absorption lines in the full range of the optical transmission spectrum to search for atoms \citep{Hoeijmakers2018, Hoeijmakers2019, Kesseli2022}. We computed transmission models of different atoms and ions (\ion{Fe}{i}, \ion{Fe}{ii}, \ion{Mg}{i}, etc.) 
for cross-correlation analysis using the radiative transfer tool \texttt{petitRADTRANS} \citep{Molliere2019}. Here we assumed an isothermal temperature profile of 2500 K, a continuum level of 1 mbar, and a gray cloud deck at 1 mbar. The volume mixing ratios (VMR) were set to the solar abundance. We utilized the formula for cross-correlation as presented in \citet{Hoeijmakers2018, Hoeijmakers2019, Hoeijmakers2020b}, \begin{equation} \label{eq:ccf} c(v, t) = \frac{\sum_{i} x_i(t) T_i(v)}{\sum_{i} T_i(v)}, \end{equation} where $x_i(t)$ is the observation at time $t$ and $T_i(v)$ is the transmission model of the species shifted to a radial velocity $v$, such that the CCF is effectively a weighted average of multiple absorption lines, representing the average strength of the absorption. Following this convention allows us to compare the line strengths across different ultra-hot Jupiters presented in previous cross-correlation studies. However, we caution that such CCF amplitudes depend on the specific models used. This will be further discussed in the comparison to other planets in Section~\ref{sec:correlation}. The transmission templates were cross-correlated with the telluric-corrected spectra in the wavelength range of 380-685 nm (beyond which the spectra are heavily contaminated by telluric lines and were therefore excluded from the analysis). This led to CCFs dominated by the signal from the stellar spectrum. We then obtained the residual CCFs by removing the average out-of-transit CCF and mitigated the stellar pulsation pattern following Section~\ref{sec:pulsation}. As with the transmission spectrum, the residual CCFs contain both the stellar RM+CLV effects and the planetary signal. 
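Equation~(\ref{eq:ccf}) itself is straightforward to implement; the toy sketch below uses a single Gaussian line as the template and recovers an injected 10 \kms shift (the wavelength grid and line parameters are invented for illustration only):

```python
import numpy as np

C_KMS = 2.998e5  # speed of light in km/s

def weighted_ccf(wave, spec, t_wave, t_depth, v_grid):
    """CCF of Eq. (1): for each trial velocity v, Doppler-shift the
    template and use it as weights, so c(v) is the template-weighted
    average of the spectrum."""
    out = np.empty(v_grid.size)
    for j, v in enumerate(v_grid):
        T = np.interp(wave, t_wave*(1.0 + v/C_KMS), t_depth)
        out[j] = np.sum(spec*T)/np.sum(T)
    return out

# toy example: one Gaussian line, spectrum shifted by +10 km/s
wave = np.linspace(5000.0, 5010.0, 2001)
template = np.exp(-0.5*((wave - 5005.0)/0.05)**2)
shift = 5005.0*10.0/C_KMS
spec = np.exp(-0.5*((wave - 5005.0 - shift)/0.05)**2)
v_grid = np.arange(-50.0, 51.0, 1.0)
ccf = weighted_ccf(wave, spec, wave, template, v_grid)
```

The CCF peaks at the velocity where the shifted template aligns with the spectral line, here +10 \kms.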
We carried out the same cross-correlation analysis on the synthetic stellar spectra computed in Section~\ref{sec:RM+CLV} to simulate the RM+CLV contribution to the residual CCFs, which was multiplied by a scaling factor $f$ and then removed from the data to obtain the final CCFs originating from the planetary absorption. The values of the free parameters, including $f$, the Gaussian amplitude of the absorption signal $h$, the Gaussian FWHM, and the central velocity offset $v_{\rm wind}$, were determined in the same way as in Section~\ref{sec:trasmission}. \section{Results} \label{sec:result} \subsection{Detection of individual lines of \texorpdfstring{ \ion{H}{i}, \ion{Na}{i}, \ion{Ca}{ii}, \ion{Mg}{i}, \ion{Fe}{i}, \ion{Fe}{ii}}{}} \label{sec:result_lines} We report the detection of individual absorption lines of H$\alpha$, H$\beta$, \ion{Na}{i} D1\&D2, \ion{Ca}{ii} H\&K, \ion{Mg}{i}, \ion{Fe}{i}, and \ion{Fe}{ii} in \M. The transmission spectra around these absorption lines are shown in Fig.~\ref{fig:line1d}, and the measured properties are summarised in Table~\ref{tab:lines}. The centres of the absorption features generally agree with zero velocity, while H$\alpha$, H$\beta$, and the \ion{Na}{i} doublet appear blueshifted by up to $\sim$4 \kms, which is usually interpreted as evidence of a day-to-night side wind \citep{Snellen2010, Brogi2016, Hoeijmakers2018, Casasayas-Barris2019, Seidel2021}. The various velocity offsets of different species may indicate that the lines probe distinct atmospheric layers dominated by different dynamic processes. The wind velocity offsets of the two \ion{Na}{i} doublet lines differ from each other by $\sim 2\sigma$. We suspect this may be a systematic effect resulting from multiple sodium absorption components of the interstellar medium located around 13 to 24 \kms away from the line centre. 
We mitigated the effect by excluding this velocity range in the barycentric rest frame when calculating the transmission spectrum, but some artefacts might still persist and contribute to the line offset. We also note in Fig.~\ref{fig:Ha} panel c the ``gap'' in the absorption signal when the planetary trail intersects the Doppler shadow, meaning that the planetary transmission lines overlap with the stellar lines from the region blocked by the planet. At these phases, the effective size of the planet appears larger because of the absorption, enhancing the RM effect. This is not accounted for in our RM+CLV modelling, so such under-correction commonly produces the gap near the overlapping orbital phases. We quantified the effect of the under-correction on the planetary absorption depths by excluding the overlapping orbital phases (e.g. from -0.015 to 0.015) when fitting the planetary signal and co-adding the transmission spectra. We found that the absorption amplitude increases by $\sim$20\%-25\% for lines such as H$\alpha$, sodium, and ionised iron, typically around 2$\sigma$ of our measurement uncertainties. The nominal uncertainties shown in Table~\ref{tab:lines} do not account for such systematic noise and are therefore likely underestimated. 
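The number of scale heights $N_\mathrm{sc}$ quoted in Table~\ref{tab:lines} follows from $N_\mathrm{sc} = h / (2H_0R_{\rm p}/R_\ast^2)$ with $H_0 = k_{\rm B}T_{\rm eq}/(\mu m_{\rm u} g_{\rm p})$. The sketch below reproduces the tabulated H$\alpha$ value to within its uncertainty, assuming a mean molecular weight $\mu=2.3$ (our assumption for an H$_2$-dominated atmosphere; partial dissociation in UHJs would change $\mu$):

```python
# physical constants and radii in SI units
k_B = 1.380649e-23   # J/K
amu = 1.66054e-27    # kg
R_jup, R_sun = 7.1492e7, 6.957e8  # m

def n_scale_heights(h_percent, T_eq, g, R_p, R_s, mu=2.3):
    """Convert a transmission depth h (in %) into the number of
    atmospheric scale heights N_sc = h / (2*H0*R_p/R_s**2)."""
    H0 = k_B*T_eq/(mu*amu*g)      # pressure scale height (m)
    one_H = 2.0*H0*R_p/R_s**2     # transmission signal of one H0
    return (h_percent/100.0)/one_H

# H-alpha of MASCARA-4b, using the parameters of Table 1
N_sc = n_scale_heights(0.317, 2250.0, 18.1, 1.515*R_jup, 1.79*R_sun)
```

This yields $N_\mathrm{sc}\approx50$ for H$\alpha$, close to the tabulated $48.4\pm3.3$; the small residual difference mainly reflects the assumed $\mu$.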
\begin{table*} \caption{Summary of individual line detection in the transmission spectrum of \M.} \label{tab:lines} \centering \begin{tabular}{c c c c c c c c} \hline\hline Line & $\lambda_0$ (\AA) & $h$ (\%) & $N_\mathrm{sc}$ & S/N & $f$ & FWHM (\kms) & $v_{\rm wind}$ (\kms)\\ % \hline H$\alpha$ & 6564.61 & $ -0.317 \pm 0.021 $ & $ 48.4 \pm 3.3 $ & 14.8 & $ 1.89 $ & $ 31.4 \pm 2.4 $ & $ -3.0 \pm 1.0 $ \\ H$\beta$ & 4862.72 & $ -0.143 \pm 0.030 $ & $ 21.9 \pm 4.5 $ & 4.8 & $ 1.49 $ & $ 27.2 \pm 8.1 $ & $ -4.5 \pm 2.3 $ \\ \ion{Ca}{ii} H & 3969.59 & $ -0.705 \pm 0.082 $ & $ 107.6 \pm 12.6 $ & 8.6 & $ 1.48 $ & $ 23.0 \pm 3.0 $ & $ 0.2 \pm 1.3 $ \\ \ion{Ca}{ii} K & 3934.77 & $ -0.844 \pm 0.082 $ & $ 128.9 \pm 12.5 $ & 10.3 & $ 1.31 $ & $ 29.6 \pm 3.1 $ & $ 0.4 \pm 1.5 $ \\ \ion{Na}{i} D1 & 5897.55 & $ -0.168 \pm 0.014 $ & $ 25.6 \pm 2.2 $ & 11.9 & $ 1.32 $ & $ 28.6 \pm 2.8 $ & $ -1.8 \pm 1.1 $ \\ \ion{Na}{i} D2 & 5891.58 & $ -0.214 \pm 0.017 $ & $ 32.7 \pm 2.6 $ & 12.4 & $ 1.37 $ & $ 19.9 \pm 2.1 $ & $ -3.6 \pm 0.7 $ \\ \ion{Mg}{i} & 5174.12 & $ -0.151 \pm 0.017 $ & $ 23.0 \pm 2.6 $ & 8.7 & $ 1.04 $ & $ 22.6 \pm 3.5 $ & $ 1.5 \pm 1.3 $ \\ \ion{Mg}{i} & 5185.05 & $ -0.125 \pm 0.019 $ & $ 19.1 \pm 2.9 $ & 6.5 & $ 1.08 $ & $ 25.2 \pm 5.8 $ & $ 2.3 \pm 1.6 $ \\ \ion{Fe}{i} & 4046.96 & $ -0.162 \pm 0.025 $ & $ 24.8 \pm 3.8 $ & 6.6 & $ 1.04 $ & $ 27.8 \pm 4.4 $ & $ -5.0 \pm 2.1 $ \\ \ion{Fe}{i} & 4384.78 & $ -0.135 \pm 0.016 $ & $ 20.6 \pm 2.5 $ & 8.3 & $ 1.16 $ & $ 29.2 \pm 4.1 $ & $ -0.0 \pm 1.6 $ \\ \ion{Fe}{ii} & 4925.30 & $ -0.226 \pm 0.022 $ & $ 34.5 \pm 3.3 $ & 10.3 & $ 1.13 $ & $ 13.1 \pm 1.7 $ & $ -0.1 \pm 0.6 $ \\ \ion{Fe}{ii} & 5019.83 & $ -0.211 \pm 0.020 $ & $ 32.2 \pm 3.0 $ & 10.8 & $ 1.11 $ & $ 19.1 \pm 2.5 $ & $ -0.7 \pm 0.8 $ \\ \hline \end{tabular} \tablefoot{$\lambda_0$ is the central wavelength of the line in vacuum. $h$, FWHM, and $v_{\rm wind}$ are the amplitude, width, and center of the best-fit Gaussian profile to the planetary absorption. 
S/N is simply calculated as the value of $h$ divided by its uncertainty. $N_\mathrm{sc}$ represents the number of atmospheric scale heights that the peak absorption corresponds to. $f$ is the scaling factor applied to the stellar RM+CLV model.} \end{table*} \subsection{Detection of species in cross-correlation} In addition to elements with individual lines detected in the transmission spectrum, we performed cross-correlation analyses for a range of atoms, ions, and molecules, guided by their observability at high spectral resolution as presented in \citet{Kesseli2022}. Here we present the detection of \ion{Mg}{i}, \ion{Ca}{i}, \ion{Cr}{i}, \ion{Fe}{i}, and \ion{Fe}{ii} in Fig.~\ref{fig:ccfs} and Table~\ref{tab:ccfs}. We found no evidence of other species such as \ion{Ti}{i}, \ion{Ti}{ii}, \ion{V}{i}, \ion{V}{ii}, \ion{Mn}{i}, \ion{Co}{i}, \ion{Ni}{i}, TiO, and VO. The lack of detection of \ion{Ti}{i}, \ion{Ti}{ii}, and TiO is common among UHJs, although not well understood. Temperatures in the atmosphere seem to play a key role in determining the chemical composition. For example, KELT-9~b shows strong \ion{Ti}{ii} and no \ion{Ti}{i}; the lack of \ion{Ti}{i} there is likely attributable to dominant ionisation at the extreme temperature of 4000 K \citep{Hoeijmakers2018}. \ion{Ti}{i}, \ion{Ti}{ii}, and TiO were all detected in WASP-189b \citep{Prinoth2022}, with a temperature of $\sim$2700 K. For other UHJs with slightly lower temperatures (including WASP-76~b, WASP-121~b, HAT-P-70~b, MASCARA-2~b, and \M), there is no conclusive detection of Ti in any form. This has been proposed to be due to Ti being trapped in condensates on the night side of those cooler planets \citep{Spiegel2009, Parmentier2013, Kesseli2022}. 
\begin{table*} \caption{Summary of cross-correlation detections.} \label{tab:ccfs} \centering \begin{tabular}{c c c c c c c} \hline\hline Species & S/N & $K_{\rm p}$ (\kms) & $h$ (\%) & $f$ & FWHM (\kms) & $v_{\rm wind}$ (\kms) \\ % \hline \ion{Mg}{i} & 6.3 & $ 153 ^{+60}_{-62} $ & $ 0.0221 \pm 0.0009 $ & 0.54 & $ 23.8 \pm 0.8 $ & $ -0.5 \pm 1.6 $ \\ \ion{Ca}{i} & 6.3 & $ 207 ^{+53}_{-60} $ & $ 0.0039 \pm 0.0001 $ & 0.69 & $ 22.7 \pm 1.2 $ & $ -3.2 \pm 1.5 $ \\ \ion{Cr}{i} & 6.7 & $ 204 ^{+14}_{-45} $ & $ 0.0114 \pm 0.0006 $ & 0.42 & $ 16.3 \pm 0.6 $ & $ -3.9 \pm 1.0 $ \\ \ion{Fe}{i} & 25.3 & $ 204 ^{+13}_{-21} $ & $ 0.0150 \pm 0.0001 $ & 0.35 & $ 22.5 \pm 0.2 $ & $ -2.4 \pm 0.4 $ \\ \ion{Fe}{ii} & 11.8 & $ 179 ^{+14}_{-12} $ & $ 0.0444 \pm 0.0011 $ & 0.62 & $ 16.5 \pm 0.4 $ & $ -1.2 \pm 0.6 $ \\ \hline \end{tabular} \tablefoot{S/N is the signal-to-noise ratio at the maximum in the $K_{\rm p}$-$V_\mathrm{sys}$ map as shown in the bottom row of Fig.~\ref{fig:ccfs}. The noise level is measured in the map as the standard deviation in the velocity ranges of (-150, -75) and (75, 150) \kms, away from the peak signal. The uncertainties of the parameters are computed following the method described in \citet{Kesseli2022}. $h$, $f$, FWHM, and $v_{\rm wind}$ are defined as in Table~\ref{tab:lines}.} \end{table*} \subsection{Neutral and ionized iron in MASCARA-4b and other UHJs} Here we focus on the properties of the \ion{Fe}{i} and \ion{Fe}{ii} absorption in \M and draw comparisons with previous detections in other ultra-hot Jupiters. The absorption strength of \ion{Fe}{ii} exceeds that of \ion{Fe}{i}, and is more than an order of magnitude higher than what is predicted under the assumption of a hydrostatic atmospheric model. This has been commonly seen in other UHJs such as KELT-9b \citep{Hoeijmakers2019}, MASCARA-2b \citep{Hoeijmakers2020b}, HAT-P-70b \citep{Bello-Arufe2022}, and WASP-189b \citep{Prinoth2022}. 
The contribution from the hydrostatic region of the atmosphere is often too small to account for the measured absorption level of \ion{Fe}{ii}. Several deviations from the model assumptions may help explain the strong absorption by \ion{Fe}{ii}, for instance photochemistry in the upper atmosphere, non-local thermodynamic equilibrium (non-LTE) effects, and hydrodynamic outflows \citep{Huang2017, Hoeijmakers2019, Prinoth2022}. These all suggest that \ion{Fe}{ii} traces the upper atmosphere, higher than \ion{Fe}{i} does. In particular, hydrodynamic outflows can raise the species to the extended upper atmosphere, followed by progressive ionisation of \ion{Fe}{i} in the exosphere, giving rise to strong \ion{Fe}{ii} absorption features. Moreover, the strong lines of \ion{Fe}{ii} and \ion{Mg}{ii} observed in the near-ultraviolet (NUV) in WASP-121b also provide evidence of the exospheric origin of the ionic species \citep{Sing2019}. As shown in Table~\ref{tab:ccfs}, the FWHM of the \ion{Fe}{i} signal in \M is larger than that of \ion{Fe}{ii}. This has also been observed in MASCARA-2b \citep{Hoeijmakers2020b}, whereas HAT-P-70b shows the opposite trend, with \ion{Fe}{ii} broader than \ion{Fe}{i} \citep{Bello-Arufe2022}. This discrepancy in line width adds to the indication that neutral and ionised iron may probe different regions of the atmosphere: \ion{Fe}{i} traces deeper layers, while \ion{Fe}{ii} originates from the upper atmosphere, where the distinct dynamical regimes contribute to the different line widths and radial velocities \citep{Showman2013, Louden2015, Brogi2016, Seidel2019, Hoeijmakers2019, Bello-Arufe2022}. For instance, super-rotational jets may be present in the lower atmospheres of \M and MASCARA-2b, broadening the \ion{Fe}{i} signatures, whereas the atmosphere of HAT-P-70b may undergo a strong outflow in the upper atmosphere, broadening the \ion{Fe}{ii} signal instead.
\section{Discussion} \label{sec:discussion} \subsection{Disentangling the hydrostatic atmosphere and extended exosphere of UHJs} \label{sec:correlation} As detections of atoms with high-resolution transmission spectroscopy accumulate quickly, a small sample of ultra-hot Jupiters is building up, providing us with the opportunity to study potential trends of atomic signatures in the UHJ population. Among various species, \ion{H}{i}, \ion{Na}{i}, \ion{Mg}{i}, \ion{Ca}{ii}, \ion{Fe}{i}, and \ion{Fe}{ii} have been commonly detected in a handful of UHJs, including KELT-9b \citep{Yan2018, Cauley2019, Borsa2019, Hoeijmakers2019, Yan2019, Turner2020, Wyttenbach2020}, MASCARA-2b \citep{Casasayas-Barris2020, Hoeijmakers2020b, Nugroho2020, Stangret2020}, WASP-121b \citep{Cabot2020, Gibson2020, Hoeijmakers2020a, Borsa2021, Merritt2021}, WASP-76b \citep{Tabernero2021, Seidel2021, Casasayas-Barris2021, Kesseli2022}, WASP-189b \citep{Prinoth2022}, and HAT-P-70b \citep{Bello-Arufe2022}. We compile the properties of these detections (including the transmission amplitude $h$ and the FWHM of each species) in Table~\ref{tab:UHJs}. Fig.~\ref{fig:scaleheight} shows the sample of UHJs with the absorption amplitude ($h$) of each species plotted against the typical transmission strength of absorbers extending one scale height ($2H_0R_p/R_\ast^2$). Under the assumption of hydrostatic atmospheres, we expect the line strength of a species to be proportional to the typical transmission amplitude $2H_0R_p/R_\ast^2$ if the absorption forms at a similar pressure level. In this case, the slope of the linear correlation represents the vertical extent of the atom in UHJ atmospheres in units of the scale height $H_0$. According to Fig.~\ref{fig:scaleheight}, neutral metal species such as \ion{Mg}{i}, \ion{Fe}{i} (and possibly \ion{Na}{i}) follow the trend well, with Pearson correlation coefficients $r$ close to 1.
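The quantities entering this comparison can be recomputed directly from the values in Table~\ref{tab:UHJs}; a minimal Python sketch for \ion{Fe}{i}, restricted to the five planets with measured $H_0$ (the through-origin fit for the slope is a simplifying assumption of this sketch, so the numbers are indicative rather than identical to those in Fig.~\ref{fig:scaleheight}):

```python
import numpy as np

R_jup = 7.149e7  # Jupiter radius in metres
# (H0 [km], Rp [R_Jup], Rp/R*, h_FeI [%]) from Table tab:UHJs,
# keeping only the planets with a measured (not lower-limit) H0
planets = {
    "KELT-9b":    (704.0,  1.926, 0.082, 0.016),
    "MASCARA-4b": (430.0,  1.515, 0.087, 0.0150),
    "WASP-76b":   (1206.0, 1.863, 0.109, 0.049),
    "WASP-121b":  (967.0,  1.865, 0.125, 0.039),
    "WASP-189b":  (485.0,  1.619, 0.061, 0.0075),
}
# one-scale-height amplitude 2*H0*Rp/R*^2 = 2*H0*(Rp/R*)^2/Rp, in per cent
amp = np.array([2 * (H0 * 1e3) * k**2 / (Rp * R_jup) * 100
                for H0, Rp, k, h in planets.values()])
h_fe = np.array([h for H0, Rp, k, h in planets.values()])

r = np.corrcoef(amp, h_fe)[0, 1]            # Pearson correlation coefficient
n_sc = (amp * h_fe).sum() / (amp**2).sum()  # through-origin slope, ~ number
                                            # of scale heights probed by Fe I
```

This yields $r$ close to 1 and a slope of roughly two scale heights, consistent with the behaviour of \ion{Fe}{i} described above.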
The number of scale heights (see the slopes in Fig.~\ref{fig:scaleheight}) probed by \ion{Na}{i}, \ion{Mg}{i}, and \ion{Fe}{i} decreases with the atomic mass of the element. Another underlying assumption for the linear correlation is that the abundances of the neutral species do not vary significantly among the UHJs. The good correlations shown in Fig.~\ref{fig:scaleheight} seem to support this hypothesis, which could be verified by future retrieval analyses constraining the abundances in these UHJs. On the other hand, H$\alpha$ and \ion{Fe}{ii} are two apparent exceptions to the correlation. In particular, although the scale height of WASP-76b is large, only upper limits have been estimated for H$\alpha$ and \ion{Fe}{ii} \citep{Casasayas-Barris2021, Kesseli2022}. Rather, this agrees with the argument that the absorption of H$\alpha$ probes extended upper atmospheres where the hydrostatic and LTE assumptions no longer apply. Hence the \ion{Fe}{ii} signatures, behaving so similarly to H$\alpha$, likely also originate from UHJ exospheres. In this light, we find a positive correlation between the \ion{Fe}{ii} and H$\alpha$ absorption signals, as shown in Fig.~\ref{fig:Ha}, supporting the picture that they probe similar atmospheric regions and processes. The planet WASP-76b, with only an upper limit on the H$\alpha$ absorption, also shows no evidence of \ion{Fe}{ii}, well in line with the correlation. WASP-121b is a baffling case in which the detection of \ion{Fe}{ii} in the optical is debated (a tentative detection was claimed by \citealt{Ben-Yami2020, Borsa2021, Merritt2021}, but contradicted by \citealt{Hoeijmakers2020a, Gibson2020}), while the strong detection in the NUV \citep{Sing2019} does suggest its presence in the extended exosphere up to $2R_p$. Based on the strong H$\alpha$ detection in WASP-121b \citep{Cabot2020, Borsa2021}, we expect significant \ion{Fe}{ii} absorption if it follows the trend.
Further observations are needed to unravel this. We also note that, to ensure the trend is not obstructed by the model-dependency of the CCF signal, we compared individual \ion{Fe}{ii} lines in the transmission spectra of KELT-9b \citep{Cauley2019, Hoeijmakers2019}, MASCARA-2b \citep{Casasayas-Barris2020}, and \M (this work), and confirmed that the correlation with H$\alpha$ still holds. Therefore, our comparison of atomic transmission features among the UHJ population hints at two distinct regimes of origin: the hydrostatic lower atmosphere and the extended exosphere. The linear correlation between the absorption strengths of metals (such as \ion{Na}{i}, \ion{Mg}{i}, and \ion{Fe}{i}) and the expected transmission amplitudes under the hydrostatic assumption validates their origin in the hydrostatic lower atmosphere. On the other hand, hydrogen and ions such as \ion{Fe}{ii} deviate from the scale-height correlation, possibly because of the prevailing contribution from hydrodynamic outflows in the upper atmosphere. The positive relation between the \ion{Fe}{ii} and H$\alpha$ absorption strengths is further indicative of their exospheric origins (as discussed in Section~\ref{sec:exosphere}). The overall picture for \ion{Ca}{ii} is less clear, probably involving contributions from both regimes. The absorption strengths of \ion{Ca}{ii} are commonly large enough to be attributed to the extended upper atmosphere, sometimes even beyond the Roche lobe \citep{Borsa2021, Bello-Arufe2022}. Yet we find no clear correlation between \ion{Ca}{ii} and H$\alpha$, while the linear trend with scale height still holds to some extent (see Fig.~\ref{fig:scaleheight}). Our speculation is that both the lower and upper atmospheric regimes contribute to the absorption, considering that \ion{Ca}{i} atoms are readily ionized already in the lower atmosphere, in contrast to \ion{Fe}{i}, whose ionization may only become significant in the upper atmosphere.
A larger sample size is required to draw more solid conclusions. We caution that studying an ensemble of UHJs is challenging for two reasons. First, it is difficult to account for the uncertainty of measurements taken with different instruments or produced by different data reductions. For instance, systematic differences have been found in H$\alpha$ for KELT-9b \citep{Yan2018, Cauley2019, Turner2020, Wyttenbach2020} and MASCARA-2b \citep{Casasayas-Barris2019}. However, the systematics are not expected to be large enough to break the general trend that we show here. Second, the model-dependency of cross-correlation signals presents challenges for comparisons across the UHJ population. Previous studies of individual UHJs use cross-correlation templates that are modeled differently. A standard set of models, such as that presented in \citet{Kitzmann2022}, would be beneficial if commonly adopted in such analyses. Yet the choice of temperature in the model affects the relative weights assigned to weak versus strong lines, which changes the amplitude of the CCFs by up to a factor of two. \subsection{Hydrodynamic exospheres as probed via \texorpdfstring{H$\alpha$}{} and ions}\label{sec:exosphere} The positive correlation between the \ion{Fe}{ii} and H$\alpha$ absorption signals shown in Fig.~\ref{fig:Ha} indicates that both species trace similar atmospheric regions in UHJs. \ion{H}{i} and \ion{He}{i} absorption lines have previously been modeled as probes of the escaping exosphere \citep{Huang2017, Allan2019, Garcia-Munoz2019, Wyttenbach2020, Yan2021, Oklopcic2018, Lampon2021, DosSantos2022}, which is expected to be the consequence of hydrodynamic outflows driven by stellar X-ray, extreme-ultraviolet (EUV), or NUV radiation. The driving mechanism may depend on the stellar type, as early A-type stars are not expected to emit strongly in the EUV \citep{Fossati2018}, while having high levels of NUV flux \citep{Garcia-Munoz2019}.
The modeling of outflows has also been extended to heavy atoms such as C, O, and Si by \citet{Koskinen2013}, suggesting that heavy elements dragged to the upper atmosphere stay well mixed as a result of collisions with the rapidly escaping hydrogen. \citet{Gebek2020} modeled Na and K in evaporative exospheres to interpret high-resolution transmission observations. Furthermore, the strong \ion{Mg}{i}, \ion{Mg}{ii}, and \ion{Fe}{ii} lines in the NUV have been discussed as tracers of upper atmospheres and hydrodynamic escape \citep{Bourrier2014, Sing2019, Dwivedi2019}, while the optical lines have not been explored. Based on the presented correlation between the \ion{Fe}{ii} and H$\alpha$ absorption signals, we suggest that the \ion{Fe}{ii} lines in the optical also probe the exospheres of UHJs. Therefore, further modeling of \ion{Fe}{ii} in exospheres can help constrain the structure of the upper atmosphere, the outflows, and the mass-loss process. Without attempting to model any particular atmosphere in detail, we aim to gain insight into the trend in the UHJ population through the following simplified estimates. We examine the role of hydrodynamic outflows and photoionisation of atoms in the absorption signals of \ion{H}{i} and \ion{Fe}{ii} in UHJs. For a rough estimate, we treat the extended exosphere as a homogeneous optically thin cloud subject to stellar high-energy radiation. In the optically thin limit, the absorption level is proportional to the total mass of the absorbing material regardless of the shape of the cloud \citep{Hoeijmakers2020a, Gebek2020}. Hence, the absorption signal depends on the mass-loss rate $\dot{M}$, which determines the inflow of absorbing material to the exosphere, and on the ionisation degree $f_X$ of the element. More details of the estimate can be found in Appendix~\ref{app:estimation}.
We find that photoionisation plays a marginal role in the spread of absorption strengths, because the degree of ionization is expected to vary by less than a factor of a few among different UHJs. Instead, the amount of absorption is dominated by the planetary mass loss $\dot{M}$ driven by the stellar EUV or NUV flux, which usually varies by orders of magnitude from system to system. Therefore, we argue that the dominant outflow drives the positive correlation between the H$\alpha$ and \ion{Fe}{ii} absorption (see Fig.~\ref{fig:Ha}), and that these species likely trace the exospheres of UHJs. The absorption level reflects the properties of individual planets, such as the mass-loss rate and the irradiation environment. Although the sample size of UHJs with detailed spectral characterisation is still small, we suggest that the correlation between the \ion{Fe}{ii} and H$\alpha$ absorption signals is expected from the analytical estimate. This calls for future observations of more UHJs to populate this plot. Detailed modeling of individual planets will be valuable for constraining the hydrodynamic outflows and mass-loss rates as traced by atomic absorption lines in the optical. \section{Conclusion} With the purpose of a detailed characterisation of the ultra-hot Jupiter \M, we carried out transit photometry and radial-velocity measurements using EulerCam and CORALIE at the 1.2~m Euler telescope, delivering a refined planet mass of $1.675\pm0.241$ $M_{\rm Jup}$, together with other updated system and planet parameters. We analysed the optical transmission spectrum of \M observed with the high-resolution spectrograph ESPRESSO at the VLT and report the detection of various species in the atmosphere, including \ion{H}{i}, \ion{Na}{i}, \ion{Mg}{i}, \ion{Ca}{i}, \ion{Ca}{ii}, \ion{Cr}{i}, \ion{Fe}{i}, and \ion{Fe}{ii}. This adds \M to the ensemble of UHJs showing a profusion of atomic absorption features.
Putting the measurements into perspective, we explored the trends of atomic absorption features within the UHJ population, which indicate two distinct atmospheric regimes probed through different absorption signatures. The absorption by metals such as \ion{Mg}{i} and \ion{Fe}{i} appears to trace the hydrostatic region of the atmospheres, as the line strengths correlate well with the scale heights of different planets. The H$\alpha$ and \ion{Fe}{ii} absorption strengths deviate from the scale-height correlation, yet show a positive relation with each other among the UHJ population. Through analytical estimates, we suggest that this correlation is consistent with an exospheric origin of the \ion{Fe}{ii} and H$\alpha$ absorption in UHJs, driven by the dominant outflows subject to stellar high-energy radiation. This shows the potential of using both species as probes of the hydrodynamic escape and mass loss of UHJs. Studying the diverse atomic transmission signatures allows us to disentangle the hydrostatic and the exospheric regimes of these extremely irradiated planets. \begin{acknowledgements} Based on observations collected at the European Southern Observatory under ESO programme 0104.C-0605. We thank the referee for insightful comments that helped improve the manuscript. We thank Aline Vidotto for the discussion on the atmospheric escape of ultra-hot Jupiters. Y.Z. and I.S. acknowledge funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program under grant agreement No. 694513. M.L. acknowledges support of the Swiss National Science Foundation under grant number PCEFP2\_194576. The contributions of M.L. and A.P. have been carried out within the framework of the NCCR PlanetS supported by the Swiss National Science Foundation. DACE is a web platform hosted in Geneva and developed by the Swiss National Center of Competence in Research (NCCR) PlanetS.
\end{acknowledgements} \bibliographystyle{aa} \bibliography{ref} \begin{appendix} \section{Analytical estimation of atomic absorption in exospheres} \label{app:estimation} We consider gas composed of atomic hydrogen in the upper atmosphere (exosphere), escaping the planet with a velocity $u$ and a constant mass-loss rate $\dot{M}$. The conservation of mass gives \begin{equation} \label{eq:mass_loss_conserve} \dot{M} = 4\pi R_c^2 u \rho, \end{equation} where $R_c$ is the size of the exobase, $\rho$ is the density of hydrogen, and $u=x_u v_\mathrm{esc}$ is a fraction of the planet's escape velocity $v_\mathrm{esc}=\sqrt{2GM_p/R_p}$. The energy-limited mass-loss rate following \citet{Erkaev2007} is given by \begin{equation} \label{eq:mass_loss_energy} \dot{M} = \frac{\pi R_c^2 F_\mathrm{EUV}\epsilon}{\Phi_0 K}, \end{equation} where $R_p$ is the planet radius, $F_\mathrm{EUV}$ is the stellar EUV flux at the location of the planet, $\epsilon$ is the heating efficiency, $\Phi_0$ is the gravitational potential at the planetary radius ($\Phi_0=GM_p/R_p$), and $K(\frac{R_{Rl}}{R_p})$ is the coefficient accounting for the potential difference between the Roche-lobe boundary ($R_{Rl}$) and the planetary surface ($R_p$) as a result of stellar tidal forces. Combining Equations~\ref{eq:mass_loss_conserve} and \ref{eq:mass_loss_energy}, we get the mass density of hydrogen in the exosphere resulting from the outflow, \begin{equation} \label{eq:density} \rho = \frac{\epsilon F_\mathrm{EUV}}{2 x_u K v_\mathrm{esc}^{3}} \simeq \rho_0 \bigg( \frac{\epsilon F_\mathrm{EUV}}{F_0} \bigg) \bigg( \frac{v_0}{v_\mathrm{esc}} \bigg)^3, \end{equation} where $F_0=450$ erg cm$^{-2}$ s$^{-1}$, $v_0=60$ \kms, and $\rho_0 = 10^{-15}$ g cm$^{-3}$, as informed by previous simulations such as \citet{Murray-Clay2009, Allan2019}.
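For completeness, the intermediate step linking Equations~\ref{eq:mass_loss_conserve} and \ref{eq:mass_loss_energy} to the first equality of Equation~\ref{eq:density}, using $u = x_u v_\mathrm{esc}$ and $\Phi_0 = GM_p/R_p = v_\mathrm{esc}^2/2$, reads:

```latex
\begin{equation*}
\rho = \frac{\dot{M}}{4\pi R_c^2 u}
     = \frac{\pi R_c^2 F_\mathrm{EUV}\epsilon}{\Phi_0 K}
       \cdot \frac{1}{4\pi R_c^2\, x_u v_\mathrm{esc}}
     = \frac{\epsilon F_\mathrm{EUV}}{4 x_u K\, \Phi_0\, v_\mathrm{esc}}
     = \frac{\epsilon F_\mathrm{EUV}}{2 x_u K\, v_\mathrm{esc}^{3}} .
\end{equation*}
```

Note that $R_c^2$ cancels, so the exobase size drops out and the density scaling depends only on the received flux and the escape velocity.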
Equation \ref{eq:density} suggests that, under the assumption of energy-limited mass loss, the exospheric density $\rho$ scales with the planet's escape velocity (or gravitational potential) and the EUV flux received. In addition to the outflow, the photoionisation of species also plays a role in determining the level of the transmission signal. For a rough estimate of the photoionisation, we simply treat the extended exosphere as a homogeneous optically thin cloud subject to stellar EUV radiation. The characteristic temperature of the exosphere is determined by the balance of heating ($Q$) and cooling ($C$) following \citet{Murray-Clay2009}: \begin{equation} \label{eq:Q} Q = \epsilon F_\mathrm{EUV} \sigma_{\nu_0} n_n, \end{equation} where $\sigma_{\nu_0}$ is the cross section for the photoionisation of hydrogen and $n_n$ is the number density of neutral hydrogen. For the cooling term, we assume it is driven by radiative losses resulting from collisional excitation of the Ly$\alpha$ line, \begin{equation} \label{eq:C} C = 7.5 \times 10^{-19} n_n n_+ \exp[-1.183\times 10^{5}/T], \end{equation} where $n_+$ is the number density of protons, equivalent to the number density of electrons. Considering the photochemistry of \ion{H}{i}, we solve the ionisation balance (equating the rates of photoionisation and radiative recombination) to estimate the degree of ionization $f_\mathrm{H}$:
\begin{equation} \label{eq:ion} \frac{F_\mathrm{EUV} \sigma_{\nu_0} n_n}{e_\mathrm{in}} = n_+ n_e \alpha_\mathrm{rec}, \end{equation} where $\sigma_{\nu_0}=1.89\times10^{-18}\,\mathrm{cm}^2$ is the cross section for the photoionisation of hydrogen \citep{Spitzer1978}; the recombination coefficient $\alpha_\mathrm{rec} = 2.7\times10^{-13}(T/10^{4})^{-0.9}$ is taken from \citet{Storey1995}; $n_+ = n_e = n\ f_\mathrm{H}$ and $n_n = n\ (1-f_\mathrm{H})$, where $n=\rho/m_0$ and $m_0$ is the mass of the hydrogen atom; $e_\mathrm{in}$ is the input photon energy, assumed to be 20 eV, and the heating efficiency is $\epsilon = 1-13.6\,\mathrm{eV}/e_\mathrm{in}=0.32$. Combining Equations~\ref{eq:density}, \ref{eq:Q}, \ref{eq:C}, and \ref{eq:ion}, we solve for the ionization degree of hydrogen and the temperature as follows \begin{equation} \label{eq:fH} \begin{aligned} \frac{1}{f_\mathrm{H}} &= 1+0.015/(T^{0.9} \exp[-1.183\times 10^{5}/T]), \\ \frac{1}{f_\mathrm{H}} &= \frac{0.4\rho_0}{m_0 F_0} \bigg( \frac{v_0}{v_\mathrm{esc}} \bigg)^3 \exp[-1.183\times 10^{5}/T]. \end{aligned} \end{equation} Similarly, for other species such as \ion{Fe}{i}, the ionisation balance of Fe combined with Equation~\ref{eq:ion} gives \begin{equation} \label{eq:fFe} \frac{1}{f_\mathrm{Fe}} = 1+ \frac{1-f_\mathrm{H}}{f_\mathrm{H}} \frac{\sigma_\mathrm{H}\alpha_\mathrm{rec,Fe}}{\sigma_\mathrm{Fe} \alpha_\mathrm{rec,H}}, \end{equation} where $\sigma_\mathrm{Fe}=3.66\times10^{-18}\,\mathrm{cm}^2$ \citep{Verner1996} and the recombination coefficient is $\alpha_\mathrm{rec,Fe} = 3.7\times10^{-12}(T/300)^{-0.65}$ \citep{Woodall2007}. Hence, in our simplified model, the exospheric temperature and degree of ionisation depend on the planet's potential through $v_\mathrm{esc}$, as shown in Fig.~\ref{fig:degree_of_ion}. In the optically thin limit, the equivalent width of absorption is proportional to the total mass of the absorbing material \citep{Hoeijmakers2020a, Gebek2020}.
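The two expressions for $1/f_\mathrm{H}$ in Equation~\ref{eq:fH} can be solved numerically for the temperature and the ionization degree; a minimal sketch (a bisection in $T$, evaluating the printed coefficients in cgs units; the bracketing temperatures are our assumption) is:

```python
import math

RHO0, M0, F0 = 1e-15, 1.6726e-24, 450.0   # g cm^-3, g, erg cm^-2 s^-1

def solve_fH(v_esc, v0=60.0, T_lo=5e3, T_hi=5e4):
    """Find T where the two expressions for 1/f_H in Eq. (fH) agree."""
    pref = 0.4 * RHO0 / (M0 * F0) * (v0 / v_esc) ** 3

    def resid(T):
        E = math.exp(-1.183e5 / T)
        # first expression minus second expression for 1/f_H
        return (1.0 + 0.015 / (T ** 0.9 * E)) - pref * E

    # resid > 0 at T_lo (weak ionization), < 0 at T_hi: bisect the root
    for _ in range(60):
        T_mid = 0.5 * (T_lo + T_hi)
        if resid(T_mid) > 0:
            T_lo = T_mid
        else:
            T_hi = T_mid
    T = 0.5 * (T_lo + T_hi)
    f_H = 1.0 / (1.0 + 0.015 / (T ** 0.9 * math.exp(-1.183e5 / T)))
    return T, f_H

T, f_H = solve_fH(60.0)   # a planet with v_esc = v0
```

For $v_\mathrm{esc}=v_0$ this gives $T\sim10^4$ K and $f_\mathrm{H}\sim0.5$, of the order of the wind temperatures found in simulations such as \citet{Murray-Clay2009}.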
For neutral hydrogen and ionised iron, the transmission signals can be written as \begin{equation} \label{eq:M} \begin{aligned} \mathcal{T}_\mathrm{H} &\sim \rho (1-f_\mathrm{H}) \sim \rho_0 \bigg( \frac{\epsilon F_\mathrm{EUV}}{F_0} \bigg) \bigg( \frac{v_0}{v_\mathrm{esc}} \bigg)^3 (1-f_\mathrm{H}), \\ \mathcal{T}_\mathrm{Fe+} &\sim \rho_\mathrm{Fe} f_\mathrm{Fe} \sim \rho_0 \bigg( \frac{\epsilon F_\mathrm{EUV}}{F_0} \bigg) \bigg( \frac{v_0}{v_\mathrm{esc}} \bigg)^3 A_\mathrm{Fe} f_\mathrm{Fe}, \end{aligned} \end{equation} where constant mixing of the metal species is assumed and $A_\mathrm{Fe}$ is the mass fraction of iron. We note that the photoionisation terms ($f_\mathrm{H}$ and $f_\mathrm{Fe}$) can result in variations of the signal strength between different planets by a factor of 4 at most (see Fig.~\ref{fig:degree_of_ion}). In contrast, Equation~\ref{eq:M} contains a linear term in $F_\mathrm{EUV}$, which usually varies by orders of magnitude from system to system. Therefore, the absorption signal from the exosphere is dominated by the planetary outflow $\dot{M}$ induced by the EUV flux, which drives the positive correlation between H$\alpha$ and \ion{Fe}{ii} that we note in Section~\ref{sec:correlation}. Hence such a correlation is indeed expected, reflecting properties such as the mass-loss rate and EUV irradiation of different UHJs. For simplicity, we did not take into account the fraction of atoms in the relevant state (e.g. the $n=2$ level for H$\alpha$ absorption). This requires non-local thermodynamic equilibrium (non-LTE) calculations of the radiation field and becomes even more complicated in the case of \ion{Fe}{ii}, where we combined multiple lines in the observation. The further ionisation of \ion{Fe}{ii} is also ignored here. We do not attempt a detailed interpretation of the observed lines in any particular system. Instead, our aim is to draw the relation between H$\alpha$ and \ion{Fe}{ii} across the UHJ population.
We expect that these assumptions do not distort the overall trend, although we need to rely on detailed simulations for a definitive answer. \section{Summary of UHJs characterised in detail with high-resolution transmission spectroscopy} \begin{table*} \caption{Properties of atomic absorption features detected in transmission spectra of ultra-hot Jupiters.} \label{tab:UHJs} \centering \resizebox{\textwidth}{!}{ \begin{tabular}{l c c c c c c c} \hline\hline & HAT-P-70b & KELT-9b & MASCARA-2b & MASCARA-4b & WASP-76b & WASP-121b & WASP-189b \\ \hline $R_p/R_\ast$ & $0.099$ & $0.082$ & $0.113$ & $0.087$ & $0.109$ & $0.125$ & $0.061$ \\ $T_\mathrm{eq}$ (K) & $ 2562 \pm 52 $ & $ 3921 \pm 182 $ & $ 2260 \pm 50 $ & $ 2250 \pm 62 $ & $ 2228 \pm 120 $ & $ 2358 \pm 52 $ & $ 2641 \pm 31 $ \\ $T_\ast$ (K) & $ 8450 \pm 690 $ & $ 9600 \pm 400 $ & $ 8980 \pm 130 $ & $ 7800 \pm 200 $ & $ 6329 \pm 65 $ & $ 6586 \pm 59 $ & $ 8000 \pm 100 $ \\ $R_p$ ($R_\mathrm{Jup}$) & $ 1.87 \pm 0.15 $ & $ 1.926 \pm 0.047 $ & $ 1.83 \pm 0.07 $ & $ 1.515 \pm 0.044 $ & $ 1.863 \pm 0.083 $ & $ 1.865 \pm 0.044 $ & $ 1.619 \pm 0.021 $ \\ $M_p$ ($M_\mathrm{Jup}$) & $ <6.78 $ & $ 2.88 \pm 0.35 $ & $ <3.5$ & $ 1.675 \pm 0.241 $ & $ 0.894 \pm 0.014 $ & $ 1.183 \pm 0.064 $ & $ 1.99 \pm 0.16 $ \\ $H_0$ (km) & $ >184 $ & $ 704 \pm 95 $ & $ >302$ & $ 430 \pm 66 $ & $ 1206 \pm 102 $ & $ 967 \pm 65 $ & $ 485 \pm 40 $ \\ $h$ H$\alpha$ (\%) & $ 1.560 \pm 0.150 $ & $ 1.150 \pm 0.050 $ & $ 0.765 \pm 0.090 $ & $ -0.317 \pm 0.021 $ & $ <0.470$ & $ 1.700 \pm 0.048 $ & $ 0.13 \pm 0.02 $ \\ FWHM H$\alpha$ (\kms) & $ 34.1 \pm 3.9 $ & $ 51.2 \pm 2.6 $ & $ 20.8 \pm 3.4 $ & $ 31.4 \pm 2.4 $ & - & $ 40.9 \pm 1.7 $ & $ 27.4 \pm 4.6 $ \\ $h$ \ion{Fe}{i} (\%) & $ 0.037 \pm 0.003 $ & $ 0.016 \pm 0.001 $ & $ 0.013 \pm 0.002 $ & $ 0.0150 \pm 0.0001 $ & $ 0.049 \pm 0.003 $ & $ 0.039 \pm 0.002 $ & $ 0.0075 \pm 0.0004 $ \\ FWHM \ion{Fe}{i} (\kms) & $ 9.7 \pm 1.0 $ & $ 20.1 \pm 1.0 $ & $ 12.39 \pm 1.69 $ & $ 22.5 \pm 0.2 $ & $ 8.6 \pm 0.7 $ & $
15.9 \pm 1.1 $ & $ 15.3 \pm 1.0 $ \\ $h$ \ion{Fe}{ii} (\%) & $ 0.437 \pm 0.017 $ & $ 0.183 \pm 0.004 $ & $ 0.130 \pm 0.011 $ & $ 0.0444 \pm 0.0011 $ & $ <0.063 $ & - & $ 0.023 \pm 0.002 $ \\ FWHM \ion{Fe}{ii} (\kms) & $ 13.7 \pm 0.6 $ & $ 21.8 \pm 0.7 $ & $ 8.54 \pm 0.87 $ & $ 16.5 \pm 0.4 $ & - & - & $ 14.1 \pm 1.2 $ \\ $h$ \ion{Na}{i} D (\%) & $ 0.655 \pm 0.150 $ & $> 0.095 \pm 0.007 $ & $ 0.320 \pm 0.050 $ & $ 0.191 \pm 0.017 $ & $ 0.360 \pm 0.050 $ & $ 0.480 \pm 0.047 $ & $ 0.153 \pm 0.031 $ \\ $h$ \ion{Mg}{i} (\%) & $ 0.131 \pm 0.018 $ & $ 0.056 \pm 0.005 $ & $ 0.062 \pm 0.005 $ & $ 0.0221 \pm 0.0009 $ & $ 0.114 \pm 0.016 $ & $ 0.123 \pm 0.015 $ & $ 0.018 \pm 0.002 $ \\ $h$ \ion{Ca}{ii} (\%) & $ 3.750 \pm 0.370 $ & $ 0.780 \pm 0.040 $ & $ 0.560 \pm 0.050 $ & $ 0.775 \pm 0.082 $ & $ 2.440 \pm 0.340 $ & $ 4.700 \pm 0.180 $ & $ 0.40 \pm 0.05 $ \\ References & (1),(2) & (3)-(10) & (11)-(15) & (16)-(18) & (19)-(24) & (25)-(28) & (29)-(31) \\ \hline \end{tabular} } \tablefoot{For the calculation of the lower atmosphere scale height $H_0$, we assume a mean molecular weight $\mu$ of 2.3. The amplitude of \ion{Na}{i} absorption ($h$ \ion{Na}{i} D) takes the average value of the \ion{Na}{i} doublet at 5891 and 5897 \AA. The $h$ \ion{Ca}{ii} takes the average value of the \ion{Ca}{ii} H \& K lines at 3969 and 3934 \AA. Properties of \ion{Mg}{i}, \ion{Fe}{i}, and \ion{Fe}{ii} are taken from cross-correlation outcomes. 
} \tablebib{ (1) \citet{Zhou2019}, (2) \citet{Bello-Arufe2022}, (3) \citet{Gaudi2017}, (4) \citet{Yan2018}, (5) \citet{Cauley2019}, (6) \citet{Borsa2019}, (7) \citet{Hoeijmakers2019}, (8) \citet{Yan2019}, (9) \citet{Turner2020}, (10) \citet{Wyttenbach2020}, (11) \citet{Talens2018}, (12) \citet{Casasayas-Barris2020}, (13) \citet{Hoeijmakers2020b}, (14) \citet{Nugroho2020}, (15) \citet{Stangret2020}, (16) \citet{Dorval2020}, (17) \citet{Ahlers2020}, (18) this work, (19) \citet{West2016}, (20) \citet{Ehrenreich2020}, (21) \citet{Tabernero2021}, (22) \citet{Seidel2021}, (23) \citet{Casasayas-Barris2021}, (24) \citet{Kesseli2022}, (25) \citet{Delrez2016}, (26) \citet{Cabot2020}, (27) \citet{Hoeijmakers2020a}, (28) \citet{Borsa2021}, (29) \citet{Anderson2018}, (30) \citet{Prinoth2022}, (31) \citet{Stangret2022}. } \end{table*} \end{appendix}